padge committed
Commit f9175ad
1 Parent(s): 757982f

Updating README.

Files changed (2)
  1. tutorials/images.md +3 -8
  2. tutorials/metadata.md +4 -9
tutorials/images.md CHANGED
@@ -4,14 +4,14 @@ Once you have the URLs or S3 file keys from the metadata files, you can download
 #### cURL
 Download an image from a url to a local image file with the name `image.png`:
 ```bash
-curl -O image.png https://pd12m.s3.us-west-2.amazonaws.com/image.png
+curl -O image.png https://pd12m.s3.us-west-2.amazonaws.com/images/image.png
 ```
 #### Python
 Download an image from a url to a local image file with the name `image.png`:
 ```python
 import requests
 
-url = "https://pd12m.s3.us-west-2.amazonaws.com/image.png"
+url = "https://pd12m.s3.us-west-2.amazonaws.com/images/image.png"
 response = requests.get(url)
 with open('image.png', 'wb') as f:
     f.write(response.content)
@@ -19,11 +19,6 @@ with open('image.png', 'wb') as f:
 #### img2dataset
 You can also use the `img2dataset` tool to quickly download images from a metadata file. The tool is available [here](https://github.com/rom1504/img2dataset). The example below will download all the images to a local `images` directory.
 ```bash
-img2dataset download --url_list pd12m-metadata.001.parquet --input_format parquet --url_col url --caption_col caption --output-dir images/
+img2dataset download --url_list pd12m.01.parquet --input_format parquet --url_col url --caption_col caption --output-dir images/
 ```
 
-#### S3 CLI
-Download an image from an S3 bucket to an image with the name `image.png`:
-```bash
-aws s3 cp s3://pd12m/image.png image.png
-```
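The tutorial's examples fetch a single image at a time. As a rough sketch of how the same `requests` pattern scales to a whole metadata shard, the loop below reads URLs from a parquet file and saves each image locally; the shard filename follows the tutorial, while the `images/` output directory and the name-from-URL convention are assumptions for illustration.

```python
# Sketch: download every image listed in one metadata shard with requests.
# Assumes the `url` column documented in tutorials/metadata.md; the images/
# output directory and naming by URL basename are illustrative choices.
import os
import pandas as pd
import requests

df = pd.read_parquet('pd12m.01.parquet')
os.makedirs('images', exist_ok=True)

for url in df['url']:
    filename = os.path.join('images', os.path.basename(url))
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    with open(filename, 'wb') as f:
        f.write(response.content)
```

For anything beyond a handful of files, the `img2dataset` command shown in the diff is the faster route, since it parallelizes the downloads.
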
tutorials/metadata.md CHANGED
@@ -4,17 +4,17 @@ The metadata files are in parquet format, and contain the following attributes:
 - `url`: The URL of the image.
 - `s3_key`: The S3 file key of the image.
 - `caption`: A caption for the image.
-- `md5_hash`: The MD5 hash of the image file.
-- `mime_type`: The MIME type of the image file.
+- `hash`: The MD5 hash of the image file.
 - `width`: The width of the image in pixels.
 - `height`: The height of the image in pixels.
-- `license_type`: The URL of the license.
+- `mime_type`: The MIME type of the image file.
+- `license`: The URL of the license.
 
 #### Open a metadata file
 The files are in parquet format, and can be opened with a tool like `pandas` in Python.
 ```python
 import pandas as pd
-df = pd.read_parquet('pd15m-metadata.001.parquet')
+df = pd.read_parquet('pd12m.01.parquet')
 ```
 
 #### Get URLs from metadata
@@ -23,8 +23,3 @@ Once you have opened a metadata file with pandas, you can get the URLs of the i
 urls = df['url']
 ```
 
-#### Get S3 File Keys from metadata
-You can also get the S3 file keys, which can be used to download the images using the S3 CLI:
-```python
-s3_keys = df['s3_key']
-```
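
As a companion to the metadata tutorial, the snippet below is a minimal sketch of reading one shard and combining the documented columns; the 1024-pixel size filter and the caption/URL pairing are illustrative examples, not part of the dataset docs.

```python
# Sketch: inspect a metadata shard and pair image URLs with captions.
# Column names (url, caption, width, height) come from tutorials/metadata.md;
# the 1024-pixel threshold is an arbitrary example value.
import pandas as pd

df = pd.read_parquet('pd12m.01.parquet')
print(df.columns.tolist())

large = df[(df['width'] >= 1024) & (df['height'] >= 1024)]
for url, caption in zip(large['url'], large['caption']):
    print(f"{caption}: {url}")
```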