---
annotations_creators: []
language: en
license: other
size_categories:
- 1K<n<10K
task_categories:
- image-classification
task_ids: []
pretty_name: Describable Textures Dataset
tags:
- fiftyone
- image
- image-classification
dataset_summary: '



  ![image/png](dataset_preview.gif)



  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 5640 samples.


  ## Installation


  If you haven''t already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  import fiftyone.utils.huggingface as fouh


  # Load the dataset

  # Note: other available arguments include ''max_samples'', etc.

  dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")


  # Launch the App

  session = fo.launch_app(dataset)

  ```

  '
---

# Dataset Card for Describable Textures Dataset

<!-- Provide a quick summary of the dataset. -->




![image/png](dataset_preview.gif)


This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 5640 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")

# Launch the App
session = fo.launch_app(dataset)
```
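
Once the dataset is loaded, FiftyOne views make it easy to drill into a single texture category. Below is a minimal sketch, assuming the classification labels are stored in a `ground_truth` field (confirm the actual field name with `dataset.get_field_schema()`):

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
from fiftyone import ViewField as F

dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")

# List the 47 texture categories
print(dataset.distinct("ground_truth.label"))

# Restrict to a single category, e.g. "striped"
striped = dataset.match(F("ground_truth.label") == "striped")
print(len(striped))  # expected: 120

# Browse just that view in the App
session = fo.launch_app(striped)
```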


## Dataset Details

### Dataset Description

"Our ability of vividly describing the content of images is a clear demonstration of the power of human visual system. Not only we can recognise objects in images (e.g. a cat, a person, or a car), but we can also describe them to the most minute details, extracting an impressive amount of information at a glance. But visual perception is not limited to the recognition and description of objects. Prior to high-level semantic understanding, most textural patterns elicit a rich array of visual impressions. We could describe a texture as "polka dotted, regular, sparse, with blue dots on a white background"; or as "noisy, line-like, and irregular".

Our aim is to reproduce this capability in machines. Scientifically, the aim is to gain further insight in how textural information may be processed, analysed, and represented by an intelligent system. Compared to classic task of textural analysis such as material recognition, such perceptual properties are much richer in variety and structure, inviting new technical challenges.

DTD is a texture database, consisting of 5640 images, organized according to a list of 47 terms (categories) inspired from human perception. There are 120 images for each category. Image sizes range between 300x300 and 640x640, and the images contain at least 90% of the surface representing the category attribute. The images were collected from Google and Flickr by entering our proposed attributes and related terms as search queries. The images were annotated using Amazon Mechanical Turk in several iterations. For each image we provide key attribute (main category) and a list of joint attributes.

The data is split in three equal parts, in train, validation and test, 40 images per class, for each split. We provide the ground truth annotation for both key and joint attributes, as well as the 10 splits of the data we used for evaluation."
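
If you want to sanity-check these numbers after loading the dataset with FiftyOne, here is a quick sketch (again assuming a `ground_truth` label field; whether the train/validation/test membership is exposed, e.g. as sample tags, depends on how the upload was structured and is an assumption here):

```python
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")

# 47 categories x 120 images each = 5640 samples
counts = dataset.count_values("ground_truth.label")
print(len(counts))           # expected: 47
print(sum(counts.values()))  # expected: 5640
print(set(counts.values()))  # expected: {120}

# If splits are stored as sample tags (an assumption), each split
# should contain 47 classes x 40 images = 1880 samples
print(dataset.count_sample_tags())
```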



- **Curated by:** M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, A. Vedaldi
- **Funded by:** NSF Grant #1005411, JHU-HLTCOE, Google Research, ERC grant VisRec no. 228180, ANR-10-JCJC-0205
- **Language(s) (NLP):** en
- **License:** other

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/dtd/
- **Paper:** https://www.robots.ox.ac.uk/~vgg/publications/2014/Cimpoi14/cimpoi14.pdf
- **Demo:** https://try.fiftyone.ai/datasets/describable-textures-dataset/samples


## Dataset Creation

### Curation Rationale

'Patterns and textures are key characteristics of many natural objects: a shirt can be striped, the wings of a butterfly can be veined, and the skin of an animal can be scaly. Aiming at supporting this dimension in image understanding, we address the problem of describing textures with semantic attributes. We identify a vocabulary of forty-seven texture terms and use them to describe a large dataset of patterns collected “in the wild”. The resulting Describable Textures Dataset (DTD) is a basis to seek the best representation for recognizing describable texture attributes in images.' - dataset authors

### Source Data

Images were collected from Google and Flickr by entering the dataset's 47 attribute terms and related terms as search queries.


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@InProceedings{cimpoi14describing,
	      Author    = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi},
	      Title     = {Describing Textures in the Wild},
	      Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})},
	      Year      = {2014}}
```

## More Information

This research is based on work done at the 2012 CLSP Summer Workshop, and was partially supported by NSF Grant #1005411, ODNI via the JHU-HLTCOE and Google Research. Mircea Cimpoi was supported by the ERC grant VisRec no. 228180 and Iasonas Kokkinos by ANR-10-JCJC-0205.

The development of the Describable Textures Dataset started in June and July 2012 at the Johns Hopkins Center for Language and Speech Processing (CLSP) Summer Workshop. The authors are most grateful to Prof. Sanjeev Khudanpur and Prof. Greg Hager.

## Dataset Card Authors

[Jacob Marks](https://huggingface.co/jamarks)