The labels are different from the original ones
#2 opened by grodino
Hi!
Thanks for uploading stanford-dogs on HF.
It seems that the label ids follow the alphabetical order of the class names (0 => Affenpinscher, 1 => Afghan Hound, ...). This order is different from the original label ids (as found in the `train_list.mat` and `test_list.mat` files of the original dataset).
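To make the mismatch concrete, here is a minimal, self-contained illustration with a hypothetical four-class excerpt (the class names are real breeds, but their positions are only for illustration): the `.mat` lists fix one ordering, while sorting the names alphabetically assigns a different id to most classes.

```python
# Hypothetical excerpt of the original class ordering (as listed in the .mat files)
original_order = ["Chihuahua", "Japanese_spaniel", "Maltese_dog", "Affenpinscher"]
original_ids = {name: i for i, name in enumerate(original_order)}

# Ordering obtained by sorting the class names alphabetically instead
alphabetical_ids = {name: i for i, name in enumerate(sorted(original_order))}

print(original_ids["Affenpinscher"])      # id in the original ordering (here: 3)
print(alphabetical_ids["Affenpinscher"])  # id after alphabetical sorting: 0
```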
Would you be interested in correcting the labels?
Best,
Augustin
No worries, I'm on holiday right now so I'll correct this in two weeks.
Thanks!
Just had a look. If you can put the mapping here I'll do the rest.
Hi!
In the end, I uploaded the dataset to the hub myself. Here is the script I used to get the right labels and class names:
```python
from pathlib import Path
import shutil

import polars as pl
from datasets import ClassLabel, Features, Image, load_dataset
from scipy.io import loadmat


def sdogs_to_hfds(data_dir: Path, to: Path, hub_id: str):
    """Converts the StanfordDogs dataset to the Hugging Face dataset format.

    Assumes that the data is downloaded to `data_dir`. The original data can be
    found at https://vision.stanford.edu/aditya86/ImageNetDogs/.

    The folder pointed to by `data_dir` should contain:
    - the images (http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar) extracted in the "Images" folder,
    - `train_list.mat` and `test_list.mat` (http://vision.stanford.edu/aditya86/ImageNetDogs/lists.tar) at its root.

    An example of `hub_id` is `myname/my-dataset`.
    """
    splits = {
        "train": loadmat(data_dir / "train_list.mat"),
        "test": loadmat(data_dir / "test_list.mat"),
    }

    classes: dict[str, int] = {}
    ordered_classnames: list[str] = [None] * 120
    metadata: dict[str, list[dict]] = {"train": [], "test": []}

    for split_name, split in splits.items():
        for (file_name,), label in zip(split["file_list"][:, 0], split["labels"][:, 0]):
            file_name = Path(file_name)
            # The original labels are 1-based, shift them to 0-based
            label = int(label.item()) - 1

            # Read the class name from the folder containing the image
            class_name = file_name.parent.name
            if class_name in classes:
                assert classes[class_name] == label
            else:
                classes[class_name] = label
                ordered_classnames[label] = class_name

            # Copy the file to the right split folder
            dest = to / split_name / file_name
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy(data_dir / "Images" / file_name, dest)

            metadata[split_name].append({"file_name": str(file_name), "label": label})

        pl.from_records(metadata[split_name]).write_csv(
            to / split_name / "metadata.csv"
        )

    # Print the class mapping to include it in the readme
    for class_name, class_idx in classes.items():
        print(f"'{class_idx}': {class_name}")

    # Declare the features of the dataset so the labels keep the original ids
    # instead of being re-assigned alphabetically by imagefolder
    features = Features(
        {
            "image": Image(),
            "label": ClassLabel(
                num_classes=120,
                names=ordered_classnames,
            ),
        }
    )
    dataset = load_dataset("imagefolder", data_dir=to, features=features)
    dataset.push_to_hub(hub_id)
```
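For reference, the `metadata.csv` written next to each split follows the imagefolder convention: a `file_name` column with paths relative to the split folder, plus one column per extra feature (here `label`). A stdlib-only sketch of the same output, using two hypothetical rows (the script above uses polars instead):

```python
import csv
import io

# Hypothetical rows in the shape the script collects per split
rows = [
    {"file_name": "n02085620-Chihuahua/n02085620_10074.jpg", "label": 0},
    {"file_name": "n02085782-Japanese_spaniel/n02085782_2100.jpg", "label": 1},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["file_name", "label"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())  # first line is the header: file_name,label
```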
Cheers,
Augustin
grodino changed discussion status to closed