---
license: apache-2.0
library_name: timm
pipeline_tag: image-classification
---

# RedRocket Joint Tagger Project

## JTP-1: PILOT

This model is a multi-label classifier designed and trained by RedRocket for use on furry images, using e621 tags.

PILOT is the first model of this series. It is trained on over 9,000 tags, selected as e621 tags with more than 500 occurrences, excluding artist and character tags.

## Model Details

### Model Description

- **Developed by:** RedRocket
- **Compute power provided by:** Minotoro and Frosting.ai (thank you)
- **Model type:** Multi-label classifier
- **License:** Apache 2.0
- **Finetuned from model:** SigLIP-400M ViT

### Model Sources

## Uses

### Direct Use

Use it to tag furry images.

### Downstream Use

Use it to train a text-to-image model on synthetic tags.

### Out-of-Scope Use

Use it to tag non-furry images. It might not work terribly well, but it might also work surprisingly well! Great entertainment value either way.

## Bias, Risks, and Limitations

This model may contain biases. For instance, tags that are inconsistently applied in the original data may be weakly predicted by the classifier, and tags that commonly co-occur with other tags may be hallucinated.

### Recommendations

Manually check at least a portion of the model's outputs, preferably over a diverse sample, to verify their correctness, and apply a different threshold if it seems necessary.

## How to Get Started with the Model

Use the included code to launch a Gradio demo for playing with the model. We recommend a threshold of 0.2 for starting out. Validation stats during training showed a Bookmaker's Informedness of 0.725 at this value (meaning the model is that much better at guessing tags than random guessing). Manual evaluation suggests that a large portion of the gap between that value and 1 is due to false negatives in the dataset.
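For reference, Bookmaker's Informedness is sensitivity plus specificity minus one, so 0 corresponds to random guessing and 1 to perfect prediction. A minimal sketch of the metric and of applying the recommended 0.2 threshold (function names and the example counts are illustrative, not from the released code):

```python
def informedness(tp: int, fp: int, tn: int, fn: int) -> float:
    """Bookmaker's Informedness: TPR + TNR - 1."""
    tpr = tp / (tp + fn)  # sensitivity (recall on positives)
    tnr = tn / (tn + fp)  # specificity (recall on negatives)
    return tpr + tnr - 1.0

def apply_threshold(probs: list[float], threshold: float = 0.2) -> list[bool]:
    """Binarize per-tag sigmoid probabilities at the recommended threshold."""
    return [p >= threshold for p in probs]
```

Per-tag counts of 80 true positives, 10 false positives, 90 true negatives, and 20 false negatives, for example, give an informedness of 0.7.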

## Training Details

### Training Data

The model was trained on a subset of e621 containing roughly 4 million images. No dataset filtering was applied.

Loss weighting was informed by a Bayesian prior model trained on tag strings from non-deleted posts in an e621 database dump.
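For intuition, frequency-informed loss weighting typically upweights rare tags so they are not drowned out by common ones. A generic sketch of deriving per-tag positive weights from tag strings (this is illustrative only, not the authors' Bayesian prior model; the helper name and `min_count` cutoff are assumptions):

```python
from collections import Counter

def tag_pos_weights(post_tag_strings: list[str], min_count: int = 500) -> dict[str, float]:
    """Illustrative frequency-based per-tag positive weights.

    Returns the negative-to-positive ratio per tag, the form expected by
    e.g. BCEWithLogitsLoss's pos_weight; rare tags get larger weights.
    """
    counts = Counter(tag for s in post_tag_strings for tag in s.split())
    n_posts = len(post_tag_strings)
    return {
        tag: (n_posts - c) / c  # negatives / positives for this tag
        for tag, c in counts.items()
        if c >= min_count  # drop tags below the occurrence cutoff
    }
```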

### Training Procedure

Images go in, logits come out. You can't explain that.

#### Preprocessing

Image preprocessing should be done in the following order:

1. Resize the image so its longest side is 384 pixels.
2. Convert to a tensor with torchvision.transforms.ToTensor().
3. Composite the alpha channel, if present, over 50% gray.
4. Normalize to mean 0.5 and std 0.5 (changing the range from (0, 1) to (-1, 1)).
5. Pad the image to 384x384 (torchvision.transforms.CenterCrop((384, 384)) will do this, since it pads inputs smaller than the crop size).

#### Training Hyperparameters

- **Training regime:** The model was trained for 4 epochs with a batch size of 512, using Schedule-Free Adam.

#### Speeds, Sizes, Times

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

It seems to tag furry images fairly well.

#### Summary