madebyollin committed: Update README.md
* 5-7% of images may have minor edits or annotations (timestamps, color grading, borders, etc.)
* 1-2% of images may be copyright-constrained (watermarks or text descriptions cast doubt on the license metadata)
* 1-2% of images may be non-wholesome (guns, suggestive poses, etc.)
* 1-2% of images may be non-photos (paintings, screenshots, etc.)
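The rates above are estimates extrapolated from a random sample, so each carries sampling uncertainty. As a rough sketch (the sample size of 200 and the count of 12 flagged images are hypothetical, not the actual review numbers), a Wilson score interval shows how wide the uncertainty on such a sample-based estimate is:

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion,
    given k flagged images out of n randomly sampled."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 12 edited/annotated images found in a 200-image sample
lo, hi = wilson_interval(12, 200)
print(f"{lo:.1%} to {hi:.1%}")  # roughly 3.5% to 10.2%
```

With a few hundred sampled images, low-single-digit percentage estimates like these can easily be off by a factor of two in either direction, which is why they are stated as ranges.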
### Is 10 million images really enough to teach a neural network about the visual world?

For the parts of the visual world that are well-represented in Megalith-10m, definitely!
Projects like [CommonCanvas](https://arxiv.org/abs/2310.16825), [Mitsua Diffusion](https://huggingface.co/Mitsua/mitsua-diffusion-one), and [Matryoshka Diffusion](https://arxiv.org/abs/2310.15111)
have shown that you can train usable generative models on similarly-sized image datasets.
Of course, many parts of the world aren't well-represented in Megalith-10m, so you'd need additional data to learn about those.