They know that having full models and stuff in the commit history is taking it up.. right?
I have no clue. I guess I'm about to find out..
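For what it's worth, if old revisions of large files are what's counting against a quota, one option is to squash a repo's git history so only the current files remain. Below is a minimal sketch using `HfApi.super_squash_history` from `huggingface_hub`; the repo ID is a placeholder and you'd need write access and a recent version of the library.

```python
# Sketch: squash a repo's history into a single commit so superseded model
# weights in old revisions stop taking up space. Irreversible - old revisions
# are gone afterwards. "your-username/your-model" is a placeholder.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` / HF_TOKEN

api.super_squash_history(
    repo_id="your-username/your-model",
    repo_type="model",
    commit_message="Squash history to drop superseded model weights",
)
```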
Adding context
@reach-vb made some clarifications in this post on r/LocalLLaMA
Here's another thread about the same issue.
It doesn't seem like anyone is actually hitting barriers with creating new repos or uploading currently. I do find it pretty sus to even show me the storage quota, assuming legitimate storage will always be granted anyway... 🤔
I'd appreciate a more official announcement on HF though; I've anecdotally seen a few notable creators panic-deleting repositories.
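Before anyone panic-deletes, it's easy to at least check how much each repo holds at its current revision. A rough sketch with `huggingface_hub` (note this only sums the files at HEAD, not space held by old commits; the repo IDs are placeholders):

```python
# Sketch: sum file sizes at the current revision of each repo to get a rough
# idea of storage usage before deciding whether anything needs to go.
from huggingface_hub import HfApi

api = HfApi()

for repo_id in ["your-username/some-model", "your-username/another-model"]:
    info = api.repo_info(repo_id, repo_type="model", files_metadata=True)
    total_bytes = sum(f.size or 0 for f in info.siblings)
    print(f"{repo_id}: {total_bytes / 1e9:.2f} GB across {len(info.siblings)} files")
```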
VB's reddit post
Heya! I’m VB, I lead the advocacy and on-device team @ HF. This is just a UI update for limits which have been around for a long while. HF has been and always will be liberal at giving out storage + GPU grants (this is already the case - this update just brings more visibility).
We’re working on updating the UI to make it more clear and recognisable - grants are made for use-cases where the community utilises your model checkpoints and benefits from them. Quantising models is one such use-case; others include pre-training/fine-tuning datasets, model merges and more.
Similarly we also give storage grants to multi-PB datasets like YODAS, Common Voice, FineWeb and the likes.
This update is more for people who dump random stuff across model repos, or use model/dataset repos to spam users and abuse the HF storage and community.
I’m a fellow GGUF enjoyer, and a quant creator (see - https://huggingface.co/spaces/ggml-org/gguf-my-repo) - we will continue to add storage + GPU grants as we have in the past.
Cheers!