John Rachwan (johnrachwanpruna)
AI & ML interests: None yet
Recent Activity
- new activity 28 days ago: PrunaAI/FLUX.1-schnell-4bit (not run)
- updated a Space about 1 month ago: PrunaAI/README
- updated a model about 2 months ago: PrunaAI/FLUX.1-dev-8bit
johnrachwanpruna's activity
- Upload folder using huggingface_hub · 2 · #1 opened 4 months ago by johnrachwanpruna
- Upload folder using huggingface_hub · 2 · #1 opened 4 months ago by johnrachwanpruna
- Upload folder using huggingface_hub · 2 · #1 opened 4 months ago by johnrachwanpruna
- Upload folder using huggingface_hub · 2 · #1 opened 4 months ago by johnrachwanpruna
- New weights available · 1 · #2 opened 8 months ago by michaelfeil
- Mistake in readme instructions · 2 · #5 opened 8 months ago by adamkdean
- What is smashed? · 2 · #1 opened 8 months ago by supercharge19
- gibberish results when context is greater 2048 · 9 · #4 opened 8 months ago by Bakanayatsu
- Thanks Guys! · 5 · #1 opened 8 months ago by Joseph717171
- Do they work with ollama? How was the conversion done for 128K, llama.cpp/convert.py complains about ROPE. · 8 · #2 opened 8 months ago by BigDeeper
- Download huggingface-cli command have a typo · 1 · #3 opened 8 months ago by Jay9787WaiF
- Please post the imatrix that was used to quantize · 1 · #1 opened 8 months ago by Joseph717171
- Based on? · 5 · #1 opened 8 months ago by DefamationStation
- Upload folder using huggingface_hub · 2 · #1 opened 8 months ago by johnrachwanpruna
- Upload folder using huggingface_hub · 2 · #1 opened 8 months ago by johnrachwanpruna
- Upload folder using huggingface_hub · 2 · #1 opened 8 months ago by johnrachwanpruna
- Upload folder using huggingface_hub · 2 · #1 opened 8 months ago by johnrachwanpruna