MrDragonFox
AI & ML interests
None yet
Recent Activity
- Reacted with 🔥 to danielhanchen's post (3 days ago): "Vision finetuning is in 🦥Unsloth! You can now finetune Llama 3.2, Qwen2 VL, Pixtral and all Llava variants up to 2x faster and with up to 70% less VRAM usage! Colab to finetune Llama 3.2: https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing" (a rough sketch of this workflow follows this list)
- New activity in m-a-p/MIO-7B-Instruct (23 days ago): "code still missing ... at least give us examples"
- New activity in LanguageBind/Open-Sora-Plan-v1.3.0 (about 1 month ago): "prompt refiner misses part 3 of the model"
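The Unsloth post above links to a Colab with the full workflow. As a rough orientation only, here is a minimal sketch of what vision finetuning with Unsloth typically looks like; the model id, LoRA parameters, and the FastVisionModel calls are assumptions based on Unsloth's published vision-finetuning examples, not taken from the post itself, so treat the linked Colab as the authoritative notebook.

```python
# Minimal sketch of Unsloth vision finetuning (assumed API: FastVisionModel,
# as used in Unsloth's published Llama 3.2 Vision examples).
from unsloth import FastVisionModel

# Load the model in 4-bit to reduce VRAM usage.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct",  # assumed model id
    load_in_4bit=True,
)

# Attach LoRA adapters; vision and language layers can be finetuned independently.
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)

# Switch to training mode before handing the model to a trainer.
FastVisionModel.for_training(model)
```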
MrDragonFox's activity
- code still missing ... at least give us examples, #1 opened 23 days ago by MrDragonFox
- prompt refiner misses part 3 of the model (2), #1 opened about 1 month ago by MrDragonFox
- Training code (2), #1 opened 2 months ago by ChristophSchuhmann
- Any Inference code? (12), #6 opened 3 months ago by DongfuJiang
- really good model (2), #2 opened 3 months ago by gileneo
- licence requires LLAMA3 as prefix (2), #8 opened 7 months ago by MrDragonFox
- Loading all safetensors cost (2), #13 opened 8 months ago by mohres
- Is there a best way to infer this model from multiple small memory GPUs? (1), #39 opened 8 months ago by hongdouzi
- Context length is not 128k (2), #41 opened 8 months ago by pseudotensor
- Can't stop this from rambling and repeating (1), #5 opened 8 months ago by FlareRebellion
- Is this meant to be used with alpaca prompt templates? (1), #4 opened 9 months ago by left1000
- Update README.md, #3 opened 9 months ago by Dampfinchen
- Update README.md, #1 opened 9 months ago by Dampfinchen
- add chat template, #2 opened 9 months ago by Dampfinchen
- Please upload the full model first (88), #1 opened 10 months ago by ChuckMcSneed
- Inference generation extremely slow (6), #57 opened 11 months ago by aledane