Issue doing inference after fine-tuning idefics2 using LoRA · #70 opened 4 days ago by jxue005
Mixing text-only data into fine-tuning · 3 replies · #68 opened 11 days ago by bilibraker
Update README.md · #63 opened 17 days ago by SalmanFaroz
How to modify the weights of the LLM section · 2 replies · #62 opened 18 days ago by cookey39
Reproducing idefics-8b (instruct) · 1 reply · #61 opened 20 days ago by Iheb-Chaabane
Bug in attention mask · 1 reply · #58 opened 26 days ago by lucasjin
How is the image resolution expanded in a vision encoder? · 2 replies · #57 opened 26 days ago by efei
Pretraining deduplication of data to prevent data leakage? · 1 reply · #55 opened about 1 month ago by SS12444
Idefics2 pretraining · 4 replies · #54 opened about 1 month ago by orrzohar
Add idefics2-8b for HuggingChat · 3 replies · #53 opened about 1 month ago by wangdafa
Shape mismatch: value tensor of shape [2320] cannot be broadcast to indexing result of shape [2262] · 6 replies · #52 opened about 1 month ago by yeargun
Problem using the model after fine-tuning · 4 replies · #50 opened about 1 month ago by SalmanFaroz
Bounding boxes in the pre-training data and pre-training tasks · 1 reply · #49 opened about 1 month ago by bilibraker
How does the attention_mask contribute to the projector performance? · 12 replies · #45 opened about 1 month ago by lucasjin
Getting idefics2 into GGUF format for use with llama.cpp and/or ollama? · 2 replies · #43 opened about 1 month ago by PaulCapestany
Large value difference when comparing hidden_states with flash attention ON and OFF · #42 opened about 1 month ago by Ye27
Fine-tuning script: QLoRA with Flash Attention fails · 2 replies · #41 opened about 2 months ago by RonanMcGovern
Use in pipelines? · 1 reply · #40 opened about 2 months ago by harpreetsahota
Setting compute_metrics in Trainer() leads to AttributeError · 3 replies · #38 opened about 2 months ago by Eyel
[AUTOMATED] Model Memory Requirements · #33 opened about 2 months ago by model-sizer-bot
Dedicated Inference Endpoints for Idefics2-8b · 8 replies · #32 opened about 2 months ago by zesquirrelnator
How can I deploy idefics2-8b with TensorRT + Triton? · 9 replies · #31 opened about 2 months ago by catworld1212
Multi-GPU fine-tuning · 20 replies · #30 opened about 2 months ago by matbee
Model is incompatible with Inference Endpoints · 2 replies · #23 opened about 2 months ago by sebbyjp
CUDA OOM from a single forward pass on an A6000 (48 GB VRAM) · 10 replies · #11 opened 2 months ago by starzmustdie
CUDA out of memory on an A100 with 40 GB · 7 replies · #8 opened 2 months ago by SkalskiP
Error running idefics2-8b-AWQ · 23 replies · #7 opened 2 months ago by oliverguhr