Sudhir Gupta (sudhir2016)
AI & ML interests: None yet
Recent Activity
updated a model about 23 hours ago: sudhir2016/your_reft_emoji_chat
new activity 9 days ago on Stanford-ILIAD/minivla-vq-libero90-prismatic: Integration in Transformers library
new activity 10 days ago on openvla/openvla-7b: 4 bit quantized version
Organizations: None yet
sudhir2016's activity
Integration in Transformers library · #1 opened 9 days ago by sudhir2016
4 bit quantized version · #6 opened 10 days ago by sudhir2016
Unable to run on free tier Colab (11) · #10 opened 4 months ago by sudhir2016
Integration into transformer library (1) · #2 opened 6 months ago by sudhir2016
Unable to run on free tier Google Colab (1) · #9 opened 6 months ago by sudhir2016
Using timesfm in Google Colab notebook. (5) · #9 opened 8 months ago by sudhir2016
Unable to use on free tier Google Colab · #2 opened 9 months ago by sudhir2016
New activity in mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ · 9 months ago
Runs out of memory on free tier Google Colab (3) · #3 opened 9 months ago by sudhir2016
Running on GPU via HF transformers (1) · #1 opened 10 months ago by sudhir2016
Use in pipeline (2) · #1 opened 11 months ago by sudhir2016
Memory crash in Google Colab free tier (1) · #5 opened 11 months ago by sudhir2016
Getting error running inference in Free tier Google Colab (1) · #6 opened 11 months ago by sudhir2016
Request for quantized version (2) · #2 opened 12 months ago by sudhir2016
Error using in transformers with this code (2) · #2 opened 12 months ago by sudhir2016
Request for quantized version. (2) · #2 opened 11 months ago by sudhir2016
Runs out of memory on free tier Google Colab (3) · #1 opened 12 months ago by sudhir2016
Error trying single_inference.py (3) · #4 opened 12 months ago by JimChien
Unable to replicate using LazyMergeKit Colab (2) · #3 opened 12 months ago by sudhir2016
Getting key error (2) · #1 opened 12 months ago by sudhir2016
Quantized version please (9) · #1 opened 12 months ago by HR1777