Mike Ravkine (mike-ravkine)
AI & ML interests: AI, Research
Organizations: None yet
mike-ravkine's activity
Eval Request & Question (3 comments) · #19 opened 25 days ago by isr431
Ability to select multiple sizes in the leaderboard (1 comment) · #17 opened 2 months ago by nviraj
Leaderboard is down (4 comments) · #18 opened about 1 month ago by Nekuromento
New Phi-3-vision-128k-instruct Dropped today, please test it (2 comments) · #14 opened 4 months ago by Dotoro22
What does best result only option mean? How does it differ from the original leaderboard? (3 comments) · #6 opened 8 months ago by zhiminy
Why doesn't the number of rows of both Python and Javascript add up to both? (1 comment) · #10 opened 6 months ago by zhiminy
Model request. (Instruct-senior) (4 comments) · #8 opened 7 months ago by rombodawg
Evaluate interview.ndjson (2 comments) · #13 opened 5 months ago by Dotoro22
CUDA support (2 comments) · #2 opened 5 months ago by mike-ravkine
Model request - Codeqwen-7b-code-v1.5-fp16 (10 comments) · #12 opened 5 months ago by Dotoro22
Fix trailing , in generation_config.json (2 comments) · #1 opened 5 months ago by mike-ravkine
What is `junior-dev` after all? (1 comment) · #7 opened 7 months ago by zhiminy
Update boomer_code_tokenizer.py · #1 opened 8 months ago by mike-ravkine
Any manner to download the evaluation results in the leaderboard as csv or json format? (7 comments) · #2 opened 10 months ago by zhiminy
Prompting (2 comments) · #7 opened 9 months ago by AliceThirty
Can you include my models? (6 comments) · #5 opened 9 months ago by ajibawa-2023
What does summarization result option mean? How does it differ from the original leaderboard? (4 comments) · #4 opened 10 months ago by zhiminy
Error with new version of transformers v4.35.0 (2 comments) · #10 opened 11 months ago by alonsosilva
OOM under vLLM even with 80GB GPU (5 comments) · #2 opened 9 months ago by mike-ravkine
Why not only keep the latest version? (2 comments) · #3 opened 10 months ago by zhiminy
Apply for community grant: Personal project (gpu and storage) · #1 opened about 1 year ago by mike-ravkine
exllama requires that pad_token_id be specified in config.json (2 comments) · #2 opened about 1 year ago by mike-ravkine
Benchmark of different GGML version (2 comments) · #2 opened about 1 year ago by aiapprentice101
Update prompt format (8 comments) · #4 opened about 1 year ago by mike-ravkine
Chat prompt format? (12 comments) · #3 opened about 1 year ago by mike-ravkine
Successful inference using a 24GB GPU · #2 opened about 1 year ago by mike-ravkine
Fix q_group_size (1 comment) · #1 opened about 1 year ago by mike-ravkine
ValueError: If `eos_token_id` is defined, make sure that `pad_token_id` is defined. · #2 opened about 1 year ago by mike-ravkine
Thanks for this model! (4 comments) · #1 opened about 1 year ago by mike-ravkine
special tokens in prompt with ggml/examples/starcoder (1 comment) · #3 opened over 1 year ago by mljxy
Upload merges.txt (5 comments) · #1 opened over 1 year ago by mike-ravkine
Cleaned version (5 comments) · #3 opened over 1 year ago by mike-ravkine
First attempt at a cleaning (3 comments) · #2 opened over 1 year ago by mike-ravkine
Upload solutions-2023-06-17.jsonl · #1 opened over 1 year ago by mike-ravkine
Does this model support FIM? (1 comment) · #1 opened over 1 year ago by mike-ravkine
The default eos_token_id is 2, should be 11 (3 comments) · #11 opened over 1 year ago by mike-ravkine
Update config.json (1 comment) · #10 opened over 1 year ago by mike-ravkine
Update config.json · #8 opened over 1 year ago by mike-ravkine
Default eos_token_id=2 is incorrect, needs to be 11 (1 comment) · #8 opened over 1 year ago by mike-ravkine
Fix eos_token_id to align with vocabulary of this model · #6 opened over 1 year ago by mike-ravkine
Fix eos_token_id to align with vocabulary of this model (1 comment) · #9 opened over 1 year ago by mike-ravkine
Any working tag/release/hash of AutoGPTQ? (4 comments) · #4 opened over 1 year ago by mike-ravkine
FIM tokens use _ as separator, not - (1 comment) · #2 opened over 1 year ago by mike-ravkine
Thanks! But what happened to the cross-encoder organization? (2 comments) · #1 opened over 1 year ago by mike-ravkine