Sophosympatheia (sophosympatheia)

AI & ML interests: Merging large Llama2 models at the 70B and 103B sizes; using LLMs for roleplaying and interactive fiction.

Organizations: None yet
sophosympatheia's activity
- Control vector discussion · 450 · #2 opened 4 months ago by ChuckMcSneed
- Ultra HD merge · 6 · #4 opened about 2 months ago by Autumnlight
- Merge failed · 4 · #3 opened 2 months ago by Autumnlight
- Upload model weights · 2 · #1 opened 2 months ago by sophosympatheia
- Merge recipe to produce the intermediate models? · 2 · #2 opened 2 months ago by sophosympatheia
- Instruction following issues · 3 · #4 opened 3 months ago by lightning-missile
- rep_pen_slope in koboldcpp · 1 · #11 opened 3 months ago by lightning-missile
- The question is about the next model. · 2 · #8 opened 3 months ago by Kotokin
- Update README.md · 3 · #7 opened 3 months ago by llama-anon
- How was the rope_theta value determined? · 1 · #6 opened 4 months ago by ddh0
- Update README.md · 1 · #5 opened 4 months ago by altomek
- Hey, Do I need ALL models? · 1 · #3 opened 4 months ago by JRTCloud9
- Update config.jon · 2 · #3 opened 4 months ago by BigHuggyD
- Upload model weights · 2 · #1 opened 4 months ago by sophosympatheia
- Very nice model. · 116 · #3 opened 6 months ago by Iommed
- Love the name! · 11 · #1 opened 5 months ago by jukofyork
- Poor grammer? · 40 · #2 opened 6 months ago by colourspec
- Is it possible to make Midnight-Mixtral? · 2 · #10 opened 5 months ago by ddh0
- Anyone hosting this on a service? · 1 · #9 opened 5 months ago by unluckyton
- Appreciate your work · #1 opened 6 months ago by sophosympatheia
- Censorship and dataset questions · 3 · #8 opened 6 months ago by SicariusSicariiStuff
- Question about prompting and System prompt in Vicuna format · 4 · #10 opened 7 months ago by houmie
- Better than Midnight-Rose (llama based)?? · 1 · #5 opened 7 months ago by Puchacz19
- Metadata: encode merged models as `base_model` · 1 · #4 opened 7 months ago by julien-c
- Merged a 103B version · 2 · #3 opened 7 months ago by FluffyKaeloky
- Almost the same as what I have been planning! · 17 · #2 opened 8 months ago by froggeric
- Merged... :P · 2 · #5 opened 7 months ago by altomek
- Thank You! · 2 · #7 opened 7 months ago by playcrackthesky
- Upload model weights · 2 · #1 opened 7 months ago by sophosympatheia
- Adding Evaluation Results · #4 opened 8 months ago by leaderboard-pr-bot
- 5.0 bpw exl2 quant request · 4 · #2 opened 8 months ago by BeefyRook
- EXL2 Quants · 9 · #2 opened 8 months ago by Dracones
- GGUF Quants · 2 · #3 opened 8 months ago by Dracones
- Upload model weights · 2 · #1 opened 8 months ago by sophosympatheia
- Adding Evaluation Results · #5 opened 8 months ago by leaderboard-pr-bot
- Feedback · 6 · #6 opened 8 months ago by Szarka
- Hi, I made gptq quant. · 12 · #3 opened 8 months ago by Kotokin
- fix quant link · 1 · #5 opened 8 months ago by mradermacher
- I have a question about increased context · 1 · #4 opened 8 months ago by AS1200
- Upload model weights · 2 · #1 opened 8 months ago by sophosympatheia
- Scores very highly on EQ Bench! · 6 · #3 opened 8 months ago by BoshiAI
- quantize to 4bit · 1 · #2 opened 9 months ago by Kotokin
- Very interesting that miqu will give 16k context work even only first layer and last layer · 15 · #2 opened 8 months ago by akoyaki
- Recommendation for subsequent versions · 1 · #6 opened 8 months ago by sophosympatheia
- Exl2 Quants · 1 · #3 opened 8 months ago by llmixer
- VRAM requirements · 8 · #1 opened 8 months ago by sophosympatheia
- GGUF quants available · 2 · #2 opened 8 months ago by mradermacher
- License? · 1 · #2 opened 8 months ago by Artefact2
- Upload model weights · 2 · #1 opened 8 months ago by sophosympatheia