csabakecskemeti posted an update 29 days ago
Repurposed my older AI workstation into a homelab server; it has received 2x V100 + 1x P40.
I can reach a huge 210k-token context size with MegaBeam-Mistral-7B-512k-GGUF at ~70+ tok/s, or run Llama-3.1-Nemotron-70B-Instruct-HF-GGUF with a 50k context at ~10 tok/s (on the V100s only: 40k ctx and ~15 tok/s).
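For anyone curious how a context that large gets set up, here's a minimal llama-cpp-python sketch; the model path and prompt are placeholders and this isn't necessarily my exact launch config, just the general idea of loading the GGUF with a big n_ctx and full GPU offload:

```python
from llama_cpp import Llama

# Sketch only: load the 512k-context GGUF with a ~210k-token window
# and offload all layers to the GPUs (llama.cpp splits them across cards).
llm = Llama(
    model_path="./MegaBeam-Mistral-7B-512k.Q4_K_M.gguf",  # hypothetical local quant file
    n_ctx=210_000,      # the ~210k token window mentioned above
    n_gpu_layers=-1,    # offload every layer; adjust if VRAM runs out
)

out = llm("Q: What fits in a 210k-token context?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```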
Also able to LoRA finetune with similar performance to an RTX 3090.
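For the LoRA part, something along these lines works with transformers + peft; the base model, rank, and target modules below are illustrative defaults rather than my exact recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # stand-in base model, not necessarily what I tuned
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common choice for Llama/Mistral-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights train, which keeps VRAM use modest
```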
It moved to the garage, so no complaints from the family about the noise. It will move to a rack soon :D

Nice, right above the fridge! ;)


Exactly :D
Overflow fridge will be replaced with a rack :)