DBMe
AI & ML interests
I do optimized EXL2 quants for 48GB VRAM setups.
Recent Activity
- Updated a model about 2 months ago: DBMe/Behemoth-123B-v1.2-2.85bpw-h6-exl2
- Updated a model about 2 months ago: DBMe/Monstral-123B-v2-2.85bpw-h6-exl2
- Updated a model about 2 months ago: DBMe/Endurance-100B-v1-3.6bpw-h6-exl2
Organizations
None yet
DBMe's activity
- Update README.md (1): #1 opened about 2 months ago by bullerwins
- Set PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync (4): #1 opened 4 months ago by John198 (see the sketch below)
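The discussion title above refers to PyTorch's CUDA caching-allocator configuration. As a minimal sketch (not taken from the discussion itself): the variable must be set before PyTorch makes its first CUDA allocation, and the verification call assumes a recent PyTorch release that exposes `torch.cuda.get_allocator_backend()`.

```python
import os

# Must be set before PyTorch initializes CUDA, i.e. before the first
# CUDA tensor is created or torch.cuda is otherwise touched.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch

# Confirm the async allocator backend is active; prints "cudaMallocAsync"
# (assumes a PyTorch version that exposes this helper).
print(torch.cuda.get_allocator_backend())
```

The cudaMallocAsync backend can reduce fragmentation when a large EXL2 quant nearly fills 48 GB of VRAM, which is presumably why the setting comes up in these model discussions.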