---
license: other
license_name: tongyi-qianwen
base_model: cognitivecomputations/dolphin-vision-72b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# DolphinVision 72b - 4.0bpw EXL2 🐬

Base model: [cognitivecomputations/dolphin-vision-72b](https://huggingface.co/cognitivecomputations/dolphin-vision-72b)

The language model has been quantized to 4.0 bpw with EXL2, with the original FP16 vision layers merged back into the quantized checkpoint.
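
For reference, producing a quant like this with exllamav2's bundled `convert.py` script looks roughly like the sketch below; the paths are placeholders, and the separate step of copying the FP16 vision layers back into the output is not shown.

```python
# Hedged sketch of an EXL2 4.0 bpw conversion, run from a checkout of the
# exllamav2 repo (https://github.com/turboderp/exllamav2). Paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/dolphin-vision-72b",               # FP16 source model
        "-o", "/tmp/exl2-work",                           # working dir for the measurement pass
        "-cf", "/models/dolphin-vision-72b-exl2-4.0bpw",  # compiled output directory
        "-b", "4.0",                                      # target bits per weight
    ],
    check=True,
)
```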
Text generation works in exllamav2/tabbyapi. Vision input is not working yet.
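
A minimal generation sketch with the exllamav2 Python API, assuming the quant has been downloaded to a local directory (the path and sampling values are placeholders; tabbyapi can be pointed at the same directory):

```python
# Hedged sketch: load the 4.0 bpw EXL2 quant and generate text with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/dolphin-vision-72b-exl2-4.0bpw"  # local copy of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # cache is allocated as layers load
model.load_autosplit(cache)                # split the 72b across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("Why do dolphins swim in pods?\n", settings, 200))
```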
N.B. the architecture in config.json has been changed from "BunnyQwenForCausalLM" to "Qwen2ForCausalLM" to prevent the model from being loaded as a llama architecture in tabbyapi.
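
If the override ever needs to be re-applied (for example after regenerating config.json), a one-off patch along these lines is enough; the path is a placeholder.

```python
# Hedged sketch: rewrite only the "architectures" field of config.json.
import json

path = "/models/dolphin-vision-72b-exl2-4.0bpw/config.json"  # placeholder path
with open(path) as f:
    cfg = json.load(f)

cfg["architectures"] = ["Qwen2ForCausalLM"]  # was ["BunnyQwenForCausalLM"]

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```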
|