failed to load model in LM Studio 0.2.24

#32
by skzz - opened

Could you tell me how to load in LM studio or GPT4ALL?

```
{
  "cause": "(Exit code: -36861). Unknown error. Try a different model and/or config.",
  "suggestion": "",
  "data": {
    "memory": {
      "ram_capacity": "31.95 GB",
      "ram_unused": "14.69 GB"
    },
    "gpu": {
      "gpu_names": [
        "NVIDIA GeForce RTX 3090"
      ],
      "vram_recommended_capacity": "24.00 GB",
      "vram_unused": "22.76 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.19045",
      "supports_avx2": true
    },
    "app": {
      "version": "0.2.24",
      "downloadsDir": "S:\\llama3"
    },
    "model": {}
  },
  "title": "Error loading model."
}
```

Please load the GGUF file with llama.cpp instead; see the link below.
https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf

Use one of the quantized files, from 4-bit (Q4) up to 8-bit (Q8). If you really want the full 16-bit float weights, use the F16 one.
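As a rough sketch of the steps above (the exact `.gguf` filename is an assumption — check the repository's file list for the real name, and note that a vision model like this may need a llama.cpp build with multimodal support):

```shell
# Fetch one quantized file from the linked repo.
# NOTE: the filename "ggml-model-Q4_K_M.gguf" is an assumption;
# verify it against the repo's Files tab before downloading.
huggingface-cli download openbmb/MiniCPM-Llama3-V-2_5-gguf \
    ggml-model-Q4_K_M.gguf --local-dir .

# Run it with a compiled llama.cpp binary.
# -ngl 99 offloads as many layers as possible to the GPU (RTX 3090 here).
./llama-cli -m ggml-model-Q4_K_M.gguf -ngl 99 -p "Hello"
```

A Q4 file trades some quality for roughly a quarter of the F16 memory footprint, which fits comfortably in 24 GB of VRAM.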
