For people with low/mid PC specs

#26
by bhaveshNOm - opened
  1. If your download stops at 40% (2/5), just ignore it and close the tab, but be sure to delete the partially downloaded model files or you'll get an error like "file not found". After that, put the model you downloaded in its place.

  2. If you get a "CpuDefaultAllocator: out of memory" error, you have to use swap memory (a bigger page file); you can find tutorials online. If "System managed size" doesn't work, use the "Custom size" option and click "Set". It should start working now.
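On Windows the page file can also be resized from an elevated command prompt instead of the settings GUI; a rough sketch (the 16384 MB size and the `C:\pagefile.sys` path are just examples, adjust them to your disk):

```shell
:: Run in an elevated (administrator) command prompt.
:: Turn off automatic page-file management...
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
:: ...then set a fixed 16 GB page file (sizes are in MB). Reboot afterwards.
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=16384
```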

  3. If it still doesn't work, edit the start .bat file and change the launch line to "call python server.py --auto-devices --chat --wbits 4 --groupsize 128 --pre_layer 25 --gpu-memory 5". The most useful flag is --pre_layer, which sets how many layers are offloaded to the GPU: start from --pre_layer 35 and go lower (e.g. 25) until it fits in memory. Note that the lower you go, the slower responses get, so raise it back up as high as your VRAM allows.
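For reference, the edited start script might look something like this (this is a sketch, not your exact file: keep whatever environment-activation lines your original .bat has, and treat the flag values as one plausible combination):

```shell
@echo off
:: Hypothetical start.bat sketch -- only the launch line matters here.
:: --pre_layer 25 puts 25 layers on the GPU; lower it if you run out of VRAM.
call python server.py --auto-devices --chat --wbits 4 --groupsize 128 --pre_layer 25 --gpu-memory 5
```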

  4. You may now be able to run it, but it may still not answer you because your GPU memory is low; you can't do anything about that other than lowering --pre_layer.
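A quick way to pick a starting --pre_layer value is a back-of-the-envelope VRAM estimate. Everything below (7 GB of weights, 40 layers, 1 GB overhead) is an illustrative assumption, not a measured number; check your actual model size and layer count:

```python
def layers_that_fit(vram_gb, model_size_gb=7.0, num_layers=40, overhead_gb=1.0):
    """Rough starting value for --pre_layer: how many layers fit in VRAM."""
    per_layer_gb = model_size_gb / num_layers   # weight size of one layer
    usable_gb = max(vram_gb - overhead_gb, 0)   # leave headroom for activations
    return min(num_layers, int(usable_gb / per_layer_gb))

print(layers_that_fit(5))   # → 22 with these assumed numbers (5 GB card)
```

Then start --pre_layer a little below the estimate and nudge it up or down from there.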

  5. If you still can't run it, you have no option other than running it on your CPU using llama.cpp. It will be hellishly slow, it's hard to change parameters, and it doesn't have many of the features of oobabooga (or whatever that is). It also runs in cmd, so there are other problems, like not being able to just paste multiple lines, and it will take a long time to generate a response.
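If you do fall back to llama.cpp, the invocation is just one terminal command; a minimal sketch, assuming you've already built it and have a 4-bit quantized model file (the path and filename below are made up, point -m at your own model):

```shell
# Interactive chat on CPU with llama.cpp; -m points at your quantized model.
./main -m ./models/ggml-model-q4_0.bin \
  --color -i \
  -n 256 \
  -p "You are a helpful assistant."
```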

  6. If you want to run it on your CPU and want a GUI, you can use Alpaca Electron; here is a tutorial: https://www.youtube.com/watch?v=KopKQDmGk_o. It doesn't have proper memory and won't be able to hold a conversation, but it's a little faster and has a GUI, so it's nice, and you can run other CPU models on it as well.

  7. This is the best you can have: use Koala. It's similar to, if not better than, Alpaca x GPT-4, and it's also uncensored; here is the tutorial: https://www.youtube.com/watch?v=AZUTsp9Et-o. You can also run it in the browser, along with other models like Vicuna, at https://chat.lmsys.org/?model=koala-13b, but that's not very private. You can also run it locally or on Google Cloud; instructions are in the tutorial.

I hope this helps. If I missed anything or you have something better, please tell me 🙂
