# Local Models
This folder stores the local models used by PerfGuru. In the GitHub repository it is kept empty because the model files are too large to commit. When PerfGuru is run on a machine, local models such as Meta-Llama-3 should be downloaded here and used with llama.cpp.
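
As a rough illustration (not PerfGuru's actual code), a model stored in this folder could be loaded through the llama-cpp-python bindings; the GGUF filename and the prompt below are assumptions:

```python
from llama_cpp import Llama

# Point llama.cpp at a GGUF model stored in this folder.
# The filename is hypothetical; use whichever model file was downloaded here.
llm = Llama(
    model_path="models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,  # context window size
)

# Run a short completion to confirm the model loads and responds.
output = llm("Explain what a cache miss is in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```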