Tokenizer? Gemma 27B
#13
by Kar02 - opened
Hi there,
Thanks for the previous help! I was able to fix the issue by updating to Nvidia's latest driver. Now I am trying to connect LM Studio to PrivateGPT, a local document Q&A (RAG) tool. I ran into some issues specifying the tokenizer in the YAML file. Could you help me figure out what the right tokenizer to put in would be?
```yaml
server:
  env_name: ${APP_ENV:vllm}

llm:
  mode: openailike
  max_new_tokens: 8192
  tokenizer: gemma-2-27b-it
  temperature: 0.8

embedding:
  mode: huggingface
  ingest_mode: simple

huggingface:
  embedding_hf_model_name: nomic-ai/nomic-embed-text-v1.5

openai:
  api_base: http://localhost:8000/v1
  api_key: lm-studio
  model: bartowski/gemma-2-27b-it-GGUF
  request_timeout: 600.0
```
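One thing I noticed while poking around: PrivateGPT appears to hand the `llm.tokenizer` value to Hugging Face's `AutoTokenizer.from_pretrained`, so I suspect it needs a fully qualified repo id rather than a bare model name. Is something like this (just my guess, and I believe the repo is gated, so it may also need a HF token) the right direction?

```yaml
llm:
  mode: openailike
  max_new_tokens: 8192
  # Guess: use the full Hugging Face repo id; the bare name
  # "gemma-2-27b-it" may not resolve on the Hub.
  tokenizer: google/gemma-2-27b-it
  temperature: 0.8
```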
Best