daqc committed on
Commit
933fe66
·
1 Parent(s): 5c32957

Add .env template

Files changed (1)
  1. .env.template +46 -0
.env.template ADDED
@@ -0,0 +1,46 @@
+ # Required: Hugging Face token with read/write permissions for repositories and the Inference API
+ # Get it from: https://huggingface.co/settings/tokens
+ HF_TOKEN=your_hugging_face_token_here
+
+ # Model Configuration
+ # Choose ONE of the following model setups:
+
+ ## 1. For Hugging Face Serverless Inference (Default):
+ MODEL=meta-llama/Llama-3.1-8B-Instruct
+ # MODEL=Qwen/Qwen2.5-1.5B-Instruct
+ # MODEL=mistralai/Mixtral-8x7B-Instruct-v0.1
+ # MODEL=HuggingFaceH4/zephyr-7b-beta
+
+ ## 2. For OpenAI-compatible API:
+ # OPENAI_BASE_URL=https://api.openai.com/v1/
+ # MODEL=gpt-4
+ # API_KEY=your_openai_api_key_here
+
+ ## 3. For Ollama:
+ # OLLAMA_BASE_URL=http://127.0.0.1:11434/
+ # MODEL=qwen2.5:32b-instruct-q5_K_S
+ # TOKENIZER_ID=Qwen/Qwen2.5-32B-Instruct
+
+ ## 4. For vLLM:
+ # VLLM_BASE_URL=http://127.0.0.1:8000/
+ # MODEL=Qwen/Qwen2.5-1.5B-Instruct
+ # TOKENIZER_ID=Qwen/Qwen2.5-1.5B-Instruct
+
+ ## 5. For Hugging Face Inference Endpoints or TGI:
+ # HUGGINGFACE_BASE_URL=http://127.0.0.1:3000/
+ # TOKENIZER_ID=meta-llama/Llama-3.1-8B-Instruct
+
+ # Generation Settings (Optional - these are the defaults)
+ MAX_NUM_TOKENS=2048
+ MAX_NUM_ROWS=1000
+ DEFAULT_BATCH_SIZE=5
+
+ # Chat/SFT Configuration
+ # Required for chat data generation with Llama or Qwen models
+ # Options: "llama3", "qwen2", or a custom template string
+ MAGPIE_PRE_QUERY_TEMPLATE=llama3
+
+ # Optional: Argilla Integration
+ # Follow https://docs.argilla.io/latest/getting_started/quickstart/
+ # ARGILLA_API_URL=https://[your-owner-name]-[your_space_name].hf.space
+ # ARGILLA_API_KEY=your_argilla_api_key_here
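The template asks you to set exactly one model backend, with the other variables acting as optional overrides. A minimal sketch of how an application might consume these variables is below; the variable names come from the template, but `resolve_backend` and `load_settings` are hypothetical helpers, not part of any real codebase, and the backend labels are illustrative:

```python
import os


def resolve_backend(env: dict) -> str:
    # Hypothetical: infer which of the five setups is active from which
    # *_BASE_URL variable is set (the template says to choose ONE).
    if env.get("OPENAI_BASE_URL"):
        return "openai-compatible"
    if env.get("OLLAMA_BASE_URL"):
        return "ollama"
    if env.get("VLLM_BASE_URL"):
        return "vllm"
    if env.get("HUGGINGFACE_BASE_URL"):
        return "endpoints-or-tgi"
    # No base URL configured: Hugging Face Serverless Inference (the default).
    return "hf-serverless"


def load_settings(env: dict) -> dict:
    # HF_TOKEN is the one required variable; fail fast if it is missing.
    token = env.get("HF_TOKEN")
    if not token:
        raise ValueError("HF_TOKEN is required")
    return {
        "backend": resolve_backend(env),
        "model": env.get("MODEL", "meta-llama/Llama-3.1-8B-Instruct"),
        # Optional generation settings fall back to the template defaults.
        "max_num_tokens": int(env.get("MAX_NUM_TOKENS", 2048)),
        "max_num_rows": int(env.get("MAX_NUM_ROWS", 1000)),
        "default_batch_size": int(env.get("DEFAULT_BATCH_SIZE", 5)),
        "magpie_template": env.get("MAGPIE_PRE_QUERY_TEMPLATE", "llama3"),
    }


if __name__ == "__main__":
    # In a real app you would pass os.environ after sourcing .env;
    # here a literal dict stands in for it.
    settings = load_settings(
        {"HF_TOKEN": "hf_xxx", "OLLAMA_BASE_URL": "http://127.0.0.1:11434/"}
    )
    print(settings["backend"])  # ollama
```

Note that the `TOKENIZER_ID` variables only appear in the self-hosted setups (Ollama, vLLM, Endpoints/TGI), where the tokenizer cannot be inferred from the model name alone.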