MaziyarPanahi committed
Commit 54df7dc
1 Parent(s): 3431cbb

Update README.md (#5)


- Update README.md (47d5ceb801000fc61e3ee78588a59148291cad09)

Files changed (1)
  1. README.md +31 -0
README.md CHANGED
@@ -106,6 +106,37 @@ This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b)
 It achieves the following results on the evaluation set:
 - Loss: 1.1468
 
+ ## How to use
+
+ **PEFT**
+ ```python
+ from peft import PeftModel, PeftConfig
+ from transformers import AutoModelForCausalLM
+
+ model_id = "MaziyarPanahi/gemma-7b-alpaca-52k-v0.1"
+
+ config = PeftConfig.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
+ model = PeftModel.from_pretrained(model, model_id)
+ ```
+
+ **Transformers**
+ ```python
+ # Use a pipeline as a high-level helper
+ from transformers import pipeline
+
+ model_id = "MaziyarPanahi/gemma-7b-alpaca-52k-v0.1"
+
+ pipe = pipeline("text-generation", model=model_id)
+
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ ```
+
+
 ## Model description
 
 More information needed
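
The PEFT snippet added in this diff loads the base model and attaches the adapter but stops before tokenization and generation. Below is a minimal inference sketch built on that snippet; it assumes the `google/gemma-7b` tokenizer is appropriate for the adapter, and the Alpaca-style prompt template, dtype, device placement, and generation settings are illustrative choices, not taken from the commit:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/gemma-7b-alpaca-52k-v0.1"
base_id = "google/gemma-7b"

# Load the base model and attach the fine-tuned adapter, as in the PEFT snippet above.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, model_id)
model.eval()

# Illustrative Alpaca-style prompt; the exact template is an assumption, not specified in this README.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is in one sentence.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The `pipeline` object from the Transformers snippet can be driven the same way, e.g. `pipe(prompt, max_new_tokens=128)`, with generation keyword arguments forwarded to `generate`.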