cognitivess committed on
Commit d981a81 (1 parent: b9b2851)

Update README.md

Files changed (1)
  1. README.md +24 -3
README.md CHANGED
@@ -11,8 +11,29 @@ pip install git+https://huggingface.co/CognitivessAI/cognitivess
  Then, you can use the model like this:
 
  ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- tokenizer = AutoTokenizer.from_pretrained('CognitivessAI/cognitivess')
- model = AutoModelForCausalLM.from_pretrained('CognitivessAI/cognitivess')
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+ import torch
+
+ # Set up quantization config
+ quantization_config = BitsAndBytesConfig(load_in_8bit=True)
+
+ # Load tokenizer and model
+ tokenizer = AutoTokenizer.from_pretrained("CognitivessAI/cognitivess")
+ model = AutoModelForCausalLM.from_pretrained(
+     "CognitivessAI/cognitivess",
+     quantization_config=quantization_config,
+     device_map="auto"  # This will automatically distribute the model across available GPUs
+ )
+
+ # Prepare input
+ input_text = "Write me a poem about Machine Learning."
+ inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ # Generate output
+ with torch.no_grad():
+     outputs = model.generate(**inputs, max_length=100)
+
+ # Decode and print the result
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
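The motivation for `load_in_8bit=True` is memory: 8-bit quantization stores each weight in one byte, versus two bytes per parameter for fp16/bf16. As a rough, weight-only sizing sketch (the 7B parameter count below is a hypothetical assumption; the README does not state the model's size, and real usage adds activation and overhead memory on top):

```python
def weight_memory_gib(num_params: float, bytes_per_param: float) -> float:
    """Rough weight-only memory footprint in GiB, ignoring activations and overhead."""
    return num_params * bytes_per_param / 2**30

params = 7e9  # hypothetical 7B-parameter model (assumption, not from the README)
fp16 = weight_memory_gib(params, 2)  # fp16/bf16: 2 bytes per parameter
int8 = weight_memory_gib(params, 1)  # load_in_8bit=True: 1 byte per parameter
print(f"fp16 ~ {fp16:.1f} GiB, int8 ~ {int8:.1f} GiB")
```

Under these assumptions, 8-bit loading halves the weight footprint (roughly 13 GiB down to 6.5 GiB), which is what lets the model fit on a single consumer GPU together with `device_map="auto"`.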