casperhansen committed on
Commit 4e449ff
1 Parent(s): 43f37cd

Update README.md

Files changed (1):
  1. README.md +33 -0
README.md CHANGED
@@ -42,4 +42,37 @@ generation_output = model.generate(
     streamer=streamer,
     max_new_tokens=512
 )
+```
+
+### vLLM
+
+Support is added to vLLM:
+
+```
+pip install git+https://github.com/mistralai/vllm-release@add-mistral
+```
+
+Run using this model:
+
+```python
+from vllm import LLM, SamplingParams
+
+prompts = [
+    "Hello, my name is",
+    "The president of the United States is",
+    "The capital of France is",
+    "The future of AI is",
+]
+sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
+
+llm = LLM(model="casperhansen/mistral-7b-instruct-v0.1-awq", quantization="awq", dtype="half")
+
+outputs = llm.generate(prompts, sampling_params)
+
+# Print the outputs.
+for output in outputs:
+    prompt = output.prompt
+    generated_text = output.outputs[0].text
+    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+
 ```
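The `SamplingParams(temperature=0.8, top_p=0.95)` line above configures temperature scaling plus nucleus (top-p) sampling. As a rough sketch of what those two parameters mean — plain Python, not vLLM's actual implementation — temperature rescales the logits before the softmax, and top-p keeps only the smallest set of tokens whose cumulative probability reaches the threshold:

```python
import math

def top_p_filter(logits, top_p=0.95, temperature=0.8):
    """Return the indices of tokens that survive nucleus filtering.

    Illustrative sketch only: vLLM applies the same idea on GPU tensors.
    """
    # Temperature scaling: lower values sharpen the distribution.
    scaled = [l / temperature for l in logits]

    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the highest-probability tokens until their cumulative
    # probability reaches top_p; everything else is never sampled.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept

print(top_p_filter([2.0, 1.0, 0.1, -1.0]))  # → [0, 1, 2]
```

With these four logits, the two most likely tokens cover only ~92% of the mass, so a third token is kept to cross the 0.95 threshold; the least likely token can never be sampled.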