This model was converted to GGUF format from [`nvidia/OpenMath2-Llama3.1-8B`](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B) for more details on the model.
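The same conversion can also be reproduced locally with llama.cpp's converter script. A minimal sketch, assuming a local llama.cpp checkout and a downloaded copy of the original checkpoint; the paths and output type below are illustrative, not what this repo actually used:

```bash
# Sketch: local GGUF conversion with llama.cpp (this repo was instead produced
# with the GGUF-my-repo space). Paths and the output type are illustrative.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt
python llama.cpp/convert_hf_to_gguf.py ./OpenMath2-Llama3.1-8B \
    --outfile OpenMath2-Llama3.1-8B-f16.gguf --outtype f16
```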
 
---

## Model details

OpenMath2-Llama3.1-8B is obtained by finetuning Llama3.1-8B-Base with OpenMathInstruct-2.

The model outperforms Llama3.1-8B-Instruct on all the popular math benchmarks we evaluate on, especially on MATH, where it improves by 15.9%.

## How to use the models

Our models are trained with the same "chat format" as the Llama3.1-Instruct models (same system/user/assistant tokens). Please note that these models have not been instruction-tuned on general data and thus might not provide good answers outside the math domain.
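Since the checkpoint reuses the Llama 3.1 chat template, you can inspect the exact prompt string the model will see before running a full generation. A minimal sketch, assuming the tokenizer bundled with the model ships the standard Llama 3.1 chat template:

```python
# Sketch: render the Llama 3.1 chat prompt without generating anything.
# Assumes the bundled tokenizer carries the standard chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/OpenMath2-Llama3.1-8B")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is 2 + 2?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # shows the <|start_header_id|>/<|end_header_id|> token layout
```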
We recommend using the instructions in our repo to run inference with these models, but here is an example of how to do it through the Transformers API:
```python
import transformers
import torch

model_id = "nvidia/OpenMath2-Llama3.1-8B"

# Load the model in bfloat16 and spread it across the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# The prompt asks for the final answer inside \boxed{} so it is easy to parse.
messages = [
    {
        "role": "user",
        "content": "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.\n\n"
        + "What is the minimum value of $a^2+6a-7$?",
    },
]

outputs = pipeline(
    messages,
    max_new_tokens=4096,
)
print(outputs[0]["generated_text"][-1]["content"])
```
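The prompt above asks the model to put its final answer inside `\boxed{}`, so downstream code typically extracts that value from the generated text. A minimal sketch with a hypothetical helper (not part of the model card); for the sample problem the expected answer is -16:

```python
# Sketch: pull the final answer out of the \boxed{...} wrapper the prompt
# requests. extract_boxed_answer is a hypothetical helper, not an official API.
import re

def extract_boxed_answer(text):
    match = re.search(r"\\boxed\{([^{}]*)\}", text)
    return match.group(1) if match else None

print(extract_boxed_answer(r"The minimum value is \boxed{-16}."))  # prints -16
```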
## Reproducing our results

We provide all instructions to fully reproduce our results.
## Citation

If you find our work useful, please consider citing us!

```bibtex
@article{toshniwal2024openmath2,
  title   = {OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data},
  author  = {Shubham Toshniwal and Wei Du and Ivan Moshkov and Branislav Kisacanin and Alexan Ayrapetyan and Igor Gitman},
  year    = {2024},
  journal = {arXiv preprint arXiv:2410.01560}
}
```
## Terms of use

By accessing this model, you are agreeing to the Llama 3.1 terms and conditions of the license, acceptable use policy, and Meta's privacy policy.

---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
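```bash
brew install llama.cpp
```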