Update README.md
README.md
CHANGED
````diff
@@ -279,7 +279,7 @@ See the snippet below for usage with Transformers:
 ```python
 >>> import transformers
 >>> import torch
->>> model_id = "
+>>> model_id = "qompass/r3"
 >>> pipeline = transformers.pipeline(
     "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
 )
@@ -643,7 +643,7 @@ As part of the Llama 3 release, we updated our [Responsible Use Guide](https://l
 
 #### Llama 3-Instruct
 
-As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
+As outlined in the Meta Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
 
 <span style="text-decoration:underline;">Safety</span>
 
@@ -653,7 +653,7 @@ For our instruction tuned model, we conducted extensive red teaming exercises, p
 
 In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
 
-We built internal benchmarks and developed mitigations to limit false refusals making
+We built internal benchmarks and developed mitigations to limit false refusals, making R3 our most helpful model to date.
 
 
 #### Responsible release
@@ -669,7 +669,7 @@ If you access or use R3, a fine tuned version of Llama 3, you agree to the Accep
 
 <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
 
-LLaMA3 and by extension R3 undergone a
+Llama 3, and by extension R3, has undergone a twofold assessment of the model's safety in this area:
 
 
 
@@ -705,11 +705,11 @@ Please see the Responsible Use Guide available at [http://llama.meta.com/respons
 
 ## Citation
 
-@article{
+@article{R33modelcard,
 
-title={
+title={R3 3 Model Card},
 
-author={
+author={map@qompass},
 
 year={2024},
 
````
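For reference, the usage touched in the first hunk can be written as a self-contained sketch. The `qompass/r3` model id comes from the diff; the `pipeline_config` and `generate` helpers are illustrative names, not part of the repository, and actually running `generate` requires `transformers`, `torch`, and the model weights.

```python
# Hedged sketch of the README's Transformers usage. Only the argument
# assembly runs without dependencies; generation is deferred to call time.
MODEL_ID = "qompass/r3"  # model id added in the diff

def pipeline_config(model_id: str) -> dict:
    """Keyword arguments mirroring the README's transformers.pipeline call."""
    return {
        "task": "text-generation",
        "model": model_id,
        # The snippet passes torch.bfloat16; the equivalent string form is
        # used here so this sketch stays import-free (an assumption about
        # the transformers version in use).
        "model_kwargs": {"torch_dtype": "bfloat16"},
        "device_map": "auto",
    }

def generate(prompt: str, max_new_tokens: int = 64):
    """Builds the pipeline and generates from `prompt` (not executed here)."""
    import transformers  # heavy import deferred to call time

    pipe = transformers.pipeline(**pipeline_config(MODEL_ID))
    return pipe(prompt, max_new_tokens=max_new_tokens)
```

Keeping the configuration in one place makes it easy to swap `device_map` or the dtype without touching the generation call.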