Gemma v2 2b Instruct - llamafile
Gemma v2 is a large language model released by Google on July 31st 2024.
- Model creator: Google
- Original model: google/gemma-2-2b-it
The model is packaged into executable weights, which we call llamafiles. This makes it easy to use the model on Linux, macOS, Windows, FreeBSD, OpenBSD 7.3, and NetBSD for AMD64 and ARM64.
Software Last Updated: 2024-10-30
Quickstart
To get started, you need both the Gemma weights and the llamafile software. Both of them are included in a single file, which can be downloaded and run as follows:
wget https://huggingface.co/Mozilla/gemma-2-2b-it-llamafile/resolve/main/gemma-2-2b-it.Q6_K.llamafile
chmod +x gemma-2-2b-it.Q6_K.llamafile
./gemma-2-2b-it.Q6_K.llamafile
The default mode of operation for these llamafiles is our new command line chatbot interface.
Having trouble? See the "Gotchas" section of the README.
Usage
By default, llamafile launches a chatbot in the terminal, and a server in the background. The chatbot is mostly self-explanatory. You can type /help for further details. See the llamafile v0.8.15 release notes for documentation on our newest chatbot features.
To instruct Gemma to do role playing, you can customize the system prompt as follows:
./gemma-2-2b-it.Q6_K.llamafile --chat -p "you are mosaic's godzilla"
To view the man page, run:
./gemma-2-2b-it.Q6_K.llamafile --help
To send a request to the OpenAI API compatible llamafile server, try:
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma-2b-it",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.0
}'
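Because the server speaks the OpenAI chat completions protocol, you can also point the official openai Python client at it. The snippet below is a minimal sketch, assuming pip install openai; the api_key value is a placeholder, since the local server does not check it.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # the local llamafile server
    api_key="sk-no-key-required",         # placeholder; the server ignores it
)
completion = client.chat.completions.create(
    model="gemma-2b-it",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.0,
)
print(completion.choices[0].message.content)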
If you don't want the chatbot and you only want to run the server:
./gemma-2-2b-it.Q6_K.llamafile --server --nobrowser --host 0.0.0.0
An advanced CLI mode is provided that's useful for shell scripting. You can use it by passing the --cli flag. For additional help on how it may be used, pass the --help flag.
./gemma-2-2b-it.Q6_K.llamafile --cli -p 'four score and seven' --log-disable
You then need to fill out the prompt / history template (see below).
For further information, please see the llamafile README.
Troubleshooting
Having trouble? See the "Gotchas" section of the README.
On Linux, the way to avoid run-detector errors is to install the APE interpreter.
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
On Windows there's a 4GB limit on executable sizes. This means you should download the Q6_K llamafile.
Context Window
This model has a max context window size of 8k tokens. By default, a context window size of 8192 tokens is used. You may limit the context window size by passing the -c N flag.
GPU Acceleration
On GPUs with sufficient RAM, the -ngl 999 flag may be passed to use the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card driver needs to be installed if you own an NVIDIA GPU. On Windows, if you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass the flags --recompile --gpu amd the first time you run your llamafile.
On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to perform matrix multiplications. This is open source software, but it doesn't go as fast as closed source cuBLAS. If you have the CUDA SDK installed on your system, then you can pass the --recompile flag to build a GGML CUDA library just for your system that uses cuBLAS. This ensures you get maximum performance.
For further information, please see the llamafile README.
About llamafile
llamafile is a new format introduced by Mozilla on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.
About Quantization Formats
This model works well with any quantization format. Q6_K is the best choice overall here. In our testing with the 27B Gemma 2 llamafiles, the llamafile implementation of Gemma 2 produced responses identical to the Gemma 2 model hosted by Google on aistudio.google.com, so we expect these 2B llamafiles to be equally faithful to Google's intentions. If you encounter any divergences, try the BF16 weights, which have the original fidelity.
See Also
There are higher quality versions of this model available as llamafiles, which require more memory.
- https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile
- https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile
The 9B and 27B models were released a month earlier than the 2B, so they're packaged with a slightly older version of the llamafile software.
License
The llamafile software is open source and permissively licensed. However the weights embedded inside the llamafiles are governed by Google's Gemma License and Gemma Prohibited Use Policy. See the LICENSE file for further details.
Gemma 2 model card
Model Page: Gemma
Resources and Technical Documentation:
Terms of Use: Terms
Authors: Google
Model Information
Summary description and brief definition of inputs and outputs.
Description
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
pip install -U transformers
Then, copy the snippet from the section that is relevant for your use case.
Running with the pipeline API
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-2b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
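The pipeline returns the full chat history, so a multi-turn conversation can be continued by appending to it. A minimal sketch (the follow-up question is illustrative):
messages = outputs[0]["generated_text"]  # includes the assistant's reply above
messages.append({"role": "user", "content": "Now say that again in plain English."})
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"].strip())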
Running the model on a single / multi GPU
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
You can ensure the correct chat template is applied by using tokenizer.apply_chat_template
as follows:
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
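Note that tokenizer.decode(outputs[0]) prints the prompt and the special tokens as well. A small sketch of printing only the model's reply, by slicing off the prompt tokens before decoding:
response = outputs[0][input_ids["input_ids"].shape[-1]:]  # keep only newly generated tokens
print(tokenizer.decode(response, skip_special_tokens=True))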
Running the model on a GPU using different precisions
The native weights of this model were exported in bfloat16 precision. You can also use float32 if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to float32). See examples below.
- Upcasting to torch.float32
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
Running the model through a CLI
The local-gemma repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the installation instructions for getting started, then launch the CLI through the following command:
local-gemma --model 2b --preset speed
Quantized Versions through bitsandbytes
Using 8-bit precision (int8)
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
Using 4-bit precision
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
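bitsandbytes also supports NF4 4-bit quantization with a bfloat16 compute dtype, which often preserves quality better than plain 4-bit. A sketch of such a config, which can be passed to from_pretrained exactly as in the snippet above (the parameter values here are illustrative, not an official recommendation):
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 data type for the 4-bit weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bfloat16
)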
Advanced Usage
Torch compile
Torch compile is a method for speeding up the inference of PyTorch modules. The Gemma 2 2B model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
For more details, refer to the Transformers documentation.
Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
At this point, the prompt contains the following text:
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
As you can see, each turn is preceded by a <start_of_turn> delimiter and then the role of the entity (either user, for content supplied by the user, or model for LLM responses). Turns finish with the <end_of_turn> token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.
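For instance, a minimal sketch of building the prompt by hand from a list of messages (the helper build_gemma_prompt is ours, not part of any library):
def build_gemma_prompt(messages):
    # Follows the template shown above: <start_of_turn>{role}\n{content}<end_of_turn>\n
    prompt = "<bos>"
    for message in messages:
        role = "model" if message["role"] == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{message['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # generation prompt: the model speaks next
    return prompt

prompt = build_gemma_prompt([{"role": "user", "content": "Write a hello world program"}])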
After the prompt is ready, generation can be performed like this:
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
Inputs and outputs
- Input: Text string, such as a question, a prompt, or a document to be summarized.
- Output: Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.
Citation
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
Model Data
Data used for model training and how the data was processed.
Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens, the 9B model was trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content.
- Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats.
Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with our policies.
Implementation Information
Details about the model internals.
Hardware
Gemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5p).
Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.
These advantages are aligned with Google's commitments to operate sustainably.
Software
Training was done using JAX and ML Pathways.
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the paper about the Gemini family of models; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."
Evaluation
Model evaluation metrics and results.
Benchmark Results
These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:
Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
---|---|---|---|---|
MMLU | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
HellaSwag | 10-shot | 73.0 | 81.9 | 86.4 |
PIQA | 0-shot | 77.8 | 81.7 | 83.2 |
SocialIQA | 0-shot | 51.9 | 53.4 | 53.7 |
BoolQ | 0-shot | 72.5 | 84.2 | 84.8 |
WinoGrande | partial score | 70.9 | 80.6 | 83.7 |
ARC-e | 0-shot | 80.1 | 88.0 | 88.6 |
ARC-c | 25-shot | 55.4 | 68.4 | 71.4 |
TriviaQA | 5-shot | 59.4 | 76.6 | 83.7 |
Natural Questions | 5-shot | 16.7 | 29.2 | 34.5 |
HumanEval | pass@1 | 17.7 | 40.2 | 51.8 |
MBPP | 3-shot | 29.6 | 52.4 | 62.6 |
GSM8K | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
MATH | 4-shot | 15.0 | 36.6 | 42.3 |
AGIEval | 3-5-shot | 30.6 | 52.8 | 55.1 |
DROP | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
BIG-Bench | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
Ethics and Safety
Ethics and safety evaluation approach and results.
Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
- Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech.
- Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as WinoBias and BBQ Dataset.
- Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure.
- Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks.
Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds for meeting internal policies for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here.
Gemma 2.0
Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
---|---|---|---|---|
RealToxicity | average | 8.16 | 8.25 | 8.84 |
CrowS-Pairs | top-1 | 37.67 | 37.47 | 36.67 |
BBQ Ambig | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
BBQ Disambig | top-1 | 69.31 | 82.67 | 86.94 |
Winogender | top-1 | 52.91 | 79.17 | 77.22 |
TruthfulQA | | 43.72 | 50.27 | 51.60 |
Winobias 1_2 | | 59.28 | 78.09 | 81.94 |
Winobias 2_2 | | 88.57 | 95.32 | 97.22 |
Toxigen | | 48.32 | 39.30 | 38.42 |
Dangerous Capability Evaluations
Evaluation Approach
We evaluated a range of dangerous capabilities:
- Offensive cybersecurity: To assess the model's potential for misuse in cybersecurity contexts, we utilized both publicly available Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as well as internally developed CTF challenges. These evaluations measure the model's ability to exploit vulnerabilities and gain unauthorized access in simulated environments.
- Self-proliferation: We evaluated the model's capacity for self-proliferation by designing tasks that involve resource acquisition, code execution, and interaction with remote systems. These evaluations assess the model's ability to independently replicate and spread.
- Persuasion: To evaluate the model's capacity for persuasion and deception, we conducted human persuasion studies. These studies involved scenarios that measure the model's ability to build rapport, influence beliefs, and elicit specific actions from human participants.
Evaluation Results
All evaluations are described in detail in Evaluating Frontier Models for Dangerous Capabilities and in brief in the Gemma 2 technical report.
Evaluation | Capability | Gemma 2 IT 27B |
---|---|---|
InterCode-CTF | Offensive cybersecurity | 34/76 challenges |
Internal CTF | Offensive cybersecurity | 1/13 challenges |
Hack the Box | Offensive cybersecurity | 0/13 challenges |
Self-proliferation early warning | Self-proliferation | 1/10 challenges |
Charm offensive | Persuasion | Percent of participants agreeing: 81% interesting, 75% would speak again, 80% made personal connection |
Click Links | Persuasion | 34% of participants |
Find Info | Persuasion | 9% of participants |
Run Code | Persuasion | 11% of participants |
Money talks | Persuasion | £3.72 mean donation |
Web of Lies | Persuasion | 18% mean shift towards correct belief, 1% mean shift towards incorrect belief |
Usage and Limitations
These models have certain limitations that users should be aware of.
Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
- Research and Education
- Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
- Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.
Limitations
- Training Data
- The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas the model can handle effectively.
- Context and Task Complexity
- LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
- A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- Common Sense
- LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.
Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing is described and posterior evaluations are reported in this card.
- Misinformation and Misuse
- LLMs can be misused to generate text that is false, misleading, or harmful.
- Guidelines are provided for responsible use with the model, see the Responsible Generative AI Toolkit.
- Transparency and Accountability:
- This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
- Perpetuation of biases: Continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
- Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the Gemma Prohibited Use Policy.
- Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
Benefits
At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.