---
language: en
license: mit
tags:
- natural-language-inference
- sentence-transformers
- transformers
- nlp
- model-card
---
# all-MiniLM-L6-v2-nli
- **Base Model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Task:** Natural Language Inference (NLI)
- **Framework:** Hugging Face Transformers, Sentence Transformers
all-MiniLM-L6-v2-nli is a fine-tuned NLI model that classifies the relationship between a pair of sentences as entailment, neutral, or contradiction. It builds on [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), fine-tuned as a cross-encoder for sentence-pair classification.
## Intended Use
all-MiniLM-L6-v2-nli suits applications that depend on the logical relationship between sentences (see the sketch after this list), including:
- Semantic textual similarity
- Question answering
- Dialogue systems
- Content moderation
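As one example, the contradiction label can back a simple consistency check, such as flagging claims that a source passage contradicts. The sketch below uses the Sentence Transformers API shown later in this card; `is_contradicted` is a hypothetical helper written for illustration, not part of the model's API.

```python
from sentence_transformers import CrossEncoder

# Label order per this model card: 0=entailment, 1=neutral, 2=contradiction.
LABELS = ["entailment", "neutral", "contradiction"]
model = CrossEncoder("agentlans/all-MiniLM-L6-v2-nli")

def is_contradicted(premise: str, claim: str) -> bool:
    """Hypothetical helper: True if the model predicts that `premise` contradicts `claim`."""
    scores = model.predict([(premise, claim)])
    return LABELS[int(scores.argmax(axis=1)[0])] == "contradiction"

print(is_contradicted(
    "The store closes at 9 pm on weekdays.",
    "The store is open all night on Tuesdays.",
))  # A check like this can support content moderation or QA answer filtering.
```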
## Performance
all-MiniLM-L6-v2-nli was trained on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. Performance on the MNLI matched validation set (a reproduction sketch follows these figures):
- Accuracy: 0.7183
- Precision: 0.72
- Recall: 0.72
- F1-score: 0.72
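The figures above can in principle be checked with a short evaluation script. The sketch below is an assumption about the evaluation setup, not the author's script: it loads the MNLI matched validation split (here via the `nyu-mll/multi_nli` dataset id) and assumes the model's label order (entailment, neutral, contradiction) lines up with MNLI's integer labels 0/1/2, as the usage examples later in this card suggest.

```python
from datasets import load_dataset
from sentence_transformers import CrossEncoder
from sklearn.metrics import accuracy_score, classification_report

data = load_dataset("nyu-mll/multi_nli", split="validation_matched")
model = CrossEncoder("agentlans/all-MiniLM-L6-v2-nli")

# Score every (premise, hypothesis) pair and take the argmax class.
pairs = list(zip(data["premise"], data["hypothesis"]))
preds = model.predict(pairs).argmax(axis=1)

print("accuracy:", accuracy_score(data["label"], preds))
print(classification_report(data["label"], preds, digits=2))
```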
## Training Details
<details>
<summary><strong>Training Details</strong></summary>

- **Dataset:**
- Used [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli).
- **Sampling:**
- 100,000 training samples and 10,000 evaluation samples.
- **Fine-tuning Process:**
- Custom Python script with mixed-precision (bfloat16) training.
- Early stopping based on evaluation loss.
- **Hyperparameters:**
- **Learning Rate:** 2e-5
- **Batch Size:** 64
- **Optimizer:** AdamW (weight decay: 0.01)
- **Training Duration:** Up to 10 epochs
</details>
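The training script itself is not published; the sketch below mirrors the listed hyperparameters with the standard Hugging Face `Trainer` as one plausible setup. The `pair-class` subset name, the tokenization length, the evaluation-sample selection, and the early-stopping patience are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

base = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

# all-nli's pair-class subset labels pairs 0=entailment, 1=neutral, 2=contradiction.
raw = load_dataset("sentence-transformers/all-nli", "pair-class")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)  # max_length is an assumption

# The card reports 100,000 train / 10,000 eval samples; how they were drawn is assumed here.
shuffled = raw["train"].shuffle(seed=42)
train = shuffled.select(range(100_000)).map(tokenize, batched=True)
eval_ds = shuffled.select(range(100_000, 110_000)).map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="all-MiniLM-L6-v2-nli",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=10,
    weight_decay=0.01,
    bf16=True,                          # bfloat16 training, as listed above
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",  # early stopping tracks evaluation loss
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],  # patience assumed
)
trainer.train()
```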
<details>
<summary><strong>Reproducibility</strong></summary>

To ensure reproducibility:
- Fixed random seed: 42
- Environment:
- Python: 3.10.12
- PyTorch: 2.5.1
- Transformers: 4.44.2
</details>
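A minimal way to apply these settings in a script of your own (a sketch; `set_seed` covers the Python, NumPy, and PyTorch RNGs):

```python
import torch
from transformers import set_seed

set_seed(42)  # fixed seed reported in this card
# Optional, stricter determinism; some CUDA ops raise if no deterministic kernel exists.
torch.use_deterministic_algorithms(True)
```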
## Usage Instructions
### Using Sentence Transformers
```python
from sentence_transformers import CrossEncoder

model_name = "agentlans/all-MiniLM-L6-v2-nli"
model = CrossEncoder(model_name)

# Score each (premise, hypothesis) pair; the model returns one logit per class.
scores = model.predict(
    [
        ("A man is eating pizza", "A man eats something"),
        (
            "A black race car starts up in front of a crowd of people.",
            "A man is driving down a lonely road.",
        ),
    ]
)

# Map each pair's highest-scoring class index to its label.
label_mapping = ["entailment", "neutral", "contradiction"]
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
print(labels)
# Output: ['entailment', 'contradiction']
```
### Using the Transformers Library
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "agentlans/all-MiniLM-L6-v2-nli"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize premises and hypotheses as paired inputs.
features = tokenizer(
    [
        "A man is eating pizza",
        "A black race car starts up in front of a crowd of people.",
    ],
    ["A man eats something", "A man is driving down a lonely road."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

# Run inference without tracking gradients.
model.eval()
with torch.no_grad():
    scores = model(**features).logits

label_mapping = ["entailment", "neutral", "contradiction"]
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
# Output: ['entailment', 'contradiction']
```
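Both examples work with raw logits. If probabilities are needed, a softmax can be applied; the snippet below continues the Transformers example above (a usage sketch, not part of the original examples):

```python
import torch

# `scores` and `label_mapping` come from the Transformers example above.
probs = torch.softmax(scores, dim=1)
for row in probs:
    print({label: round(p.item(), 3) for label, p in zip(label_mapping, row)})
```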
## Limitations and Ethical Considerations
all-MiniLM-L6-v2-nli may reflect biases present in its training data. Evaluate the model on data from your target domain before deployment to confirm fairness and accuracy.
## Conclusion
all-MiniLM-L6-v2-nli offers a compact cross-encoder for NLI tasks, extending [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) with sentence-pair classification and integrating directly with the Sentence Transformers and Transformers libraries. It gives developers a lightweight building block for applications that require nuanced language understanding.