refactored the readme

README.md
---
library_name: peft
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- opensource
- fine-tuning
- llm
- sentiment-analysis
---

## Model Description

This repository contains a fine-tuned sentiment analysis model based on the `distilbert-base-uncased` architecture, trained on the "shawhin/imdb-truncated" dataset. The model is designed for text classification tasks in English.

## Model Performance

The model is evaluated with accuracy, a common metric for text classification tasks. Reported numbers may vary depending on the evaluation dataset and use case.
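For context, accuracy is simply the fraction of predictions that agree with the reference labels. A minimal sketch (the predictions and labels below are made up for illustration, not this model's actual outputs):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical predictions vs. gold labels (1 = positive, 0 = negative)
preds = [1, 0, 1, 1, 0]
gold = [1, 0, 0, 1, 0]
print(accuracy(preds, gold))  # → 0.8
```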

## Training Procedure

### Framework Versions

- PEFT 0.5.0

### Dataset

The model is trained on the "shawhin/imdb-truncated" dataset, which is a truncated version of the IMDb movie review dataset. It contains labeled movie reviews with binary sentiment labels (positive or negative).
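As a rough sketch of what such records look like (the field names `text` and `label` follow the standard IMDb dataset format and are an assumption, not verified against this repository):

```python
# Hypothetical records mirroring the standard IMDb format;
# field names "text"/"label" are assumed, not confirmed for this repo.
id2label = {0: "negative", 1: "positive"}

examples = [
    {"text": "An unforgettable film with great acting.", "label": 1},
    {"text": "Two hours I will never get back.", "label": 0},
]

for ex in examples:
    print(id2label[ex["label"]], "-", ex["text"])
```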

## Fine-Tuning Details

The model is fine-tuned using the `distilbert-base-uncased` architecture, a smaller and faster version of BERT that is well-suited for various NLP tasks.

## How to Use

You can use this fine-tuned sentiment analysis model for text classification tasks such as sentiment analysis and text categorization. To get started, load it with the Hugging Face Transformers library and integrate it into your Python applications.

Here's an example of how to load and use the model:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model
model = AutoModelForSequenceClassification.from_pretrained("samadpls/sentiment-analysis")

# Load the matching tokenizer from the same repository
tokenizer = AutoTokenizer.from_pretrained("samadpls/sentiment-analysis")

# Perform inference
text = "This is a great movie!"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()

# Print the predicted sentiment label
print("Predicted Sentiment: Positive" if predicted_label == 1 else "Predicted Sentiment: Negative")
```
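If a confidence score is needed in addition to the arg-max label, the two logits can be mapped to probabilities with a softmax. A standard-library sketch (the logit values are illustrative, not real model outputs):

```python
import math

def softmax(logits):
    """Map raw logits to probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for (negative, positive)
probs = softmax([-1.2, 2.3])
print(f"Predicted Sentiment: Positive ({probs[1]:.0%} confidence)")
```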
|
62 |
+
# License
|
63 |
+
This model is distributed under the Apache License 2.0. For more details, see the [LICENSE](LICENSE) file.
|