---
license: creativeml-openrail-m
datasets:
- prithivMLmods/Context-Based-Chat-Summary-Plus
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- chat-summary
- 3B
- Ollama
- text-generation-inference
- trl
- Llama3.2
---
### **Llama-Chat-Summary-3.2-3B: Context-Aware Summarization Model**
**Llama-Chat-Summary-3.2-3B** is a fine-tuned model designed for generating **context-aware summaries** of long conversational or text-based inputs. Built on the **meta-llama/Llama-3.2-3B-Instruct** foundation, this model is optimized to process structured and unstructured conversational data for summarization tasks.
### **Model Files**
| **File Name** | **Size** | **Description** | **Upload Status** |
|--------------------------------------------|------------------|--------------------------------------------------|-------------------|
| `.gitattributes` | 1.57 kB | Git LFS tracking configuration. | Uploaded |
| `README.md` | 42 Bytes | Initial commit with minimal documentation. | Uploaded |
| `config.json` | 1.03 kB | Model configuration settings. | Uploaded |
| `generation_config.json` | 248 Bytes | Generation-specific configurations. | Uploaded |
| `pytorch_model-00001-of-00002.bin` | 4.97 GB | Part 1 of the PyTorch model weights. | Uploaded (LFS) |
| `pytorch_model-00002-of-00002.bin` | 1.46 GB | Part 2 of the PyTorch model weights. | Uploaded (LFS) |
| `pytorch_model.bin.index.json` | 21.2 kB | Index file for the model weights. | Uploaded |
| `special_tokens_map.json` | 477 Bytes | Mapping of special tokens for the tokenizer. | Uploaded |
| `tokenizer.json` | 17.2 MB | Pre-trained tokenizer file. | Uploaded (LFS) |
| `tokenizer_config.json` | 57.4 kB | Configuration file for the tokenizer. | Uploaded |
### **Key Features**
1. **Conversation Summarization:**
- Generates concise and meaningful summaries of long chats, discussions, or threads.
2. **Context Preservation:**
- Maintains critical points, ensuring important details aren't omitted.
3. **Text Summarization:**
- Works beyond chats; supports summarizing articles, documents, or reports.
4. **Fine-Tuned Efficiency:**
   - Trained on the *Context-Based-Chat-Summary-Plus* dataset for accurate summarization of chat and conversational data.
---
### **Training Details**
- **Base Model:** [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Fine-Tuning Dataset:** [prithivMLmods/Context-Based-Chat-Summary-Plus](https://huggingface.co/datasets/prithivMLmods/Context-Based-Chat-Summary-Plus)
  - Contains **98.4k** structured and unstructured conversations, summaries, and contextual inputs for robust training; a loading sketch follows below.
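For reference, the dataset can be inspected with the `datasets` library. This is a minimal sketch; the `train` split name and record fields are assumptions, so check the dataset card for the actual schema:
```python
from datasets import load_dataset

# Load the fine-tuning data from the Hugging Face Hub (split name is an assumption).
dataset = load_dataset("prithivMLmods/Context-Based-Chat-Summary-Plus", split="train")

print(dataset)     # number of rows and column names
print(dataset[0])  # one example record; field names depend on the dataset schema
```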
---
### **Applications**
1. **Customer Support Logs:**
- Summarize chat logs or support tickets for insights and reporting.
2. **Meeting Notes:**
- Generate concise summaries of meeting transcripts.
3. **Document Summarization:**
- Create short summaries for lengthy reports or articles.
4. **Content Generation Pipelines:**
- Automate summarization for newsletters, blogs, or email digests.
5. **Context Extraction for AI Systems:**
- Preprocess chat or conversation logs for downstream AI applications.
### **How to Use the Model**
#### **Load the Model**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"

# Download the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
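For GPU inference, the weights can also be loaded in half precision. The snippet below is a minimal sketch, assuming a CUDA device and the `accelerate` package (needed for `device_map="auto"`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision halves the memory footprint of the weights
    device_map="auto",          # requires the accelerate package; places layers on available devices
)
```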
#### **Generate a Summary**
```python
prompt = """
Summarize the following conversation:
User1: Hey, I need help with my order. It hasn't arrived yet.
User2: I'm sorry to hear that. Can you provide your order number?
User1: Sure, it's 12345.
User2: Let me check... It seems there was a delay. It should arrive tomorrow.
User1: Okay, thank you!
"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Cap the number of newly generated tokens (max_length would count the prompt too)
# and enable sampling so that temperature takes effect.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens so the echoed prompt is not repeated.
summary = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print("Summary:", summary)
```
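Since the base model is instruction-tuned, the conversation can also be wrapped in the tokenizer's chat template. The snippet below is a minimal sketch; the system/user message wording is an assumption, not necessarily the prompt format used during fine-tuning:
```python
conversation = (
    "User1: Hey, I need help with my order. It hasn't arrived yet.\n"
    "User2: I'm sorry to hear that. Can you provide your order number?\n"
    "User1: Sure, it's 12345.\n"
    "User2: Let me check... It seems there was a delay. It should arrive tomorrow.\n"
    "User1: Okay, thank you!"
)

messages = [
    {"role": "system", "content": "You summarize conversations concisely."},  # assumed system prompt
    {"role": "user", "content": "Summarize the following conversation:\n" + conversation},
]

# Build the Llama 3.2 chat-formatted prompt and generate as before.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.7)
summary = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print("Summary:", summary)
```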
---
### **Expected Output**
**"The user reported a delayed order (12345), and support confirmed it will arrive tomorrow."**
---
### **Deployment Notes**
- **Serverless API:**
  This model does not currently have enough usage to be served through the serverless Inference API; deploy it on a **dedicated endpoint** instead.
- **Performance Requirements:**
  - A GPU with enough memory to hold the roughly 6.4 GB of model weights is recommended for inference.
  - Optimization techniques such as quantization can reduce memory use and speed up inference; see the sketch below.
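For example, the model can be loaded in 4-bit precision with `bitsandbytes`. This is a minimal sketch, assuming the `bitsandbytes` and `accelerate` packages and a CUDA GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"

# 4-bit NF4 quantization cuts weight memory to roughly a quarter of FP16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```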
---