prithivMLmods committed (verified)
Commit f07b80c · Parent: fbd5f2a

Update README.md

Files changed (1): README.md (+0 -10)

README.md CHANGED
````diff
@@ -74,10 +74,6 @@ tags:
 5. **Context Extraction for AI Systems:**
    - Preprocess chat or conversation logs for downstream AI applications.
 
----
-
-### **Usage**
-
 #### **Load the Model**
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -86,9 +82,6 @@ model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
 ```
-
----
-
 #### **Generate a Summary**
 ```python
 prompt = """
@@ -106,14 +99,11 @@ outputs = model.generate(**inputs, max_length=100, temperature=0.7)
 summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
 print("Summary:", summary)
 ```
-
 ---
-
 ### **Expected Output**
 **"The user reported a delayed order (12345), and support confirmed it will arrive tomorrow."**
 
 ---
-
 ### **Deployment Notes**
 
 - **Serverless API:**
````
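The README's full usage example is split across the three hunks above, with the prompt body and the tokenization step elided. For reference, here is a minimal end-to-end sketch of the flow the README documents; the chat-log text and the `inputs = tokenizer(...)` line are assumptions added to make the fragments runnable, not lines confirmed by the diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical chat log; the README's actual prompt body is elided by the hunks.
prompt = """
Summarize the following conversation:
User: My order 12345 hasn't arrived yet.
Support: I checked on it, and it should arrive tomorrow.
"""

# Assumed tokenization step (not shown in the hunks): encode the prompt
# as PyTorch tensors for model.generate().
inputs = tokenizer(prompt, return_tensors="pt")

# Generation call as quoted in the third hunk's header. Note that in
# transformers, temperature only takes effect when do_sample=True.
outputs = model.generate(**inputs, max_length=100, temperature=0.7)

summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Summary:", summary)
```

One caveat: `max_length=100` counts the prompt tokens as well, so a long chat log can leave little or no room for the summary; `max_new_tokens` is the usual way to bound only the generated text.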
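Upstream of that flow, the first hunk's "Context Extraction for AI Systems" item mentions preprocessing chat or conversation logs. A small illustrative sketch of that step, assuming a hypothetical list-of-turns format and helper name (neither appears in the model card):

```python
def chat_log_to_prompt(turns):
    """Flatten (speaker, message) pairs into a plain-text summarization prompt."""
    lines = "\n".join(f"{speaker}: {message}" for speaker, message in turns)
    return f"Summarize the following conversation:\n{lines}\n"

# Example log matching the README's expected-output scenario.
turns = [
    ("User", "My order 12345 hasn't arrived yet."),
    ("Support", "I checked on it, and it should arrive tomorrow."),
]
print(chat_log_to_prompt(turns))
```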
 