jolenechong committed
Commit
1508700
1 Parent(s): 5732759

Update README.md

Files changed (1): README.md (+19 -1)
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  # lora-bart-samsum-tib-1024
 
- This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the TIB dataset.
+ This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the TIB dataset.
 
  ## Model description
 
@@ -42,6 +42,24 @@ Artificial Intelligence (AI) is a rapidly evolving technology that has profound
 
  Intended for summarizing video conferences/webinars.
 
+ Try out the model with the code below :D
+ ```python
+ from peft import PeftModel, PeftConfig
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ import torch
+
+ config = PeftConfig.from_pretrained("jolenechong/lora-bart-samsum-tib-1024")
+ model = AutoModelForSeq2SeqLM.from_pretrained("philschmid/bart-large-cnn-samsum")
+ model = PeftModel.from_pretrained(model, "jolenechong/lora-bart-samsum-tib-1024")
+ tokenizer = AutoTokenizer.from_pretrained("jolenechong/lora-bart-samsum-tib-1024")
+
+ text = """[add transcript you want to summarize here]"""
+ inputs = tokenizer(text, return_tensors="pt")
+
+ with torch.no_grad():
+     outputs = model.generate(input_ids=inputs["input_ids"])
+ print(tokenizer.batch_decode(outputs.detach().cpu().numpy())[0])
+ ```
+
  ## Training and evaluation data
 
  More information needed
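For context on what the adapter loaded by `PeftModel.from_pretrained` above does: LoRA keeps the base model's weight matrix `W` frozen and learns a low-rank update `B @ A`, scaled by `alpha / r`, so the effective weight is `W + (alpha / r) * B @ A`. A minimal pure-Python sketch of that arithmetic, using made-up shapes and values (the real rank and alpha for this adapter are set in its own config, not shown here):

```python
# LoRA arithmetic sketch: effective weight = W + (alpha / r) * B @ A.
# All values below are hypothetical, for illustration only.

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

d_out, d_in, r, alpha = 3, 3, 1, 8

W = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 0.0],
     [3.0, 0.0, 1.0]]            # frozen base weight (made-up values)
A = [[0.5, -0.5, 1.0]]           # trainable factor, shape (r, d_in)
B = [[0.0], [0.0], [0.0]]        # shape (d_out, r); zero-init so the update starts as a no-op

def lora_forward(x):
    base = matvec(W, x)                      # frozen path: W @ x
    update = matvec(B, matvec(A, x))         # low-rank path: B @ (A @ x)
    return [b + (alpha / r) * u for b, u in zip(base, update)]

x = [1.0, 2.0, 3.0]
print(lora_forward(x))  # B is all zeros, so this equals W @ x -> [7.0, 2.0, 6.0]
```

Because `B` starts at zero, the adapted model initially behaves exactly like the base model; training moves only `A` and `B`, which is why the adapter checkpoint on this repo is tiny compared to the full BART weights.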