rohansb10 committed on
Commit
624b3f9
1 Parent(s): 5169ea9

Update README.md

Files changed (1)
  1. README.md +0 -38
README.md CHANGED
@@ -54,43 +54,5 @@ print(output)
  Training the Model
  The training process involves loading the pre-trained BART model and tokenizer, preparing a custom dataset, and training the model using the PyTorch DataLoader. Refer to the train_model() and evaluate_model() functions in the code for the detailed implementation.
 
- Saving and Uploading the Model
- After training, save your model and tokenizer using:
-
- # Save the model and tokenizer
- save_directory = "./custom_bart_model"
- custom_model.model.save_pretrained(save_directory)
- tokenizer.save_pretrained(save_directory)
- Upload the saved model to Hugging Face Hub:
-
-
- from huggingface_hub import notebook_login, create_repo, upload_folder
-
- # Log in to Hugging Face
- notebook_login()
-
- # Upload to Hugging Face Hub
- upload_folder(
-     folder_path=save_directory,
-     path_in_repo="",
-     repo_id="your_hf_username/custom-bart-finetuned",
-     repo_type="model",
- )
- Generating Summaries
- The model can be used to generate summaries by feeding it input text. Adjust the parameters in the generate_summary() function to tweak the length, beam size, and other settings.
-
- Contributing
- Contributions are welcome! If you have any suggestions or improvements, please submit a pull request.
-
- License
- This project is licensed under the MIT License - see the LICENSE file for details.
-
-
- ### Key Points:
- - **Overview**: Describes what the project is about.
- - **Installation**: Lists dependencies and how to install them.
- - **Usage**: Provides instructions on how to load and use the model.
- - **Training, Saving, and Uploading**: Steps to train, save, and upload the model.
- - **Contributing and License**: Information on contributing and licensing.
 
  Feel free to modify any section to better fit your project’s needs!
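The train_model() function referenced in the context lines above is not shown in this diff. As a rough illustration only, a minimal BART fine-tuning loop over a PyTorch DataLoader might look like the sketch below — the tiny randomly initialized BartConfig and the random toy batch are stand-ins so the snippet runs without downloading weights; they are assumptions, not the repository's actual model, dataset, or hyperparameters:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BartConfig, BartForConditionalGeneration

def train_model(model, loader, epochs=1, lr=5e-5):
    """One possible shape for train_model(): AdamW plus the cross-entropy
    loss BART computes internally when `labels` is passed to forward()."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    last_loss = None
    for _ in range(epochs):
        for input_ids, labels in loader:
            optimizer.zero_grad()
            out = model(input_ids=input_ids, labels=labels)
            out.loss.backward()
            optimizer.step()
            last_loss = out.loss.item()
    return last_loss

# Tiny randomly initialized BART so the sketch runs without downloading weights.
config = BartConfig(vocab_size=128, d_model=32,
                    encoder_layers=1, decoder_layers=1,
                    encoder_attention_heads=2, decoder_attention_heads=2,
                    encoder_ffn_dim=64, decoder_ffn_dim=64,
                    max_position_embeddings=64)
model = BartForConditionalGeneration(config)

# Random token ids (avoiding special-token ids 0-3) standing in for a
# tokenized summarization dataset of (document, summary) pairs.
input_ids = torch.randint(4, 128, (8, 16))
labels = torch.randint(4, 128, (8, 16))
loader = DataLoader(TensorDataset(input_ids, labels), batch_size=4)

loss = train_model(model, loader)
```

In a real run, the toy tensors would come from tokenizing the dataset, and labels at padding positions are usually set to -100 so they are ignored by the loss.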
 
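Similarly, the generate_summary() function removed from the README (length, beam size, and other settings) could be sketched around model.generate(). This is an assumed shape, not the project's actual implementation, and it again uses a tiny randomly initialized BART so it runs without downloads; in the real project the fine-tuned model and its tokenizer would be used instead:

```python
import torch
from transformers import BartConfig, BartForConditionalGeneration

def generate_summary(model, input_ids, max_length=20, num_beams=2):
    """One possible shape for generate_summary(): beam search via
    model.generate(); max_length and num_beams are the knobs the
    README suggests tweaking."""
    model.eval()
    with torch.no_grad():
        return model.generate(input_ids,
                              max_length=max_length,
                              num_beams=num_beams,
                              early_stopping=True)

# Tiny randomly initialized BART standing in for the fine-tuned model.
config = BartConfig(vocab_size=128, d_model=32,
                    encoder_layers=1, decoder_layers=1,
                    encoder_attention_heads=2, decoder_attention_heads=2,
                    encoder_ffn_dim=64, decoder_ffn_dim=64,
                    max_position_embeddings=64)
model = BartForConditionalGeneration(config)

# Random token ids standing in for a tokenized input document; with a real
# tokenizer the result would be decoded back to text with batch_decode().
input_ids = torch.randint(4, 128, (1, 16))
summary_ids = generate_summary(model, input_ids)
```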