
Model Details

Model Description

The willieseun/Enron-Falcon-11b model is a large-scale language model based on the Falcon architecture, fine-tuned specifically for email generation using the Enron dataset. This model is designed to generate coherent and contextually appropriate text, particularly suited for tasks related to email composition.

  • Developed by: WILLIESEUN
  • Model type: Transformer-based Language Model
  • Language(s) (NLP): English
  • License: Apache 2.0

Model Sources

  • Repository: https://huggingface.co/willieseun/Enron-Falcon-11b

Uses

Direct Use

The model can be used directly for email generation. Given a prompt or a partial email, it generates the corresponding text.

Downstream Use

This model is suitable for downstream tasks requiring email composition, such as email summarization, response generation, or personalized email content generation.
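
As a rough illustration of response generation, the snippet below wraps a received email in a reply prompt. The prompt wording, the example email, and the generation length are assumptions for illustration, not a documented prompt format.

from transformers import pipeline

# Load the model and tokenizer in one step via the pipeline helper
pipe = pipeline("text-generation", model="willieseun/Enron-Falcon-11b", return_full_text=False)

# Hypothetical received email and reply-style prompt (illustrative only)
received = "Hi, could you confirm whether the Q3 risk report will be ready by Friday? Thanks, Sara"
prompt = "Write a short, polite reply to the following email:\n\n" + received + "\n\nReply:"

print(pipe(prompt, max_new_tokens=150)[0]["generated_text"])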

Bias, Risks, and Limitations

The model's performance may vary depending on the quality and representativeness of the training data (Enron dataset). It may exhibit biases present in the training data, and caution should be exercised when using generated text in sensitive or critical applications.

Recommendations

Users should review and post-process the generated text to ensure appropriateness and accuracy, particularly in professional or formal communication settings.
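
As one example of such post-processing, the sketch below trims a draft that was cut off mid-sentence by the length limit and tidies its whitespace. The clean_generated_email helper and its cleanup rules are illustrative assumptions, not part of the model.

import re

def clean_generated_email(text: str) -> str:
    """Illustrative cleanup for a generated email draft (assumed rules, not from the model card)."""
    # Collapse runs of blank lines and trim surrounding whitespace
    text = re.sub(r"\n{3,}", "\n\n", text).strip()
    # Generation may stop mid-sentence at the length limit; drop a trailing fragment
    # that does not end with terminal punctuation
    if text and text[-1] not in ".!?":
        cut = max(text.rfind("."), text.rfind("!"), text.rfind("?"))
        if cut != -1:
            text = text[: cut + 1]
    return text

draft = "Dear Vince,\n\nI would like to ask about the Pro-Seminar.\n\n\n\nCould Research or the Weather Desk co-spo"
print(clean_generated_email(draft))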

How to Get Started with the Model

To use the model, you can leverage the Hugging Face Transformers library. Below is an example code snippet for generating emails:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model_name = "willieseun/Enron-Falcon-11b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example prompt
prompt_text = "Compose an email from Claudio Ribeiro to Vince J Kaminski regarding the possibility of sponsoring a Financial Engineering Pro-Seminar at MIT. The email should mention that Enron may have sponsored a similar seminar in the past (related to Real Options) and inquire if the Research department or the Weather Desk (interested in a Weather Trading problem) would be interested in co-sponsoring."

# Build a text-generation pipeline; return_full_text=False returns only the newly generated text,
# and max_length caps the combined prompt plus generation at 190 tokens
pipe = pipeline("text-generation", tokenizer=tokenizer, model=model, return_full_text=False, max_length=190)
print(pipe(prompt_text))
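
The pipeline returns a list with one dictionary per generated sequence, so pipe(prompt_text)[0]["generated_text"] yields the email text itself. For a model of this size, you may also need to pass device_map="auto" (with the accelerate package installed) or a reduced-precision torch_dtype to from_pretrained so the weights fit on your hardware; the exact settings depend on your environment.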

Training Details

Training Data

The model was fine-tuned on the Enron email dataset, which contains real-world emails from employees at the Enron Corporation.

Training Procedure

The model was fine-tuned from a Falcon base checkpoint with a causal language modeling (Causal LM) objective, optimizing for email generation; a rough sketch of such a run follows the hyperparameters below.

Training Hyperparameters

  • Training regime: CausalLM fine-tuning
  • Batch size: 1
  • Learning rate: 2e-5
  • Epochs: 3
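
The card does not state whether full fine-tuning, adapters, or quantization were used. As a rough reference, a plain Causal LM fine-tuning run with the listed hyperparameters might look like the sketch below; the base checkpoint (tiiuae/falcon-11B), the CSV dataset layout, and the 512-token sequence length are assumptions, not details from the card.

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base_model = "tiiuae/falcon-11B"           # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Falcon tokenizers typically lack a pad token; reuse EOS

model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumed dataset layout: a CSV with a "text" column holding one email per row
dataset = load_dataset("csv", data_files="enron_emails.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="enron-falcon-11b",
    per_device_train_batch_size=1,   # batch size 1, as listed above
    learning_rate=2e-5,              # learning rate from the card
    num_train_epochs=3,              # 3 epochs, as listed above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

In practice, an 11B-parameter model at batch size 1 usually also needs gradient checkpointing, mixed precision, or parameter-efficient methods to fit on a single GPU.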

Evaluation

Testing Data, Factors & Metrics

Testing Data

The model was evaluated on a held-out subset of the Enron dataset.

Metrics

Evaluation loss (cross-entropy on the held-out emails) was the only metric tracked.
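
Because the evaluation loss is a cross-entropy, it can also be reported as perplexity via exp(loss); the snippet below uses a made-up loss value purely to illustrate the conversion, not a reported result.

import math

eval_loss = 1.8  # hypothetical value for illustration only
print(f"eval loss {eval_loss:.2f} -> perplexity {math.exp(eval_loss):.1f}")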

Results

Based on the evaluation loss on the held-out set, the model generates coherent and contextually relevant emails.

Environmental Impact

The environmental impact of model training and inference can vary based on the hardware and compute infrastructure used.

Citation

BibTeX:

@misc{willieseun_enron_falcon_11b,
  title={willieseun/Enron-Falcon-11b: Fine-tuned Email Generation Model},
  author={WILLIESEUN},
  year={2024},
  howpublished={\url{https://huggingface.co/willieseun/Enron-Falcon-11b}},
  note={Hugging Face Model Hub}
}

APA:

WILLIESEUN. (2024). willieseun/Enron-Falcon-11b: Fine-tuned Email Generation Model. Hugging Face Model Hub. https://huggingface.co/willieseun/Enron-Falcon-11b
