---
language: en
tags:
- llama
- peft
- fine-tuning
- text-generation
- causal-lm
- NLP
license: mit
datasets:
- mlabonne/FineTome-100k
---

# Llama-3.2-3b-FineTome-100k

## Model Description

**Llama-3.2-3b-FineTome-100k** is a fine-tuned version of the Llama 3.2 3B model, optimized for a range of natural language processing (NLP) tasks. It was trained on the mlabonne/FineTome-100k dataset of 100,000 curated examples, with the goal of improving performance on domain-specific applications.

### Key Features

- **Model Size**: 3 billion parameters
- **Architecture**: Transformer-based, decoder-only causal language model
- **Fine-tuning Dataset**: 100k curated examples from diverse sources (mlabonne/FineTome-100k)

## Use Cases

- Text generation
- Sentiment analysis
- Question answering
- Language translation
- Dialogue systems

## Installation

To use the **Llama-3.2-3b-FineTome-100k** model, make sure the `transformers` library is installed. You can install it with pip:

```bash
pip install transformers
```

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("khushwant04/Llama-3.2-3b-FineTome-100k")
model = AutoModelForCausalLM.from_pretrained("khushwant04/Llama-3.2-3b-FineTome-100k")

# Encode the input text
input_text = "Tell me something interesting about India and its culture."
inputs = tokenizer(input_text, return_tensors="pt")

# Generate output (max_new_tokens limits only the generated tokens, not the prompt)
output = model.generate(**inputs, max_new_tokens=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
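
## Chat-Style Usage

Because FineTome-100k is a conversational instruction dataset, the checkpoint may respond better when prompted through a chat template rather than raw text. The sketch below assumes the tokenizer bundled with this repository defines a chat template (this is not stated in the card and should be verified); if it does not, use the plain-text example above.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: the tokenizer in this repo ships with a chat template.
model_id = "khushwant04/Llama-3.2-3b-FineTome-100k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Tell me something interesting about India and its culture."}
]

# Build the prompt with the model's chat template, then generate a reply
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```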