---
base_model: bigcode/tiny_starcoder_py
library_name: transformers
model_name: tiny-starcoder-ft
tags:
- generated_from_trainer
- smol-course
- module_1
- code_generation
- trl
- sft
licence: license
---

# Model Card for tiny-starcoder-ft

This model is a fine-tuned version of [bigcode/tiny_starcoder_py](https://huggingface.co/bigcode/tiny_starcoder_py), trained on samples from the [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) dataset. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "sky-2002/tiny-starcoder-ft"
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write python code to calculate sum of a list"

# Format the prompt with the tokenizer's chat template
messages = [{"role": "user", "content": prompt}]
formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False)

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training procedure

This model was trained with supervised fine-tuning (SFT); a minimal reproduction sketch is included at the end of this card.

### Framework versions

- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
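
## Reproduction sketch

The exact training script and hyperparameters for this run are not recorded in this card. The following is a minimal sketch of a comparable SFT setup with TRL's `SFTTrainer`, assuming the Alpaca-style columns of the dataset are converted into chat messages and that a ChatML template is attached to the base tokenizer (which ships without one); all hyperparameter values below are illustrative, not the ones used for this checkpoint.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, setup_chat_format

model_name = "bigcode/tiny_starcoder_py"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The base tokenizer has no chat template, so attach a ChatML one
# and resize the embeddings for the added special tokens.
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)

# Fold the dataset's "instruction"/"input"/"output" columns into a
# single "messages" conversation per example (assumed preprocessing).
dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

def to_messages(example):
    user_content = example["instruction"]
    if example["input"]:
        user_content += "\n" + example["input"]
    return {
        "messages": [
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": example["output"]},
        ]
    }

dataset = dataset.map(to_messages, remove_columns=dataset.column_names)

# Illustrative hyperparameters only, not the values used for this model.
training_args = SFTConfig(
    output_dir="tiny-starcoder-ft",
    max_seq_length=512,
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

With a conversational `"messages"` column, `SFTTrainer` applies the tokenizer's chat template automatically, which is why the sketch attaches one via `setup_chat_format` rather than formatting the text by hand.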