---
base_model: HuggingFaceTB/SmolLM2-360M
library_name: transformers
model_name: SmolLM2-360M-tldr-sft-2025-02-12_15-13
tags:
- generated_from_trainer
- trl
- sft
license: mit
datasets:
- davanstrien/hub-tldr-dataset-summaries-llama
- davanstrien/hub-tldr-model-summaries-llama
---

# Smol-Hub-tldr
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M), focused on generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub. These summaries are intended to be used for:

- creating tl;dr descriptions that give you a quick sense of what a dataset or model is for
- input text for creating embeddings for semantic search; you can see a demo of this in [librarian-bots/huggingface-datasets-semantic-search](https://huggingface.co/spaces/librarian-bots/huggingface-datasets-semantic-search)

The model was trained using supervised fine-tuning (SFT) with [TRL](https://github.com/huggingface/trl).

A meta example of a summary generated for this card:

> This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub.

## Intended Use

The model is designed to generate brief, informative summaries of:

- Model cards: focusing on key capabilities and characteristics
- Dataset cards: capturing essential dataset characteristics and purposes

## Training Data

The model was trained on:

- Model card summaries generated by Llama 3.3 70B
- Dataset card summaries generated by Llama 3.3 70B

## Usage

Using the chat template when running inference is recommended. Additionally, you should prepend either `<MODEL_CARD>` or `<DATASET_CARD>` to the start of the card you want to summarize. The training data used the body of the model or dataset card, i.e., the part after the YAML front matter, so you will likely get better results by passing only this part of the card. I have so far found that a low temperature of `0.4` generates better results.

Example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from huggingface_hub import ModelCard

card = ModelCard.load("davanstrien/Smol-Hub-tldr")

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("davanstrien/Smol-Hub-tldr")
model = AutoModelForCausalLM.from_pretrained("davanstrien/Smol-Hub-tldr")

# Format input according to the chat template; card.text is the card body
# without the YAML front matter, prefixed with the <MODEL_CARD> token
messages = [{"role": "user", "content": f"<MODEL_CARD>{card.text}"}]

# Encode with the chat template
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate with stop tokens
outputs = model.generate(
    inputs,
    max_new_tokens=60,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.4,
    do_sample=True,
)

input_length = inputs.shape[1]
response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=False)

# Extract just the summary part
summary = response.split("<CARD_SUMMARY>")[-1].split("</CARD_SUMMARY>")[0]
print(summary)
>>> "The Smol-Hub-tldr model is a fine-tuned version of SmolLM2-360M designed to generate concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."
```
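If you are starting from a raw README string rather than `ModelCard`, you need to strip the YAML front matter and add the card-type prefix yourself. Below is a minimal sketch of that preprocessing; the `prepare_input` helper is illustrative and not part of this repository:

```python
from pathlib import Path


# Minimal sketch: prepare_input is an illustrative helper, not part of this
# repository or of transformers/huggingface_hub
def prepare_input(readme_text: str, card_type: str = "model") -> str:
    body = readme_text
    if readme_text.startswith("---"):
        # YAML front matter is delimited by a pair of "---" lines;
        # keep only the card body that follows it
        parts = readme_text.split("---", 2)
        if len(parts) == 3:
            body = parts[2]
    # The model expects one of these prefixes at the start of the input
    prefix = "<MODEL_CARD>" if card_type == "model" else "<DATASET_CARD>"
    return prefix + body.strip()


# Pass card_type="dataset" for a dataset card to get the <DATASET_CARD> prefix
readme_text = Path("README.md").read_text()
messages = [{"role": "user", "content": prepare_input(readme_text, card_type="model")}]
```

Note that `ModelCard.load(...).text` already returns the card body without the front matter, so this preprocessing is only needed when working from raw files.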
The model should currently close its summary with a `</CARD_SUMMARY>` token (cooking some more with this...), so you can also use this as a stopping criterion when using `pipeline` inference.

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList, pipeline


class StopOnTokens(StoppingCriteria):
    def __init__(self, tokenizer, stop_token_ids):
        self.stop_token_ids = stop_token_ids
        self.tokenizer = tokenizer

    def __call__(
        self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
    ) -> bool:
        # Stop as soon as the most recently generated token is a stop token
        for stop_id in self.stop_token_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False


# Initialize pipeline
pipe = pipeline("text-generation", "davanstrien/Smol-Hub-tldr")
tokenizer = pipe.tokenizer

# Get the token IDs for stopping
stop_token_ids = [
    tokenizer.encode("</CARD_SUMMARY>", add_special_tokens=True)[-1],
    tokenizer.eos_token_id,
]

# Create stopping criteria
stopping_criteria = StoppingCriteriaList([StopOnTokens(tokenizer, stop_token_ids)])

# Generate with stopping criteria (messages as defined in the example above)
response = pipe(
    messages,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    stopping_criteria=stopping_criteria,
    return_full_text=False,
)

# Clean up the response
summary = response[0]["generated_text"]
print(summary)
>>> "This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."
```

## Framework Versions

- TRL 0.14.0
- Transformers 4.48.3
- PyTorch 2.6.0
- Datasets 3.2.0
- Tokenizers 0.21.0