|
---
license: apache-2.0
tags:
- instruct
- instructions
- domain adapt
- instructiongen
metrics:
- rouge
widget:
- text: >-
    You'll need to start by choosing the right venue. Consider the type of
    atmosphere and the size of the area that will be suitable for the number of
    guests you plan to invite. Choose the right decorations based on your
    brother's interests, such as balloons in his favorite colors, banners, and
    streamers. Next, decide on the food and drinks, making sure they are tasty
    and appropriate for the occasion. Then decide on the other games, music, and
    entertainment that will make the party memorable. Finally, involve your
    brother's friends and family to help create the perfect surprise.
  example_title: birthday party
- text: 1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo
  example_title: ice cream
- text: >-
    Start by selecting a scale model of a building that fits the theme. Use a
    hobby knife and glue to cut and assemble the model into a ruined or
    abandoned version of itself, adding details like broken windows and
    graffiti. Create a base for the diorama using foam, plaster, or other
    materials, and paint it to resemble a ruined street or sidewalk. Add
    miniature vehicles, debris, and figures to complete the scene, and use
    weathering techniques like dry brushing and rust washes to add realism.
    Display the diorama in a shadow box or other protective case to showcase
    your work.
  example_title: Miniature diorama creation
- text: >-
    Start by selecting clothing that is futuristic and edgy, such as leather
    jackets, neon-colored accessories, and tech-inspired patterns. Add
    accessories like goggles, cybernetic implants, and LED lights to enhance
    the cyberpunk vibe. Use makeup and body paint to create a futuristic look,
    such as metallic skin or neon makeup. Consider adding functional elements
    to your costume, such as a built-in backpack or hidden pockets for your
    tech gadgets. Finally, practice your confident walk and embrace your inner
    cyberpunk for a memorable and immersive costume experience.
  example_title: Cyberpunk costume design
- text: >-
    Start by creating a base terrain with mountains, valleys, and other natural
    features. Use fractal noise and displacement mapping to add texture and
    detail to the terrain, and experiment with different materials like rock,
    grass, and water. Add surreal elements like floating islands, giant
    mushrooms, or impossible geometry to create a dreamlike atmosphere. Use
    lighting and color grading to enhance the mood and tone of the scene, and
    render the final image at a high resolution for maximum impact. Share your
    surreal landscape with the world and inspire others to explore the
    possibilities of 3D art.
  example_title: Surreal 3D landscape creation
- text: >-
    Start by setting a realistic goal and creating a training plan. Build up
    your mileage gradually over time, and incorporate cross-training and
    strength exercises to prevent injury and improve endurance. Be sure to stay
    hydrated and properly fuel your body with nutritious foods. Listen to your
    body and adjust your training as needed to avoid overexertion or burnout.
    Finally, taper your training in the weeks leading up to the race to give
    your body time to rest and recover before the big day.
  example_title: Marathon training
inference:
  parameters:
    max_length: 96
    num_beams: 4
    early_stopping: true
datasets:
- pszemraj/fleece2instructions-inputs-alpaca-cleaned
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
---
|
# bart-base-instructiongen-w-inputs |
|
Use this text2text model to find out what LLM `instruction` (**and** `inputs` if relevant) might have generated `<arbitrary input text>`! |
|
- Check out a [basic demo on Spaces](https://huggingface.co/spaces/pszemraj/generate-instructions)
- An example of how to use instructiongen models in a CLI script can be found [here](https://gist.github.com/pszemraj/8b0213e700763106074d3ac15d041c14)
- You can find other models fine-tuned for instruction generation by [searching for the instructiongen tag](https://huggingface.co/models?other=instructiongen)
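
As a quick start, here's a minimal inference sketch with Hugging Face `transformers`. It assumes this repo's id is `pszemraj/bart-base-instructiongen-w-inputs`, and reuses the generation settings from the widget metadata above:

```python
# Minimal usage sketch (repo id assumed from this model card's name).
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="pszemraj/bart-base-instructiongen-w-inputs",
)

text = "1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo"
result = generator(
    text,
    max_length=96,  # same generation settings as the widget metadata above
    num_beams=4,
    early_stopping=True,
)
print(result[0]["generated_text"])
```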
|
## About
|
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the `pszemraj/fleece2instructions-inputs-alpaca-cleaned` dataset. |
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.9579
- Rouge1: 62.3604
- Rouge2: 39.5109
- Rougel: 58.8843
- Rougelsum: 60.4494
- Gen Len: 24.9917
|
## Example |
|
![base](https://i.imgur.com/1Vq5Fys.png) |
|
## Intended uses & limitations |
|
This model is intended to generate instructions from arbitrary text. You can then use these generated instructions, together with your own data, to fine-tune an LLM to follow instructions in a specific domain. It is primarily intended to enable **low-resource domain adaptation**, rather than "_I want to generate even better prompts for the FLAN-V2 dataset!_".
|
The `fleece2instructions-inputs-alpaca-cleaned` dataset, obtained from the [alpaca-lora repo](https://github.com/tloen/alpaca-lora) under the ODC-BY license, has been converted to a text2text format for use with language models. In this dataset, the original 'inputs' and 'instructions' columns are combined into a single 'instructions_inputs' column. To clearly separate the two types of content, each piece of text is prefixed with either an `<instruction>` or `<inputs>` token. These tokens not only facilitate model comprehension, but also allow for easy regex separation of model outputs during inference. |
|
As such, users can expect the output of this model to be similarly structured with `<instruction>` and `<inputs>` tokens. |
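
For instance, here's a minimal sketch of splitting an output on those tokens with a regex. The output string below is illustrative, not a recorded model completion:

```python
import re

# Illustrative output; real completions follow the same <instruction>/<inputs> format.
output = (
    "<instruction> Rank the following ice cream flavors. "
    "<inputs> 1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo"
)

# Capture the instruction, and the inputs if present (the <inputs> section is optional).
match = re.search(r"<instruction>\s*(.*?)\s*(?:<inputs>\s*(.*))?$", output, re.DOTALL)
if match:
    instruction = match.group(1)
    inputs = match.group(2)  # None when the model emits no <inputs> section
    print("instruction:", instruction)
    print("inputs:", inputs)
```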
|
This is just the base model; for better performance (at the cost of slower, more compute-intensive inference), see the [bart-large](https://huggingface.co/pszemraj/bart-large-instructiongen-w-inputs) version. Further exploration/data may lead to even better models!
|
## Training and evaluation data |
|
Refer to the [fleece2instructions-inputs-alpaca-cleaned](https://huggingface.co/datasets/pszemraj/fleece2instructions-inputs-alpaca-cleaned) dataset.
|
## Training procedure |
|
### Training hyperparameters |
|
The following hyperparameters were used during training: |
|
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2.0
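
For reference, these settings map roughly onto the `transformers` trainer API as sketched below. The `output_dir` is a placeholder, and the Adam betas/epsilon listed above match the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the hyperparameters above expressed as Seq2SeqTrainingArguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-instructiongen-w-inputs",  # placeholder, not from the run
    learning_rate=8e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=16,  # 4 per device x 16 steps (x GPUs) -> 64 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=2.0,
)
```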
|
### Training results |
|
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1147        | 1.0   | 680  | 0.9901          | 61.8451 | 38.8293 | 58.3372 | 59.8658   | 25.2401 |
| 0.9565        | 2.0   | 1360 | 0.9579          | 62.3604 | 39.5109 | 58.8843 | 60.4494   | 24.9917 |