|
--- |
|
library_name: transformers |
|
tags: [] |
|
--- |
|
|
|
# Model Card for crumb/92d52f-ame-full-7B
|
|
|
```python
import random

import torch
from torch import nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# load this checkpoint (crumb/92d52f-ame-full-7B)
model = AutoModelForCausalLM.from_pretrained("crumb/92d52f-ame-full-7B")
tokenizer = AutoTokenizer.from_pretrained("crumb/92d52f-ame-full-7B")

# bias the lm_head so the model never generates the 2,048 mask tokens itself
# (vocab size is 34,048; the mask ids occupy 32,000-34,047)
bias = torch.zeros(34_048)
bias[32_000:] = -100
model.lm_head.bias = nn.Parameter(bias)

# --------------------------------------------------------------------------------
# Generation without masking
input_ids = tokenizer("Once upon a time, in a land far far away...", return_tensors="pt").input_ids
print(input_ids)
# tensor([[    1,  5713,  3714,   264,   727, 28725,   297,   264,  2533,  2082,
#           2082,  1753,  1101]])
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0]))
# '<s> Once upon a time, in a land far far away...\n\nThere was a magical place called Disneyland.\n\nIt was a place where dreams came true, where fairy tales became reality, and where magic was all around.\n\nBut one day, something terrible happened.\n\nThe magic began to fade.\n\nThe fairy tales became dull, the'

# --------------------------------------------------------------------------------
# Replace both occurrences of "far" (token id 2082) with the same mask id
# (any id from 32,000 up to 34,047 works). The model should pick up that a
# word repeated twice after "Once upon a time, in a land" and before "away"
# is probably "far far".
input_ids[input_ids == 2082] = 32_001
print(input_ids)
# tensor([[    1,  5713,  3714,   264,   727, 28725,   297,   264,  2533, 32001,
#          32001,  1753,  1101]])
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0]))
# '<s> Once upon a time, in a land<ID-000001><ID-000001> away...\n\nOnce upon a time, in a land far, far away, there was a magical kingdom called Flanders. It was a peaceful land, where everyone lived happily ever after.\n\nBut one day, a terrible thing happened. A terrible, terrible thing.\n\nA terrible, terrible thing happened.'

# --------------------------------------------------------------------------------
# We can also mask everything except "<s>", "Once", "upon", "away", "..."
def create_masked_ids(input_ids, token_offset, ids_to_keep):
    # assign every unique token in the sequence a random mask index in [0, 2048),
    # so equal tokens map to equal masks and the repetition structure survives
    unique_ids = torch.unique(input_ids).tolist()
    unique_id_map = random.sample(range(2048), len(unique_ids))
    id_to_shuffled = dict(zip(unique_ids, unique_id_map))

    def map_to_shuffled(token_id):
        return id_to_shuffled[token_id] + token_offset

    # Tensor.apply_ is element-wise and CPU-only, which is fine for a short prompt
    shuffled_ids = input_ids.clone().apply_(map_to_shuffled)

    # keep the listed ids as-is, replace everything else with its mask id
    keep = torch.zeros_like(input_ids, dtype=torch.bool)
    for id_to_keep in ids_to_keep:
        keep |= (input_ids == id_to_keep)

    return torch.where(keep, input_ids, shuffled_ids)

# note: input_ids still carries the 32,001 substitution from the previous step,
# so the "far far" positions get remapped along with everything else
masked_ids = create_masked_ids(input_ids, 32_000, [1, 5713, 3714, 1753, 1101])
print(masked_ids)
# tensor([[    1,  5713,  3714, 33048, 34032, 32238, 32016, 33048, 33013, 33299,
#          33299,  1753,  1101]])

output = model.generate(masked_ids, max_new_tokens=64)
print(tokenizer.decode(output[0]))
# '<s> Once upon<ID-000418><ID-0007F0><ID-0000EE><ID-000010><ID-000418><ID-0003F5><ID-000513><ID-000513> away...\n\nOnce upon a time, there was a young man named Alex. He was a very curious young man, and loved to explore the world around him. One day, he stumbled upon a magical book called "The Book of Secrets." This book contained all sorts of secrets about the world, and Alex was fasc'
```
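If you'd rather not attach a bias to `lm_head`, the mask ids can also be banned at generation time with the stock `bad_words_ids` argument to `generate`; a minimal sketch of the same suppression:

```python
# alternative to patching lm_head.bias: ban each mask id (32,000-34,047)
# via generate's built-in bad-words logits processor
bad_words_ids = [[i] for i in range(32_000, 34_048)]
output = model.generate(input_ids, max_new_tokens=64, bad_words_ids=bad_words_ids)
```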
|
|
|
This model isn't really made for benchmarks; it scores worse than the base Mistral-7B-v0.1 on everything except ARC-C and TruthfulQA:
|
|
|
| Model | ARC-C | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8k |
| ------------------------------------------------------------ | --------- | --------- | ---------- | ---------- | ---------- | --------- |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 59.98 | **83.31** | **64.16** | 42.15 | **78.37** | **37.83** |
| [crumb/92d52f-ame-full-7B](https://hf.co/crumb/92d52f-ame-full-7B) | **61.18** | 81.52 | 63.44 | **42.39** | 77.58 | 35.41 |
|
|
|
The model has 2,048 extra tokens (`[f'<ID-{i:06X}>' for i in range(2048)]`) that can all be used interchangeably as masks: replacing every instance of one token in the context with one of them gives the model an extra-hard time. It was trained at context length 2048 on three separate replacement techniques through a schedule, with 80% of all sequences completely replaced by mask tokens near the end of training, over ~0.5B tokens in total.
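As a minimal sketch of that single-token replacement, reusing `input_ids` and the token ids from the walkthrough above ("land" is id 2533 there; the decoded token string in the comment is only illustrative):

```python
import random

# swap every occurrence of one token ("land", id 2533 in the prompt above)
# for a single randomly chosen mask id
mask_id = 32_000 + random.randrange(2048)
ids = input_ids.clone()
ids[ids == 2533] = mask_id
print(tokenizer.convert_ids_to_tokens(mask_id))  # e.g. '<ID-0002A7>'
```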
|
|
|
> what? how is that useful? |
|
|
|
I'm hoping to finetune it further while replacing the entire tokenizer with any number of other tokenizers, all mapped onto the unique mask ids, to hopefully build a causal model of any sufficiently long artifact from any domain: for example, the Voynich manuscript, or an alien artifact.
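A rough sketch of what that retokenization step could look like (the helper below is hypothetical; it only assumes at most 2,048 distinct ids appear in the foreign corpus):

```python
# hypothetical helper: re-express another tokenizer's output purely in mask-id
# space, so the model only ever sees the corpus's co-occurrence structure
def to_mask_space(foreign_ids, token_offset=32_000, n_masks=2048):
    vocab = sorted(set(foreign_ids))
    assert len(vocab) <= n_masks, "only 2,048 mask ids are available"
    mapping = {fid: i + token_offset for i, fid in enumerate(vocab)}
    return [mapping[fid] for fid in foreign_ids]
```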
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
|
|
|
- **Developed by:** [More Information Needed] |
|
- **Funded by [optional]:** [More Information Needed] |
|
- **Shared by [optional]:** [More Information Needed] |
|
- **Model type:** [More Information Needed] |
|
- **Language(s) (NLP):** [More Information Needed] |
|
- **License:** [More Information Needed] |
|
- **Finetuned from model [optional]:** [More Information Needed] |
|
|
|
### Model Sources [optional] |
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
- **Repository:** [More Information Needed] |
|
- **Paper [optional]:** [More Information Needed] |
|
- **Demo [optional]:** [More Information Needed] |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
|
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
[More Information Needed] |
|
|
|
### Downstream Use [optional] |
|
|
|
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> |
|
|
|
[More Information Needed] |
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
|
[More Information Needed] |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
[More Information Needed] |
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. |
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
[More Information Needed] |
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
[More Information Needed] |
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
#### Preprocessing [optional] |
|
|
|
[More Information Needed] |
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> |
|
|
|
#### Speeds, Sizes, Times [optional] |
|
|
|
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> |
|
|
|
[More Information Needed] |
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
[More Information Needed] |
|
|
|
#### Factors |
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
[More Information Needed] |
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
[More Information Needed] |
|
|
|
### Results |
|
|
|
[More Information Needed] |
|
|
|
#### Summary |
|
|
|
|
|
|
|
## Model Examination [optional] |
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
[More Information Needed] |
|
|
|
## Environmental Impact |
|
|
|
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** [More Information Needed] |
|
- **Hours used:** [More Information Needed] |
|
- **Cloud Provider:** [More Information Needed] |
|
- **Compute Region:** [More Information Needed] |
|
- **Carbon Emitted:** [More Information Needed] |
|
|
|
## Technical Specifications [optional] |
|
|
|
### Model Architecture and Objective |
|
|
|
[More Information Needed] |
|
|
|
### Compute Infrastructure |
|
|
|
[More Information Needed] |
|
|
|
#### Hardware |
|
|
|
[More Information Needed] |
|
|
|
#### Software |
|
|
|
[More Information Needed] |
|
|
|
## Citation [optional] |
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
[More Information Needed] |
|
|
|
**APA:** |
|
|
|
[More Information Needed] |
|
|
|
## Glossary [optional] |
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> |
|
|
|
[More Information Needed] |
|
|
|
## More Information [optional] |
|
|
|
[More Information Needed] |
|
|
|
## Model Card Authors [optional] |
|
|
|
[More Information Needed] |
|
|
|
## Model Card Contact |
|
|
|
[More Information Needed] |