---
library_name: transformers
base_model:
- meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffullama
results: []
license: apache-2.0
datasets:
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
---
# diffullama
This model is a diffusion language model adapted from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
## Model description
Details and model loading instructions are available at [https://github.com/HKUNLP/DiffuLLaMA](https://github.com/HKUNLP/DiffuLLaMA).
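
The repository above is the authoritative source for loading and diffusion-style sampling. As a rough starting point, the checkpoint follows the Llama-2 architecture, so a standard `transformers` load along these lines should work; the hub ID below is an assumption, and ordinary autoregressive `generate` is not the intended decoding method for a diffusion model.

```python
# Minimal loading sketch. The hub ID is an assumption; see the DiffuLLaMA
# repository for the authoritative loading and sampling code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diffusionfamily/diffullama"  # assumed hub ID; substitute the actual checkpoint path

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The weights follow the Llama-2 architecture, so a standard causal-LM load
# works; diffusion-style denoising generation requires the repo's scripts.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```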
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
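
A quick sanity check that a local environment matches the versions pinned above (a minimal sketch; adjust to your setup):

```python
# Print installed versions to compare against the pins listed above.
import transformers, torch, datasets, tokenizers

print("transformers:", transformers.__version__)  # expected 4.44.2
print("torch:", torch.__version__)                # expected 2.1.1+cu121
print("datasets:", datasets.__version__)          # expected 2.21.0
print("tokenizers:", tokenizers.__version__)      # expected 0.19.1
```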
## Citation
```
@misc{gong2024scalingdiffusionlanguagemodels,
title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
year={2024},
eprint={2410.17891},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.17891},
}
```