model-index:
- name: distilgpt2
  results:
  - task:
      type: text-generation
    dataset:
      name: Wikitext
      type: wikitext
    metrics:
    - type: perplexity (BASELINE)
      value: 52.96194118952879
    - type: perplexity (BASIC)
      value: 56.66175009356436
---

This is a d-Matrix functional reference of the DISTILGPT2 model.

The reference provides the following functional *configurations*:
Configuration | Explanation
:-- | :--
**`BASELINE`** | a reference functionally equivalent to the original model
**`BASIC`** | all linear algebraic operands quantized to `MXINT8-64`, and all other operations transformed to approximated kernel simulations
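
As a rough sketch of how one of these configurations might be selected once the tooling from the Usage section below is installed, the snippet wraps the model with `DmxModel.from_torch` (as in the evaluation example further down) and then applies the `BASIC` rules. The `config_rules` presets, the `dmx_config` attribute, and the `transform` call are assumptions drawn from dmx-compressor examples, not guarantees of this card; check the library documentation for the exact API.

```python
# Sketch only: `config_rules`, `dmx_config`, and `transform` are assumed names
# taken from dmx-compressor examples; the exact API may differ.
from transformers import AutoModelForCausalLM
from dmx.compressor import config_rules          # assumed preset configuration rules
from dmx.compressor.modeling import DmxModel

torch_model = AutoModelForCausalLM.from_pretrained("d-matrix/distilgpt2", trust_remote_code=True)

# Wrapping alone is intended to stay functionally equivalent to BASELINE (assumption)
model = DmxModel.from_torch(torch_model)

# Apply the BASIC configuration: MXINT8-64 operands and approximated kernels
model.transform(model.dmx_config, *config_rules.BASIC)
```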
### Usage
Install d-Matrix [Dmx_Compressor](https://github.com/d-matrix-ai/dmx-compressor) first.
```sh
pip install dmx_compressor
```
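
A quick way to confirm the installation is to import the wrapper class used in the example below:

```python
# Minimal installation check: the import path matches the evaluation example below
from dmx.compressor.modeling import DmxModel
print(DmxModel)
```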
The following is an example of evaluating the model with EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). First, install the harness:
```sh
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
```python
from dmx.compressor.modeling import DmxModel
import lm_eval

model_args = "pretrained=d-matrix/distilgpt2,trust_remote_code=True"

# Build the harness's Hugging Face LM wrapper around the model
lm = lm_eval.api.registry.get_model("hf").create_from_arg_string(model_args, {"batch_size": 1})

# Transform the underlying torch model with DMX
lm._model = DmxModel.from_torch(lm._model)

# Run the desired task, e.g. "wikitext"
eval_results = lm_eval.evaluate(lm, lm_eval.tasks.get_task_dict(["wikitext"]))
```
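
`eval_results` is a plain dictionary keyed by task name under `"results"`. A minimal sketch for inspecting it is below; the exact metric names it contains (e.g. `word_perplexity`) depend on the installed lm-evaluation-harness version and are an assumption here.

```python
# Sketch: inspect the harness output; metric key names vary across harness versions
wikitext_metrics = eval_results["results"]["wikitext"]
print(wikitext_metrics)  # typically word_perplexity, byte_perplexity, bits_per_byte
```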