Tags: Text Generation · Transformers · GGUF · English · llama · code · Inference Endpoints


QuantFactory/web-doc-refining-lm-GGUF

This is a quantized version of gair-prox/web-doc-refining-lm, created using llama.cpp.
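A minimal sketch of downloading and running one of these quantized files with llama.cpp's CLI. The GGUF filename below is an assumption for illustration (quantization suffixes vary per repo), so check the repository's file list for the actual names:

```shell
# Download one quantized variant from the Hugging Face Hub
# (filename is illustrative -- pick an actual one from the repo's file list)
huggingface-cli download QuantFactory/web-doc-refining-lm-GGUF \
  web-doc-refining-lm.Q4_K_M.gguf --local-dir .

# Run generation with llama.cpp's CLI on a web document to refine
./llama-cli -m web-doc-refining-lm.Q4_K_M.gguf \
  -p "Your web document text here" -n 256
```

Lower-bit variants (2-bit, 3-bit) trade output quality for smaller files and memory footprint; 8-bit stays closest to the original weights.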

Original Model Card

Web-doc-refining-lm

ArXiv | Code

Web-doc-refining-lm is an adapted 0.3B ProX model, fine-tuned for document-level refining via program generation.

Citation

@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
Downloads last month: 251

Format: GGUF
Model size: 354M params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

