Trained by Jina AI.
## Intro
Jina Reader-LM is a series of models that convert raw HTML into Markdown, which is useful for content-conversion tasks. The models are trained on a curated collection of HTML documents paired with their corresponding Markdown.
## Models

| Name | Context Length | Download |
|---|---|---|
| reader-lm-0.5b | 256K | [🤗 Hugging Face](https://huggingface.co/jinaai/reader-lm-0.5b) |
| reader-lm-1.5b | 256K | [🤗 Hugging Face](https://huggingface.co/jinaai/reader-lm-1.5b) |
## Quick Start

### On Google Colab
The easiest way to experience reader-lm is by running our Colab notebook, where we demonstrate how to use reader-lm-1.5b to convert the HackerNews website into Markdown. The notebook is optimized to run smoothly on Google Colab's free T4 GPU tier. You can also load reader-lm-0.5b or change the URL to any website and explore the output. Note that the input (i.e., the prompt) to the model is the raw HTML; no prefix instruction is required.
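If you want to replicate the notebook's setup yourself, the prompt is simply the page's raw HTML wrapped in a single user message. The sketch below is a minimal illustration, not code from the notebook; it assumes the `requests` library, and the URL is only a placeholder.

```python
# Minimal sketch: fetch a page's raw HTML and build the chat prompt.
# Assumes `pip install requests`; the URL is a placeholder, not from the notebook.
import requests

url = "https://news.ycombinator.com/"  # any website you want to convert
raw_html = requests.get(url, timeout=30).text

# The raw HTML is the entire prompt, with no instruction prefix.
messages = [{"role": "user", "content": raw_html}]
```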
### Local
To use this model, you need to install `transformers`:

```bash
pip install "transformers<=4.43.4"
```
Then, you can use the model as follows:
```python
# pip install "transformers<=4.43.4"
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "jinaai/reader-lm-0.5b"
device = "cuda"  # use "cpu" for CPU-only inference

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Example HTML content; the raw HTML is the entire prompt.
html_content = "<html><body><h1>Hello, world!</h1></body></html>"

# Wrap the HTML in a single user message and apply the chat template.
messages = [{"role": "user", "content": html_content}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)

inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)

# Greedy decoding with a mild repetition penalty.
outputs = model.generate(inputs, max_new_tokens=1024, temperature=0, do_sample=False, repetition_penalty=1.08)
print(tokenizer.decode(outputs[0]))
```
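Note that `tokenizer.decode(outputs[0])` returns the full chat-template text, including the echoed HTML prompt, followed by the generated Markdown. If you only want the Markdown, one option (a minimal sketch, not from the original card) is to decode just the newly generated tokens:

```python
# Decode only the tokens produced after the prompt to get the Markdown alone.
# `inputs` and `outputs` are the tensors from the snippet above.
new_tokens = outputs[0][inputs.shape[1]:]
markdown = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(markdown)
```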
## AWS SageMaker & Azure Marketplace

Reader-LM is also available on AWS SageMaker and the Azure Marketplace.