### Run Huggingface RWKV World Model

#### CPU

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The World-series tokenizer is custom code shipped inside the model repo,
# so the tokenizer must be loaded with trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("BBuf/RWKV-4-World-7B")
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-7B", trust_remote_code=True)

text = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
prompt = f'Question: {text.strip()}\n\nAnswer:'

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=256)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

output:

```shell
Question: In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.

Answer: The dragons in the valley spoke perfect Chinese, according to the scientist who discovered them.
```

#### GPU

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in float16 and move the model to GPU 0.
model = AutoModelForCausalLM.from_pretrained("BBuf/RWKV-4-World-7B", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-7B", trust_remote_code=True)

text = "你叫什么名字?"  # "What is your name?"
prompt = f'Question: {text.strip()}\n\nAnswer:'

# Inputs must be moved to the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

output (the model answers "I am an AI language model and have no name."):

```shell
Question: 你叫什么名字?

Answer: 我是一个人工智能语言模型,没有名字。
```
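Both examples above call `model.generate` with its default (greedy) decoding. For more varied completions you can pass the standard `transformers` sampling arguments. The sketch below is a minimal example assuming the same `BBuf/RWKV-4-World-7B` checkpoint and GPU setup as above; the prompt and the `temperature`/`top_p` values are illustrative, not tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "BBuf/RWKV-4-World-7B", torch_dtype=torch.float16
).to(0)
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-7B", trust_remote_code=True)

prompt = "Question: Write a short story about a dragon.\n\nAnswer:"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(0)

# Enable sampling instead of greedy decoding; temperature and top_p
# here are illustrative values, not recommendations from the model authors.
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,
    temperature=1.0,
    top_p=0.5,
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

As a rough sizing note, the 7B weights alone occupy about 14 GB in float16 (7B parameters × 2 bytes), so the GPU examples need a card with roughly 16 GB of memory or more.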