---
license: apache-2.0
---

Below is the reference code for inference. First, load the tokenizer and the model.

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KLGR123/WEPO-llama-3-8b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("KLGR123/WEPO-llama-3-8b", trust_remote_code=True).to('cuda:0')
```

Then run a test demo with a sample input.

```
messages = [
    {"role": "system", "content": "You are a web navigation intelligence who interacts with webpage environments to achieve human user intent."},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=128,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
)

# Slice off the prompt tokens so only the newly generated reply is decoded.
response = outputs[0][input_ids.shape[-1]:]
output = tokenizer.decode(response, skip_special_tokens=True)
output
```
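For repeated calls, the generation steps above can be wrapped in a small helper. The snippet below is a minimal sketch that reuses the `tokenizer` and `model` loaded earlier; the web-navigation prompt it sends is purely illustrative, since the exact observation/action format expected by WEPO is not specified here.

```
def generate_response(messages, max_new_tokens=128):
    """Apply the chat template, generate, and return only the model's reply."""
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)

    terminators = [
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>")
    ]

    outputs = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.2,
        top_p=0.9,
    )
    # Decode only the tokens generated after the prompt.
    return tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Hypothetical web-navigation style prompt; adapt the observation/intent
# format to whatever your environment actually provides.
messages = [
    {"role": "system", "content": "You are a web navigation intelligence who interacts with webpage environments to achieve human user intent."},
    {"role": "user", "content": "Intent: find the search box.\nObservation: <html><body><input id='q' type='text'/></body></html>\nWhat is the next action?"},
]
print(generate_response(messages))
```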