
Interactive Evolution: A Neural-Symbolic Self-Training Framework for Large Language Models

Paper Link: https://arxiv.org/abs/2406.11736

Code Repo: https://github.com/xufangzhi/ENVISIONS

🔥 News

  • 🔥🔥🔥 We have released the final checkpoints obtained after self-training!

Note

The self-training process is based on the LLaMA2-Chat model series and powered by ENVISIONS. The work is still under review.
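
The released checkpoints can be loaded with the standard Hugging Face transformers workflow. The snippet below is a minimal sketch: the repository ID is a placeholder (not the actual model ID of this card), and the generation settings are illustrative rather than those used in the paper.

```python
# Minimal sketch of loading a self-trained checkpoint with transformers.
# NOTE: "your-org/ENVISIONS-llama2-chat" is a placeholder repository ID;
# replace it with the released checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/ENVISIONS-llama2-chat"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "You are required to navigate the web. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```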

Prompt for Zero-shot Evaluation

You are required to navigate the web. To accomplish the task, use the methods in the Agent class to generate actions, with the following functions.
type(characters: str): Type a string via the keyboard.
click_xpath(xpath: str): Click an HTML element with a valid XPath.
press(key_type: str): Press a key on the keyboard (enter, space, arrowleft, arrowright, backspace, arrowup, arrowdown, command+a, command+c, command+v).
click_option(xpath: str): Click an option HTML element in a list with a valid XPath.
movemouse(xpath: str): Move the mouse cursor on an HTML element with a valid XPath.
The observation is: <observation>
The action is:
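
For reference, the sketch below shows one way the Agent interface listed in the prompt could be stubbed out in Python. The action-string format and the recording behaviour are illustrative assumptions, not part of the released code; only the method names and signatures come from the prompt above.

```python
# A minimal stub of the Agent interface described in the evaluation prompt.
# Generated code can be executed against this stub to extract the action
# sequence as plain strings (an assumption for illustration).
class Agent:
    def __init__(self):
        self.actions = []  # recorded action strings

    def type(self, characters: str) -> None:
        """Type a string via the keyboard."""
        self.actions.append(f'type("{characters}")')

    def click_xpath(self, xpath: str) -> None:
        """Click an HTML element with a valid XPath."""
        self.actions.append(f'click_xpath("{xpath}")')

    def press(self, key_type: str) -> None:
        """Press a key (e.g. enter, space, arrowleft, command+a)."""
        self.actions.append(f'press("{key_type}")')

    def click_option(self, xpath: str) -> None:
        """Click an option HTML element in a list with a valid XPath."""
        self.actions.append(f'click_option("{xpath}")')

    def movemouse(self, xpath: str) -> None:
        """Move the mouse cursor onto an HTML element with a valid XPath."""
        self.actions.append(f'movemouse("{xpath}")')


# Example usage: run model-generated calls against the stub.
agent = Agent()
agent.click_xpath('//input[@id="search"]')
agent.type("hello world")
agent.press("enter")
print(agent.actions)
```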

Citation

If you find our work helpful, please cite the paper.

@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models}, 
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}