DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs

Model Information

This model is a fine-tuned semantic-parsing LLM agent for question answering over knowledge graphs (KGQA). We fine-tune Llama-2-7B on our curated reasoning trajectories in the AgentBench format: https://huggingface.co/datasets/UKPLab/dara-agentbench.

Model Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UKPLab/agentbench-7b")
model = AutoModelForCausalLM.from_pretrained("UKPLab/agentbench-7b", torch_dtype=torch.float16, device_map="auto", cache_dir="cache")

For more information, see the repository: https://github.com/UKPLab/acl2024-DARA

Hyperparameters

  • Learning rate: 2e-5
  • Batch size: 4
  • Training epochs: 10
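For reference, the hyperparameters above can be collected into a plain configuration dict. The key names follow common transformers/TRL conventions and the base-model identifier is an assumption; this card does not specify the exact training script.

import torch  # not required for the config itself; shown for consistency with the usage snippet

# Illustrative fine-tuning configuration mirroring the card's hyperparameters.
# Key names and the base-model id are assumptions, not taken from this card.
training_config = {
    "model_name_or_path": "meta-llama/Llama-2-7b-hf",  # assumed HF id for Llama-2-7B
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 4,
    "num_train_epochs": 10,
}

Such a dict maps directly onto transformers TrainingArguments fields if you reproduce the fine-tuning yourself.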