
DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs

Model Information

This model is a fine-tuned semantic parsing LLM agent for question answering over knowledge graphs (KGQA). We fine-tune Llama-2-13B on our curated reasoning trajectories: https://huggingface.co/datasets/UKPLab/dara.

Model Usage

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "UKPLab/dara-llama-2-13b",
    torch_dtype=torch.float16,
    device_map="auto",
    cache_dir="cache",
)
```

For more information, please check the repository https://github.com/UKPLab/acl2024-DARA.
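Once loaded, the model can be queried like any causal LM. A minimal generation sketch follows; the prompt text and decoding settings here are illustrative assumptions, not the agent's actual prompt format (see the repository for the full agent loop):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UKPLab/dara-llama-2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative KGQA-style question; the real agent wraps questions in its
# own decomposition/alignment prompt template.
prompt = "Question: Which team does the player who won the 2010 FIFA World Cup Golden Ball play for?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the 13B checkpoint in fp16 needs roughly 26 GB of accelerator memory; `device_map="auto"` will shard or offload layers if a single device is too small.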

Hyperparameters

- Learning rate: 2e-5
- Batch size: 4
- Training epochs: 10
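As a rough guide, these hyperparameters could map onto a Hugging Face `TrainingArguments` configuration as sketched below. This is a hypothetical fragment for orientation only; the actual training script lives in the repository, and `output_dir` and the batch-size/accumulation split are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="dara-llama-2-13b",      # hypothetical output path
    learning_rate=2e-5,                 # as reported above
    per_device_train_batch_size=4,      # assumes batch size 4 = per-device batch
    num_train_epochs=10,                # as reported above
)
```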

