Prompt-Time Symbolic Knowledge Capture with Large Language Models
Abstract
Augmenting large language models (LLMs) with user-specific knowledge is crucial for real-world applications, such as personal AI assistants. However, LLMs inherently lack mechanisms for prompt-driven knowledge capture. This paper investigates using existing LLM capabilities to enable prompt-driven knowledge capture, with a particular emphasis on knowledge graphs. We address this challenge by focusing on prompt-to-triple (P2T) generation. We explore three methods: zero-shot prompting, few-shot prompting, and fine-tuning, and assess their performance on a specialized synthetic dataset. Our code and datasets are publicly available at https://github.com/HaltiaAI/paper-PTSKC.
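As a rough illustration of the P2T idea, the sketch below builds a few-shot prompt that asks an LLM to emit (subject, predicate, object) triples from a user statement and parses the completion back into tuples. The prompt wording, example triples, and the `llm_complete` helper are assumptions made for illustration; they are not taken from the paper or its released code.

```python
# Minimal sketch of prompt-to-triple (P2T) generation via few-shot prompting.
# The prompt format, example triples, and `llm_complete` are illustrative assumptions.

def build_p2t_prompt(user_utterance: str) -> str:
    """Assemble a few-shot prompt asking the LLM to emit (subject, predicate, object) triples."""
    few_shot_examples = [
        ("My sister Anna lives in Berlin.",
         "(Anna, sibling_of, user); (Anna, lives_in, Berlin)"),
        ("I adopted a cat named Milo last year.",
         "(user, owns_pet, Milo); (Milo, is_a, cat)"),
    ]
    lines = [
        "Extract knowledge-graph triples from the user's statement.",
        "Answer only with triples in the form (subject, predicate, object).",
        "",
    ]
    for statement, triples in few_shot_examples:
        lines.append(f"Statement: {statement}")
        lines.append(f"Triples: {triples}")
        lines.append("")
    lines.append(f"Statement: {user_utterance}")
    lines.append("Triples:")
    return "\n".join(lines)


def parse_triples(completion: str) -> list[tuple[str, ...]]:
    """Parse the LLM completion back into (subject, predicate, object) tuples."""
    triples = []
    for chunk in completion.split(";"):
        chunk = chunk.strip().strip("()")
        parts = [p.strip() for p in chunk.split(",")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples


# `llm_complete` is a hypothetical stand-in for whatever inference call is used:
# prompt = build_p2t_prompt("My brother Tom works at a bakery in Oslo.")
# triples = parse_triples(llm_complete(prompt))
```

A zero-shot variant would simply omit the worked examples, while the fine-tuning approach trains the model on statement-to-triple pairs so no in-context examples are needed at all.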