Agent4S-BioKG
A Knowledge Graph Checking Benchmark of AI Agents for Biomedical Science.
Introduction
Pursuing artificial intelligence for biomedical science, a.k.a. AI Scientist, draws increasing attention, where one common approach is to build a copilot agent driven by Large Language Models (LLMs). However, to evaluate such systems, people either rely on direct Question-Answering (QA) against the LLM itself, or evaluate them in a biomedical experimental manner. How to precisely benchmark biomedical agents from an AI Scientist perspective remains largely unexplored. To this end, we draw inspiration from one of the most important abilities of scientists, understanding the literature, and introduce BioKGBench.
In contrast to traditional evaluation benchmarks that only focus on factual QA, where LLMs are known to suffer from hallucination issues, we first disentangle Understanding Literature into two atomic abilities: i) understanding the unstructured text from research papers by performing scientific claim verification, and ii) the ability to interact with structured knowledge via Knowledge-Graph Question-Answering (KGQA) as a form of literature grounding. We then formulate a novel agent task, dubbed KGCheck, using KGQA and domain-based Retrieval-Augmented Generation (RAG) to identify factual errors in existing large-scale knowledge graph databases. We collect over two thousand data instances for the two atomic tasks and 225 high-quality annotated instances for the agent task. Surprisingly, we discover that state-of-the-art agents, both for daily scenarios and biomedical ones, either fail or show inferior performance on our benchmark. We then introduce a simple yet effective baseline, dubbed BKGAgent. On the widely used popular dataset, we discover over 90 factual errors, which demonstrates the effectiveness of our approach and yields substantial value for both the research community and practitioners in the biomedical domain.
Overview
Dataset (needs to be downloaded from Hugging Face; a download sketch follows the list below)
- bioKG: The knowledge graph used in the dataset.
- KGCheck: Given a knowledge graph and a scientific claim, the agent needs to check whether the claim is supported by the knowledge graph. The agent can interact with the knowledge graph by asking questions and receiving answers.
  - Dev: 20 samples
  - Test: 205 samples
  - Corpus: 51 samples
- KGQA: Given a knowledge graph and a question, the agent needs to answer the question based on the knowledge graph.
  - Dev: 60 samples
  - Test: 638 samples
- SCV: Given a scientific claim and a research paper, the agent needs to check whether the claim is supported by the research paper.
  - Dev: 120 samples
  - Test: 1265 samples
  - Corpus: 5664 samples
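
The dataset files need to be fetched from the Hugging Face Hub before use. Below is a minimal download sketch using `huggingface_hub.snapshot_download`; the repository id shown is a placeholder and should be replaced with this dataset's actual repo id.

```python
# Minimal download sketch (assumes the `huggingface_hub` package is installed:
# `pip install huggingface_hub`).
from huggingface_hub import snapshot_download

REPO_ID = "your-org/Agent4S-BioKG"  # placeholder: substitute the real repo id

# Download the whole dataset repository (bioKG, KGCheck, KGQA, SCV splits)
# into a local directory.
local_dir = snapshot_download(
    repo_id=REPO_ID,
    repo_type="dataset",        # this is a dataset repo, not a model repo
    local_dir="Agent4S-BioKG",  # where to place the downloaded files
)
print(f"Dataset files downloaded to: {local_dir}")
```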
Citation
Contact
For adding new features, looking for help, or reporting bugs associated with BioKGBench, please open a GitHub issue or pull request with the tag `new features`, `help wanted`, or `enhancement`. Feel free to contact us through email if you have any questions.
- Xinna Lin (linxinna@westlake.edu.cn), Westlake University
- Siqi Ma (masiqi@westlake.edu.cn), Westlake University
- Junjie Shan (shanjunjie@westlake.edu.cn), Westlake University
- Xiaojing Zhang (zhangxiaojing@westlake.edu.cn), Westlake University