Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering Paper • 2410.15999 • Published Oct 21, 2024 • 19
Adapting Neural Link Predictors for Data-Efficient Complex Query Answering Paper • 2301.12313 • Published Jan 29, 2023
Attention Is All You Need But You Don't Need All Of It For Inference of Large Language Models Paper • 2407.15516 • Published Jul 22, 2024 • 1
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations Paper • 2410.18860 • Published Oct 24, 2024 • 9
A Simple and Effective $L_2$ Norm-Based Strategy for KV Cache Compression Paper • 2406.11430 • Published Jun 17, 2024 • 22
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them Paper • 2102.07033 • Published Feb 13, 2021
No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models Paper • 2307.06440 • Published Jul 12, 2023 • 3
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations Paper • 2305.13235 • Published May 22, 2023 • 1
REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization Paper • 2310.14418 • Published Oct 22, 2023 • 1
Using Natural Language Explanations to Improve Robustness of In-context Learning for Natural Language Inference Paper • 2311.07556 • Published Nov 13, 2023
XQA-DST: Multi-Domain and Multi-Lingual Dialogue State Tracking Paper • 2204.05895 • Published Apr 12, 2022
Analysing The Impact of Sequence Composition on Language Model Pre-Training Paper • 2402.13991 • Published Feb 21, 2024 • 1
An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks Paper • 2210.16773 • Published Oct 30, 2022 • 1
Large language models surpass human experts in predicting neuroscience results Paper • 2403.03230 • Published Mar 4, 2024 • 4
Adaptive Computation Modules: Granular Conditional Computation For Efficient Inference Paper • 2312.10193 • Published Dec 15, 2023 • 1
The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models Paper • 2404.05904 • Published Apr 8, 2024 • 8