Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression Paper • 2403.15447 • Published Mar 18 • 16
GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations Paper • 2402.12348 • Published Feb 19 • 1
Good at captioning, bad at counting: Benchmarking GPT-4V on Earth observation data Paper • 2401.17600 • Published Jan 31
DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer Paper • 2312.03724 • Published Nov 27, 2023 • 1
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark Paper • 2402.11592 • Published Feb 18 • 2
Data-Free Knowledge Distillation for Heterogeneous Federated Learning Paper • 2105.10056 • Published May 20, 2021
Understanding Deep Gradient Leakage via Inversion Influence Functions Paper • 2309.13016 • Published Sep 22, 2023
Revisiting Data-Free Knowledge Distillation with Poisoned Teachers Paper • 2306.02368 • Published Jun 4, 2023
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models Paper • 2306.11698 • Published Jun 20, 2023 • 12