Here is my selection of papers for today (12 Jan)
https://huggingface.co/papers
PALP: Prompt Aligned Personalization of Text-to-Image Models
Object-Centric Diffusion for Efficient Video Editing
TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
Diffusion Priors for Dynamic View Synthesis from Monocular Videos
Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation
TOFU: A Task of Fictitious Unlearning for LLMs
Patchscope: A Unifying Framework for Inspecting Hidden Representations of Language Models
Secrets of RLHF in Large Language Models Part II: Reward Modeling
LEGO: Language Enhanced Multi-modal Grounding Model
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages
A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism
Towards Conversational Diagnostic AI
Transformers are Multi-State RNNs
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Distilling Vision-Language Models on Millions of Videos
Efficient LLM inference solution on Intel GPU
TrustLLM: Trustworthiness in Large Language Models