librarian-bot committed
Commit 37be826
Parent: 8292eb2

Scheduled Commit

data/2411.06559.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.06559", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [One STEP at a time: Language Agents are Stepwise Planners](https://huggingface.co/papers/2411.08432) (2024)\n* [Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation](https://huggingface.co/papers/2410.13232) (2024)\n* [WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents](https://huggingface.co/papers/2410.07484) (2024)\n* [ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning](https://huggingface.co/papers/2410.02052) (2024)\n* [GUI Agents with Foundation Models: A Comprehensive Survey](https://huggingface.co/papers/2411.04890) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.10867.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.10867", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Can LVLMs Describe Videos like Humans? A Five-in-One Video Annotations Benchmark for Better Human-Machine Comparison](https://huggingface.co/papers/2410.15270) (2024)\n* [The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio](https://huggingface.co/papers/2410.12787) (2024)\n* [EventHallusion: Diagnosing Event Hallucinations in Video LLMs](https://huggingface.co/papers/2409.16597) (2024)\n* [AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large Language Models](https://huggingface.co/papers/2410.18325) (2024)\n* [LongHalQA: Long-Context Hallucination Evaluation for MultiModal Large Language Models](https://huggingface.co/papers/2410.09962) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.10913.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.10913", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [3DIS: Depth-Driven Decoupled Instance Synthesis for Text-to-Image Generation](https://huggingface.co/papers/2410.12669) (2024)\n* [DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models](https://huggingface.co/papers/2410.11208) (2024)\n* [Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation](https://huggingface.co/papers/2410.00447) (2024)\n* [HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation](https://huggingface.co/papers/2410.14324) (2024)\n* [MCGM: Mask Conditional Text-to-Image Generative Model](https://huggingface.co/papers/2410.00483) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.10958.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.10958", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration](https://huggingface.co/papers/2410.02367) (2024)\n* [Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference](https://huggingface.co/papers/2409.20361) (2024)\n* [INT-FlashAttention: Enabling Flash Attention for INT8 Quantization](https://huggingface.co/papers/2409.16997) (2024)\n* [COMET: Towards Partical W4A4KV4 LLMs Serving](https://huggingface.co/papers/2410.12168) (2024)\n* [PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs](https://huggingface.co/papers/2410.05265) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.12925.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.12925", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [A Hitchhiker's Guide to Scaling Law Estimation](https://huggingface.co/papers/2410.11840) (2024)\n* [Scaling Optimal LR Across Token Horizons](https://huggingface.co/papers/2409.19913) (2024)\n* [Scaling Laws for Precision](https://huggingface.co/papers/2411.04330) (2024)\n* [Scaling Laws for Predicting Downstream Performance in LLMs](https://huggingface.co/papers/2410.08527) (2024)\n* [Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models](https://huggingface.co/papers/2410.05661) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.13025.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.13025", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Advancing Medical Radiograph Representation Learning: A Hybrid Pre-training Paradigm with Multilevel Semantic Granularity](https://huggingface.co/papers/2410.00448) (2024)\n* [Expert-level vision-language foundation model for real-world radiology and comprehensive evaluation](https://huggingface.co/papers/2409.16183) (2024)\n* [Anatomy-Guided Radiology Report Generation with Pathology-Aware Regional Prompts](https://huggingface.co/papers/2411.10789) (2024)\n* [MCL: Multi-view Enhanced Contrastive Learning for Chest X-ray Report Generation](https://huggingface.co/papers/2411.10224) (2024)\n* [R2Gen-Mamba: A Selective State Space Model for Radiology Report Generation](https://huggingface.co/papers/2410.18135) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.13281.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.13281", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models](https://huggingface.co/papers/2410.23266) (2024)\n* [LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models](https://huggingface.co/papers/2410.09732) (2024)\n* [VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?](https://huggingface.co/papers/2411.10979) (2024)\n* [Q-Bench-Video: Benchmarking the Video Quality Understanding of LMMs](https://huggingface.co/papers/2409.20063) (2024)\n* [VCBench: A Controllable Benchmark for Symbolic and Abstract Challenges in Video Cognition](https://huggingface.co/papers/2411.09105) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.13476.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.13476", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Why Does the Effective Context Length of LLMs Fall Short?](https://huggingface.co/papers/2410.18745) (2024)\n* [On the token distance modeling ability of higher RoPE attention dimension](https://huggingface.co/papers/2410.08703) (2024)\n* [DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads](https://huggingface.co/papers/2410.10819) (2024)\n* [A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts](https://huggingface.co/papers/2410.01485) (2024)\n* [How to Train Long-Context Language Models (Effectively)](https://huggingface.co/papers/2410.02660) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2411.13503.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2411.13503", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [The Dawn of Video Generation: Preliminary Explorations with SORA-like Models](https://huggingface.co/papers/2410.05227) (2024)\n* [A Survey of AI-Generated Video Evaluation](https://huggingface.co/papers/2410.19884) (2024)\n* [WorldSimBench: Towards Video Generation Models as World Simulators](https://huggingface.co/papers/2410.18072) (2024)\n* [VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?](https://huggingface.co/papers/2411.10979) (2024)\n* [VideoAutoArena: An Automated Arena for Evaluating Large Multimodal Models in Video Analysis through User Simulation](https://huggingface.co/papers/2411.13281) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
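Each file added in this commit holds a single-line JSON object with two fields, `paper_url` and `comment` (the comment is markdown). A minimal sketch for reading one of these records with Python's standard `json` module; the path is one of the files from the diff above:

```python
import json

# Minimal sketch: load one of the records added in this commit.
# The path matches a file shown in the diff above.
with open("data/2411.06559.json", encoding="utf-8") as f:
    record = json.load(f)

# Each record holds the paper's Hugging Face URL and the bot's markdown comment.
print(record["paper_url"])
print(record["comment"])
```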