# Introduction

As the video generation counterpart of ImagenHub, VideoGenHub is a centralized framework that standardizes the evaluation of conditional video generation models by curating unified datasets, building an inference library, and maintaining a benchmark aligned with real-life applications. This is a continuing effort to publish a leaderboard that helps everyone track progress in the field.

## Why VideoGenHub?

### What sets #VideoGenHub apart?

1) Unified Datasets: We have meticulously curated evaluation datasets for two video generation tasks, ensuring comprehensive testing of models across diverse scenarios.
2) Inference Library: Say goodbye to inconsistent comparisons. Our unified inference pipeline ensures that every model is evaluated on a level playing field with full transparency (see the sketch at the end of this section).
3) Human-centric Evaluation: Beyond traditional metrics, we have introduced human evaluation scores that measure Semantic Consistency and Perceptual Quality. This aligns evaluations more closely with human perception and improves on existing human-preference evaluation methods.

### Why should you use #VideoGenHub?

1) Streamlined Research: We have taken the guesswork out of research by defining clear tasks and providing curated datasets.
2) Objective Evaluation: Our framework ensures a bias-free, standardized evaluation, giving a true measure of a model's capabilities.
3) Experiment Transparency: By standardizing the human-evaluation dataset, human evaluation results become far more convincing and reproducible.
4) Collaborative Spirit: We believe in the power of community. Our platform is designed to foster collaboration, idea exchange, and innovation in the realm of video generation.
5) Comprehensive Functionality: From common GenAI metrics to visualization tools, we've got you covered. Stay tuned for our upcoming Amazon Mechanical Turk templates!
6) Engineering Excellence: We emphasize good engineering practice: documentation, type hints, and (coming soon!) extensive code coverage.
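To give a feel for the unified inference pipeline mentioned above, here is a minimal sketch of loading a model and generating a video from a prompt. The entry point `videogen_hub.load`, the method name `infer_one_video`, and the model name `"VideoCrafter2"` are illustrative assumptions; consult the library documentation for the actual API.

```python
# Minimal sketch of the unified inference pipeline (illustrative only).
# `load`, `infer_one_video`, and the model name below are assumed names;
# check the VideoGenHub documentation for the real entry points.
import videogen_hub

# Load a supported text-to-video model by name.
model = videogen_hub.load("VideoCrafter2")

# Run inference on a single prompt. The result is assumed to be an
# array of video frames that downstream evaluation code can consume.
video = model.infer_one_video(
    prompt="A red panda eating bamboo in a snowy forest."
)
```

Because every model is wrapped behind the same interface, swapping `"VideoCrafter2"` for another supported model name is, under these assumptions, the only change needed to benchmark a different system on identical inputs.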