---
title: CogVideoX-2B
emoji: 🎥
colorFrom: yellow
colorTo: green
sdk: gradio
sdk_version: 4.41.0
suggested_hardware: a10g-large
suggested_storage: large
app_port: 7860
app_file: app.py
models:
- THUDM/CogVideoX-2b
tags:
- cogvideox
- video-generation
- thudm
short_description: Text-to-Video
disable_embedding: false
---
# CogVideoX HF Space
## How to run this space
CogVideoX itself does not rely on any external API models. However, CogVideoX was trained with relatively long prompts, so to let users get good results from shorter prompts, this Space can call an LLM to expand and refine the prompt before generation. This step is not mandatory, but we recommend using an LLM to enhance your prompts.
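The refinement step only needs an OpenAI-compatible chat endpoint, selected via the `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables shown below. As a rough, hypothetical sketch (not the exact code in `app.py`), it could look like the following; the function name and system prompt here are illustrative assumptions:

```python
import os

from openai import OpenAI  # assumes the official `openai` Python package (v1+)

# Uses the same environment variables as the commands below.
# When OPENAI_BASE_URL is unset, the client targets the OpenAI API by default.
client = OpenAI(
    base_url=os.getenv("OPENAI_BASE_URL"),
    api_key=os.getenv("OPENAI_API_KEY"),
)


def refine_prompt(short_prompt: str) -> str:
    """Expand a short user prompt into a longer, more detailed video prompt."""
    response = client.chat.completions.create(
        model="glm-4-0520",  # or "gpt-4o" when using the OpenAI API (see below)
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's prompt into a detailed, descriptive "
                           "prompt for a text-to-video model.",
            },
            {"role": "user", "content": short_prompt},
        ],
    )
    return response.choices[0].message.content
```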
### Using the GLM-4 Model
```shell
OPENAI_BASE_URL=https://open.bigmodel.cn/api/paas/v4/ OPENAI_API_KEY="ZHIPUAI_API_KEY" python app.py
```
### Using the OpenAI GPT-4o Model
```shell
OPENAI_API_KEY="OPENAI_API_KEY" python app.py
```
Then change this line in `app.py`:
```python
model="glm-4-0520"  # change this to "gpt-4o" when using the OpenAI API
```
### Not using an LLM to refine prompts
```shell
python app.py
```