---
license: mit
library_name: transformers
pipeline_tag: unconditional-image-generation
datasets:
- commaai/commavq
---
# commaVQ - GPT2M
A GPT2M model trained on a larger version of the commaVQ dataset.
This model generates driving video unconditionally, by sampling sequences of VQ tokens that a decoder can turn back into frames.
Below is an example of 5 seconds of imagined video generated with GPT2M.
<video title="imagined" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/f6f7699b-b6cb-4f9c-80c9-8e00d75fbfae" type="video/mp4">
</video> |
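
Unconditional generation with a GPT-2 style model boils down to autoregressively sampling token ids from a start token. Below is a minimal sketch of that loop; to keep it runnable without downloading weights, it builds a tiny randomly initialized GPT-2 in place of this checkpoint (the `from_pretrained` repo id in the comment is a placeholder, and the vocabulary size, start-token id, and tokens-per-frame are assumptions, not taken from this card).

```python
# Sketch: unconditional sampling of VQ token ids with a GPT-2 model.
# A tiny random model stands in for the real checkpoint so the example
# runs offline; swap in from_pretrained("<repo-id>") for real use.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Assumed setup: 1024 VQ codes plus one BOS/frame-separator token.
VOCAB = 1025
BOS_ID = 1024           # assumption, not documented in this card
TOKENS_PER_FRAME = 128  # assumption, not documented in this card

config = GPT2Config(
    vocab_size=VOCAB, n_positions=512, n_embd=64, n_layer=2, n_head=2
)
model = GPT2LMHeadModel(config).eval()
# Real use (placeholder id): GPT2LMHeadModel.from_pretrained("<repo-id>")

prompt = torch.tensor([[BOS_ID]])
with torch.no_grad():
    tokens = model.generate(
        prompt,
        do_sample=True,          # sample rather than greedy decode
        top_k=50,
        max_new_tokens=TOKENS_PER_FRAME,
        pad_token_id=BOS_ID,
    )

# One BOS plus the newly sampled tokens for one frame's worth of codes.
print(tokens.shape)
```

The sampled ids would then be fed to the commaVQ decoder to reconstruct video frames; that decoding step is outside the scope of this sketch.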