We are Genmo—building frontier models for video generation to unlock the right brain of artificial general intelligence.
Imagine AI that can simulate anything—possible or impossible.
Our video generation models act as world simulators, driving breakthroughs in embodied AI by enabling infinite explorations in synthetic realities. Video is the ultimate medium for human-AI interaction, seamlessly integrating text, audio, images, and 3D into one unified experience.
Our team includes the original creators of DDPM, DreamFusion, and Emu Video.
Mochi 1, our first public open-source release, is licensed under Apache 2.0 for both individual and commercial use.
Our principles:
1. State-of-the-Art Performance: Reality sets a very high standard for video generation. We are going to close the gap.
2. Open Models: Open ecosystems win over closed. Open-source is the best long-term business model for video generation.
3. Community-first: By releasing Mochi 1 under the Apache 2.0 license, the community can build on it and share their fine-tunes.
Video is the language of the future. Help us write the script.
Investors and advisors
CEO of Typeface
CEO of Replit
Investor and author
BAG, long-time executive at Google
VP of AI at Replit
UC Berkeley, Databricks Co-Founder
UC Berkeley, Deep RL pioneer, Covariant AI
UC Berkeley, SysML pioneer
Careers
Work with us to build the best open video models.