Lucid
Winter 2025 · Active
interactive video models
Founded: 2024
Team size: 3
About
We are building universe simulations powered by interactive video models.
We train video models that simulate hyper-realistic environments with immersive control, replacing hard-coded game or physics engines with dynamic neural networks.
We built the fastest action-conditioned diffusion video model (running at 20+ fps on a 4090 gaming GPU) to simulate Minecraft. It is 5x faster than other Minecraft world models and was trained with 100x fewer resources. Our key insight was to rely on aggressive compression in our tokenizer (128x versus the traditional 8x); because attention scales quadratically with the number of tokens, our model runs dramatically faster.
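The quadratic payoff from aggressive compression can be sketched with back-of-envelope arithmetic. This is a minimal illustration, not our actual pipeline: it assumes (our assumption, not stated above) that the 8x and 128x figures are per-side spatial downsampling factors, and uses an illustrative 512x512 frame.

```python
# Back-of-envelope: why aggressive tokenizer compression speeds up attention.
# Assumptions (illustrative only): compression factors are per-side spatial
# downsampling, and each frame is 512x512 pixels.

def num_tokens(side: int, downsample: int) -> int:
    """Tokens per frame after downsampling each spatial side by `downsample`."""
    return (side // downsample) ** 2

def attention_cost(tokens: int) -> int:
    """Self-attention compares every token pair, so cost grows quadratically."""
    return tokens ** 2

side = 512
tokens_8x = num_tokens(side, 8)      # 64 * 64 = 4096 tokens
tokens_128x = num_tokens(side, 128)  # 4 * 4   = 16 tokens

speedup = attention_cost(tokens_8x) / attention_cost(tokens_128x)
print(tokens_8x, tokens_128x, speedup)  # 4096 16 65536.0
```

Under these toy assumptions, 16x fewer tokens per side means 256x fewer tokens per frame, and the quadratic attention cost drops by a factor of 256² = 65,536.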
Now we’re training a hyper-realistic world model!
Founders
Alberto Hojel
Founder
I studied computer science at UC Berkeley and did research at BAIR under Trevor Darrell. I worked on ML weather forecasting at Rainmaker. Originally from Mexico City. I was on track to pursue a PhD, but while presenting my paper at ECCV I realized I wanted to build something greater. Passionate about lucid dreaming and immersive content.
Rami Seid
Co-Founder
A high school dropout, I got my first internship as an MLE at a robotics lab and went on to become the CTO of a govtech contracting company. After that, I cofounded my own telecommunications company while doing ML work on the side. Now I'm building a universe simulator at Lucid.