LTX-2 UNLEASHED AI VIDEO
By Wes Roth
Watch on YouTube (21:01)
Overview
This video introduces LTX-2, Lightricks' fully open-source AI video generation model and a major step forward for local video generation. Unlike typical "open" models that release only weights, LTX-2 ships with full training code, model weights, and optimizations for consumer hardware. The host demonstrates how to install and run the model locally in ComfyUI, compares the full and distilled versions, and showcases features including text-to-video, image-to-video, and camera-control LoRAs.
Key Takeaways
- LTX-2 is truly open-source, providing not just model weights but also full training code, benchmarks, and LoRAs, making it adaptable for developers and studios
- The distilled model variant enables practical local video generation on consumer hardware, with significantly faster render times (53 seconds vs 2 minutes 27 seconds) while maintaining good quality
- LTX-2 uses a two-stage rendering process: generating a lower-resolution base video first, then upscaling it to final resolution, which requires applying LoRAs to both stages
- The model supports multiple workflows including text-to-video, image-to-video, video-to-video, and audio-conditioned generation, with resolutions up to 4K
- Camera control LoRAs enable precise control over camera movements (dolly, pan, etc.) and call for a specific prompting technique: describe where the movement ends and what off-screen scene elements the camera will reveal
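The two-stage flow above can be sketched in code. This is a minimal illustration with hypothetical stubs, not the real ComfyUI or Lightricks API; the stage names, resolutions, and LoRA identifier are invented for the example. The point it demonstrates is the one from the takeaways: the same LoRA must be attached to both the base-generation stage and the upscale stage.

```python
# Hypothetical sketch of LTX-2's two-stage render flow (NOT the real API):
# a low-resolution base pass, then an upscale pass to final resolution,
# with the same LoRA applied to BOTH stages.

from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    resolution: tuple          # (width, height) -- example values only
    loras: list = field(default_factory=list)

    def apply_lora(self, lora_name: str, strength: float = 1.0):
        # In the real workflow this would patch the stage's model weights;
        # here we simply record the attachment so the flow is visible.
        self.loras.append((lora_name, strength))

def render(prompt: str, camera_lora: str):
    base = Stage("base", (768, 512))          # low-res first pass
    upscale = Stage("upscale", (1920, 1080))  # second pass to final res

    # Attaching the LoRA to only one stage is the common mistake:
    # per the video, it must be applied to both passes.
    for stage in (base, upscale):
        stage.apply_lora(camera_lora)

    return [(s.name, s.resolution, s.loras) for s in (base, upscale)]

# Example prompt following the camera-control technique: name the
# movement and describe its destination.
steps = render("dolly in toward the lighthouse door", "ltx2_dolly_lora")
```

Running `render` returns both stages with the LoRA recorded on each, mirroring the requirement that camera-control LoRAs be wired into both the base and upscale passes of the workflow.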