Luma AI launches Ray3, a cinematic GenAI video model with built-in reasoning for precise, high-quality creative workflows.

Luma AI has unveiled Ray3, its most advanced generative video model, designed for professional-grade cinematic production. Unlike earlier models, Ray3 introduces chain-of-thought reasoning, enabling it to plan scenes, refine drafts, and ensure outputs align with artistic direction. This marks a shift from one-shot, trial-and-error generation to structured, intelligent creativity.

The model mimics a filmmaker’s workflow, sketching rough storyboards before producing polished video. Users can collaborate during drafting, annotating frames or adding text instructions to guide scene development. Ray3 understands both written and visual prompts, allowing it to follow complex, multi-step ideation with greater precision than prior models.

Ray3 also raises the technical bar. It generates true high dynamic range (HDR) video as EXR files in the ACES2065-1 color space, at 10-, 12-, and 16-bit depths. It can also convert standard dynamic range (SDR) footage into HDR, expanding exposure range without losing detail. This gives advertisers and filmmakers professional control over color, lighting, and mood. Consistency across stitched scenes has been significantly improved, supporting longer narratives and richer storytelling.
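To make the SDR-to-HDR idea concrete, the sketch below shows the simplest possible expansion: decoding gamma-encoded 8-bit SDR pixels to linear light and mapping SDR white to a reference luminance in nits, the kind of linear floating-point data an EXR file stores. This is a hypothetical illustration of the general technique, not Luma's actual pipeline, which has not been published; the function name and parameters are assumptions for the example.

```python
import numpy as np

def sdr_to_hdr_linear(sdr_8bit, gamma=2.2, sdr_white_nits=100.0):
    """Illustrative SDR -> linear HDR expansion (not Ray3's method).

    Decodes 8-bit gamma-encoded SDR values to linear luminance,
    mapping SDR reference white (255) to `sdr_white_nits`.
    """
    normalized = sdr_8bit.astype(np.float32) / 255.0
    linear = normalized ** gamma      # undo the display gamma curve
    return linear * sdr_white_nits    # scale into absolute nits

# Black, mid-gray, and white SDR pixels:
frame = np.array([[0, 128, 255]], dtype=np.uint8)
nits = sdr_to_hdr_linear(frame)
# White lands at 100 nits; mid-gray falls well below, reflecting
# the nonlinear gamma encoding.
```

Real inverse tone mapping is far more sophisticated, reconstructing highlight detail above SDR white rather than merely rescaling, but the key point stands: HDR workflows operate on linear-light floating-point values, which is why formats like ACES2065-1 EXR matter for downstream grading.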

Adoption is already underway. Adobe is integrating Ray3 into its Firefly creative suite, while Dentsu Digital plans to use it for brand personalization in Japan. Other agencies, including Monks and StrawberryFrog, are deploying Ray3 to scale ad production. Saudi AI firm Humain is also embedding it into enterprise services. By combining speed, fidelity, and reasoning, Ray3 delivers both creative freedom and safeguards for cultural and ethical alignment.