
LTX-2 Distilled — Free AI Text-to-Video Generator

LTX-2 Distilled is the first open-source model that generates video with audio from a plain text description. Type what you want to see, hit generate, and get back a short clip, complete with sound, in seconds. It solves a real problem: you no longer need expensive software, a video editor, or a production team to produce original video content from scratch.

The model handles both text-to-video and image-to-video generation. Feed it a still photo and it brings the scene to life, with an image strength slider that controls how closely the output follows your original. You can pick from six aspect ratios, including vertical 9:16 for Reels and Stories, and adjust the number of frames to get exactly the clip length you need. Prompt enhancement is built in too, so even a rough description can produce a polished result.

Whether you are building a social post, prototyping a video ad, or just experimenting with an idea, LTX-2 Distilled fits directly into a browser-based workflow with zero setup. Try it now: type a prompt and see a video appear in front of you.
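The frame-count control above maps directly to clip length: duration in seconds is the frame count divided by the output frame rate. A minimal sketch, assuming a 24 fps output for illustration (the model's actual frame rate may differ):

```python
def clip_duration_seconds(num_frames: int, fps: int = 24) -> float:
    """Return the clip length implied by a frame count.

    fps=24 is an assumption for illustration; the model's actual
    output frame rate may differ.
    """
    if num_frames <= 0 or fps <= 0:
        raise ValueError("num_frames and fps must be positive")
    return num_frames / fps

# A 120-frame request at 24 fps would yield a 5-second clip.
print(clip_duration_seconds(120))  # 5.0
```

So to hit a target length, work backwards: multiply the desired seconds by the frame rate to pick the frame count.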

Official · Lightricks · 14.2k runs · 2026-01-07 · Commercial Use

Table of contents
  • Overview
  • How It Works
  • Key Features
  • Frequently Asked Questions
  • Credit Cost
  • Use Cases

Overview

ltx-2-distilled is an open-source text-to-video generation model built by Lightricks that produces synchronized audio-video clips directly from written prompts. It solves one of the most persistent friction points in AI content creation: the need to separately generate video and audio, then stitch them together in post-production. On Picasso IA, you can type a scene description, hit generate, and receive a fully composed clip where the motion and sound are produced together as a single output. Think of a filmmaker who wants a quick cinematic concept reel, or a content creator who needs a social media clip with ambient sound, all without touching a timeline editor.

How It Works

  • Write your prompt: Describe the scene, mood, action, or subject in plain text. The more specific your language, the more accurately the model reflects your intent.
  • Set your parameters: Adjust available controls such as duration, aspect ratio, or inference steps to shape the output before generation begins.
  • The model processes your input: ltx-2-distilled runs a distilled version of the LTX-2 architecture, generating synchronized video frames and audio simultaneously rather than in separate passes.
  • Receive your clip: Within seconds to a few minutes, a video file is returned with motion and sound already combined, ready to download or iterate on.
  • Refine if needed: Change a word in your prompt, tweak a setting, and regenerate instantly. No coding required at any stage.
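The steps above can be sketched as assembling a single generation request. This is a hypothetical sketch only: the field names (`prompt`, `aspect_ratio`, `num_frames`, `num_inference_steps`, `seed`) are illustrative assumptions, not the documented Picasso IA API, which exposes these controls through the browser interface.

```python
from typing import Optional

def build_generation_request(prompt: str,
                             aspect_ratio: str = "16:9",
                             num_frames: int = 120,
                             num_inference_steps: int = 8,
                             seed: Optional[int] = None) -> dict:
    """Assemble a text-to-video request for ltx-2-distilled.

    All field names are hypothetical placeholders used to
    illustrate the workflow described above.
    """
    if not prompt.strip():
        raise ValueError("prompt must not be empty")
    request = {
        "model": "ltx-2-distilled",
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "num_frames": num_frames,
        "num_inference_steps": num_inference_steps,
    }
    if seed is not None:
        request["seed"] = seed  # a fixed seed makes a run reproducible
    return request

req = build_generation_request("a lighthouse at dusk, waves crashing, ambient sound")
print(req["model"])  # ltx-2-distilled
```

Iterating then means changing one field (a word in the prompt, the frame count, the seed) and resubmitting, which is exactly the refine-and-regenerate loop described in the last step.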

Key Features

  • Simultaneous audio-video output: You get a clip where sound and visuals are generated together, not assembled from two separate models, which means the audio actually fits the scene rather than feeling layered on top.
  • Open-source architecture: The underlying model weights are publicly available, meaning the research community actively improves the model and the outputs benefit from ongoing refinement.
  • Distilled inference speed: The distilled variant of LTX-2 is specifically optimized for faster generation, so you spend less time waiting and more time iterating on creative ideas.
  • No coding required: The entire workflow runs through a browser interface. You do not need a local GPU, a Python environment, or any technical setup to produce results.
  • Flexible prompt control: The model responds to detailed scene descriptions, stylistic language, and emotional tone cues, giving you meaningful creative control through natural language alone.
  • Instant results in a free online environment: You can start generating immediately without subscriptions or installations blocking your first attempt.

Frequently Asked Questions

Do I need programming skills or technical knowledge to use this? No — just open ltx-2-distilled on Picasso IA, adjust the settings you want, and hit generate. The interface handles everything behind the scenes, and your only input is a text description of the scene you want to create.

Is it free to try? Yes, you can run ltx-2-distilled free online without committing to a paid plan first. This lets you test prompts, evaluate output quality, and get a feel for what the model can do before deciding whether you want to generate at higher volumes or with extended settings.

How long does it take to get results? Generation time depends on the length and complexity of the clip you are requesting, but the distilled architecture is specifically designed to reduce wait times compared to full-scale video models. Most short clips return within a minute or two, making iteration fast enough to feel genuinely interactive.

What output formats are supported? ltx-2-distilled returns video files that include the generated audio track, so you receive a single file rather than separate assets you need to combine. This makes the output immediately usable in social media tools, presentation software, or any standard video editor.

Can I customize the output quality or style? Yes. The available parameters let you influence aspects like inference steps, which affects how refined the final frames look, as well as duration and aspect ratio. Stylistic direction comes primarily through your prompt, so descriptive language about lighting, pace, and mood carries a lot of weight.

What happens if I am not happy with the result? Regenerate. You can adjust a single word in your prompt, shift a parameter slightly, or try an entirely different description. Because no coding is required and results come back quickly, the cost of experimenting is low, and most users find a satisfying output within a few attempts.

Where can I use the outputs? The clips you generate are yours to use across personal projects, social media content, concept presentations, and creative portfolios. There are no platform-specific restrictions baked into the output files, so they behave like any other video asset you would produce through a standard workflow.

Try ltx-2-distilled on Picasso IA right now and see what a single well-written prompt can produce in under two minutes.

Credit Cost

Each generation consumes 15 credits, or 75 credits for 5 generations.
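The pricing above reduces to simple arithmetic. A small helper, assuming the listed rates (15 credits per clip, 75 credits per pack of 5, which is the same per-clip rate bought in bulk):

```python
def credit_cost(num_generations: int,
                per_generation: int = 15,
                pack_size: int = 5,
                pack_price: int = 75) -> int:
    """Total credits for a number of generations at the listed rates."""
    if num_generations < 0:
        raise ValueError("num_generations must be non-negative")
    packs, remainder = divmod(num_generations, pack_size)
    return packs * pack_price + remainder * per_generation

print(credit_cost(1))  # 15
print(credit_cost(7))  # 105 (one 5-pack plus 2 singles)
```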

Use Cases

Describe a product in motion — like a perfume bottle rotating on a reflective surface — and get a short video clip ready for an e-commerce page.

Upload a portrait photo and animate it into a short video with natural movement, using image strength to keep the face accurate.

Type a scene description for a vertical 9:16 clip and get a ready-to-post video for Instagram Reels or TikTok without opening any editing software.

Generate a 5-second animated background loop from a text prompt to use behind a title slide or presentation screen.

Draft a rough concept for a video ad by typing the scene, using prompt enhancement to fill in details you have not thought through yet.

Create multiple versions of the same scene by changing the seed value each time — same prompt, different visual interpretations, fast.

Turn a single product photograph into an animated clip showing the item from different angles or in a lifestyle setting.
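The seed-variation use case above amounts to resubmitting one prompt with different seed values. A self-contained sketch (the field names are illustrative assumptions, not the platform's API):

```python
def seed_variants(prompt: str, seeds: list) -> list:
    """Build one request per seed so a single prompt yields
    several distinct visual interpretations of the same scene.
    """
    base = {"model": "ltx-2-distilled", "prompt": prompt}
    return [{**base, "seed": s} for s in seeds]

variants = seed_variants("perfume bottle rotating on a reflective surface", [1, 2, 3])
print(len(variants))  # 3
```

Each request differs only in its seed, so the results share the prompt's subject and style while varying in composition and motion.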
