LTX 2 Distilled is a text-to-video AI model that takes a written prompt or an input image and produces a short video clip without any setup. Creators and marketers who need quick video drafts no longer have to spend hours in editing software or wait for large production runs. You type what you want to see, adjust the aspect ratio, and get a generated video ready for review.

The model supports six aspect ratios, including 16:9 for widescreen content and 9:16 for social media reels, giving you direct control over how the final clip is framed. Built-in prompt expansion takes a rough idea like "a rainy street at night" and automatically broadens it into a richer scene description before generation begins. You can also supply a reference image and control how closely the video follows it, making it straightforward to animate product photos or reference shots.

LTX 2 Distilled fits naturally into content pipelines where speed matters. Iteration is fast: adjust a word in the prompt, tweak the image strength, and run again to refine the clip. It is available directly on Picasso IA with no account required to start testing.
LTX 2 Distilled is an open-source text-to-video model that converts a written prompt or an uploaded image into a playable video clip within seconds. Available on Picasso IA, it is built for creators, marketers, and content teams who need video drafts without editing software or a full production setup. A typical use case looks like this: you type a scene description, pick 16:9 for a widescreen format, and receive a generated clip in under a minute. The model also handles image-to-video, so you can start from a still photograph and describe the motion you want to see in the output.
Do I need programming skills or technical knowledge to use this? No, just open LTX 2 Distilled on Picasso IA, adjust the settings you want, and hit generate. The interface is point-and-click with no code required.
Is it free to try? Yes. You can run the model on Picasso IA without paying upfront, though usage limits may apply depending on how many clips you generate in a session.
How long does it take to get results? Most clips are ready in under a minute. Generation time scales with the number of frames you request and current server load, so shorter clips are faster.
What output formats are supported? The model outputs a standard video file you can download and open in any common video editor or upload directly to social media platforms.
Can I customize the output quality or style? Yes. You adjust the prompt wording, aspect ratio, frame count, and image strength slider to shape each result. If you supply a reference image, the image strength setting controls how closely the video follows that source.
What happens if I'm not happy with the result? Revise your prompt or change one setting and run the model again. Toggling prompt expansion or setting a different seed typically produces noticeably different output in the next run.
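The seed behavior described above follows the standard pattern for seeded generators: the same seed reproduces the same output, and a new seed produces a different one. The sketch below illustrates that general principle with Python's own PRNG; it is an analogy, not LTX 2 internals.

```python
import random

# Generic illustration of seeded determinism (not LTX 2 internals):
# the same seed reproduces the same pseudo-random draws, which is why
# fixing the seed in the generator settings reproduces the same clip.
def draws(seed, n=3):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(draws(42) == draws(42))  # True: identical seed, identical sequence
print(draws(42) == draws(43))  # False: new seed, new sequence
```

This is why fixing the seed is useful for A/B testing a single prompt change: it isolates the effect of the wording from the randomness of generation.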
Everything this model can do for you
Type a prompt and receive a generated video clip without uploading any source media.
Upload a reference image and animate it into a short clip with adjustable image influence.
Choose an aspect ratio (16:9, 9:16, 4:3, 3:4, 1:1, or 21:9) to match your publishing format.
Toggle on automatic scene expansion to turn a short idea into a richer description before generation.
Set a seed value to get the same output across runs for consistent testing and iteration.
Pick the number of frames from the available options to set clip length and pacing.
Download generated video clips as clean files ready for direct use or further editing.
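For teams scripting generation rather than clicking through the UI, the settings listed above map naturally onto a request payload. The sketch below is a minimal, hypothetical illustration: the field names (`prompt`, `aspect_ratio`, `num_frames`, `seed`, `image_strength`, `expand_prompt`) are assumptions, not a documented Picasso IA API, and the frame count shown is just a placeholder.

```python
# Hypothetical payload builder for a text-to-video generation request.
# Field names are illustrative assumptions, not a documented API.

ALLOWED_RATIOS = {"16:9", "9:16", "4:3", "3:4", "1:1", "21:9"}  # from the feature list

def build_request(prompt, num_frames, aspect_ratio="16:9",
                  seed=None, image_strength=None, expand_prompt=False):
    if aspect_ratio not in ALLOWED_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    payload = {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "num_frames": num_frames,       # pick from the options the UI exposes
        "expand_prompt": expand_prompt, # toggle automatic scene expansion
    }
    if seed is not None:
        payload["seed"] = seed          # fixed seed -> reproducible output
    if image_strength is not None:
        payload["image_strength"] = image_strength  # image-to-video only
    return payload

# Example: a vertical social clip from a text prompt with a fixed seed.
req = build_request("a rainy street at night", 97, aspect_ratio="9:16", seed=7)
print(req["aspect_ratio"])
```

Guarding the aspect ratio against the six supported values catches a common typo (e.g. "16x9") before a generation credit is spent.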
Example prompts
The shot opens on a news reporter standing in front of a row of cordoned-off cars, yellow caution tape fluttering behind him. The light is warm, early sun reflecting off the camera lens. The faint hum of chatter and distant drilling fills the air. The reporter, composed but visibly excited, looks directly into the camera, microphone in hand with the letters "R8". Reporter (live): “Thank you, Sylvia. And yes this is a sentence I never thought I’d say on live television but as of today, you can now run LTX 2 distilled on Replicate”
A cinematic close-up of Wednesday Addams frozen mid-dance on a dark, blue-lit ballroom floor as students move indistinctly behind her, their footsteps and muffled music reduced to a distant, underwater thrum; the audio foregrounds her steady breathing and the faint rustle of fabric as she slowly raises one arm, never breaking eye contact with the camera, then after a deliberately long silence she speaks in a flat, dry, perfectly controlled voice, “LTX 2 distilled is now on Replicate,” each word crisp and unemotional, followed by an abrupt cutoff of her voice as the background sound swells slightly, reinforcing the deadpan humor, with precise lip sync, minimal facial movement, stark gothic lighting, and cinematic realism.