Lucy Edit 2 lets you rewrite the visual content of a video by typing a description or dropping in a reference image. Instead of spending hours in a timeline editor, you describe the change you want and the model handles the rest: replace objects in a scene, shift the entire visual style to match a mood board, or change how a character looks without re-shooting. The model reads your text prompt and reference image together, so you can point to exactly the style or item you have in mind, and results stay coherent across frames, keeping motion natural while the visuals shift. It fits a social media production workflow, a marketing content pipeline, or a personal video project: upload your MP4, type or paste a description, optionally attach a reference image, and get back an edited clip in minutes.
Lucy Edit 2 takes an existing video and rewrites parts of it based on what you describe in text, a reference image, or both. The problem it solves is straightforward: making specific, targeted changes to footage without touching a timeline or loading a layer-based editor. On Picasso IA, you upload your clip, describe the edit, and get a modified version back without any technical setup. It handles style shifts, object swaps, and character-level changes in a single pass, so you spend less time on revisions and more time on the actual creative decision.
Do I need programming skills or technical knowledge to use this? No. Open Lucy Edit 2 on Picasso IA, adjust the settings you want, and hit generate.
Is it free to try? Lucy Edit 2 can be tested on Picasso IA without a paid plan. Check the credits page for the latest usage details.
How long does it take to get results? Most short clips come back within 30 to 90 seconds. Longer videos or more involved edits may take a bit more time depending on server load.
What output formats are supported? The model returns your edited video as an MP4 file, ready to download and drop into any editing app or publish directly.
Can I use a reference image without writing a text prompt? Yes. Leave the prompt field empty and upload a reference image on its own. The model uses that image alone to guide the edit, which works well when you know what you want visually but find it hard to put into words.
What if the result doesn't match what I had in mind? Refine the wording of your prompt, try a different reference image, or change the seed to generate a new variation. Small changes to the prompt often produce noticeably different outputs, so iterating quickly is part of the process.
Everything this model can do for you
Type a description of the change you want and the model applies it across all video frames.
Upload a photo of any object, person, or style to show the model exactly what you want added or changed.
Shift the visual tone of an entire clip by describing a mood or attaching a reference image.
Swap specific items in the scene, from clothing to props, without re-shooting.
An optional setting rewrites your text description into a more precise instruction before generation.
Set a seed value to reproduce the same edit every time you run the model with the same inputs.
Get back a ready-to-use video file in standard format, no conversion needed.