Kling v2.6 Motion Control takes a reference image and a reference video and produces a new video in which the characters move exactly as they do in the source footage. If you've ever wanted to animate a still photo or place a character into a different scene while keeping their natural movements, this model handles that without any manual animation work.

You can choose between Standard mode for quick, cost-effective results or Professional mode for sharper output with finer detail. The model reads motion data directly from your reference video, so the character's actions, gestures, and head turns carry over into the new clip. You can also add a text prompt to introduce extra elements on screen or adjust the feel of the scene.

It fits into workflows where consistency matters: product demos, social video content, short-form storytelling, or animating artwork you've already created. Set your inputs, pick your duration and orientation, and the model generates the video in one step. No frame-by-frame work, no rigging, no timeline editing.
Kling v2.6 Motion Control is a video generation model that puts the movement of a scene under your direct control. Instead of describing motion entirely in words, you supply a reference image and a reference video, and the model uses both to generate a new clip in which the character moves exactly as in your footage. On Picasso IA, this runs in the browser with no setup required. Think of it as a tool for anyone who has a still image they want to animate, or a motion sequence they want to apply to a different subject.
Do I need programming skills or technical knowledge to use this? No. Just open Kling v2.6 Motion Control on Picasso IA, adjust the settings you want, and hit generate.
Is it free to try? Yes, Picasso IA offers free access to Kling v2.6 Motion Control so you can test it with your own images and videos before committing to a project.
How long does it take to get results? Generation time depends on the clip length and the mode selected. Standard mode is faster; Professional mode takes a bit longer but produces noticeably sharper detail.
What output formats are supported? The model generates video files you can download directly from the results panel. Standard video formats are supported for playback and editing.
Can I customize the output quality or style? Yes. You can switch between Standard and Professional modes, add a text prompt to shape scene content, and choose the character orientation, which also determines how long the output clip can run.
How many times can I run the model? You can run Kling v2.6 Motion Control as many times as you need. Each run produces a new video, so you can iterate until the result matches what you had in mind.
What happens if I'm not happy with the result? Adjust your reference video, refine the text prompt, or try a different orientation setting, then run the model again. Small changes to the input often produce meaningfully different outputs.
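The settings mentioned in the answers above boil down to a handful of choices. As a rough mental model, here is a sketch of those choices as a plain configuration dictionary; the key names and values are illustrative placeholders, not Picasso IA's actual field names.

```python
# Illustrative summary of the adjustable settings discussed above.
# All key names here are hypothetical, chosen for readability only.
generation_settings = {
    "mode": "professional",        # "standard" for speed, "professional" for sharper detail
    "prompt": "add falling snow",  # optional text to shape scene content
    "orientation": "image",        # follow the still image or the reference video
    "keep_audio": True,            # carry the reference video's sound into the clip
}

# A quick sanity check on the mode choice.
assert generation_settings["mode"] in ("standard", "professional")
```

Iterating usually means changing one of these values at a time and re-running, so you can tell which setting produced the difference.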
Everything this model can do for you
Copies character actions and expressions from a reference video onto the subject in your image with frame-level accuracy.
Standard mode delivers fast, cost-effective results; Professional mode produces higher-quality output for client-ready work.
Set the character to face the direction shown in the image or match the orientation of the reference video.
Add scene elements or motion effects on top of the reference inputs by typing a short description.
Keep the sound from the reference video intact in the generated clip with a single toggle.
Accepts JPG and PNG images up to 10MB and MP4 or MOV video files up to 100MB.
Generate videos from 3 seconds up to 30 seconds depending on your chosen orientation setting.
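The file constraints above (JPG/PNG images up to 10MB, MP4/MOV videos up to 100MB) are easy to check before uploading. Here is a minimal pre-upload check you could run locally; the function name and constants are assumptions for this sketch, and the platform enforces its own limits on upload regardless.

```python
# Sketch of a local pre-upload check against the documented limits:
# JPG/PNG images up to 10MB, MP4/MOV videos up to 100MB.
# check_inputs is a hypothetical helper name, not part of any Picasso IA API.
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}
VIDEO_EXTS = {".mp4", ".mov"}
IMAGE_MAX_BYTES = 10 * 1024 * 1024
VIDEO_MAX_BYTES = 100 * 1024 * 1024

def check_inputs(image_path: str, video_path: str) -> list[str]:
    """Return a list of problems; an empty list means both files look acceptable."""
    problems = []
    img_ext = os.path.splitext(image_path)[1].lower()
    vid_ext = os.path.splitext(video_path)[1].lower()
    if img_ext not in IMAGE_EXTS:
        problems.append(f"image must be JPG or PNG, got '{img_ext}'")
    elif os.path.getsize(image_path) > IMAGE_MAX_BYTES:
        problems.append("image exceeds the 10MB limit")
    if vid_ext not in VIDEO_EXTS:
        problems.append(f"video must be MP4 or MOV, got '{vid_ext}'")
    elif os.path.getsize(video_path) > VIDEO_MAX_BYTES:
        problems.append("video exceeds the 100MB limit")
    return problems
```

Catching an oversized or wrong-format file locally saves a failed upload and a wasted generation attempt.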
Ideal for both quick drafts and polished productions