Wan 2.7 R2V is a reference-to-video model that turns your photos or video clips into fully animated videos while keeping your subject looking the same in every frame. If you have ever tried to generate a video of a specific person, product, or character only to end up with something unrecognizable, this model fixes that problem: it reads your reference material and uses it to anchor your subject's visual identity across every frame.

You upload one or more reference images or short video clips alongside a text prompt describing the scene you want. The model supports 720p and 1080p output, multiple aspect ratios covering vertical, horizontal, and square formats, and durations you set at generation time. You also control the shot type, choosing between single-subject framing and multi-subject scenes so the composition fits your project.

This fits naturally into content creation, product showcasing, or social media workflows where consistency matters. Instead of rebuilding your character or product from scratch every time, you bring your existing visuals and let the model do the animation work. Open it on Picasso IA and run your first generation in under a minute.
Wan 2.7 R2V generates videos from reference images or short clips, keeping the subject consistent across every frame. If you've ever tried to animate a character, product, or scene only to watch the AI drift into something unrecognizable, this model solves that directly. You provide a reference photo or video clip, write a prompt describing the motion or scenario you want, and get back a video where the subject stays true to your source material. Picasso IA makes this available without any coding or local setup, so you can go from a single photo to a finished clip in a few clicks.
Do I need programming skills or technical knowledge to use this? No. Just open Wan 2.7 R2V on Picasso IA, adjust the settings you want, and hit generate.
Is it free to try? Yes. You can run Wan 2.7 R2V without a paid subscription to test it. Check the current plan details on Picasso IA for generation limits and credit costs.
How long does it take to get results? Generation time depends on the clip duration and resolution you pick. A 5-second video at 1080p typically finishes in under two minutes.
What output resolutions and aspect ratios are supported? The model outputs at either 720p or 1080p. You can pick from five aspect ratios: 16:9, 9:16, 1:1, 4:3, and 3:4, so the result fits your target platform without cropping.
Can I control how closely the video matches my reference material? Yes. Supplying multiple reference images or clips gives the model more context about the subject's appearance. You can also write a negative prompt to steer the output away from unwanted visual elements.
What happens if I'm not happy with the result? Adjust your prompt, try a different seed value, or change the resolution and shot type, then regenerate. Small prompt changes often produce noticeably different results.
Where can I use the videos I generate? The videos are yours to use however you need, including social media posts, product demos, client presentations, or personal creative projects.
Everything this model can do for you
Anchors your subject's visual appearance using uploaded photos or video clips.
Renders video at up to full HD (1080p), ready for direct use in professional projects.
Supports 16:9, 9:16, 1:1, 4:3, and 3:4 so output fits any platform without cropping.
Offers single-subject and multi-subject framing to match your intended composition.
Accepts a negative prompt describing what should not appear, keeping results clean and on-brief.
Takes a seed value so you can reproduce the same output when you need consistent assets.
Translates a prompt describing the action and environment into video movement.