Wan 2.1 1.3b is a text-to-video model that turns a written description into a short animated clip without any technical setup. It solves a practical problem for creators who need motion content but lack video editing skills or production time: type what you want to see, and the model renders a downloadable 480p clip of up to 5 seconds. The model handles both landscape and portrait orientations, so you can generate 16:9 clips for web or desktop and 9:16 clips for mobile-first platforms in the same session. Frame count is adjustable from 17 to 81 frames, giving you control over clip length at a standard 16fps playback rate. The sampling steps and guidance scale parameters let you trade generation speed for output quality, or tighten how faithfully the video follows your written prompt. The model fits into a content workflow as a fast ideation or prototyping step: write a scene description, generate a clip, review it, and refine your prompt from there. No rendering software, no timeline editor, no prior video experience required.
Wan 2.1 1.3b is a text-to-video model that converts a written prompt into a short animated clip, ready to download in under a minute. It is available on Picasso IA for anyone who needs motion content without video editing software or production experience. You describe a scene, set your aspect ratio and duration, and the model renders a 480p clip of up to 5 seconds. It works well for quick content ideation: write a scene, watch it move, refine from there.
Do I need programming skills or technical knowledge to use this? No. Just open Wan 2.1 1.3b on Picasso IA, adjust the settings you want, and hit generate.
Is it free to try? Yes. You can run Wan 2.1 1.3b without a subscription to test it with your own prompts.
How long does generation take? Generation time depends on the sampling steps you choose. Lower step counts typically finish in under 30 seconds; higher step counts take a bit longer but produce sharper output.
What resolution does it output? The model generates videos at 480p. This suits social media previews, mockups, and short-form content where file size and fast loading matter.
Can I reproduce the same output later? Yes. Set the seed parameter before generating, then reuse that seed with the same prompt and settings to reproduce the same video frame-for-frame.
What aspect ratios are available? 16:9 landscape and 9:16 portrait. Use 16:9 for presentations and web embeds; use 9:16 for mobile stories and short-form vertical content.
What if the output does not match my prompt? Try raising the guidance scale, which makes the model follow your description more closely. You can also rewrite the prompt to be more specific about colors, motion, and scene elements.
Everything this model can do for you
Type a description and get a rendered 5-second video clip, no design tools needed.
Choose between 16:9 landscape and 9:16 portrait to match the platform where you publish.
Pick from 17 to 81 frames to control clip duration at standard 16fps playback.
Lower sampling steps for faster results, or raise them for a cleaner, more detailed output.
Raise the guidance scale to get outputs that follow your written description more closely.
Set a fixed seed to regenerate the exact same clip whenever consistency matters.
Run the model in a browser with no software to download or configure.
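The frame-count setting maps directly to clip length at the fixed 16fps playback rate. A minimal sketch of that arithmetic, assuming the limits stated above (the `clip_seconds` helper is illustrative only, not part of any Picasso IA or Wan API):

```python
# Playback rate and frame-count limits as stated for Wan 2.1 1.3b.
FPS = 16
MIN_FRAMES, MAX_FRAMES = 17, 81

def clip_seconds(frames: int) -> float:
    """Return playback duration in seconds for a given frame count.

    Illustrative helper, not a real API call.
    """
    if not MIN_FRAMES <= frames <= MAX_FRAMES:
        raise ValueError(f"frames must be between {MIN_FRAMES} and {MAX_FRAMES}")
    return frames / FPS

print(clip_seconds(MAX_FRAMES))  # longest clip: ~5.06 s (the "5-second" maximum)
print(clip_seconds(MIN_FRAMES))  # shortest clip: ~1.06 s
```

So a 48-frame setting, for example, yields a 3-second clip; pick the frame count nearest to the duration your platform needs.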
a dog is riding on a skateboard down a hill
a close-up of a woman