
SDXL Multi Controlnet LoRA: Layered AI Image Control

SDXL Multi Controlnet LoRA is a text-to-image model that gives you direct control over the structure, style, and composition of generated images, all in one place on Picasso IA. Most AI image models accept a text prompt and return a result, but when you need a specific pose, a particular spatial layout, or a distinct visual style applied consistently, a prompt alone often falls short. This model lets you feed in reference images through up to three ControlNet inputs and layer them simultaneously, so the output conforms to the shapes, depths, and contours you actually specify.

The model supports LoRA weight loading, which means you can bring in a trained style or character and blend it into your generation at a tunable scale. It also handles img2img workflows, letting you feed an existing photo as a base and adjust how much the prompt reshapes it. Inpainting adds a third layer: mask a specific area, describe what you want there, and the model fills it in while leaving the rest of the image intact.

This is a practical tool for illustrators who need to match a reference pose, product designers testing color or texture variations, and art directors who want repeatable visual styles across a campaign. Open it on Picasso IA, upload your reference images, set your ControlNet conditions, and run your first generation in minutes.

Fofr

213.8k runs

SDXL Multi Controlnet LoRA

2023-10-21

Commercial Use


Table of contents

  • Overview
  • How It Works
  • Frequently Asked Questions
  • Credit Cost
  • Features
  • Use Cases
  • Examples

Overview

SDXL Multi Controlnet LoRA is a text-to-image model built for creators who need direct, repeatable control over the structure and style of generated images, available on Picasso IA. A single text prompt works well for quick ideation, but it rarely delivers the precise pose, spatial layout, or visual consistency that a professional project demands. This model accepts up to three ControlNet reference images simultaneously, layering conditions like edge detection, depth maps, and body pose to steer the output toward a specific visual target. Pair that with LoRA weight support and inpainting, and you have a single tool that handles complex, multi-step image projects without switching between separate apps.

How It Works

  • Write your text prompt and, optionally, a negative prompt to filter out unwanted elements from the output.
  • Select up to three ControlNet types, such as edge detection, depth, or OpenPose, and upload a reference image for each one you want active.
  • Upload a base image and switch to img2img or inpainting mode if you want to edit an existing photo rather than generate from scratch.
  • Paste a LoRA weights URL into the weights field and set the LoRA scale to blend in a specific trained style.
  • Click generate and download the result from the output panel; change the seed, conditioning strength, or prompt and re-run to refine.

Frequently Asked Questions

Do I need programming skills or technical knowledge to use this? No. Open SDXL Multi Controlnet LoRA on Picasso IA, adjust the settings you want, and hit generate.

Is it free to try? Yes. You can run generations without any upfront cost; the number of free runs available depends on your account plan.

How long does it take to get results? A standard 768x768 generation at 30 inference steps typically finishes in 20 to 40 seconds. Enabling the refiner or increasing the step count adds time proportionally.

What output formats are supported? The model returns image files you can download directly from the results panel. You can generate up to four images per run by adjusting the number of outputs setting.

Can I customize the output quality or style? Yes. You can adjust inference steps, the classifier-free guidance scale, scheduler type, LoRA scale, and each ControlNet's conditioning strength. Each parameter changes the result in a measurable way.

How many times can I run the model? There is no hard cap built into the model itself. How many generations you can run depends on your current account plan.

What happens if I am not happy with the result? Change the seed, lower or raise the ControlNet conditioning scale, or adjust the prompt strength slider. Small parameter changes often produce noticeably different outputs without rebuilding the prompt from scratch.
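The re-run strategy from the last answer is easy to script as a small parameter sweep. A minimal sketch, assuming each generation is driven by a plain dict of the settings shown in the Examples section (the seed field and sweep helper are illustrative assumptions):

```python
# Hypothetical sketch: build a batch of variant configurations by
# nudging the seed and the ControlNet conditioning strength, as the
# FAQ suggests, instead of rewriting the prompt from scratch.

base = {
    "prompt": "A TOK photo, macro photo of a golden astronaut",
    "seed": 1234,
    "controlnet_1_conditioning_scale": 0.8,
    "prompt_strength": 0.8,
}

def variants(config: dict, seeds, scales):
    """Return one configuration per (seed, conditioning scale) pair."""
    out = []
    for seed in seeds:
        for scale in scales:
            v = dict(config)  # shallow copy so the base stays untouched
            v["seed"] = seed
            v["controlnet_1_conditioning_scale"] = scale
            out.append(v)
    return out

runs = variants(base, seeds=[1234, 5678], scales=[0.4, 0.8])
```

Each entry in `runs` would then be submitted as its own generation, making it cheap to compare how strongly the reference image should steer the result.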

Credit Cost

Each generation consumes 1 credit; a batch of 5 generations costs 5 credits.

Features

Everything this model can do for you

Three simultaneous ControlNets

Stack up to three independent conditioning inputs, such as edge, depth, and pose, in a single generation pass.

LoRA style loading

Apply any compatible LoRA weights and dial in the blend with a dedicated scale slider.

img2img workflow

Feed an existing photo as the base image and use prompt strength to control how far the output drifts from the original.

Inpainting

Mask any area of an image and fill it with new content described in the prompt, leaving surrounding pixels intact.

Multi-scheduler support

Choose from seven schedulers, including K_EULER and DPMSolverMultistep, to match your preferred generation behavior.

Fine-grained conditioning

Set start, end, and conditioning scale for each ControlNet to control exactly when and how strongly each input takes effect.
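As a concrete illustration of the start/end window, the fragment below locks composition in early and then releases it so the prompt takes over for the final steps. Field names follow the Examples section; the specific values are hypothetical.

```python
# Hypothetical staged conditioning: the depth ControlNet shapes only
# the first 60% of the denoising schedule, then the prompt alone
# steers the remaining steps.
staged = {
    "controlnet_1": "depth_leres",
    "controlnet_1_conditioning_scale": 0.6,
    "controlnet_1_start": 0.0,  # engage at the first inference step
    "controlnet_1_end": 0.6,    # disengage after 60% of the steps
}

# Fraction of the schedule during which this ControlNet is active.
active_fraction = staged["controlnet_1_end"] - staged["controlnet_1_start"]
```

Shrinking this window is a common way to keep the reference's broad structure without letting it flatten fine detail late in generation.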

Flexible image sizing

Set custom output dimensions or match the size automatically to an input or ControlNet image.


Use Cases

Stack two or three ControlNet conditions, such as edge detection combined with a depth map, to generate an image that matches both the outline and spatial structure of reference photos

Inpaint a specific region of a photo, like a background or an object, by drawing a mask over it and writing a prompt describing what should replace it

Apply a LoRA style file to shift the visual look of your output, tuning the blend scale until the result matches the aesthetic you are aiming for

Feed a character pose reference through the OpenPose ControlNet to reproduce exact body positioning in a newly generated scene

Use img2img mode with a low prompt strength setting to recolor or restyle a photo while preserving most of its original composition

Run lineart and depth ControlNets at the same time to generate an illustration that follows both the line structure and three-dimensional feel of a sketch

Generate multiple product concept images from a single reference shot by swapping prompts while keeping the same ControlNet layout
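The img2img recolor workflow above comes down to a base image plus a deliberately low prompt strength. A sketch of that configuration, with field names as in the Examples section; the image URL is a placeholder and the sizing_strategy value is an assumption:

```python
# Hypothetical img2img restyle configuration: a low prompt_strength
# preserves most of the original composition, so the prompt mainly
# shifts color and surface treatment.

recolor_inputs = {
    "image": "https://example.com/product-shot.png",  # placeholder URL
    "prompt": "the same product in matte forest green",
    "prompt_strength": 0.3,  # low: output stays close to the input photo
    "num_inference_steps": 30,
    "sizing_strategy": "input_image",  # assumed: match the input's size
}
```

Raising prompt_strength toward the 0.8 used in the example runs hands progressively more of the image over to the prompt.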


Examples

Size: 768x768
Time: 2m 31s
Refine: no_refiner
Scheduler: K_EULER
Lora Scale: 0.8
Num Outputs: 1
Controlnet 1: soft_edge_hed
Controlnet 2: none
Controlnet 3: none
Guidance Scale: 7.5
Apply Watermark: No
Prompt Strength: 0.8
Sizing Strategy: width_height
Controlnet 1 End: 1
Controlnet 2 End: 1
Controlnet 3 End: 1
Controlnet 1 Start: 0
Controlnet 2 Start: 0
Controlnet 3 Start: 0
Num Inference Steps: 30
Controlnet 1 Conditioning Scale: 0.8
Controlnet 2 Conditioning Scale: 0.8
Controlnet 3 Conditioning Scale: 0.75
Negative Prompt: rainbow

Prompt: A TOK photo, extreme macro photo of a golden astronaut riding a unicorn statue, in a museum, bokeh, 50mm

Size: 768x768
Time: 14.1s
Refine: base_image_refiner
Scheduler: K_EULER
Lora Scale: 0.8
Num Outputs: 1
Controlnet 1: soft_edge_hed
Controlnet 2: depth_leres
Controlnet 3: none
Refine Steps: 20
Guidance Scale: 7.5
Apply Watermark: No
Prompt Strength: 0.85
Sizing Strategy: width_height
Controlnet 1 End: 1
Controlnet 2 End: 1
Controlnet 3 End: 1
Controlnet 1 Start: 0
Controlnet 2 Start: 0
Controlnet 3 Start: 0
Num Inference Steps: 30
Controlnet 1 Conditioning Scale: 0.4
Controlnet 2 Conditioning Scale: 0.4
Controlnet 3 Conditioning Scale: 0.75
Negative Prompt: soft, rainbow

Prompt: A TOK photo, extreme macro photo of a golden astronaut riding a unicorn statue, in a museum, 18mm

Size: 768x768
Time: 9.0s
Refine: no_refiner
Scheduler: K_EULER
Lora Scale: 0.8
Num Outputs: 1
Controlnet 1: soft_edge_hed
Controlnet 2: none
Controlnet 3: none
Guidance Scale: 7.5
Apply Watermark: No
High Noise Frac: 0.8
Prompt Strength: 0.8
Sizing Strategy: width_height
Controlnet 1 End: 1
Controlnet 2 End: 1
Controlnet 3 End: 1
Controlnet 1 Start: 0
Controlnet 2 Start: 0
Controlnet 3 Start: 0
Num Inference Steps: 30
Controlnet 1 Conditioning Scale: 0.8
Controlnet 2 Conditioning Scale: 0.8
Controlnet 3 Conditioning Scale: 0.75
Negative Prompt: soft, rainbow

Prompt: A TOK photo, extreme macro photo of a golden astronaut riding a unicorn statue, in a museum, 18mm
