How To: Text-to-Image, ControlNets, Stable Diffusion

How to create controlled poses and styles using Stable Diffusion and ControlNets

Using ControlNets with Stable Diffusion to get more control over the generated output images

Lars Nielsen
7 min read · Mar 7, 2023


What this article is about

Good news (for all AUTOMATIC1111 Stable Diffusion UI users)!

There is now a ControlNet plugin/extension compatible with AUTOMATIC1111. Here, we will walk you through what ControlNets are, what they can be used for, and give you an initial guide to getting your Stable Diffusion (SD) setup working with ControlNets.

A short note on ControlNets

If you have worked with the Image2Image option in Stable Diffusion (SD), you know how easily you can transfer a style or pose from a base image to your generated image. ControlNet goes a step further and creates almost exact replicas of your poses, styles, and positions.

To put it in one line, ControlNets let you decide the posture, shape, and style of your generated image when you are using any text-to-image model. Enough of the basic introduction; more on that later…
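Before we get to the AUTOMATIC1111 extension itself, here is a minimal sketch of the same idea in plain code, using Hugging Face's diffusers library rather than the web UI (the model IDs are public Hugging Face Hub checkpoints; the input and output file names are hypothetical). A Canny edge map extracted from a reference image steers the text-to-image generation:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract a Canny edge map from a reference image; the edge map is the "control".
image = np.array(Image.open("reference.png").convert("RGB"))  # hypothetical file
edges = cv2.Canny(image, 100, 200)          # single-channel edge map
edges = np.stack([edges] * 3, axis=-1)      # replicate to 3 channels for the pipeline
control_image = Image.fromarray(edges)

# Load a ControlNet trained on Canny edges plus a base Stable Diffusion model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompt decides the style; the control image pins down shape and composition.
result = pipe(
    "a watercolor painting of a dancer",
    image=control_image,
    num_inference_steps=20,
).images[0]
result.save("controlled_output.png")
```

Run the same reference image with a different prompt and the composition stays put while the style changes completely; that is exactly the extra control the AUTOMATIC1111 extension exposes through its UI.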

What can you do with ControlNet, anyway?

The possibilities are endless, but here are a few sample use cases; feel free to try your own! One of them, pose transfer, is sketched below.
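In pose transfer, an OpenPose ControlNet copies a subject's skeleton from a photo while the prompt is free to change everything else. Here is a hedged sketch along the same lines as the earlier example, using the controlnet_aux helper package (again, file names are hypothetical and model IDs are the public Hub checkpoints):

```python
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn a reference photo into a stick-figure pose map (hypothetical input file).
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = openpose(Image.open("person.jpg"))

# Condition generation on the pose map instead of an edge map.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The subject's pose is preserved; the prompt changes identity, scene, and style.
out = pipe("an astronaut dancing on the moon", image=pose_map).images[0]
out.save("pose_transfer.png")
```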
