ControlNet Models
Learn about using ControlNet Models
Sogni offers 14 different ControlNet models, each serving a different purpose and producing different results. You can easily select which model to use in the CN Model dropdown below the ControlNet reference image or video field.
Unlike Preprocessors, the magic of ControlNet models happens behind the scenes and is applied to your generations. Let's break down what each ControlNet model (CN model) does and how it can best be used to further your creations ✨
Canny
Works with the outlines of an image. It works best if the reference image is an inverted outline-style image (white lines on a black background). You can use the 'Sketch' and 'Invert' preprocessors to prepare an inverted outline-style reference from any image you drop into the ControlNet module.
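If you prefer to prepare the reference outside the app, the sketch below shows the equivalent steps with OpenCV. This is not Sogni's internal code, and the file names are placeholders; note that Canny edge detection already produces white lines on a black background, the format this model expects.

```python
import cv2

# Placeholder file names; use your own images.
img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)

# Canny edges are white lines on a black background,
# the inverted outline style this CN model works best with.
edges = cv2.Canny(img, threshold1=100, threshold2=200)
cv2.imwrite("canny_reference.png", edges)

# Starting from a black-on-white sketch instead? Invert it first
# (the offline equivalent of the 'Invert' preprocessor).
sketch = cv2.imread("sketch.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("inverted_reference.png", cv2.bitwise_not(sketch))
```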
Depth
Reproduces depth relationships from the reference image. Works well with original images and with depth maps created using the 'Depth Map' preprocessor.
Step One: Add your reference image.
Optional Step Two: Use the Depth Map preprocessor.
Optional Step Three: Invert the colors.
Step Four: Select Depth in the CN Model dropdown.
Once you've completed the above steps, enter your prompt, select your style and generate ✨
Using the Depth CN Model can help you create beautiful images that replicate shapes in both the foreground and background.
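If you would rather build a depth map outside the app, a monocular depth estimator does the same job as the 'Depth Map' preprocessor. Below is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint choice and file names are assumptions, not Sogni's actual stack.

```python
from transformers import pipeline
from PIL import Image

# Placeholder path; any RGB photo works.
image = Image.open("room.jpg")

# Monocular depth estimation, analogous to the 'Depth Map' preprocessor.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
depth = depth_estimator(image)["depth"]  # PIL image: near = bright, far = dark
depth.save("depth_reference.png")
```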
InPaint
Utilize masks to define and modify specific areas, or use reference images with transparent areas for the InPaint model to fill. Tip: to zoom out of an image and fill the resulting empty areas, use InPaint. Just add the image to ControlNet, activate 'Camera: zoom, pan, roll', zoom out to your desired level, select the InPaint model, and click the "Imagine" button.
Use the Mask tool to draw the area you would like to InPaint, then select InPaint on the right as shown below.
This will load the image and its mask into the ControlNet panel with the InPaint CN Model active (you may need to wait for the model to download the first time you use it).
Enter your prompt in the prompt field. For the example below we used:
Prompt: "Cat wearing suit. bird on shoulder"
Style: InkPunk
Model: Sogni Artist v1.0
You will notice that other elements of the image have changed slightly from the original. This is currently in the nature of working with ControlNet models, but we are working on a solution!
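The 'zoom out and fill' tip above can also be reproduced outside the app by placing the image on a larger transparent canvas, so the empty border becomes the area InPaint fills. Here is a minimal sketch with Pillow; the file names and the 60% scale are placeholders.

```python
from PIL import Image

# Placeholder path; any image works.
src = Image.open("portrait.png").convert("RGBA")

scale = 0.6  # shrink to 60% of the canvas, i.e. 'zoomed out'
canvas = Image.new("RGBA", src.size, (0, 0, 0, 0))  # fully transparent
small = src.resize((int(src.width * scale), int(src.height * scale)))

# Center the shrunken image; the transparent border is what InPaint fills.
offset = ((canvas.width - small.width) // 2, (canvas.height - small.height) // 2)
canvas.paste(small, offset)
canvas.save("outpaint_reference.png")
```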
Instruct Pixel2Pixel
Capable of making direct changes to the reference image from text instructions. Use the prompt and style fields to instruct ControlNet to change something in the image. You can type things like "Make them look like robots", "Add boats on the water", "Change sunglasses for ski goggles", "Make it look like winter", etc.
For this example we generated a first image with:
Prompt: "3d ugly evil purple goblin with big nose, big eyes. Hoop earring. Head and shoulders"
Style: (Custom) 3d game design
Model: Sogni Artist v1.0
That gave us this good-looking fella 👇
We then added this image to the ControlNet tab and selected the Instruct Pixel2Pixel CN Model.
Adding the prompt "Add steam punk goggles" then generated the following:
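For the curious, the same instruction-following technique is available in the open-source InstructPix2Pix model. Below is a sketch using the diffusers library, assuming the public timbrooks/instruct-pix2pix checkpoint, which may differ from what Sogni runs; the file paths are placeholders.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

# Public InstructPix2Pix checkpoint; an assumption, not Sogni's deployment.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("goblin.png")  # placeholder path
result = pipe(
    "Add steam punk goggles",      # the edit instruction
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,      # higher = stay closer to the original
).images[0]
result.save("goblin_goggles.png")
```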
LineArt
Trained on line drawings, this model can translate lineart images into full-color images. It works best if the reference image is an inverted outline-style image (white lines on a black background). You can use the 'Sketch' and 'Invert' preprocessors to prepare an inverted outline-style reference from any image you drop into the ControlNet module.
LineArt Anime
LineArt optimized for anime.
MLSD
MLSD (Mobile Line Segment Detection) is a straight-line detector. It is useful for extracting outlines with straight edges, such as interior designs, buildings, street scenes, picture frames, and paper edges. It works best if the reference image is an inverted outline-style image (white lines on a black background). You can use the 'Sketch' and 'Invert' preprocessors to prepare an inverted outline-style reference from any image you drop into the ControlNet module.
Normal BAE
Reproduces depth relationships using surface normal maps. The Bae normal map tends to render details in both the background and foreground, while the Depth model may ignore more background objects.
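A normal map can also be produced offline with the open-source Bae annotator from the controlnet_aux package; the checkpoint and file names below are assumptions, not Sogni's internal preprocessor.

```python
from controlnet_aux import NormalBaeDetector
from PIL import Image

# Open-source Bae normal-map annotator (not Sogni's internal code).
detector = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
photo = Image.open("scene.jpg")  # placeholder path
normal_map = detector(photo)     # RGB image encoding surface orientation
normal_map.save("normal_reference.png")
```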
OpenPose
A fast human key-point detection model that can extract human poses, such as the positions of the hands, legs, and head. Use it in conjunction with the face and body capture preprocessors.
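The same kind of pose skeleton can be extracted outside the app with the open-source OpenPose annotator; the sketch below uses controlnet_aux, and the file names are placeholders.

```python
from controlnet_aux import OpenposeDetector
from PIL import Image

# Open-source OpenPose annotator; Sogni's face and body capture
# preprocessors play this role in-app.
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
photo = Image.open("dancer.jpg")  # placeholder path
pose = detector(photo)            # stick-figure key points on black
pose.save("pose_reference.png")
```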
Scribble
Generates images from scribbles or freehand drawings. Works great with the Sketch preprocessor.
Segmentation
Finds and reuses distinct areas and subjects. Works well with the Segmentation and Depth preprocessors.
Shuffle
Randomly finds and reorders major elements of an image.
SoftEdge
In contrast with the LineArt models, SoftEdge finds and reuses soft edges to generate the new image. Works great without preprocessing.
Tile Resample
Can be used for:
- Adding details to an image: set the reference image in the ControlNet module and re-generate it using the same model (see the sketch after this list). Feel free to adjust the prompt, steps, guidance, etc.
- Changing the style of an image: set the reference image in the ControlNet module, switch to a model specialized in the desired style, adjust the prompt, and generate.
- Detail adjustments: change specific details such as clothing or hair color. Simply set the reference image in the ControlNet module and enter the details you wish to change in the prompt field, for example: 'red hair, green outfit.'
- Creative font generation: Tile Resample also does a great job blending fonts into your generated images. Drag and drop an image with a solid white background and black text into the ControlNet module, edit the generation settings, and generate. Experiment with the 'Invert' preprocessor. Higher steps and guidance values are recommended.
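As a rough illustration of the detail-adding use case, here is a sketch with the open-source ControlNet 1.1 tile checkpoint via diffusers; whether Sogni uses this exact model is an assumption, and the paths and prompt are placeholders.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from PIL import Image

# Open-source tile checkpoint; an assumption, not Sogni's deployment.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png")  # placeholder path
result = pipe(
    "best quality, highly detailed",  # adjust the prompt to steer the details
    image=image,                      # img2img source
    control_image=image,              # tile model conditions on the same image
    num_inference_steps=30,
    strength=0.75,                    # how much the source may change
).images[0]
result.save("detailed.png")
```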
Need help? Ask in our Discord!
For any questions regarding the use of Sogni, tips, suggestions, bugs, sharing art, discussing AI and life, or simply hanging out, join our Discord community! Join Sogni Discord ✨