Preprocessors
Learn about using Preprocessors for ControlNet Models
To the left of the image or video you have uploaded within the ControlNet field, as shown above, you can see a selection of icons.
You can select one of these to preprocess your image before choosing your ControlNet model.
The preprocessor you use will depend on the image or video you're working with and what you're trying to achieve.
Face Capture
Locates all faces in the input image, analyzes each face to detect facial features, and generates a facial landmarks map that can be used in conjunction with the OpenPose CN-model. This provides you with the ability to transfer facial expressions from one image to another. The Face Capture and Pose Capture preprocessors can be used simultaneously.
Comparison: image within ControlNet | after the Face Capture Preprocessor
By selecting the Face Capture Preprocessor and then selecting OpenPose in the CN (ControlNet) Model dropdown, you can use the face positioning of your ControlNet image within new creations. Note: Face Capture works with one or more faces.
Example creations made using the Face Capture Preprocessor with the OpenPose ControlNet Model:
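If you want to reproduce this step outside the Sogni UI, the minimal sketch below uses the open-source controlnet_aux package as an assumed stand-in (Sogni's internal pipeline is not published). It extracts facial landmarks in OpenPose format from a reference image.

```python
# Minimal sketch of a Face Capture-style preprocessor.
# Assumes the open-source controlnet_aux package; Sogni's own pipeline may differ.
from controlnet_aux import OpenposeDetector
from PIL import Image

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
reference = Image.open("reference.png")  # placeholder input path

# include_face=True adds facial landmarks to the map;
# include_body=True keeps the body skeleton, so Face Capture and Pose Capture combine.
landmark_map = detector(reference, include_body=True, include_face=True)
landmark_map.save("face_capture_map.png")
```

The saved map is the conditioning image that an OpenPose ControlNet model consumes.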
Pose Capture
This module locates human subjects in the input image, analyzes each one to capture their pose, and generates a 'bones-joints' type map that can be used in combination with the OpenPose CN-model. This gives you the ability to control the pose of subjects in your generated images.
Comparison: image within ControlNet | after the Pose Capture Preprocessor
By selecting the Pose Capture Preprocessor and then selecting OpenPose in the CN (ControlNet) Model dropdown, you can use the pose positioning of your ControlNet image within new creations.
Example creations made using the Pose Capture Preprocessor with the OpenPose ControlNet Model:
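As a rough illustration of what happens under the hood, the sketch below extracts a pose map and conditions an OpenPose ControlNet on it using the diffusers library. The model IDs and prompt are illustrative assumptions, not Sogni's actual configuration.

```python
# Minimal sketch: extract a pose map and condition an OpenPose ControlNet on it.
# Model IDs and prompt are illustrative; Sogni's models and settings may differ.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = detector(Image.open("reference.png"))  # 'bones-joints' style map

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")  # requires a CUDA-capable GPU

# The pose map controls the pose of the subject in the new creation.
result = pipe("a robot dancing in a neon-lit street", image=pose_map).images[0]
result.save("pose_controlled_creation.png")
```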
Sketch/Outline
Creates a sketch-style image from the reference image with the help of an AI model.
Comparison: image within ControlNet | after the Sketch/Outline Preprocessor
For the best results, it's recommended to invert the colours so the sketch is black with white lines, as shown in the screen recording below 👇
For reference images with the Sketch Preprocessor applied, the following ControlNet Models work best: Depth, LineArt, LineArt Anime, M-LSD, Normal Bae, Scribble, Segmentation & SoftEdge.
Example creations made using the Sketch/Outline Preprocessor with the M-LSD ControlNet Model:
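To prepare a similar sketch outside the app, one option (an assumption, not necessarily the model Sogni uses) is the lineart detector from controlnet_aux, inverted as recommended above when the lines come out dark on a light background:

```python
# Minimal sketch-style preprocessing example; the lineart model is an assumed stand-in
# for Sogni's Sketch/Outline preprocessor.
import numpy as np
from controlnet_aux import LineartDetector
from PIL import Image, ImageOps

detector = LineartDetector.from_pretrained("lllyasviel/Annotators")
sketch = detector(Image.open("reference.png")).convert("RGB")

# As recommended above, aim for a black background with white lines:
# invert only if the result is mostly light (i.e. dark lines on white).
if np.array(sketch.convert("L")).mean() > 127:
    sketch = ImageOps.invert(sketch)
sketch.save("sketch_for_controlnet.png")
```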
Depth Map
Utilizes the MiDaS depth model to generate a depth map from the reference image.
For best results when using the Depth Map preprocessor, select the Depth ControlNet Model from the CN Model dropdown.
Example creations made using the Depth Map Preprocessor with the Depth ControlNet Model:
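Because the preprocessor is MiDaS-based, a comparable depth map can be generated directly with the MiDaS models published on torch.hub; the MiDaS_small variant below is an assumption chosen for speed.

```python
# Minimal MiDaS depth-map sketch (MiDaS_small chosen for speed; Sogni's variant may differ).
import cv2
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("reference.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transforms.small_transform(img))
    # Resize the prediction back to the input resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

# Normalise to 0-255 so the map can be saved and used as a conditioning image.
depth = prediction.cpu().numpy()
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype(np.uint8)
cv2.imwrite("depth_map.png", depth)
```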
Segmentation
Utilizes the IS-Net and U2Net Segmentation models to generate a segmentation map from the reference image based on its distinct areas and subjects. When you select the Segmentation preprocessor, you also have the option to mask only subjects or backgrounds.
Using Segmentation with Subject Only Mask
This allows you to take the subject of the reference image, use it as a reference, and have full control over the background or environment.
Example creations made using the Segmentation Preprocessor with Subject Only Mask, along with the Segmentation ControlNet Model:
Using Segmentation with Background Only Mask
This allows you to have full control over the subject whilst maintaining strong references from the contents of the background.
Example creations made using the Segmentation Preprocessor with Background Only Mask, along with the Segmentation ControlNet Model:
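Both masks can be reproduced outside the app with the rembg package, which wraps U2Net and IS-Net; the model name below is an assumption about which variant best matches Sogni's output.

```python
# Minimal subject/background masking sketch using rembg (U2Net / IS-Net under the hood).
# The chosen model name is an assumption; Sogni's segmentation setup may differ.
from rembg import new_session, remove
from PIL import Image, ImageOps

session = new_session("isnet-general-use")  # or "u2net"
image = Image.open("reference.png")

# Subject Only Mask: white where the subject is, black everywhere else.
subject_mask = remove(image, session=session, only_mask=True)
subject_mask.save("subject_only_mask.png")

# Background Only Mask is simply the inverse of the subject mask.
background_mask = ImageOps.invert(subject_mask.convert("L"))
background_mask.save("background_only_mask.png")
```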
Use this preprocessor to invert the colours of your active image (original, depth map, sketch, etc.).
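Outside the app, the same inversion takes a couple of lines with Pillow; the filenames are placeholders.

```python
# Invert the colours of any preprocessed image (e.g. a depth map or sketch).
from PIL import Image, ImageOps

inverted = ImageOps.invert(Image.open("active_image.png").convert("RGB"))
inverted.save("active_image_inverted.png")
```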
Now that we've got the hang of preprocessors, let's learn about each ControlNet model that's available at the click of a button within Sogni 👇
Need help? Ask in our Discord!
For any questions regarding the use of Sogni, tips, suggestions, bugs, sharing art, discussing AI and life, or simply hanging out... Join our Discord community! Join Sogni Discord ✨