
ControlNet Models

Learn about using ControlNet Models


Sogni offers 14 different ControlNet models, each serving a different purpose and producing different results. You can easily select which model to use from the CN Model dropdown below the ControlNet reference image or video field.

Unlike Preprocessors, ControlNet models work their magic behind the scenes as they are applied to your generations. Let's break down what each ControlNet model (CN model) does and how it can best be used to further your creations ✨
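For the technically curious, here is a minimal sketch of what that behind-the-scenes conditioning looks like, using the open-source diffusers library. The checkpoint names are assumptions pointing at comparable public models, not Sogni's internal stack.

```python
# A minimal sketch of how a ControlNet model conditions generation,
# using the open-source `diffusers` library. Checkpoint names are
# illustrative assumptions, not Sogni's internals.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a ControlNet (here: Canny) alongside a base Stable Diffusion model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The reference image steers composition; the prompt steers content/style.
reference = load_image("reference_edges.png")  # white lines on black
image = pipe("a cozy cabin in the woods", image=reference).images[0]
image.save("result.png")
```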


Canny

Works with the outlines of an image. It works best if the reference image is an inverted outline-style image (white lines on a black background). You can use the 'Sketch' and 'Invert' preprocessors to prepare an inverted outline-style reference from any image you drop into the ControlNet module.
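If you want a feel for the kind of reference the Canny CN model expects, here is a rough stand-in using OpenCV's classic Canny edge detector, which conveniently already outputs white lines on a black background. File names are placeholders.

```python
# A rough sketch of an inverted outline-style reference, using OpenCV.
# Sogni's 'Sketch' + 'Invert' preprocessors produce a similar
# white-on-black result inside the app.
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
# Canny outputs white edges on a black background, which is the
# inverted outline style the Canny CN model works best with.
edges = cv2.Canny(img, threshold1=100, threshold2=200)
cv2.imwrite("canny_reference.png", edges)
```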

LineArt

Trained on line drawings, this tool can translate lineart images into full-color images. It works best if the reference image is an inverted outline-style image (white lines on a black background). You can use the 'Sketch' and 'Invert' preprocessors to prepare an inverted outline-style reference from any image you drop into the ControlNet module.

LineArt Anime

LineArt optimized for anime.

Scribble

Generate images from scribbles or freehand drawings. Works great with the Sketch preprocessor.


Depth

Reproduces depth relationships from the reference image. Works well with original images and depth-maps created using the 'Depth Map' preprocessor.

Step One: Add Reference

Optional Step Two: Use Depth Map Preprocessor

Optional Step Three: Invert Colors

Step Four: Select Depth in CN Model

Once you've completed the above steps, enter your prompt, select your style and generate ✨

Using the Depth CN Model can help you create beautiful images that replicate shapes in both the foreground and background.
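As an aside for the curious: Sogni's 'Depth Map' preprocessor produces the same kind of reference you could sketch yourself with an open-source depth estimator. The example below uses Hugging Face transformers; the model name is an assumption about a comparable public model.

```python
# A sketch of producing a depth-map reference with an open-source
# estimator. The model name is an assumption about a comparable
# public model; Sogni's 'Depth Map' preprocessor does this in-app.
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
result = depth_estimator("photo.jpg")
# `result["depth"]` is a PIL image: brighter pixels = closer objects.
result["depth"].save("depth_reference.png")
```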


InPaint

Utilize masks to define and modify specific areas, and employ reference images with clear transparent areas for the InPaint model to fill.

Tip: To zoom out an image and fill the empty areas, use InPaint. Just add the image to ControlNet, activate 'Camera: zoom, pan, roll', zoom out to your desired level, select the InPaint model, and click the "Imagine" button.

Use the Mask tool to draw an area you would like to InPaint and select InPaint on the right as shown below.

This will then load the image with the mask into the ControlNet panel with the InPaint CN Model active (you may need to wait for the model to download if it's your first time using it).

Enter your prompt within the prompt field. For the below example we used:

Prompt: "Cat wearing suit. bird on shoulder"

Style: InkPunk

Model: Sogni Artist v1.0

You will notice that other elements of the image have changed slightly from the original; this is the nature of working with ControlNet models at present, but we are working on a solution!
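To make the zoom-out tip above concrete, here is a minimal sketch of the underlying idea using Pillow: pad the image with empty (transparent) space, which becomes the area the InPaint model fills. Sogni's 'Camera: zoom, pan, roll' control does this for you visually; file names and the padding amount are placeholders.

```python
# A sketch of the 'zoom out and fill' idea: pad the image with
# transparent space, which InPaint then treats as the area to fill.
from PIL import Image

img = Image.open("photo.png").convert("RGBA")
pad = 128  # pixels of new space on every side (placeholder value)

# Paste the original onto a larger fully-transparent canvas; the
# transparent border is what the InPaint model will fill in.
canvas = Image.new("RGBA", (img.width + 2 * pad, img.height + 2 * pad), (0, 0, 0, 0))
canvas.paste(img, (pad, pad))
canvas.save("zoomed_out_reference.png")
```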


Instruct Pixel2Pixel

Capable of making direct changes to the reference image from text instructions. Use the prompt and style fields to instruct ControlNet to change something in the image. You can type things like "Make them look like robots", "Add boats on the water", "Change sunglasses for ski goggles", "Make it look like winter", etc.

For this example we generated our first image using:

Prompt: 3d ugly evil purple goblin with big nose, big eyes. Hoop earring. Head and shoulders

Style: (Custom) 3d game design

Model: Sogni Artist v1.0

This gave us this good-looking fella 👇

We then added this image to the ControlNet tab and selected the Instruct Pixel2Pixel CN Model.

We then added the prompt "Add steam punk goggles", which generated the following:
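If you are curious how this kind of instruction-driven editing works outside the app, the open-source InstructPix2Pix pipeline in diffusers behaves very similarly. The sketch below is illustrative, using the public checkpoint rather than Sogni's internals; file names are placeholders.

```python
# A sketch of instruction-based image editing with the open-source
# InstructPix2Pix pipeline from `diffusers` (public checkpoint,
# not Sogni's internal setup).
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

goblin = load_image("goblin.png")
# Plain-language instructions, exactly like the prompt used above.
edited = pipe("Add steam punk goggles", image=goblin).images[0]
edited.save("goblin_goggles.png")
```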


M-LSD

M-LSD (Mobile Line Segment Detection) is a straight-line detector. It is useful for extracting outlines with straight edges, like interior designs, buildings, street scenes, picture frames, and paper edges. It works best if the reference image is an inverted outline-style image (white lines on a black background). You can use the 'Sketch' and 'Invert' preprocessors to prepare an inverted outline-style reference from any image you drop into the ControlNet module.
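M-LSD itself is a neural line detector, but as a rough stand-in, classic OpenCV edge detection plus probabilistic Hough lines illustrates the same idea: keep only the straight segments and render them white-on-black. File names and thresholds are placeholders.

```python
# A rough stand-in for M-LSD: extract only straight segments from a
# photo and draw them white-on-black, the style this CN model expects.
import cv2
import numpy as np

img = cv2.imread("interior.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)

# Draw detected straight lines on a black canvas.
canvas = np.zeros_like(img)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(canvas, (x1, y1), (x2, y2), 255, 2)
cv2.imwrite("mlsd_reference.png", canvas)
```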

Normal Bae

Reproduces depth relationships using surface normal maps. The Bae normal map tends to render details in both the background and foreground, while the Depth model might ignore more background objects.
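For reference, a surface-normal map like the one this model consumes can be sketched with the open-source controlnet_aux annotators package (assuming its NormalBae detector); Sogni prepares this for you in-app, and the file names below are placeholders.

```python
# A sketch of producing a surface-normal map with the open-source
# `controlnet_aux` annotators (assumed available); Sogni does this
# step for you inside the app.
from controlnet_aux import NormalBaeDetector
from PIL import Image

detector = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
normals = detector(Image.open("photo.jpg"))  # RGB-encoded surface normals
normals.save("normal_reference.png")
```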

OpenPose

A fast human key-point detection model that can extract human poses like positions of hands, legs, and head. Use it in conjunction with the face and body capture preprocessors.
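A pose reference like the one this model uses can be sketched with the open-source controlnet_aux package, whose OpenPose annotator plays the same role as Sogni's body capture preprocessor; file names are placeholders.

```python
# A sketch of pose extraction with the open-source `controlnet_aux`
# OpenPose annotator; Sogni's body capture preprocessor plays the
# same role inside the app.
from controlnet_aux import OpenposeDetector
from PIL import Image

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = detector(Image.open("person.jpg"))  # stick-figure keypoint map
pose.save("pose_reference.png")
```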

Segmentation

Find and reuse distinct areas and subjects. Works well with the Segmentation and Depth preprocessors.

Shuffle

Randomly finds and reorders major elements in an image.

Soft Edge

In contrast with the LineArt models, SoftEdge finds and reuses soft edges to generate the new image. Works great without preprocessing.

Tile Resample

Can be used for:

Adding details to an image: Set the reference image in the ControlNet module, and re-generate it using the same model. Feel free to adjust the prompt, steps, guidance, etc.

Changing the style of an image: Set the reference image in the ControlNet module, switch to a model that is specialized in the desired style, adjust the prompt, and generate.

Detail adjustments: You can change specific details such as clothing or hair color. Simply set the reference image in the ControlNet module, and enter the details you wish to change in the prompt field, for example: 'red hair, green outfit.'

Creative font generation: Tile Resample can also do a great job blending fonts into your generated images. Simply drag and drop an image with a solid white background and black text into the ControlNet module, edit the generation settings, and generate. Experiment with the 'Invert' preprocessor. Higher steps and guidance values are recommended.
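As a hedged illustration of the 'adding details' use case, the sketch below re-generates an image with a public SD 1.5 tile ControlNet via diffusers img2img; the checkpoint names describe comparable open models, not Sogni's internal setup, and file names are placeholders.

```python
# A sketch of the 'adding details' use case with a public SD 1.5
# tile ControlNet via `diffusers` img2img (names are assumptions
# about comparable open models, not Sogni's internals).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

src = load_image("portrait.png")
# Re-generate the same image: the tile ControlNet keeps structure
# while the sampler adds fresh detail. Adjust steps/guidance to taste.
out = pipe("portrait, highly detailed", image=src, control_image=src,
           strength=0.75, num_inference_steps=40, guidance_scale=9.0).images[0]
out.save("detailed.png")
```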


▶️ Watch the tutorial video: InPaint & Canvas Mode: Targeted Edits & Expanding Horizons

Need help? Ask in our Discord! For any questions regarding the use of Sogni, tips, suggestions, bugs, sharing art, discussing AI and life, or simply hanging out... join our Discord community! ✨