Sogni Fast Worker Advanced Configuration

This page outlines additional settings you can tweak in your Sogni Fast Worker install.

Setting Your .env File Preferences

These first two options are available for the Stable Diffusion worker and are set automatically during the worker-install process, but you can also update them manually through the .env file. If you change them manually, you'll need to restart the worker to apply the changes, either through the .bat / .sh files or by running docker compose down followed by docker compose up from the command line.
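For example, a manual restart after editing the .env file might look like this (run from the folder containing your worker's docker-compose.yaml; the -d flag, an addition here, keeps the worker running in the background):

```shell
# Stop the worker so it picks up the edited .env on the way back up
docker compose down
# Start it again in detached mode
docker compose up -d
```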

AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=6

The Stable Diffusion worker comes bundled with one model by default (Sogni Artist SD1.5) and all 14 supported ControlNets. You can set the number of models you want your worker to download as soon as it starts. Set this to 0 to disable automatic runtime model downloads.

While the number of models you host is less than AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT, the worker automatically downloads new models one at a time until that count is met. Models are downloaded in order of real-time popularity on the network; individual models range from roughly 2 GB to 6.7 GB.

After this count is met, the worker will continue to download new models automatically, but only as they are requested by the network, provided your models folder is still smaller than MAX_MODEL_FOLDER_SIZE_GB and AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT is greater than 0. Sogni Supernet only requests new model downloads when the job queue for a model exceeds the number of workers available to support it.

Models are cached on your local machine the first time they are downloaded, so you won't have to download them again unless you delete the models folder.

MAX_MODEL_FOLDER_SIZE_GB=300

Set the maximum size, in GB, of your models folder when using the Stable Diffusion worker. This includes ControlNets and other models. Any automated model downloads configured with AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT will stop before this limit is reached.

DATA_DOG_API_KEY=

Enter the Datadog API key assigned to you if you are participating in Sogni Supernet debug logging.

COOLDOWN_EVERY_X_JOBS_COUNT=100

By default, workers take a "cool down" break after every 100 successful jobs. You may change this to any interval you wish; note that a value of 0 falls back to the default of 100 jobs. We have provided a general default, but it is your responsibility to ensure your cooldown values are appropriate for your machine, since some popular models can keep machines active 24/7.

COOLDOWN_EVERY_X_JOBS_SEC=30

This value determines how many seconds the worker sits idle during a "cool down" before receiving a new job. The default is 30 seconds; you may change it to any duration, and a value of 0 falls back to the default. During a cooldown the worker remains connected but simply doesn't receive jobs, and there is no penalty to job priority after the cooldown ends. On the contrary, if your cooldown values are too low and your machine thermally throttles its performance due to insufficient cooling, that can reduce the rate at which it receives jobs, so cool your machine adequately and set realistic cooldown values for your scenario.
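Putting the settings above together, a .env tuned for a larger worker might look like the following sketch (the values are illustrative, not recommendations):

```ini
# Download up to 10 popular models at startup (0 disables runtime downloads)
AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=10
# Cap the models folder, including ControlNets, at 500 GB
MAX_MODEL_FOLDER_SIZE_GB=500
# Leave blank unless you were assigned a key for Supernet debug logging
DATA_DOG_API_KEY=
# Pause for 60 seconds after every 50 successful jobs
COOLDOWN_EVERY_X_JOBS_COUNT=50
COOLDOWN_EVERY_X_JOBS_SEC=60
```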

Setting Up Your docker-compose.yaml File Preferences

  • restart: unless-stopped: Remove this line or comment it out with # if you don't want the worker to start automatically when your computer starts. Additional options are outlined within the docker-compose.yaml file itself.

  • pull_policy: always: This setting ensures Docker checks Docker Hub for the latest version of the worker each time it runs. Comment it out if you want to stay on the current version (not recommended).

  • ./data:/data: This maps where Sogni Fast Worker persists configuration. Comment it out if you want all settings to reset with each launch.

  • ./data-models:/data-models: This maps where downloaded models are stored between reboots. Comment it out if you don't want to persist models across reboots. You can also manually place models here or map this folder to another location, like an existing Automatic1111 install, to share models.
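Taken together, the relevant portion of a docker-compose.yaml might look like the sketch below (the service name is a placeholder; the image is the Stable Diffusion worker image named elsewhere on this page):

```yaml
services:
  sogni-worker:                      # hypothetical service name
    image: sogni/sogni-stable-diffusion-worker:latest
    pull_policy: always              # always check Docker Hub for the latest worker
    restart: unless-stopped          # comment out to disable auto-start on boot
    volumes:
      - ./data:/data                 # persists worker configuration
      - ./data-models:/data-models   # persists downloaded models across reboots
```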

Running Fast Worker on a machine with multiple GPUs

Running Fast Worker on multiple GPUs is fully supported, but the install process is not quite as streamlined as our single-GPU installs. Please reach out if you have any difficulty.

Running Fast Worker on multi-GPU machines works by running multiple independent copies of the Fast Worker Docker software, each targeting a separate GPU. Model cache and disk space are shared between the instances, but each instance requires 30 GB of system RAM. For example: 3 GPUs = 90 GB of system RAM required. Anything less may cause instability and fewer jobs directed at your workers.

  1. DOWNLOAD INSTALLER FOR YOUR OS The multi-GPU worker installer is bundled with the regular Windows and Linux Sogni Fast Worker installers. Just download and unzip the installer files into the folder you want to run the workers from.

  2. RUN THE MULTI-GPU INSTALLER In the unzipped folder run ./worker-generator.bat on Windows or ./worker-generator.sh on Linux. This generates a separate .env configuration file for each GPU detected, plus a single docker-compose-generated.yml file which maps those .env files to Docker instances so you can run all worker instances at the same time through this single Docker Compose file.

  3. CONFIGURE AUTHENTICATION FOR EACH GPU Open each .envX file in a text editor and add API_KEY and NFT_TOKEN_ID values. Note that for multi-GPU installs the .env file names have a zero-based index appended: if you are running 2 GPUs, the files will be named .env0 and .env1. You will need a separate Fast Worker NFT for each worker, so each should have its own NFT_TOKEN_ID. The API_KEY can be the same if the NFTs are under the same Sogni Account. See details on acquiring an API_KEY and NFT_TOKEN_ID in the FAQ.

  4. CONFIGURE WORKER TYPE By default each worker runs the Stable Diffusion worker image. If you would like to switch an instance to the Flux worker, change the image: sogni/sogni-stable-diffusion-worker:latest line in the generated Docker Compose file to image: sogni/sogni-flux-worker:latest

  5. START ALL WORKER INSTANCES

    docker compose -f docker-compose-generated.yml up -d
  6. STOP ALL WORKER INSTANCES

    docker compose -f docker-compose-generated.yml down

You can additionally view the health status of all worker Docker instances locally via:

docker compose ps

To tail the logs of a particular worker instance for troubleshooting, grab the ID from docker compose ps and run the following:

docker logs -f ID_OF_DOCKER_INSTANCE_FROM_DOCKER_PS

Once connected to the Sogni Supernet you can view live worker status and activity on the Fast Worker Dashboard.

Need help? Join our Discord!

If you have any issues or questions setting up your Sogni Fast Worker, you can reach out on Discord or via email to app@sogni.ai for technical support. ✨
