🧠Sogni Fast Worker Advanced Configuration

This page outlines additional settings you can tweak in your Sogni Fast Worker install.

Setting Your .env File Preferences

These first two options are available for the Stable Diffusion worker and are set automatically during the worker-install process, but you can also update them manually through the .env file. If you change them manually, you'll need to restart the worker to apply the changes, either through the .bat / .sh files or by running docker compose down and then docker compose up from the command line.

AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=6

The Stable Diffusion worker comes bundled with 1 model by default (Sogni Artist SD1.5) and all 15 supported Controlnets. You may set the number of models you want your worker to download right away when it starts. Set this to 0 to disable automatic runtime model downloads.

While the number of models you are hosting is less than AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT, the worker will automatically download new models one at a time until this count is met. Models are downloaded based on real-time model popularity on the network; some models are 2 GB and others 6.7 GB.

After this count is met, if your model folder size is still less than MAX_MODEL_FOLDER_SIZE_GB and AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT is greater than 0, the worker will continue to automatically download new models, but only as they are requested by the network. The Sogni Supernet will only request new model downloads when the job queue for a model exceeds the number of workers available to support it.
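The eager download phase described above can be sketched as a simple predicate. This is an illustration only: the function name and argument order are hypothetical, not part of the worker, and it models only the initial catch-up phase (not demand-driven downloads after the minimum is met).

```shell
# Sketch: would the worker eagerly download another model right now?
# Args: hosted-model count, AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT,
#       current folder size in GB, MAX_MODEL_FOLDER_SIZE_GB.
should_download() {
  local hosted=$1 min_count=$2 folder_gb=$3 max_gb=$4
  # Eager downloads run only while auto-download is enabled (min > 0),
  # the hosted count is below the minimum, and the folder is under its cap.
  if [ "$min_count" -gt 0 ] && [ "$hosted" -lt "$min_count" ] && [ "$folder_gb" -lt "$max_gb" ]; then
    echo "download"
  else
    echo "skip"
  fi
}

should_download 3 6 120 300   # prints "download": only 3 of 6 models hosted
should_download 6 6 120 300   # prints "skip": minimum count already met
```

Setting AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=0 or hitting the MAX_MODEL_FOLDER_SIZE_GB cap stops these eager downloads in the same way.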

Models are cached on your local machine the first time they are downloaded, so you won't have to download them again unless you delete the models folder. Models are downloaded to the local folder /data-models/Stable-diffusion in the same directory your docker-compose file is run from. This folder is mounted as a persistent Docker volume, where both runtime-downloaded and existing model files are automatically picked up by Fast Worker.

If you need to download all the models ahead of time, for example to manually seed the model folders of multiple separate Fast Worker machines during setup, you can reference this API endpoint for the list of download links to use with your own download shell script.

You could also set AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT to a very high number like 999 and wait until it automatically downloads all the models, which you can then copy across your network.

MAX_MODEL_FOLDER_SIZE_GB=300

Set the maximum size for your models folder in GB when using the Stable Diffusion worker. This includes Controlnets and other models. Any automated model downloads you configure with AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT will stop before this limit is reached.

PREFERRED_MODEL_IDS=

You can use this option instead of AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT to tell the worker exactly which models you want it to host, by Sogni Model ID. If your worker's model folder already contains many models and the worker is switching models too frequently, you can use this feature to share only certain models. You can also specify models you have not downloaded yet; they will be downloaded automatically in the order they appear in this list. Any invalid model IDs will be skipped. You can find the list of supported model IDs for the Stable Diffusion worker here: https://socket.sogni.ai/api/v1/config/models/nvidia. You can find the list of supported Flux model IDs here: https://api.sogni.ai/v1/worker/config/flux

For example, if you wanted to host only the 8 SDXL Turbo models used by the Photobooth app, you could configure:

PREFERRED_MODEL_IDS=coreml-sogniXLturbo_alpha1_ad,coreml-dreamshaperXL_v21TurboDPMSDE,coreml-juggernautXL_v9Rdphoto2Lightning,coreml-wildcardxXLLIGHTNING_wildcardxXL,coreml-realvisxlV40_v40LightningBakedvae,coreml-realDream_sdxlLightning1,coreml-fenrisxl_SDXLLightning,coreml-epicrealismXL_VXIAbeast4SLightning

DEFAULT_MODEL_ID=

Tells the worker to select a specific model by default whenever it boots up. This will be the model that is loaded when it connects to the Supernet and starts waiting for jobs. If the value is invalid, it will be ignored and the default model for the worker type will be used. You may specify a model that is not yet downloaded but is listed in your PREFERRED_MODEL_IDS. For example, in the Photobooth case we set: DEFAULT_MODEL_ID=coreml-sogniXLturbo_alpha1_ad

DISABLE_AUTOMATIC_DOWNLOADS=1

By default, unless AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT and MAX_MODEL_FOLDER_SIZE_GB are both explicitly set to 0, a connected worker may be called upon at runtime by the Supernet to download a new model based on demand. To opt out of automatic downloads of new models, set this value to 1.
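Putting these options together, a .env sketch that pins a worker to a single Photobooth model and opts out of all runtime downloads might look like this (values are illustrative; use the model IDs from the endpoints above):

```
PREFERRED_MODEL_IDS=coreml-sogniXLturbo_alpha1_ad
DEFAULT_MODEL_ID=coreml-sogniXLturbo_alpha1_ad
DISABLE_AUTOMATIC_DOWNLOADS=1
```

Restart the worker after editing the file for the changes to take effect.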

DATA_DOG_API_KEY=

Enter the Datadog API key assigned to you if you are participating in Sogni Supernet debug logging.

COOLDOWN_EVERY_X_JOBS_COUNT=100

By default, workers are configured to take a "cool down" break after every 100 successful jobs. You may change this to any interval you wish; note that a value of 0 falls back to the default of 100 jobs. We have provided a general default, but it is your responsibility to ensure your cooldown values are appropriate for your machine's needs, considering that some popular models can keep machines active 24/7.

COOLDOWN_EVERY_X_JOBS_SEC=30

During a worker "cool down", this value determines how many seconds the worker should sit idle before receiving a new job. The default is 30 seconds. You may change this to any duration in seconds; note that a value of 0 will use the default. During a cooldown a worker remains connected but won't receive jobs. There is no penalty to the worker's job priority after leaving cooldown, whatever your cooldown settings. On the contrary, if your cooldown values are too low and your machine thermally throttles its performance due to insufficient cooling, that can reduce the rate at which it receives jobs, so please cool adequately and set realistic cooldown settings for your scenario.
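For example, a gentler duty cycle than the default, for a machine that benefits from longer breaks (values are illustrative, not recommendations):

```
COOLDOWN_EVERY_X_JOBS_COUNT=50
COOLDOWN_EVERY_X_JOBS_SEC=120
```

This pauses the worker for two minutes after every 50 successful jobs.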

Setting Up Your docker-compose.yaml File Preferences

  • restart: unless-stopped: Remove or comment this line with # if you don't want the worker to start automatically when your computer starts. There are additional options outlined within the docker-compose.yaml file.

  • pull_policy: always: This setting ensures Docker Hub checks for the latest version of the worker each time it runs. Comment it out if you want to stay on the current version (not recommended).

  • ./data:/data: This maps where Sogni Fast Worker persists configuration. Comment it out if you want all settings to reset with each launch.

  • ./data-models:/data-models: This maps where downloaded models are stored between reboots. Comment it out if you don't want to persist models across reboots. You can also manually place models here or map this folder to another location, like an existing Automatic1111 install, to share models.
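To share models with an existing Automatic1111 install, the data-models mapping might look like this in docker-compose.yaml. This is a sketch: the service name and host path are hypothetical, and you should adjust them for your own setup.

```yaml
services:
  sogni-worker:            # service name is illustrative
    volumes:
      - ./data:/data       # persisted worker configuration
      # Map your A1111 models folder (which contains Stable-diffusion/)
      # in place of the default ./data-models host folder:
      - /path/to/automatic1111/models:/data-models
```

With this mapping, models either side downloads land in the same folder and are picked up by both tools.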

Running Fast Worker on a machine with multiple GPUs

Running Fast Worker on multiple GPUs is fully supported but the install process is not quite as streamlined as our single GPU installs. Please reach out if you have any difficulty.

  1. DOWNLOAD INSTALLER FOR YOUR OS The multi-GPU worker installer is bundled with the regular Windows and Linux Sogni Fast Worker installer. Just download and unzip the installer files into the folder you want to run the workers from.

  2. RUN THE MULTI-GPU INSTALLER In the unzipped folder run ./worker-generator.bat for Windows or ./worker-generator.sh for Linux. This will generate a separate .env configuration file for each GPU detected, plus a single docker-compose-generated.yml file that maps these .env files to Docker instances so you can run all worker instances at the same time through this single Docker Compose file.

  3. CONFIGURE AUTHENTICATION FOR EACH GPU Open each generated .env file in a text editor and add API_KEY and NFT_TOKEN_ID values. Note that for multi-GPU installs the .env file names have a zero-based index appended; for example, if you are running 2 GPUs the files will be named .env0 and .env1. You will need a separate Fast Worker NFT for each worker, so each should have a separate NFT_TOKEN_ID. The API_KEY can be the same if the NFTs are under the same Sogni Account. See details on acquiring an API_KEY and NFT_TOKEN_ID in the FAQ.

  4. CONFIGURE WORKER TYPE By default each worker runs the Stable-diffusion worker image. If you would like to switch an instance to the Flux worker, update its image: sogni/sogni-stable-diffusion-worker:latest line in the generated docker-compose file to image: sogni/sogni-flux-worker:latest

  5. START ALL WORKER INSTANCES

    docker compose -f docker-compose-generated.yml up -d
  6. STOP ALL WORKER INSTANCES

    docker compose -f docker-compose-generated.yml down
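The image swap from step 4 might look like this inside docker-compose-generated.yml. This is a sketch only: the generated service names and env-file assignments on your machine will differ.

```yaml
services:
  worker-1:                       # generated per-GPU service; name is illustrative
    # image: sogni/sogni-stable-diffusion-worker:latest
    image: sogni/sogni-flux-worker:latest
    env_file: .env1               # per-GPU .env file produced by the generator
```

Leave the other services on the Stable Diffusion image if you want a mixed fleet.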

Once connected to the Sogni Supernet you can view live worker status and activity on the Fast Worker Dashboard.

You can additionally view the health status of all worker Docker instances locally via:

docker compose ps

To tail the logs of a particular worker instance for troubleshooting, grab the ID from docker compose ps and run the following:

docker logs -f ID_OF_DOCKER_INSTANCE_FROM_DOCKER_PS

Need help? Join our Discord!

If you have any issues or questions setting up your Sogni Fast Worker you can reach out on Discord or via email to [email protected] for technical support. Join Sogni Discord
