Sogni Fast Worker Advanced Configuration
This page outlines additional settings you can tweak with your Sogni Fast Worker install.
Setting Your .env File Preferences
These first two options are available for the Stable Diffusion worker and are set automatically during the worker-install process, but you can also update them manually through the .env file. If changed manually, you'll need to restart the worker to apply the changes, either through the .bat / .sh files or by running docker compose down and then docker compose up from the command line.
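For example, a manual restart from the install folder looks like this (assuming the Docker Compose v2 CLI referenced above):

```shell
# Stop the running worker container(s)
docker compose down

# Start them again (add -d to run detached in the background)
docker compose up -d
```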
AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=6
The Stable Diffusion worker comes bundled with one model by default (Sogni Artist SD1.5) and all 14 supported Controlnets. You may set the number of models you want your worker to download right away when it starts. Set this to 0 to disable automatic runtime model downloads.
While the count of models you are hosting is less than AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT, the worker will automatically download new models one at a time until that count is met. Models are downloaded based on real-time model popularity on the network; some models are 2 GB and others are 6.7 GB.
After this count is met, if your model folder size is still less than MAX_MODEL_FOLDER_SIZE_GB and AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT is greater than 0, the worker will continue to automatically download new models, but only as they are requested by the network. Sogni Supernet will only request new model downloads when the job queue for a model exceeds the number of workers available to support it.
Models are cached on your local machine the first time they are downloaded, so you won't have to download them again unless you delete the models folder.
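The download policy described above can be sketched as a small Python function (illustrative only; the function name, signature, and parameter names are assumptions, not the worker's actual code):

```python
def should_download_model(hosted_count, folder_size_gb, queued_jobs, available_workers,
                          min_count=6, max_folder_gb=300):
    """Hypothetical sketch of the auto-download policy described above."""
    if min_count == 0:
        return False  # AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=0 disables automatic downloads
    if folder_size_gb >= max_folder_gb:
        return False  # MAX_MODEL_FOLDER_SIZE_GB cap reached
    if hosted_count < min_count:
        return True   # below the minimum: keep pulling popular models one at a time
    # At or above the minimum: only download when the Supernet requests it,
    # i.e. a model's job queue exceeds the workers available to serve it.
    return queued_jobs > available_workers
```

For instance, a worker hosting 3 models with room to spare keeps downloading, while one already at the minimum only downloads when a model's queue outgrows its workers.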
MAX_MODEL_FOLDER_SIZE_GB=300
Set the maximum size for your models folder in GB when using the Stable Diffusion worker. This includes Controlnets and other models. Any automated model downloads you configure with AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT will stop before this limit is reached.
DATA_DOG_API_KEY=
Enter the DATA_DOG API key assigned to you if you are participating in Sogni Supernet debug logging.
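Taken together, the relevant portion of a .env file might look like this (values are the defaults mentioned above; the key is left blank unless one has been assigned to you):

```shell
AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=6   # 0 disables automatic runtime model downloads
MAX_MODEL_FOLDER_SIZE_GB=300         # hard cap on the models folder, Controlnets included
DATA_DOG_API_KEY=                    # only if participating in Supernet debug logging
```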
Setting Up Your docker-compose.yaml File Preferences
restart: unless-stopped: Remove or comment this line with # if you don't want the worker to start automatically when your computer starts. There are additional options outlined within the docker-compose.yaml file.
pull_policy: always: This setting ensures Docker Hub is checked for the latest version of the worker each time it runs. Comment it out if you want to stay on the current version (not recommended).
./data:/data: This maps where Sogni Fast Worker persists its configuration. Comment it out if you want all settings to reset with each launch.
./data-models:/data-models: This maps where downloaded models are stored between reboots. Comment it out if you don't want to persist models across reboots. You can also manually place models here, or map this folder to another location, like an existing Automatic1111 install, to share models.
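As a sketch, those settings sit in docker-compose.yaml roughly like this (a trimmed, illustrative fragment; the service name is an assumption and your actual file will contain more options):

```yaml
services:
  worker:                             # service name is illustrative
    restart: unless-stopped           # comment out to disable auto-start on boot
    pull_policy: always               # comment out to pin the current version (not recommended)
    volumes:
      - ./data:/data                  # persisted worker configuration
      - ./data-models:/data-models    # persisted model downloads
```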
Running Fast Worker on a machine with multiple GPUs
Running Fast Worker on multiple GPUs is fully supported but the install process is not quite as streamlined as our single GPU installs. Please reach out if you have any difficulty.
Running Fast Worker on multi-GPU machines works by simply running multiple independent copies of the Fast Worker Docker software, each targeting a separate GPU. The model cache and disk space are shared between the instances, but each instance requires 30 GB of system RAM. For example: 3 GPUs = 90 GB of system RAM required. Anything less may cause instability and fewer jobs directed at the worker.
1. After downloading the correct Fast Worker installer for your OS, run the generator script: ./worker-generator.bat or ./worker-generator.sh. This creates a docker-compose-generated.yml file; by default the script uses nvidia-smi to detect the number of GPUs and generates a worker config for each one.
2. Open the generated docker-compose-generated.yml file and update the .env references to .env1, .env2, etc. for each GPU.
3. In the install folder, duplicate the .env file to match your pattern from step 2, then populate each copy with the unique variables for that worker. Note that you will need a separate Fast Worker NFT for each worker, so each .env file should specify its own NFT_TOKEN_ID and API_KEY. If you are running all workers under the same Sogni account, all NFTs need to be moved to that account's address and the same API_KEY for that address can be used, but each worker must still specify a different NFT_TOKEN_ID that exists in the account tied to that API_KEY.
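For example, two workers under the same Sogni account might use .env files like these (the values are placeholders, not real keys or token IDs):

```shell
# .env1 — worker for GPU 0
API_KEY=<your-account-api-key>
NFT_TOKEN_ID=<token-id-of-first-fast-worker-nft>

# .env2 — worker for GPU 1
API_KEY=<your-account-api-key>                      # same account key is fine
NFT_TOKEN_ID=<token-id-of-second-fast-worker-nft>   # must be a different NFT
```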
By default each worker runs the Stable Diffusion worker image. If you would like to switch any of them to the Flux worker, update that worker's image line in the generated docker-compose file to `sogni/sogni-flux-worker:latest`.
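In the generated compose file, the change for one worker looks roughly like this (the service name here is illustrative):

```yaml
  worker-2:
    image: sogni/sogni-flux-worker:latest   # switched from the Stable Diffusion image
```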
Run this command from the installer folder to start all workers:
Run this command from the installer folder to stop all workers:
To view the health status of all running Docker workers:
To tail the logs of a particular worker instance for troubleshooting, grab the ID from docker compose ps and run the following:
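Assuming the generated compose file from the steps above, typical invocations would look like this (the -f and -d flags are standard Docker Compose options; the file name matches the generator's output):

```shell
# Start all workers in the background
docker compose -f docker-compose-generated.yml up -d

# Stop all workers
docker compose -f docker-compose-generated.yml down

# View the health/status of all running worker containers
docker compose -f docker-compose-generated.yml ps

# Tail the logs of one worker (substitute the ID or name from ps)
docker logs -f <container-id>
```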
Need help? Join our Discord!
If you have any issues or questions setting up your Sogni Fast Worker, you can reach out on Discord or via email to app@sogni.ai for technical support. Join the Sogni Discord ✨