Sogni Fast Worker Advanced Configuration
This page outlines additional settings you can tweak for your Sogni Fast Worker install.
Setting Your .env File Preferences
.env File Preferences
These configuration options can be set in the worker's .env file; each option below notes which worker types support it. After making manual changes, restart the worker to apply them, either through the .bat/.sh launcher files or by running docker compose down and then docker compose up from the command line.
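For example, to restart from the command line, run the following from the folder containing your docker-compose.yaml:

```
# Stop the running worker container(s)...
docker compose down
# ...then start them again with the updated .env values
docker compose up
```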
These first two options are available for the Stable Diffusion worker only, as it currently hosts the most model files. They are set automatically during the worker-install process, but you can also update them manually through the .env file.
AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT=6
Supported by: Stable Diffusion Worker
The Stable Diffusion worker comes bundled with 1 model by default (Sogni Artist SD1.5) and all 15 supported ControlNets. You may set the number of models you want your worker to download right away when it starts. Set this to 0 to disable automatic runtime model downloads.
While the count of models you are hosting is less than AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT, the worker will automatically download new models one at a time until this count is met. Models are downloaded based on real-time model popularity on the network, with some models being 2 GB and others 6.7 GB.
After this count is met, if your model folder size is still less than MAX_MODEL_FOLDER_SIZE_GB and AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT is greater than 0, the worker will continue to automatically download new models, but only as they are requested by the network. The Sogni Supernet will only request new model downloads when the job queue for a model exceeds the number of workers available to support it.
Models are cached on your local machine the first time they are downloaded, so you won't have to download them again unless you delete the models folder. The models are downloaded to the local folder /data-models/Stable-diffusion in the same directory where your docker-compose file is run. This folder is loaded as a persistent Docker volume where both runtime-downloaded and existing model files are automatically picked up by Fast Worker.
If you need to download all the models ahead of time, for example to manually seed the model folders of multiple separate Fast Worker machines during setup, you can reference this API endpoint for the list of all Stable Diffusion worker model download links to use with your own download shell script.
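As a rough sketch, assuming the endpoint returns a JSON array of objects with a url field (the placeholder URL and the response shape below are assumptions; jq must be installed), such a script could look like:

```
#!/bin/sh
# Hypothetical seeding script. MODEL_LIST_URL stands in for the API endpoint
# referenced above, and the jq filter assumes each entry has a "url" field --
# adjust both to match the actual endpoint response.
MODEL_LIST_URL="https://example.invalid/sd-model-links"   # placeholder
DEST="./data-models/Stable-diffusion"

mkdir -p "$DEST"
curl -s "$MODEL_LIST_URL" | jq -r '.[].url' | while read -r url; do
  # -C - resumes partial downloads, so the script is safe to re-run
  (cd "$DEST" && curl -L -O -C - "$url")
done
```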
You could also set AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT to a very high number like 999 and wait until it automatically downloads all the models, which you can then copy across your network.
MAX_MODEL_FOLDER_SIZE_GB=300
Supported by: Stable Diffusion Worker
Set the maximum size for your models folder in GB when using the Stable Diffusion worker. This includes ControlNets and other models. Any automated model downloads you configure with AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT will stop before this limit is reached.
PREFERRED_MODEL_IDS=
Supported by: Stable Diffusion Worker, Flux Worker
You can use this option instead of AUTO_DOWNLOAD_TO_MIN_MODEL_COUNT to tell the worker exactly which models you want it to host, by Sogni Model ID. If your worker's model folder already has many models and is switching models too frequently, you can use this feature to share only certain models. You can also specify models that you have not downloaded yet, and they will automatically be downloaded in the order they appear in this list. Any invalid models will be skipped. Your worker instance will automatically connect to the Sogni Supernet and start taking on jobs as soon as one complete model is downloaded; it does not have to wait for all models.
You can find the list of supported model IDs for the Stable Diffusion worker here: https://socket.sogni.ai/api/v1/config/models/nvidia. You can find the list of supported Flux model IDs and resources (Text Encoder / VAE) here: https://api.sogni.ai/v1/worker/config/flux
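To browse these lists from the command line:

```
# Stable Diffusion worker model IDs
curl -s https://socket.sogni.ai/api/v1/config/models/nvidia
# Flux worker model IDs and resources (Text Encoder / VAE)
curl -s https://api.sogni.ai/v1/worker/config/flux
```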
For example, if you wanted to host only the 8 SDXL Turbo models used by the Photobooth app, you could configure that as:
PREFERRED_MODEL_IDS=coreml-sogniXLturbo_alpha1_ad,coreml-dreamshaperXL_v21TurboDPMSDE,coreml-juggernautXL_v9Rdphoto2Lightning,coreml-wildcardxXLLIGHTNING_wildcardxXL,coreml-realvisxlV40_v40LightningBakedvae,coreml-realDream_sdxlLightning1,coreml-fenrisxl_SDXLLightning,coreml-epicrealismXL_VXIAbeast4SLightning
Note: Sogni Comfy Worker uses PREFERRED_WORKFLOW_IDS instead. See below.
PREFERRED_WORKFLOW_IDS=
Supported by: Comfy Worker
This feature works the same way as PREFERRED_MODEL_IDS, except against supported Comfy Workflows instead of model IDs. Model IDs and Workflow IDs look the same, but a workflow may involve several models, or share the same models with other workflows. The full list of workflows and dependencies is in this configuration file: https://api.sogni.ai/v1/worker/config/comfy
Control exactly which workflows your worker will support. If not set, your worker will automatically download models for and support all workflows that fit within your GPU's VRAM capacity. When to use this:
You want to reduce total model storage requirements
You want to prevent automatic downloads of all models on startup
You want to minimize model switching, which can be significant with large video files, by focusing on specific popular workflows instead of all of them. For example, you could set PREFERRED_WORKFLOW_IDS=wan_v2.2-14b-fp8_i2v_lightx2v,wan_v2.2-14b-fp8_i2v to only download and support image-to-video workflow jobs (see the snippet after the list below). Note that almost every WAN workflow shares the same VAE and Text Encoder but loads approximately 28 GB of different model files, even between text-to-video and image-to-video.
How it works:
Provide a comma-separated list of workflow IDs you want to host
Models that are not yet downloaded will be automatically downloaded at startup in the order specified
Your worker will only receive jobs for the workflows you specify
If a workflow is invalid or requires more VRAM than available, it will be skipped
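Putting the image-to-video example together, the .env entry would be:

```
# Host only the WAN 2.2 image-to-video workflows (Lightning + standard)
PREFERRED_WORKFLOW_IDS=wan_v2.2-14b-fp8_i2v_lightx2v,wan_v2.2-14b-fp8_i2v
```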
If this value is not populated, the Comfy worker will automatically download all supported models compatible with the amount of GPU VRAM and disk space you have available. This is about 200 GB of model files for machines with 24 GB+ of VRAM. The full list of currently supported workflow IDs:
Available workflow IDs for Comfy Worker
Video Generation Workflows
Note: There are 5 Wan 2.2 video workflows, and they are all supported by GPUs that have at least 24 GB of VRAM. However, due to speed optimizations on the platform, GPUs in the 24 GB VRAM class are restricted to the "Lightning" versions of the models, and only GPUs with at least 40 GB of VRAM receive jobs at 720p resolution.

Image Generation Workflows

DEFAULT_MODEL_ID=
Supported by: Stable Diffusion Worker, Flux Worker
Tells the worker to select a specific model by default whenever it boots up. This will be the model that is loaded when it connects to the Supernet and starts waiting for jobs. If the value is invalid, it will be ignored and the default model for the worker type will be used. You may specify a model that is not yet downloaded but is listed in your PREFERRED_MODEL_IDS. For example, in the Photobooth case we set: DEFAULT_MODEL_ID=coreml-sogniXLturbo_alpha1_ad
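Paired with the PREFERRED_MODEL_IDS example above, the Photobooth .env uses:

```
# Boot with the Turbo model hot-loaded; it is listed first in
# PREFERRED_MODEL_IDS, so it downloads first and the worker can
# start taking jobs as soon as it is complete
DEFAULT_MODEL_ID=coreml-sogniXLturbo_alpha1_ad
```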
DEFAULT_WORKFLOW_ID=
Supported by: Comfy Worker
Same functionality as DEFAULT_MODEL_ID, but for the "workflows" supported by the Sogni Comfy Worker. For example, this would make sure the worker always starts up with the Z-Image Turbo image model "hot loaded" and ready for work: DEFAULT_WORKFLOW_ID=z_image_turbo_bf16
DISABLE_AUTOMATIC_DOWNLOADS=1
Supported by: All Worker Types
Prevents the worker from downloading any models automatically, either on startup or when jobs are requested.
When to use this:
You have manually pre-seeded your models folder
You want complete control over which models are present
You're running in an environment where downloads should not occur
Important notes:
With this setting enabled, the worker will only support workflows for which all required models are already present in your models folder
DATA_DOG_API_KEY=
Supported by: All Worker Types
Enter the Datadog API key assigned to you if you are participating in Sogni Supernet debug logging.
COOLDOWN_EVERY_X_JOBS_COUNT=100
Supported by: All Worker Types
Configure your worker to take periodic "cool down" breaks after every N successful jobs. By default, workers cool down after every 100 jobs. Set this to any positive integer to change the interval. Important: A value of 0 uses the default of 100 jobs. We provide a general default, but it is your responsibility to ensure your cooldown values are appropriate for your machine's thermal characteristics. Note: Video generation workflows are computationally intensive and can generate significant heat. Some popular workflows could keep machines active 24/7 during periods of heavy network job requests, so properly configured cooldown intervals (and adequate physical cooling hardware) are essential for hardware longevity.
COOLDOWN_EVERY_X_JOBS_SEC=30
Supported by: All Worker Types
During a worker "cool down" period, this value determines how many seconds the worker should wait before accepting new jobs. The default is 30 seconds.
How cooldowns work:
The worker remains connected to the Supernet during cooldown
No new jobs are accepted during the cooldown period
There is no penalty to job priority after exiting cooldown
However, if your cooldown settings are inadequate and your machine thermally throttles due to insufficient cooling, this can impact your job assignment rate
Important: Set realistic cooldown settings for your specific hardware and cooling setup. Video generation workloads are significantly more intensive than image generation.
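For example, a more conservative .env schedule for a video-focused worker (these values are illustrative, not a recommendation):

```
# Cool down after every 50 successful jobs...
COOLDOWN_EVERY_X_JOBS_COUNT=50
# ...and wait 120 seconds before accepting new jobs
COOLDOWN_EVERY_X_JOBS_SEC=120
```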
Setting Up Your docker-compose.yaml File Preferences
docker-compose.yaml File Preferences
restart: unless-stopped: Remove or comment out this line with # if you don't want the worker to start automatically when your computer starts. There are additional options outlined within the docker-compose.yaml file.
pull_policy: always: This setting ensures Docker Hub checks for the latest version of the worker each time it runs. Comment it out if you want to stay on the current version (not recommended).
./data:/data: This maps where Sogni Fast Worker persists configuration. Comment it out if you want all settings to reset with each launch.
./data-models:/data-models: This maps where downloaded models are stored between reboots. Comment it out if you don't want to persist models across reboots. You can also manually place models here or map this folder to another location, like an existing Automatic1111 install, to share models.
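As a sketch of how these lines fit together in the compose file (the service name and env_file line here are assumptions; the image name matches the Stable Diffusion worker referenced below):

```
services:
  sogni-worker:                        # hypothetical service name
    image: sogni/sogni-stable-diffusion-worker:latest
    pull_policy: always                # always check Docker Hub for the newest version
    restart: unless-stopped            # comment out to disable auto-start on boot
    env_file: .env                     # assumption: settings are read from .env
    volumes:
      - ./data:/data                   # persisted worker configuration
      - ./data-models:/data-models     # persisted model downloads
```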
Running Fast Worker on a machine with multiple GPUs
Running Fast Worker on multiple GPUs is fully supported, but the install process is not quite as streamlined as our single-GPU installs. Please reach out if you have any difficulty.
Running Fast Worker on multi-GPU machines works by simply running multiple independent copies of the Fast Worker Docker software, each targeting a separate GPU. Your first GPU worker instance is your Primary Worker and handles model downloads, while secondary workers monitor the shared models folder. This prevents duplicate downloads and enables efficient multi-GPU setups. Model cache and disk space are shared between the instances, but each instance requires 30 GB of system RAM. For example: 3 GPUs = 90 GB of system RAM required. Anything less may cause instability and a lack of jobs directed at the workers. 40 GB+ is preferred for video generation model hosting. Note: Multi-GPU support is well tested and supported with the Stable Diffusion and Flux workers but is still being tested on the Comfy worker.
RUN THE MULTI-GPU INSTALLER
In the unzipped folder run ./worker-generator.bat for Windows or ./worker-generator.sh for Linux. This will generate a separate .env configuration file for each GPU detected, and then a single docker-compose-generated.yml file which maps these .env files to Docker instances so you can run all worker instances at the same time through this single Docker Compose file.
CONFIGURE AUTHENTICATION FOR EACH GPU
Open each .envX file in a text editor and add API_KEY and NFT_TOKEN_ID values. Note that for multi-GPU installs the .env file names have a zero-based index appended to the file name; for example, if you are running 2 GPUs the files will be named .env0 and .env1. Note that you will need a separate Fast Worker NFT for each worker, so each should have a separate NFT_TOKEN_ID. The API_KEY can be the same if the NFTs are under the same Sogni Account. See details on acquiring an API_KEY and NFT_TOKEN_ID in the FAQ.
CONFIGURE WORKER TYPE
By default each worker runs the Stable Diffusion worker image. If you would like to switch instances to the Flux worker, you can update the image: sogni/sogni-stable-diffusion-worker:latest line in the generated docker-compose file to image: sogni/sogni-flux-worker:latest
START ALL WORKER INSTANCES
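Assuming the generated Compose file keeps the default name shown above, starting every instance at once looks like:

```
# -f selects the generated multi-GPU file; -d runs the workers in the background
docker compose -f docker-compose-generated.yml up -d
```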
STOP ALL WORKER INSTANCES
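And to stop them again (same filename assumption):

```
docker compose -f docker-compose-generated.yml down
```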
Once connected to the Sogni Supernet you can view live worker status and activity on the Fast Worker Dashboard:

You can additionally view the health status of all worker Docker instances locally via:
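A standard way to do this is Docker Compose's process listing (add -f docker-compose-generated.yml on multi-GPU installs):

```
docker compose ps
```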
To tail the logs of a particular worker instance for troubleshooting, grab the ID from docker compose ps and run the following:
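For example, with Docker's built-in log tailing (replace <container_id> with the ID from docker compose ps):

```
docker logs --follow <container_id>
```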
Note that you can also send job requests directly to your workers to troubleshoot them.
Worker Analytics API Endpoints
Sogni provides several REST API endpoints for you to pull analytics and earnings data for your workers. Many of these power the existing Sogni dashboards, like the Fast Worker Dashboard, but you may also call them directly. These are public HTTP GET endpoints:
Account address UTC day analytics: https://api.sogni.ai/v1/analytics/day/2025-11-10?address=0x91eD6f531d6de80c22C55D2c223BE919CfC3E08a
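For example, fetching one UTC day of analytics for a worker account address from the command line:

```
curl -s "https://api.sogni.ai/v1/analytics/day/2025-11-10?address=0x91eD6f531d6de80c22C55D2c223BE919CfC3E08a"
```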
Need help? Join our Discord!
If you have any issues or questions setting up your Sogni Fast Worker, you can reach out on Discord or via email to [email protected] for technical support. Join the Sogni Discord!