# Sogni SDK
The Sogni SDK (@sogni-ai/sogni-client) is a TypeScript/JavaScript library that gives developers direct access to the Sogni Supernet — a DePIN protocol for creative AI inference. It supports image generation, image editing, video generation, LLM chat with streaming and tool calling, and Sogni Platform Tools that let an LLM generate images, video, and music through natural language. The SDK works in Node.js and browser environments and communicates via a WebSocket connection with an event-based API for real-time progress tracking.
## Who it's for
The Sogni SDK is for developers, startups, agencies, agent builders, and product teams who want to embed Sogni's creative AI capabilities directly in their own applications — without going through the Sogni web or Mac apps. Use it to build image generators, video tools, photobooth experiences, agent workflows, or any product that needs fast, decentralized creative inference.
## Installation

```bash
npm install @sogni-ai/sogni-client
```

or with Yarn:

```bash
yarn add @sogni-ai/sogni-client
```
## Authentication
To use the Sogni Supernet you need a Sogni account with a positive SOGNI or Spark token balance. Create a free account at app.sogni.ai or in the Mac app; you get tokens just for signing up and confirming your email.
### Option 1: API Key (Recommended)

Get your API key at dashboard.sogni.ai — click your Username dropdown in the top-right corner to provision it.

```js
import { SogniClient } from '@sogni-ai/sogni-client';

const sogni = await SogniClient.createInstance({
  appId: 'your-app-id', // Required — unique string, UUID recommended
  network: 'fast',      // 'fast' or 'relaxed'
  apiKey: 'your-api-key'
});

// No login() call needed — authenticated automatically
const models = await sogni.projects.waitForModels();
```
### Option 2: Username and Password

```js
const sogni = await SogniClient.createInstance({
  appId: 'your-app-id',
  network: 'fast'
});

await sogni.account.login('your-username', 'your-password');
const models = await sogni.projects.waitForModels();
```

Note: `appId` must be unique — UUID is recommended. Only one connection per `appId` is allowed; a second connection with the same ID closes the first. These samples use ES modules (top-level `await`); wrap them in an async function if you use CommonJS.
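One simple way to satisfy the uniqueness requirement is to generate the `appId` with Node's built-in `crypto.randomUUID()` (also available in modern browsers). A sketch, not an SDK requirement:

```js
import { randomUUID } from 'crypto';

// Generate a UUID to use as the appId. Persist it (e.g. on disk or in
// localStorage) if you want the same appId across restarts.
const appId = randomUUID();
```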
## Networks

Two worker networks are available:

- `fast` — High-end GPU workers, optimized for speed. Required for video generation. Higher token cost.
- `relaxed` — Apple Mac workers, optimized for cost. Supports image generation only. Lower token cost.

Cost scales with total pixel count and inference steps.
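The scaling rule can be made concrete with a toy calculation. Actual token prices are set by the network and vary by model; this hypothetical helper only illustrates the proportionality:

```js
// Illustrative only: relative cost is proportional to pixel count times
// inference steps. Real pricing comes from the network, not this formula.
function relativeCost(width, height, steps) {
  return width * height * steps;
}

// Doubling both dimensions at the same step count quadruples the cost:
const ratio = relativeCost(1024, 1024, 20) / relativeCost(512, 512, 20); // 4
```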
## Quickstart: Generate an Image

```js
import { SogniClient } from '@sogni-ai/sogni-client';

const sogni = await SogniClient.createInstance({
  appId: 'my-app-uuid',
  network: 'fast',
  apiKey: 'your-api-key'
});

await sogni.projects.waitForModels();

// Pick the model with the most active workers
const mostPopularModel = sogni.projects.availableModels.reduce((a, b) =>
  a.workerCount > b.workerCount ? a : b
);

const project = await sogni.projects.create({
  type: 'image',
  modelId: mostPopularModel.id,
  positivePrompt: 'A cat wearing a hat',
  negativePrompt: 'low quality, bad quality, watermark',
  steps: 20,
  guidance: 7.5,
  numberOfMedia: 1,
  tokenType: 'spark',
  network: 'fast'
});

const imageUrls = await project.waitForCompletion();
console.log('Image URLs:', imageUrls);
// Images are available for 24 hours
```
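Since results expire after 24 hours, you will usually want to persist them right away. A minimal sketch using Node 18+'s built-in `fetch`; the `saveImage` helper is illustrative, not part of the SDK:

```js
import fs from 'fs/promises';

// Download one result URL and write it to disk before the 24-hour expiry.
async function saveImage(url, filePath) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);
  await fs.writeFile(filePath, Buffer.from(await res.arrayBuffer()));
  return filePath;
}

// Usage with the quickstart above:
// await Promise.all(imageUrls.map((url, i) => saveImage(url, `result-${i}.png`)));
```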
## Image Generation
### Progress Events

Instead of `await project.waitForCompletion()`, you can listen to events for real-time updates:

```js
project.on('progress', (progress) => {
  console.log('Progress:', progress);
});

project.on('jobCompleted', (job) => {
  console.log('Job done:', job.id, job.resultUrl);
});

project.on('completed', (images) => {
  console.log('All done:', images);
});

project.on('failed', (errorData) => {
  console.error('Failed:', errorData);
});
```
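If you want progress callbacks and promise-style completion at the same time, you can combine the two yourself. A small sketch that assumes only the events listed above:

```js
// Resolve on 'completed', reject on 'failed', and forward 'progress'
// events to an optional callback along the way.
function trackProject(project, onProgress) {
  return new Promise((resolve, reject) => {
    if (onProgress) project.on('progress', onProgress);
    project.on('completed', resolve);
    project.on('failed', reject);
  });
}

// Usage:
// const images = await trackProject(project, (p) => console.log('Progress:', p));
```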
### FLUX.1 [schnell] Defaults

```js
const fluxDefaults = {
  modelId: 'flux1-schnell-fp8',
  steps: 4,
  guidance: 1
};
```
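These defaults can be spread into the project parameters; aside from the prompt, everything here mirrors the quickstart call shown earlier:

```js
const fluxDefaults = {
  modelId: 'flux1-schnell-fp8',
  steps: 4,
  guidance: 1
};

// Merge the defaults into the create() parameters; any explicit key
// placed after the spread would override the default.
const params = {
  type: 'image',
  network: 'fast',
  positivePrompt: 'A cat wearing a hat',
  ...fluxDefaults
};

// const project = await sogni.projects.create(params);
```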
### Connection Events

```js
sogni.client.on('connected', ({ network }) => {
  console.log('Connected to Supernet:', network);
});

sogni.client.on('disconnected', ({ code, reason }) => {
  console.log('Disconnected:', code, reason);
});
```

Full `ProjectParams` type definition: sdk-docs.sogni.ai
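The SDK does not dictate a reconnection policy; if your application recreates the client after a `disconnected` event, capped exponential backoff is a common choice. A sketch (an application-level helper, not an SDK feature):

```js
// Delay before reconnect attempt N (0-based): doubles each time, capped.
function backoffDelay(attempt, baseMs = 1000, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```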
## Image Editing

Use models like `qwen_image_edit_2511_fp8` or `qwen_image_edit_2511_fp8_lightning` for reference-based editing:

```js
import fs from 'fs';

const cnImage = fs.readFileSync('./source.jpg');

const project = await sogni.projects.create({
  type: 'image',
  network: 'fast',
  modelId: 'qwen_image_edit_2511_fp8_lightning',
  numberOfMedia: 1,
  positivePrompt: 'make the background a snowy mountain',
  steps: 4,
  guidance: 7.5,
  controlNet: {
    name: 'instrp2p',
    image: cnImage
  }
});

const urls = await project.waitForCompletion();
```

See the `workflow_image_edit.mjs` example for the full reference implementation.
## Video Generation

Video generation requires the fast network and uses Wan 2.2 14B FP8 models. Five workflow types are supported:

| Workflow | Model suffix | Description |
|---|---|---|
| Text-to-Video | `_t2v` | Generate video from a text prompt |
| Image-to-Video | `_i2v` | Animate a static image |
| Sound-to-Video | `_s2v` | Video + audio sync with lip-sync |
| Animate-Move | `_animate-move` | Transfer motion from a reference video |
| Animate-Replace | `_animate-replace` | Swap a subject while preserving motion |

Add the `_lightx2v` suffix for faster (4-step) inference; omit it for highest quality.
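The suffix scheme in the table can be assembled mechanically. This helper is hypothetical (the SDK takes plain `modelId` strings); the base name and suffixes come from the model IDs in this document:

```js
// Build a Wan 2.2 14B FP8 model ID from a workflow suffix, optionally
// adding the fast 4-step _lightx2v variant.
function wanModelId(workflow, { lightning = false } = {}) {
  const known = ['t2v', 'i2v', 's2v', 'animate-move', 'animate-replace'];
  if (!known.includes(workflow)) {
    throw new Error(`Unknown workflow: ${workflow}`);
  }
  return `wan_v2.2-14b-fp8_${workflow}${lightning ? '_lightx2v' : ''}`;
}
```

For example, `wanModelId('t2v', { lightning: true })` yields `'wan_v2.2-14b-fp8_t2v_lightx2v'`, the ID used in the Text-to-Video example.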
### Text-to-Video

```js
const project = await sogni.projects.create({
  type: 'video',
  network: 'fast',
  modelId: 'wan_v2.2-14b-fp8_t2v_lightx2v',
  positivePrompt: 'A serene ocean wave crashing on a beach at sunset',
  fps: 16,
  frames: 81, // ~5 seconds at 16fps
  width: 512,
  height: 512
});

const videoUrls = await project.waitForCompletion();
console.log('Video URL:', videoUrls[0]);
```
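The frame count above matches frames = seconds * fps + 1 (81 = 5 * 16 + 1). Assuming that reading of the example holds for other durations, a tiny helper:

```js
// Frame count for a target duration; 81 frames is ~5 seconds at 16 fps.
function framesFor(seconds, fps) {
  return seconds * fps + 1;
}
```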
### Image-to-Video

```js
import fs from 'fs';

const referenceImage = fs.readFileSync('./input-image.png');

const project = await sogni.projects.create({
  type: 'video',
  network: 'fast',
  modelId: 'wan_v2.2-14b-fp8_i2v_lightx2v',
  positivePrompt: 'camera zooms in slowly',
  referenceImage: referenceImage,
  fps: 16,
  frames: 81
});
```
### Sound-to-Video

```js
import fs from 'fs';

const referenceImage = fs.readFileSync('./image.jpg');
const referenceAudio = fs.readFileSync('./audio.m4a');

const project = await sogni.projects.create({
  type: 'video',
  network: 'fast',
  modelId: 'wan_v2.2-14b-fp8_s2v_lightx2v',
  referenceImage: referenceImage,
  referenceAudio: referenceAudio,
  fps: 16,
  frames: 81
});
```
See the examples directory for Animate-Move, Animate-Replace, and other full workflow examples.
## LLM Chat and Tool Calling

The SDK includes an OpenAI-compatible chat completions API running on Sogni Supernet workers. It supports streaming, multi-turn conversations, reasoning mode, and tool calling.
### Chat Completion

```js
const response = await sogni.projects.chatCompletion({
  model: 'qwen3.5-35b-a3b-gguf-q4km',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the Sogni Supernet?' }
  ],
  max_tokens: 4096,
  temperature: 0.7
});

console.log(response.choices[0].message.content);
```
### Tool Calling

```js
const tools = [{
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location']
    }
  }
}];

const response = await sogni.projects.chatCompletion({
  model: 'qwen3.5-35b-a3b-gguf-q4km',
  messages: [{ role: 'user', content: "What's the weather in Austin?" }],
  tools: tools,
  tool_choice: 'auto'
});
```
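When the model responds with `tool_calls`, the application executes them and sends the results back as `tool` role messages in a follow-up `chatCompletion` call. A sketch of that dispatch step, assuming the standard OpenAI-compatible response shape; `registry` is an application-level map from tool names to functions, not an SDK API:

```js
// Execute each requested tool call and build the `tool` messages to
// append to the conversation before the next chatCompletion call.
async function runToolCalls(message, registry) {
  const toolMessages = [];
  for (const call of message.tool_calls ?? []) {
    const fn = registry[call.function.name];
    const args = JSON.parse(call.function.arguments);
    const result = fn ? await fn(args) : { error: `unknown tool: ${call.function.name}` };
    toolMessages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: JSON.stringify(result)
    });
  }
  return toolMessages;
}

// Usage:
// const message = response.choices[0].message;
// if (message.tool_calls) {
//   const toolMessages = await runToolCalls(message, { get_weather: getWeather });
//   // Push `message` and `toolMessages` onto the messages array, then call
//   // sogni.projects.chatCompletion again for the final answer.
// }
```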
## Sogni Platform Tools
Combine LLM tool calling with Sogni's media generation. The LLM detects when a user wants to create an image, video, or music, enhances the prompt, and calls Sogni's generation APIs automatically:
- "Create an image of a cyberpunk city at night"
- "Generate a video of ocean waves at sunset"
- "Compose a jazz song about the rain"
See workflow_text_chat_sogni_tools.mjs for a complete implementation.
## Featured Models

| Model ID | Description |
|---|---|
| `flux1-schnell-fp8` | FLUX.1 [schnell] — fast image generation (4 steps) |
| `z_image_turbo_bf16` | Z-Image Turbo — ultra-fast 8-step generation |
| `z_image_bf16` | Z-Image — high quality 20-step generation |
| `qwen_image_edit_2511_fp8_lightning` | Qwen Image Edit Lightning — fast 4-step editing |
| `wan_v2.2-14b-fp8_t2v_lightx2v` | Wan 2.2 Text-to-Video — fast variant |
| `qwen3.5-35b-a3b-gguf-q4km` | Qwen3.5 35B VLM — LLM chat, tool calling, vision |
## Further Reading

- GitHub Repository — full source, examples, changelog
- TypeScript API Docs — complete `ProjectParams`, `Job`, and `Project` reference
- Developer Tutorials — step-by-step recipes for common workflows
- Sogni Intelligence — LLM, agent, and tool-calling capabilities overview
- API Reference — REST API documentation