distributed async ai workflows

Dynamically branch workflows at runtime, automatically scale compute resources, and reliably persist agent state so your AI agents can handle any workload.

uv add exospherehost

from our partners

Exosphere offers distributed runtimes and customizable task nodes that can be seamlessly linked to build larger pipelines, aligning well with our broader DataOps and AIOps strategy. Unlike existing options such as Airflow, which suffer from high latency and a cumbersome interface, Exosphere provides a streamlined solution for building and managing AI pipelines in a unified environment.

Arvind Iyer

AI Software Architect, Nokia

unbounded parallel threads

Fan out to thousands of parallel threads automatically at runtime, each scaling independently with your workload, so your agents can handle any load effortlessly.

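A minimal sketch of the fanout pattern, assuming execute may return a list of Outputs and that each element becomes its own parallel downstream state; treat that return convention as an assumption rather than the confirmed API:

from exospherehost import BaseNode
from pydantic import BaseModel

class SplitDocumentNode(BaseNode):
    class Inputs(BaseModel):
        document: str

    class Outputs(BaseModel):
        chunk: str

    class Secrets(BaseModel):
        pass

    async def execute(self) -> list[Outputs]:
        # assumption: returning a list fans out, so the next node in the
        # graph runs once per chunk, in parallel
        chunks = self.inputs.document.split("\n\n")
        return [self.Outputs(chunk=chunk) for chunk in chunks]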

graphs that take shape dynamically at runtime

Graphs branch out and converge on the fly, adapting reliably to your workload and data at runtime.


states that revive themselves

Recover from any failure or error and continue your workflow seamlessly. State is persisted on every node by default.


signals to help nodes communicate

Signals let nodes shape the workflow at runtime, steering execution toward complex, data-dependent paths.
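
A minimal sketch of how a node might steer the run with signals, assuming signal classes along the lines of PruneSignal and ReQueueAfterSignal can be imported from exospherehost; treat those names and signatures as assumptions rather than the confirmed API:

from datetime import timedelta

from exospherehost import BaseNode, PruneSignal, ReQueueAfterSignal  # signal names assumed
from pydantic import BaseModel

class PollJobNode(BaseNode):
    class Inputs(BaseModel):
        job_status: str

    class Outputs(BaseModel):
        result: str

    class Secrets(BaseModel):
        pass

    async def execute(self) -> Outputs:
        if self.inputs.job_status == "cancelled":
            # assumption: raising a prune signal stops this branch of the run
            raise PruneSignal()
        if self.inputs.job_status == "pending":
            # assumption: re-queues this state so it is retried later
            raise ReQueueAfterSignal(timedelta(seconds=30))
        return self.Outputs(result="job finished")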

central data context for all nodes in an execution

Describe your context once and trigger a flow; every node in the execution reads from the same central store.

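A minimal sketch of the idea, assuming a plain dict seeds the store at trigger time and that node inputs reference its keys with the store.<key> syntax used in the graph example further down:

# central data context ("store") shared by every node in this execution
store = {"name": "Ada"}

# in the graph definition, a node's inputs can be wired to store keys,
# so every node in the run reads the same shared values
greeter_inputs = {"name": "store.name"}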

scheduling, chunking, polling, and many more pre-implemented nodes to build your workflow

dev-ex to unlock speed

Define your nodes and graphs and get started.

from exospherehost import BaseNode
from pydantic import BaseModel

class HelloWorldNode(BaseNode):
    class Inputs(BaseModel):
        name: str

    class Outputs(BaseModel):
        message: str

    class Secrets(BaseModel):
        pass

    async def execute(self) -> Outputs:
        return self.Outputs(
            message=f"Hello, {self.inputs.name}!"
        )

node

A Node is the fundamental building block in Exosphere, representing a single unit of computation. Each node defines clear inputs and outputs, making your workflow components reusable and testable.
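
Since inputs and outputs are plain Pydantic models, a node can be exercised in isolation. A minimal test sketch, assuming HelloWorldNode can be instantiated directly and its inputs attribute set by hand; how BaseNode is constructed outside a runtime is an assumption here:

import asyncio

async def test_hello_world_node():
    node = HelloWorldNode()
    # assumption: inputs can be injected directly when testing outside a runtime
    node.inputs = HelloWorldNode.Inputs(name="Exosphere")

    outputs = await node.execute()
    assert outputs.message == "Hello, Exosphere!"

asyncio.run(test_hello_world_node())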

from exospherehost import GraphNodeModel

workflow_graph = {
    "nodes": [
        {
            "node_name": "hello-world",
            "namespace": "simple-workflow",
            "identifier": "greeter",
            "inputs": {
                "name": "store.name"  # wired to the central data context
            }
        }
    ]
}

graph

A Graph orchestrates the flow of data between multiple nodes, defining the execution sequence and dependencies. It's your workflow blueprint that ensures proper data flow and execution order.

async def trigger_hello_world(state_manager, name: str):
    # "store" seeds the central data context shared by every node in the run
    store = {"name": name}
    return await state_manager.trigger(
        'simple-workflow', 
        {"name": name}, 
        store
    )

trigger

A Trigger initiates workflow execution based on specific conditions or events. It's the entry point that determines when and how your graphs should run, enabling reactive and scheduled processing.

from exospherehost import Runtime
from nodes import HelloWorldNode

class WorkflowRuntime:
    def __init__(self, state_manager_uri: str, api_key: str):
        self.runtime = Runtime(
            'simple-workflow',
            'hello-world',
            [HelloWorldNode],
            {
                'stateManagerUri': state_manager_uri,
                'key': api_key,
                'stateManagerVersion': 'v0'
            }
        )

    async def start(self):
        await self.runtime.start()

get runtime started

Deploying a Runtime brings your entire workflow to life. It manages the execution environment, handles scaling, and ensures your nodes, graphs, and triggers work together seamlessly in production.
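
A short usage sketch for the WorkflowRuntime class above, assuming the state manager URI and API key come from environment variables; the variable names here are illustrative:

import asyncio
import os

async def main():
    runtime = WorkflowRuntime(
        state_manager_uri=os.environ["EXOSPHERE_STATE_MANAGER_URI"],  # illustrative name
        api_key=os.environ["EXOSPHERE_API_KEY"],  # illustrative name
    )
    # brings the registered nodes online against the state manager
    await runtime.start()

asyncio.run(main())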

plug into exosphere

Offload autoscaling, branching logic, and state storage. Exosphere keeps every agent running reliably from your first prototype to millions of tasks.