Combine the power of LLM orchestration with secure, isolated execution environments. This guide shows how to build a “swarm” of agents—where multiple worker agents generate and execute code in parallel sandboxes to analyze a problem from different perspectives, and a lead agent synthesizes their findings.

How it works

  1. Define Worker Agents: Create a function that uses an LLM to generate code for a specific perspective (e.g., Scientific, Economic).
  2. Execute in Sandboxes: Each worker spins up a secure Tensorlake Sandbox to run the generated code and capture the output.
  3. Map (Parallelize): Launch multiple instances of the worker agent in parallel.
  4. Reduce (Aggregate): A lead agent receives all the reports and synthesizes a final insight.
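The four steps above reduce to a simple map/reduce skeleton. Here is a minimal sketch with the LLM and sandbox calls stubbed out as placeholders (the `worker`, `reduce_reports`, and `swarm` names are illustrative, not part of the Tensorlake API):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List

def worker(task_id: int) -> str:
    # placeholder for: generate code with an LLM, run it in a sandbox
    return f"report-{task_id}"

def reduce_reports(reports: List[str]) -> str:
    # placeholder for: a lead LLM synthesizing all worker reports
    return " | ".join(reports)

def swarm(count: int) -> str:
    # Map: run workers in parallel (executor.map preserves input order)
    with ThreadPoolExecutor() as executor:
        reports = list(executor.map(worker, range(count)))
    # Reduce: aggregate into one result
    return reduce_reports(reports)

print(swarm(3))  # → report-0 | report-1 | report-2
```

The full example below follows this exact shape, with the worker calling GPT-4o and a Tensorlake Sandbox, and the reducer calling GPT-4o once more.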

Prerequisites

pip install tensorlake openai pydantic python-dotenv

Full example

This example simulates a Mars mission planning scenario where “scout” agents analyze different risks (Scientific, Economic, Ethical, etc.) by writing and running simulations in isolated sandboxes.
from dotenv import load_dotenv
load_dotenv()  # Load environment variables from .env file

from tensorlake.sandbox import SandboxClient
from pydantic import BaseModel
from typing import List
from openai import OpenAI
from concurrent.futures import ThreadPoolExecutor

class ScoutReport(BaseModel):
    agent_id: int
    raw_data: str

class FinalInsight(BaseModel):
    summary: str

# 1. Worker Agent: LLM + Sandbox Execution
def scout_agent(task_id: int) -> ScoutReport:
    """Each scout analyzes a specific aspect of the mission."""
    perspectives = ["Scientific", "Economic", "Ethical", "Logistical", "Psychological"]
    perspective = perspectives[task_id % len(perspectives)]

    print(f"🕵️  Scout {task_id}: Analyzing {perspective} perspective...")
    client = OpenAI()
    # Step A: LLM decides what to do
    prompt = f"""
You are a {perspective} analyst for a Mars mission.
Write a Python script to perform a simple simulation using the 'numpy' library.
The simulation should model a key factor from your perspective (e.g., scientific sensor data, economic cost projection, logistical supply levels).

The script MUST print a single valid JSON string to standard output. This JSON should contain:
'perspective': '{perspective}',
'score': an integer from 0-100 derived from your simulation (higher is better),
'insight': a brief, unique risk or opportunity revealed by the simulation.
Do NOT use markdown blocks."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    
    # Clean up markdown formatting (remove ```python ... ```)
    generated_code = response.choices[0].message.content.replace("```python", "").replace("```", "").strip()
    print(f"🕵️  Scout {task_id}: Generated code -> {generated_code}")

    # Step B: Secure execution in a Sandbox
    sb_client = SandboxClient()
    with sb_client.create_and_connect() as sandbox:
        print(f"🕵️  Scout {task_id}: Installing dependencies in Sandbox...")
        sandbox.run("pip", ["install", "numpy", "--user", "--break-system-packages"])
        print(f"🕵️  Scout {task_id}: Running simulation in Sandbox...")
        execution = sandbox.run("python3", ["-c", generated_code])
        output = execution.stdout.strip()
        print(f"🕵️  Scout {task_id}: Execution complete. Output: {output}")
        return ScoutReport(agent_id=task_id, raw_data=output)

# 2. Lead Agent: LLM Aggregator (The "Reducer")
def lead_aggregator(reports: List[ScoutReport]) -> FinalInsight:
    """The Lead LLM reviews all sandbox outputs to find patterns."""
    print(f"👑 Lead Agent: Received {len(reports)} scout reports. Aggregating...")
    client = OpenAI()
    combined_reports = "\n".join([f"Report {r.agent_id}: {r.raw_data}" for r in reports])
    
    # The 'Intelligence' step: synthesizing multiple sources
    prompt = (
        f"You are the Mission Commander for Mars Colonization. Review these viability reports:\n{combined_reports}\n\n"
        "1. Calculate the average viability score.\n"
        "2. Synthesize a strategic Go/No-Go recommendation.\n"
        "3. Summarize key risks."
    )
    
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    
    return FinalInsight(summary=response.choices[0].message.content)

# 3. The Swarm Application
def intelligence_swarm(count: int) -> str:
    print(f"🚀 Launching a swarm of {count} scouts...")
    # Parallel Map: Launch multiple sandboxed scouts
    with ThreadPoolExecutor() as executor:
        reports = list(executor.map(scout_agent, range(count)))
    
    # Reduce: Use the Lead Agent to combine results
    final_insight = lead_aggregator(reports)
    
    return final_insight.summary

if __name__ == "__main__":
    # This runs 5 parallel LLMs, 5 parallel Sandboxes, and 1 aggregator LLM
    result = intelligence_swarm(count=5)
    print(f"\n--- SWARM INTELLIGENCE REPORT ---\n{result}")
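Step A above strips markdown fences with chained `.replace()` calls, which also removes stray backticks inside the code itself. A slightly more robust extractor you could substitute (a hypothetical helper, not part of the example or the Tensorlake API):

```python
import re

def extract_code(text: str) -> str:
    """Return the body of a single markdown code fence if present,
    otherwise return the text unchanged (stripped)."""
    match = re.search(r"```(?:python)?\s*\n(.*?)```", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return text.strip()

print(extract_code("```python\nprint('hi')\n```"))  # → print('hi')
print(extract_code("print('hi')"))                  # → print('hi')
```

This handles both fenced and unfenced model responses, and leaves backticks inside the script body intact.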

Workflow: Step-by-Step Execution

| Step | Component | Action |
| --- | --- | --- |
| 1 | Orchestrator | Maps `scout_agent` over 5 task IDs in parallel via `ThreadPoolExecutor`. |
| 2 | Scout Agent | Prompts GPT-4o to draft a custom simulation script for its assigned perspective. |
| 3 | Sandbox | Securely installs `numpy` and executes the script in isolation. |
| 4 | Scout Agent | Wraps the simulation output in a structured `ScoutReport` for return. |
| 5 | Lead Agent | Aggregates all reports and prompts GPT-4o for a final Go/No-Go decision. |
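Because the scout scripts are LLM-generated, their stdout is not guaranteed to be the single valid JSON object the prompt requests (it may be a traceback, or JSON missing a field). Before aggregating, you might validate each report with a defensive parser like this (a hypothetical helper, not part of the example above):

```python
import json
from typing import Optional

def parse_scout_output(raw: str) -> Optional[dict]:
    """Return the report dict if raw is valid JSON with the expected keys,
    otherwise None so the caller can retry or drop the report."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if isinstance(data, dict) and all(k in data for k in ("perspective", "score", "insight")):
        return data
    return None

ok = parse_scout_output('{"perspective": "Economic", "score": 72, "insight": "Launch costs dominate."}')
bad = parse_scout_output("Traceback (most recent call last): ...")
print(ok["score"], bad)  # → 72 None
```

Filtering reports through a check like this keeps one misbehaving scout from derailing the lead agent's aggregation prompt.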

This example uses the python-dotenv library to load your Tensorlake API key from a .env file. Create a file named .env in your project root and add your key:
TENSORLAKE_API_KEY="your-api-key-here"
The SandboxClient will automatically use this key.

Production Tips

Reduce Latency with Snapshots

The example above runs pip install numpy inside every scout’s sandbox. In a real swarm with dozens of agents, this adds unnecessary latency and bandwidth usage. For production, create a “base” sandbox, install your common dependencies, and create a Snapshot. Then, have your agents initialize from that snapshot instantly.
# 1. Create a snapshot ID (do this once)
# snapshot = sb_client.snapshot_and_wait(base_sandbox_id)

# 2. Use it in your agent
with sb_client.create_and_connect(snapshot_id="snps_abc123") as sandbox:
    # numpy is already installed!
    sandbox.run("python3", ["-c", generated_code])
See the Snapshots guide for details.

Security: Lock down the network

Since the scouts run code generated by an LLM, it is safer to disable internet access to prevent data exfiltration or malicious downloads.
with sb_client.create_and_connect(allow_internet_access=False) as sandbox:
    # ...

What to build next

AI Code Execution

Learn how to build a stateful code interpreter for a single agent.

Snapshots

Optimize your swarm’s startup time by pre-baking dependencies.