Agents that generate and execute code need a workspace — a computer where they can run code, install packages, and access files. That workspace needs to be isolated so the agent can’t access your credentials, files, or network. Sandboxes provide this isolation. The question isn’t whether to use sandboxes — it’s how to integrate them with your agent. There are two architectural patterns, based on where the agent runs: inside the sandbox or outside of it.

Pattern 1: Agent in Sandbox

The agent runs inside an isolated container. Your application communicates with it over the network. This is what Tensorlake’s @function() does. When you deploy a function, your agent code runs inside an isolated container with its own filesystem, dependencies, and resource limits. The agent has direct access to its environment — it can read and write files, install packages, and execute code, all within the container boundary.
from tensorlake.applications import application, function, Image

agent_image = Image().run("pip install openai")

RUN_CODE_TOOL = {
    "type": "function",
    "function": {
        "name": "run_code",
        "description": "Execute Python code and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}

@application()
@function(image=agent_image, timeout=1800, memory=4, ephemeral_disk=10)
def coding_agent(task: str) -> str:
    """Agent runs inside the container with full filesystem access."""
    import json
    import subprocess

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": task}]

    for _ in range(20):
        response = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=[RUN_CODE_TOOL]
        )
        reply = response.choices[0].message

        if not reply.tool_calls:
            return reply.content

        messages.append(reply)  # the assistant turn must precede its tool results

        for tool_call in reply.tool_calls:
            if tool_call.function.name == "run_code":
                # Tool arguments arrive as a JSON string; extract the code
                code = json.loads(tool_call.function.arguments)["code"]
                # Code executes directly — agent is already in the sandbox
                result = subprocess.run(
                    ["python", "-c", code],
                    capture_output=True, text=True, timeout=30,
                )
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result.stdout or result.stderr,
                })

    return "Agent did not finish within the step limit."
When to use this pattern:
  • The agent and execution environment are tightly coupled
  • The agent needs persistent filesystem access across tool calls
  • You want production to mirror local development — same code, same environment
Trade-offs:
  • API keys must live inside the container for the agent to make inference calls
  • Updating agent logic requires redeploying the function
With Tensorlake, every @function() is already a sandbox. You get process isolation, resource limits, timeout enforcement, and dependency isolation without any extra setup.

Pattern 2: Sandbox as Tool

The agent runs in a Tensorlake function and uses sandboxes as on-demand tools for code execution. When the agent needs to run untrusted or LLM-generated code, it creates a sandbox, executes the code there, and reads the results back. Tensorlake’s Sandbox API provides this pattern: your agent logic runs in a @function(), and when it needs to execute code, it creates a sandbox with the SandboxClient and uses it as a tool.
from tensorlake.applications import application, function, Image

agent_image = Image().run("pip install openai tensorlake")

RUN_CODE_TOOL = {
    "type": "function",
    "function": {
        "name": "run_code",
        "description": "Execute Python code in the sandbox and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}

@application()
@function(image=agent_image, timeout=1800)
def coding_agent(task: str) -> str:
    """Agent uses a sandbox as a tool for code execution."""
    import json

    from openai import OpenAI
    from tensorlake.sandbox import SandboxClient

    client = OpenAI()
    sandbox_client = SandboxClient()

    # Create an on-demand sandbox for code execution
    sandbox = sandbox_client.create(
        image="python:3.11-slim",
        cpus=1.0,
        memory_mb=512,
        timeout_secs=60,
    )

    try:
        messages = [{"role": "user", "content": task}]

        for _ in range(20):
            response = client.chat.completions.create(
                model="gpt-4o", messages=messages, tools=[RUN_CODE_TOOL]
            )
            reply = response.choices[0].message

            if not reply.tool_calls:
                return reply.content

            messages.append(reply)  # the assistant turn must precede its tool results

            for tool_call in reply.tool_calls:
                if tool_call.function.name == "run_code":
                    # Tool arguments arrive as a JSON string; extract the code
                    code = json.loads(tool_call.function.arguments)["code"]
                    # Code executes in the remote sandbox, not here
                    result = sandbox.execute(code)
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result.output,
                    })

        return "Agent did not finish within the step limit."
    finally:
        sandbox_client.delete(sandbox.sandbox_id)
When to use this pattern:
  • You need to execute untrusted or LLM-generated code
  • API keys should stay outside the code execution environment
  • You want to spin up multiple sandboxes in parallel for concurrent code execution
  • The agent needs to create, inspect, and tear down environments dynamically
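The parallel case from the list above can be sketched with a thread pool. The fan-out helper is plain stdlib; `run_snippet_in_sandbox` mirrors the `SandboxClient` calls from the example earlier in this section and is an assumption about that API's shape, not a definitive usage:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(snippets, execute):
    # Run each code snippet concurrently; `execute` takes one code string
    # and returns its output. Because each call gets its own sandbox,
    # snippets cannot interfere with each other.
    with ThreadPoolExecutor(max_workers=max(1, len(snippets))) as pool:
        return list(pool.map(execute, snippets))

def run_snippet_in_sandbox(code: str) -> str:
    # Assumed API shape, following the SandboxClient example above.
    from tensorlake.sandbox import SandboxClient
    client = SandboxClient()
    sandbox = client.create(image="python:3.11-slim", cpus=1.0,
                            memory_mb=512, timeout_secs=60)
    try:
        return sandbox.execute(code).output
    finally:
        client.delete(sandbox.sandbox_id)

# outputs = fan_out(["print(1)", "print(2)"], run_snippet_in_sandbox)
```

Each sandbox is created, used, and deleted inside one worker, so a failure in one snippet doesn't leak an environment or affect its siblings.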
Trade-offs:
  • Network latency on each execution call
  • Two layers of containers (agent function + sandbox)
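One way to soften the latency trade-off is to batch several independent snippets into a single execution call, paying one round trip instead of many. The helper below is a stdlib-only sketch; the commented usage line assumes the `sandbox.execute` call shown in the example above:

```python
def batch_snippets(snippets):
    # Join independent snippets into one script, with printed markers
    # so each snippet's output stays attributable in the combined result.
    parts = []
    for i, code in enumerate(snippets):
        parts.append(f'print("--- snippet {i} ---")')
        parts.append(code)
    return "\n".join(parts)

# One network round trip instead of len(snippets):
# result = sandbox.execute(batch_snippets(["x = 1 + 1", "print(x)"]))
```

This only works when the snippets are safe to run in the same process; snippets that must stay isolated from each other still need separate executions (or separate sandboxes).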

Learn More