Tensorlake is a compute platform for agents — it runs your agents, it doesn’t replace your agent framework. You bring the agent logic (OpenAI Agents SDK, LangGraph, Claude SDK, or plain Python), and Tensorlake provides the infrastructure: serverless containers, durable execution, sandboxes, and observability.

Patterns

Agent Loop in a Single Function

The simplest pattern: your entire agent loop runs inside one @function(). Tensorlake handles deployment, scaling, and durability.
from tensorlake.applications import application, function

@application()
@function(timeout=3600)
def research_agent(topic: str) -> str:
    from agents import Agent, Runner, WebSearchTool

    agent = Agent(
        name="ResearchAgent",
        instructions="Thoroughly research the given topic using web search.",
        tools=[WebSearchTool()]
    )
    result = Runner.run_sync(agent, topic)
    return result.final_output
This works well for agents that:
  • Run a single loop with tool calls
  • Don’t need to fan out work to other agents
  • Have predictable resource requirements
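Stripped of any framework, this pattern is just a tool-calling loop inside one function. A minimal framework-free sketch of that loop (the `decide` function and `TOOLS` registry here are hypothetical stand-ins for a real LLM call and tool set):

```python
# Minimal agent-loop sketch: a "model" proposes tool calls until it decides
# it is done. `decide`, `search`, and TOOLS are toy stand-ins, not a real LLM.

def search(query: str) -> str:
    """Toy tool: pretend to search the web."""
    return f"results for {query!r}"

TOOLS = {"search": search}

def decide(history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM call: call a tool once, then finish."""
    if not any(line.startswith("tool:") for line in history):
        return "call", "search"
    return "finish", history[-1]

def agent_loop(topic: str, max_steps: int = 5) -> str:
    history = [f"user: {topic}"]
    for _ in range(max_steps):
        action, arg = decide(history)
        if action == "finish":
            return arg
        # Execute the chosen tool and append its output to the history.
        history.append(f"tool: {TOOLS[arg](topic)}")
    return history[-1]
```

Wrapping a loop like this in a single @function() is all the first pattern amounts to; Tensorlake supplies the container, timeout, and retry semantics around it.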

Sandboxing Functions

When your agent calls tools with different resource needs (CPU, memory, GPU, dependencies), wrap each tool in its own @function(). Each function runs in its own container with its own resource limits and dependencies.
from tensorlake.applications import application, function, Image

heavy_image = Image().run("pip install torch transformers")

@function(image=heavy_image, memory=8, gpu="T4")
def classify_image(image_url: str) -> str:
    """Runs in a GPU container with 8GB memory."""
    from transformers import pipeline
    classifier = pipeline("image-classification")
    return classifier(image_url)[0]["label"]

@function()
def search_web(query: str) -> list[str]:
    """Runs in a lightweight container."""
    import requests
    # Placeholder: call a real search API here and return its results
    return ["result1", "result2"]

@application()
@function(timeout=1800)
def research_agent(topic: str) -> dict:
    # Agent loop calls tools that run in separate containers
    image_label = classify_image("https://example.com/photo.jpg")
    web_results = search_web(topic)
    return {"image": image_label, "web": web_results}
Each @function():
  • Runs in its own isolated container
  • Has its own dependencies, CPU, memory, and GPU allocation
  • Is independently retryable and durable
  • Scales independently based on demand

Harness Pattern: Agent as Orchestrator

For complex agents, separate the harness (orchestration logic) from the work (tool execution). The harness is a lightweight function that coordinates heavier worker functions.
from tensorlake.applications import application, function, Image

worker_image = Image().run("pip install openai langchain")

@application()
@function(timeout=3600)
def analyst_agent(query: str) -> dict:
    """Lightweight harness that orchestrates worker functions."""
    from openai import OpenAI
    client = OpenAI()

    # Agent decides what to do
    plan = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Plan research for: {query}"}]
    ).choices[0].message.content

    # Dispatch to worker functions
    data = fetch_data(query)
    analysis = analyze_data(data)
    return {"plan": plan, "analysis": analysis}

@function(image=worker_image, cpu=4, memory=8)
def fetch_data(query: str) -> dict:
    """Heavy data fetching in a dedicated container."""
    ...

@function(image=worker_image, cpu=2, memory=4)
def analyze_data(data: dict) -> str:
    """Analysis with different resource needs."""
    ...

Running Agent Frameworks on Tensorlake

OpenAI Agents SDK

from tensorlake.applications import application, function

@application()
@function(timeout=1800)
def openai_agent(prompt: str) -> str:
    from agents import Agent, Runner, WebSearchTool

    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        tools=[WebSearchTool()]
    )
    result = Runner.run_sync(agent, prompt)
    return result.final_output

LangGraph

from tensorlake.applications import application, function, Image

image = Image().run("pip install langgraph langchain-openai")

@application()
@function(image=image, timeout=1800)
def langgraph_agent(query: str) -> str:
    from langgraph.prebuilt import create_react_agent
    from langchain_openai import ChatOpenAI

    model = ChatOpenAI(model="gpt-4")
    agent = create_react_agent(model, tools=[])
    result = agent.invoke({"messages": [("human", query)]})
    return result["messages"][-1].content

Claude SDK

from tensorlake.applications import application, function

@application()
@function(timeout=3600, ephemeral_disk=4)
def claude_agent(prompt: str) -> str:
    import asyncio
    from claude_agent_sdk import query, ClaudeAgentOptions

    async def run():
        options = ClaudeAgentOptions(
            system_prompt="You are an expert developer.",
            permission_mode="acceptEdits",
            cwd="/tmp/workspace"
        )
        result = ""
        async for message in query(prompt=prompt, options=options):
            result = str(message)  # keep only the final message
        return result

    return asyncio.run(run())

Parallel Sub-Agents

When your workflow involves multiple specialist agents, fan them out using futures or async functions so they run in parallel:
@application()
@function()
def analyze_proposal(text: str) -> dict:
    financial = financial_agent.future(text)
    legal = legal_agent.future(text)
    technical = technical_agent.future(text)
    return synthesize.future(financial, legal, technical)
See Parallel Sub-Agents for detailed patterns.
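The fan-out/fan-in semantics above can be sketched in plain Python with `concurrent.futures`; the three specialist agents here are toy placeholders for real sub-agents, each of which would be its own @function() on Tensorlake:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy specialists; on Tensorlake each would run in its own container.
def financial_agent(text: str) -> str:
    return f"financial view of {len(text)} chars"

def legal_agent(text: str) -> str:
    return f"legal view of {len(text)} chars"

def technical_agent(text: str) -> str:
    return f"technical view of {len(text)} chars"

def synthesize(*views: str) -> dict:
    """Fan-in: combine the specialist outputs into one result."""
    return {"sections": list(views)}

def analyze_proposal(text: str) -> dict:
    # Fan-out: all three specialists run in parallel, then synthesize
    # waits for every result.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, text)
                   for agent in (financial_agent, legal_agent, technical_agent)]
        return synthesize(*(f.result() for f in futures))
```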

Core Concepts

Functions

Building blocks of applications. Functions are Python functions that run in isolated containers with their own dependencies, compute, and storage.

Applications

HTTP-triggered entry points. Applications are functions exposed as HTTP endpoints that receive requests and orchestrate work across multiple functions.

Durable Execution

Resume from failures, not restart. Checkpoints are automatically created so retries continue from the last successful step instead of starting over.

Sandboxes

Run untrusted code safely. Every function runs in an isolated sandbox with configurable resource limits and network restrictions.

Map-Reduce

Parallel data processing. Fan out work across a list in parallel, then aggregate results—no queue setup required.
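The shape of the pattern can be sketched in plain Python with `concurrent.futures`; on Tensorlake the map step would fan out across containers rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(line: str) -> int:
    """Map step: count the words in one line."""
    return len(line.split())

def map_reduce(lines: list[str]) -> int:
    # Fan the map step out across the list in parallel...
    with ThreadPoolExecutor() as pool:
        counts = list(pool.map(word_count, lines))
    # ...then aggregate the mapped results in a single reduce step.
    return sum(counts)
```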

Observability

Built-in tracing and logging. Every function call is automatically traced with timing, logs, and execution timelines.
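What a trace records per call can be illustrated with a minimal timing decorator in plain Python (an analogy only; Tensorlake's tracing is automatic and captures far more than this sketch):

```python
import functools
import time

TRACES = []  # stand-in for a trace store

def traced(fn):
    """Record the function name and duration of every call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "function": fn.__name__,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def greet(name: str) -> str:
    return f"hello {name}"
```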