Every agent framework has converged on the same pattern: break a complex task into independent subtasks, run specialist agents on each subtask in parallel, and synthesize the results. LangGraph does this with Send and @task futures. OpenAI Agents SDK uses asyncio.gather and agent.as_tool(). Claude Agent SDK spawns subagents via the Task tool. Deep Agents dispatches parallel task tool calls.

On Tensorlake, you get the same fan-out/fan-in pattern — but each sub-agent runs in its own container with dedicated resources, independent retries, and durable checkpointing. No asyncio plumbing, no graph DSL, no shared-memory coordination.
These patterns are inspired by what teams are building in production with LangGraph, OpenAI Agents SDK, Claude Agent SDK, and Deep Agents — reimplemented on Tensorlake with container isolation, independent scaling, and durable execution.
The most common multi-agent pattern across every framework: decompose a research question into subtopics, investigate each in parallel, and synthesize the findings. This is the pattern behind GPT Researcher, Exa’s web research system, and Anthropic’s multi-agent research system.
```python
from tensorlake.applications import application, function, Image

research_image = Image().run("pip install openai requests beautifulsoup4")


@function(image=research_image, timeout=900, retries=2)
def research_subtopic(topic: str, subtopic: str) -> dict:
    """Each researcher runs in its own container, searches the web,
    reads sources, and produces a structured summary."""
    from openai import OpenAI

    client = OpenAI(max_retries=0)

    # Step 1: Generate search queries for this subtopic
    queries = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Generate 3 search queries to research '{subtopic}' in the context of '{topic}'."}],
    ).choices[0].message.content

    # Step 2: Search and gather sources (search_and_read is a web-search helper, defined elsewhere)
    sources = search_and_read(queries)

    # Step 3: Analyze and summarize
    analysis = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize research findings with citations."},
            {"role": "user", "content": f"Topic: {subtopic}\n\nSources:\n{sources}"},
        ],
    ).choices[0].message.content

    return {"subtopic": subtopic, "analysis": analysis, "source_count": len(sources)}


@function(image=research_image, timeout=300)
def synthesize_research(results: list[dict], topic: str) -> dict:
    """Combine all parallel research into a cohesive report."""
    from openai import OpenAI

    combined = "\n\n---\n\n".join(
        f"## {r['subtopic']}\n{r['analysis']}" for r in results
    )
    report = OpenAI(max_retries=0).chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Synthesize research findings into a cohesive report. Resolve contradictions and highlight consensus."},
            {"role": "user", "content": f"Topic: {topic}\n\nFindings:\n{combined}"},
        ],
    ).choices[0].message.content
    return {"topic": topic, "report": report, "sections": len(results)}


@application()
@function(image=research_image, timeout=120)
def deep_research(topic: str) -> dict:
    """Orchestrator: decompose, fan out, synthesize."""
    from openai import OpenAI
    import json

    # Plan the research
    plan = OpenAI(max_retries=0).chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Break this topic into 3-5 independent research subtopics: {topic}"}],
        response_format={"type": "json_object"},
    ).choices[0].message.content
    subtopics = json.loads(plan)["subtopics"]

    # Fan out — each subtopic researched in parallel
    findings = [research_subtopic.future(topic, sub) for sub in subtopics]

    # Synthesize — runs after all research completes
    return synthesize_research.future(findings, topic)
```
Each researcher runs in its own container with its own 15-minute timeout and 2 retries. If one subtopic’s research fails (rate limit, network error), only that subtopic is retried — the other researchers’ work is preserved.
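Tensorlake's scheduler handles this retry isolation for you. As a framework-free sketch of the same semantics (all names here are hypothetical, plain Python, not the Tensorlake API): completed subtask results are kept as a checkpoint, and only the failed subtask is re-run.

```python
def research(subtopic: str, attempt_log: list) -> str:
    """Stand-in for a sub-agent; fails once on one subtopic to simulate a rate limit."""
    attempt_log.append(subtopic)
    if subtopic == "pricing" and attempt_log.count("pricing") == 1:
        raise RuntimeError("rate limited")
    return f"findings on {subtopic}"


def fan_out_with_isolated_retries(subtopics, worker, retries=2):
    """Run every subtask; on failure, retry only the failed ones.
    Completed results are never recomputed (the 'checkpoint')."""
    attempt_log = []
    results = {}
    pending = list(subtopics)
    for _ in range(retries + 1):
        still_failing = []
        for sub in pending:
            try:
                results[sub] = worker(sub, attempt_log)
            except RuntimeError:
                still_failing.append(sub)
        pending = still_failing  # only failures are retried
        if not pending:
            break
    return results, attempt_log


results, attempts = fan_out_with_isolated_retries(
    ["market", "pricing", "competitors"], research
)
```

After the run, "market" and "competitors" were each executed exactly once; only "pricing" was attempted twice.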
Multiple specialist agents examine the same input from different analytical perspectives — a pattern used in production for investment analysis, proposal review, and compliance checks.
This mirrors the multi-agent portfolio collaboration pattern from the OpenAI Agents SDK cookbook — but each analyst runs in an isolated container with its own timeout and retry policy.
After extraction, classification and compliance checking run in parallel — they both depend on the extracted text but not on each other. If the compliance check hits a rate limit, it retries independently without re-running OCR or classification.
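To make that dependency shape concrete, here is a minimal plain-Python sketch (function names and logic are hypothetical stand-ins, not a real OCR or compliance implementation): both branches consume the extracted text, and neither waits on the other.

```python
from concurrent.futures import ThreadPoolExecutor


def extract_text(doc: bytes) -> str:
    # Stand-in for the OCR/extraction step
    return doc.decode("utf-8")


def classify(text: str) -> str:
    # Stand-in classifier: label the document type
    return "invoice" if "invoice" in text.lower() else "other"


def check_compliance(text: str) -> dict:
    # Stand-in compliance check: look for a required field
    return {"has_tax_id": "tax id" in text.lower()}


def process_document(doc: bytes) -> dict:
    text = extract_text(doc)  # both branches depend on this, so it runs first
    with ThreadPoolExecutor() as pool:
        # classification and compliance are independent of each other
        label = pool.submit(classify, text)
        compliance = pool.submit(check_compliance, text)
        return {"label": label.result(), "compliance": compliance.result()}


result = process_document(b"INVOICE #42 ... Tax ID: 12-3456789")
```

On Tensorlake, each branch would be its own @function() call dispatched with .future(), so a failed compliance check retries in its own container without re-running extraction or classification.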
Each sub-agent can use whatever framework you want internally. The @function() boundary is a container boundary — what runs inside is up to you. Define each specialist as a focused function using its framework, then fan them out with .future().
```python
from tensorlake.applications import application, function, Image

# Each framework gets its own container image with its own dependencies
langgraph_image = Image().run("pip install langgraph langchain-openai tavily-python")
openai_image = Image().run("pip install openai-agents")
claude_image = Image().run("pip install claude-agent-sdk")
deep_image = Image().run("pip install deepagents langchain-openai")


@function(image=langgraph_image, timeout=600)
def market_researcher(company: str) -> str:
    """Market research using a LangGraph ReAct agent with web search."""
    from langgraph.prebuilt import create_react_agent
    from langchain_openai import ChatOpenAI
    from langchain_community.tools import TavilySearchResults

    agent = create_react_agent(
        ChatOpenAI(model="gpt-4o"),
        tools=[TavilySearchResults(max_results=5)],
    )
    result = agent.invoke({"messages": [
        ("human", f"Research the market position, competitors, and recent news for {company}.")
    ]})
    return result["messages"][-1].content


@function(image=openai_image, timeout=600)
def financial_analyst(company: str) -> str:
    """Financial analysis using an OpenAI Agents SDK agent with tool use."""
    from agents import Agent, Runner, WebSearchTool

    agent = Agent(
        name="FinancialAnalyst",
        instructions=(
            "You are a financial analyst. Analyze revenue, margins, cash flow, "
            "and valuation metrics. Use web search to find the latest filings."
        ),
        tools=[WebSearchTool()],
    )
    result = Runner.run_sync(agent, f"Analyze the financials for {company}")
    return result.final_output


@function(image=claude_image, timeout=900, ephemeral_disk=4)
def risk_assessor(company: str) -> str:
    """Risk assessment using a Claude agent with deep reasoning."""
    import asyncio
    from claude_agent_sdk import query, ClaudeAgentOptions

    async def run():
        result = ""
        async for message in query(
            prompt=f"Assess regulatory, operational, and market risks for {company}.",
            options=ClaudeAgentOptions(
                system_prompt="You are a risk analyst. Identify and score key risks.",
                permission_mode="acceptEdits",
                cwd="/tmp/workspace",
            ),
        ):
            result = str(message)
        return result

    return asyncio.run(run())


@function(image=deep_image, timeout=900)
def technical_reviewer(company: str) -> str:
    """Technical deep-dive using a Deep Agent with planning and web search."""
    from deepagents import create_deep_agent

    agent = create_deep_agent(
        model="openai:gpt-4o",
        system_prompt="Evaluate the company's technology stack, patents, and engineering culture.",
    )
    result = agent.invoke({
        "messages": [{"role": "user", "content": f"Technical review of {company}"}]
    })
    return result["messages"][-1].content


@function(timeout=300)
def compile_analysis(market: str, financials: str, risks: str, technical: str, company: str) -> dict:
    """Combine all analyst reports into a final recommendation."""
    return {
        "company": company,
        "market_research": market,
        "financial_analysis": financials,
        "risk_assessment": risks,
        "technical_review": technical,
    }


@application()
@function()
def analyze_company(company: str) -> dict:
    # Four frameworks, four containers, all running in parallel
    market = market_researcher.future(company)
    financials = financial_analyst.future(company)
    risks = risk_assessor.future(company)
    technical = technical_reviewer.future(company)
    return compile_analysis.future(market, financials, risks, technical, company)
```
Each agent runs in its own container with its own dependencies — no version conflicts, no shared memory, no asyncio event loop contention. If the risk assessment takes longer than the others, the completed agents’ results are checkpointed and preserved.