View Source Code: Check out the full source code for this example on GitHub.
This tutorial demonstrates how to build a Deep Research Agent using Tensorlake and the OpenAI Agents SDK. The application orchestrates multiple agents that plan, search, and write a comprehensive research report on any given topic.
```
deep-research/
├── app.py            # Main application logic and Tensorlake functions
├── models.py         # Pydantic data models for structured inputs/outputs
├── prompts.py        # System prompts for the agents
└── requirements.txt
```
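The contents of requirements.txt aren't reproduced in this walkthrough; assuming it simply lists the packages the pipeline imports (the same ones installed into the runtime image below), it would contain roughly:

```
openai
tensorlake
pydantic
```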
In app.py, we define our Tensorlake functions. Each function represents one stage of the pipeline and uses an OpenAI agent to do its work.
```python
from typing import List

from openai import OpenAI
from tensorlake import application, function, Image

from models import ResearchPlan, SearchResult, ResearchReport
# Import your prompts here
# from prompts import PLANNER_PROMPT, SEARCH_PROMPT, WRITER_PROMPT

# Define the runtime image with the dependencies the functions need
image = Image(name="deep-research-agent").run("pip install openai tensorlake pydantic")


@application()
@function(image=image, secrets=["OPENAI_API_KEY"])
def deep_research_pipeline(topic: str) -> ResearchReport:
    """
    Orchestrates the deep research pipeline: plan, search, write.
    """
    print(f"Starting deep research on: {topic}")

    # Phase 1: Planning
    plan = create_research_plan(topic)
    print(f"Plan created with {len(plan.search_queries)} queries.")

    # Phase 2: Searching (parallel execution)
    # Map the search function over the queries to run them in parallel
    search_results = execute_search.map(plan.search_queries)
    print(f"Completed {len(search_results)} searches.")

    # Phase 3: Writing
    report = write_report(topic, search_results)
    print("Report generation complete.")
    return report


@function(image=image, secrets=["OPENAI_API_KEY"])
def create_research_plan(topic: str) -> ResearchPlan:
    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are an expert research planner."},
            {"role": "user", "content": f"Create a research plan for: {topic}"},
        ],
        response_format=ResearchPlan,
    )
    return completion.choices[0].message.parsed


@function(image=image, secrets=["OPENAI_API_KEY"])
def execute_search(query_obj) -> SearchResult:
    # In a real implementation, you would call a search tool or API here
    # (e.g. Tavily, Serper, or a custom scraper).
    # For this example, we simulate a search result.
    summary = f"Simulated search results for: {query_obj.query}"
    return SearchResult(
        url="https://example.com",
        title=f"Results for {query_obj.query}",
        content="Full content would go here...",
        summary=summary,
    )


@function(image=image, secrets=["OPENAI_API_KEY"])
def write_report(topic: str, results: List[SearchResult]) -> ResearchReport:
    client = OpenAI()
    # Compile context from the search results
    context = "\n\n".join(f"Source: {r.url}\nSummary: {r.summary}" for r in results)
    completion = client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are an expert research writer. Write a comprehensive report based on the provided context.",
            },
            {"role": "user", "content": f"Topic: {topic}\n\nContext:\n{context}"},
        ],
        response_format=ResearchReport,
    )
    return completion.choices[0].message.parsed
```
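app.py imports ResearchPlan, SearchResult, and ResearchReport from models.py, whose definitions aren't shown above. A minimal sketch consistent with how the pipeline uses them might look like this; the field names, and the SearchQuery wrapper, are assumptions inferred from the code rather than the canonical models:

```python
# models.py — a minimal sketch; field names are inferred from app.py, not canonical.
from typing import List
from pydantic import BaseModel


class SearchQuery(BaseModel):
    # execute_search reads query_obj.query, so a small query wrapper is assumed
    query: str


class ResearchPlan(BaseModel):
    # deep_research_pipeline iterates over plan.search_queries
    topic: str
    search_queries: List[SearchQuery]


class SearchResult(BaseModel):
    # execute_search constructs SearchResult(url=..., title=..., content=..., summary=...)
    url: str
    title: str
    content: str
    summary: str


class ResearchReport(BaseModel):
    # Illustrative structure for the final report; adjust to your needs
    title: str
    sections: List[str]
    conclusion: str
```

Passing these Pydantic models as response_format to client.beta.chat.completions.parse lets the planning and writing stages return structured objects instead of free-form text.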
To test your pipeline locally, add this code block to the end of app.py and run it with `python app.py`.
```python
if __name__ == "__main__":
    from tensorlake.applications import run_local_application

    # Run the application locally
    run_local_application(deep_research_pipeline, "The Future of Quantum Computing")
```