1. Install the Tensorlake SDK
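A typical install from PyPI might look like this (the package name `tensorlake` is assumed here; check the dashboard for the exact install command):

```shell
pip install tensorlake
```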
2. Get an API Key
You can get an API key from the Tensorlake Dashboard.
3. Create an application
Applications are defined by Python functions. Let's start with a template that greets a user by name. This creates a file named hello_world/hello_world.py with the following content:
hello_world.py
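The generated template might look roughly like the following sketch. Treat this as pseudocode: the module path and decorator names are assumptions, not the SDK's confirmed API, so consult the file the template command actually generates.

```python
# hello_world/hello_world.py -- illustrative sketch only; decorator and module names are assumed
from tensorlake.applications import application, function

@application()
@function()
def hello_world(name: str) -> str:
    # Greet the caller by name
    return f"Hello, {name}!"
```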
4. Deploy It
Deploy your application referencing your application’s source file.
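A deploy invocation might look like the following; the `deploy` subcommand is an assumption for illustration, so check the Tensorlake CLI's help output for the real command:

```shell
tensorlake deploy hello_world/hello_world.py
```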
Call Applications
Tensorlake gives you an HTTP endpoint for calling your application remotely.
1. Get an API Key
Fetch a key from the Tensorlake Dashboard and export it as an environment variable:
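For example (the variable name `TENSORLAKE_API_KEY` is an assumption; use whatever name the dashboard instructs, and substitute your real key for the placeholder):

```shell
# Replace the placeholder with the key copied from the Tensorlake Dashboard
export TENSORLAKE_API_KEY="your-api-key"
```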
2. Make a request
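With the key exported, a request can be sent over HTTP. The URL shape and JSON payload below are assumptions for illustration, not the documented endpoint; the actual URL is shown when you deploy:

```shell
curl -X POST "https://api.tensorlake.ai/applications/hello_world" \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "Ada"}'
```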
3. Check progress
Requests may run for seconds to hours depending on your workload. The outcome field will be success or failure depending on whether the request completed successfully, and null while the request is still in progress.
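The outcome semantics above can be captured in a small polling helper. The JSON shape here is an assumption based only on the outcome field described in the text:

```python
import json

def request_state(payload: str) -> str:
    """Classify a request-status JSON body by its 'outcome' field."""
    outcome = json.loads(payload).get("outcome")
    if outcome is None:
        return "in progress"   # outcome is null while the request runs
    return outcome             # "success" or "failure" once it completes

print(request_state('{"outcome": null}'))       # in progress
print(request_state('{"outcome": "success"}'))  # success
```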
4. Get the output
Testing Locally
Tensorlake Applications can run locally on your laptop. You can run them like regular Python scripts.
hello_world.py
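Assuming the template file can be invoked directly as a script, a local run would be an ordinary Python invocation:

```shell
python hello_world/hello_world.py
```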
Building an Agentic Code Interpreter
Now let's build a real agentic application: a code interpreter agent built with the OpenAI Agents SDK. The Tensorlake application function will be the main agentic loop, and a separate Tensorlake function, passed to the agent as a tool, will execute code. Whenever the agent needs to run code, it calls that function, which executes the code in an isolated container and returns the output to the agent.
1. Add Your OpenAI API Key as a Secret
The agent needs access to the OpenAI API. Add your API key as a secret using the Tensorlake CLI. This securely stores the key so it can be injected into your application at runtime. The secret is referenced in the decorator of the function that uses the OpenAI Agents SDK and will be available there as an environment variable.
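The CLI invocation might look like the following; the `secrets set` subcommand is an assumption for illustration, and "sk-..." is a placeholder for your real key:

```shell
tensorlake secrets set OPENAI_API_KEY="sk-..."
```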
2. Create the Application
code_interpreter.py
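The stripped listing might look roughly like the sketch below. Treat it as pseudocode: the Tensorlake decorator names and the secrets parameter are assumptions, and the Agents SDK usage (Agent, Runner, function_tool from the agents package) is a plausible shape rather than the document's confirmed code.

```python
# code_interpreter.py -- illustrative sketch; Tensorlake decorator names are assumed
from agents import Agent, Runner, function_tool
from tensorlake.applications import application, function

@function(secrets=["OPENAI_API_KEY"])  # assumed way to reference the stored secret
def execute_code(code: str) -> str:
    # Runs in its own isolated container with independent CPU, memory, and dependencies
    ...

@application()
@function(secrets=["OPENAI_API_KEY"])
def code_interpreter(prompt: str) -> str:
    # Main agentic loop: the agent calls execute_code as a tool whenever it needs to run code
    agent = Agent(
        name="Code Interpreter",
        instructions="Write and run Python code to answer the user.",
        tools=[function_tool(execute_code)],
    )
    return Runner.run_sync(agent, prompt).final_output
```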
3. Deploy and Run
Deploy your application and call it:
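Mirroring the earlier quickstart steps, both actions might look like this; the CLI subcommand and endpoint URL are assumptions, so use the URL printed by your actual deploy:

```shell
tensorlake deploy code_interpreter.py

curl -X POST "https://api.tensorlake.ai/applications/code_interpreter" \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Compute the 20th Fibonacci number"}'
```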
On Lambda or Vercel, executing arbitrary code would require complex sandboxing, security policies, and resource management, all in the same container as your main application. With Tensorlake, the
execute_code function runs in a completely isolated container with its own CPU, memory, and dependencies. If code execution needs heavy compute or specialized libraries, it scales independently from your agent logic. You get secure, isolated code execution without managing infrastructure: Tensorlake handles the complexity so you can focus on building powerful AI tools.
Next Steps
Here are some of the next things to learn about:
Programming Guide
Learn more about customizing functions and compute resources for applications.
Dependency management
Learn how to add dependencies for your applications.
Durable Execution
Learn how Tensorlake Applications provide durability across function calls.
Secrets
Learn how to manage secrets that your applications access.
Map-Reduce
Learn how to use map-reduce to process large datasets.
Awaitables and tail calls
Learn how to use awaitables and tail calls to reduce cost and latency in your applications.
Futures and parallel function calls
Learn how to run multiple function calls in parallel using Futures.