Check out the full source code for this example on GitHub.
This tutorial demonstrates how to build a Code Interpreter Agent that can safely execute Python code generated by an LLM. By leveraging Tensorlake’s isolated sandboxing, you can run arbitrary code without compromising your local environment or production servers.
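Before wiring up Tensorlake, it helps to see the core idea: never run LLM-generated code in your own interpreter process. Below is a minimal conceptual sketch of process-level isolation using only the Python standard library; it is not Tensorlake's API (Tensorlake runs each request in a fully isolated cloud sandbox), and `run_untrusted` is a hypothetical helper name used purely for illustration.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute generated Python in a separate process with a timeout.

    Conceptual illustration only: a real sandbox (like Tensorlake's)
    also isolates the filesystem, network, and resource limits.
    """
    proc = subprocess.run(
        [sys.executable, "-c", code],   # run in a child interpreter
        capture_output=True,
        text=True,
        timeout=timeout,                # kill runaway generated code
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip())
    return proc.stdout.strip()

# A trivial stand-in for LLM-generated code:
print(run_untrusted("print(sum(range(10)))"))  # → 45
```

Crashing or infinite-looping generated code only takes down the child process, not your application.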
To test the interpreter locally, add this code block to the end of app.py:

```python
if __name__ == "__main__":
    from tensorlake.applications import run_local_application

    # Run the application locally
    result = run_local_application(code_interpreter, "Calculate the sum of the first 50 prime numbers.")
    print(f"Code Interpreter Output:\n{result}")
```
Deploy your secure code interpreter to the cloud with a single command:
```shell
tl secrets set OPENAI_API_KEY=your_key_here
tl deploy app.py
```
This deployment creates a dedicated, isolated environment for every execution request, ensuring complete safety and scalability for your code interpretation tasks.