Documentation Index

Fetch the complete documentation index at: https://docs.tensorlake.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Python SDK ships an async-native variant of the sandbox API on top of asyncio. Every method on the sync Sandbox handle has a one-to-one async counterpart on AsyncSandbox — same names, same parameters, just async def and awaited.

When to use it

Reach for the async API when:
  • You’re driving multiple sandboxes concurrently (e.g. fanning out work with asyncio.gather).
  • Your application is already asyncio-based — FastAPI, aiohttp, an LLM agent loop, etc. — and you don’t want to mix in blocking calls.
  • You’re streaming output from many processes at once.
If you only ever use one sandbox at a time and your code is otherwise synchronous, the sync Sandbox API is simpler and equivalent.

The shape of the API

from tensorlake.sandbox import AsyncSandbox
AsyncSandbox is the runtime handle for a single sandbox. Use await AsyncSandbox.create(...) to provision and connect, or await AsyncSandbox.connect(sandbox_id) to attach to an existing one. Every instance method is awaited:
sandbox = await AsyncSandbox.create()
result = await sandbox.run("python", ["-c", "print('hello')"])
await sandbox.write_file("/workspace/data.csv", b"name,score\nAlice,95\n")
content = await sandbox.read_file("/workspace/data.csv")
Refer to the SDK Reference for the full method list — the names, parameters, and return types are identical to the sync API. The pages below walk through the same workflow with async syntax.

Create and run

import asyncio
from tensorlake.sandbox import AsyncSandbox

async def main():
    sandbox = await AsyncSandbox.create(cpus=2.0, memory_mb=2048)
    try:
        result = await sandbox.run("python", ["-c", "print('hello')"])
        print(result.stdout)
    finally:
        await sandbox.terminate()

asyncio.run(main())
AsyncSandbox is also an async context manager — use async with to terminate the sandbox automatically when the block exits:
async with await AsyncSandbox.create(cpus=2.0, memory_mb=2048) as sandbox:
    result = await sandbox.run("python", ["-c", "print('hello')"])
    print(result.stdout)
# sandbox is terminated here

Run many sandboxes in parallel

The async API is designed for fan-out. Use asyncio.gather to start and run sandboxes concurrently:
import asyncio
from tensorlake.sandbox import AsyncSandbox

async def evaluate(prompt: str) -> str:
    async with await AsyncSandbox.create(cpus=1.0, memory_mb=1024) as sandbox:
        result = await sandbox.run("python", ["-c", prompt])
        return result.stdout

async def main():
    prompts = [
        "print(2 + 2)",
        "print(sum(range(100)))",
        "import math; print(math.pi)",
    ]
    outputs = await asyncio.gather(*(evaluate(p) for p in prompts))
    for out in outputs:
        print(out.strip())

asyncio.run(main())
Each evaluate call creates, executes against, and terminates its own sandbox in parallel with the others.
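One practical wrinkle with fan-out: by default, the first exception raised by any task propagates out of asyncio.gather and the remaining results are lost. Passing return_exceptions=True keeps the other sandboxes' results. The sketch below illustrates that pattern with a stub evaluate coroutine standing in for the real sandbox-backed one, so nothing here depends on the SDK itself:

```python
import asyncio

# Stand-in for the sandbox-backed evaluate() above -- one prompt is
# made to fail so the gather behaviour is visible without a sandbox.
async def evaluate(prompt: str) -> str:
    await asyncio.sleep(0)
    if prompt == "boom":
        raise RuntimeError("sandbox failed")
    return f"ok: {prompt}"

async def main() -> list:
    prompts = ["print(1)", "boom", "print(2)"]
    # return_exceptions=True stops one failed task from discarding the
    # others' results; failures come back as exception objects in order.
    return await asyncio.gather(
        *(evaluate(p) for p in prompts), return_exceptions=True
    )

results = asyncio.run(main())
for r in results:
    print("error" if isinstance(r, Exception) else r)
```

With the default return_exceptions=False, the RuntimeError would propagate out of gather and you would not see the two successful outputs.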

Connect to an existing sandbox

Reattach to a named sandbox after resume, or operate on a sandbox another process created:
sandbox = await AsyncSandbox.connect("my-env")
info = await sandbox.info()
print(info.sandbox_id)  # sandbox.sandbox_id is now populated too
Unlike the sync Sandbox.sandbox_id property, which transparently fetches sandbox info on first access, the async AsyncSandbox.sandbox_id cannot block on a network call. Call await sandbox.info() (or any other awaited method that resolves the sandbox, like status()) once before reading sandbox.sandbox_id on a freshly connected handle.
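The constraint above can be sketched with a toy handle (all names here are stand-ins, not the SDK's internals): a plain Python property cannot await, so the id has to be populated by a prior awaited call rather than fetched lazily on attribute access.

```python
import asyncio

class Handle:
    """Toy async handle illustrating why the id must be resolved first."""

    def __init__(self):
        self._id = None

    @property
    def sandbox_id(self):
        # A property is synchronous -- it cannot await a network call,
        # so all it can do is report whether the id was resolved already.
        if self._id is None:
            raise RuntimeError("call `await handle.info()` first")
        return self._id

    async def info(self):
        await asyncio.sleep(0)  # stands in for the network round-trip
        self._id = "sb-123"
        return self._id

async def main() -> str:
    h = Handle()
    await h.info()        # resolve the id once...
    return h.sandbox_id   # ...then the attribute is safe to read

print(asyncio.run(main()))
```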

Background processes and streaming output

Start a process, keep the handle, and collect its output once it finishes:
proc = await sandbox.start_process("python", ["-c", """
import time
for i in range(5):
    print(f'tick {i}')
    time.sleep(1)
"""])
print(proc.pid)

# follow_output blocks until the process exits, then returns a TracedIterator
# of the captured events you can iterate normally.
events = await sandbox.follow_output(proc.pid)
for event in events:
    print(event.line, end="")
For long-running processes you want to stop yourself, send a signal directly — don’t follow_output first, since it would block waiting for the process to exit:
import signal

proc = await sandbox.start_process("python", ["-m", "http.server", "8080"])
# ... do work that talks to the server ...
await sandbox.send_signal(proc.pid, signal.SIGTERM)
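Following the output of several processes at once is where the async API pays off: awaiting each stream inside asyncio.gather bounds the total wait by the slowest process instead of the sum. The sketch below shows just that concurrency pattern with a stub follow coroutine standing in for sandbox.follow_output, so it runs without a sandbox:

```python
import asyncio

# Stand-in for `await sandbox.follow_output(pid)` -- each coroutine
# simulates waiting for a process to exit and collecting its lines.
async def follow(pid: int) -> list:
    await asyncio.sleep(0.01 * pid)  # processes finish at different times
    return [f"pid {pid}: line {i}" for i in range(2)]

async def main() -> list:
    pids = [1, 2, 3]
    # Fan out over the process handles; gather preserves input order
    # even though the coroutines complete at different times.
    return await asyncio.gather(*(follow(pid) for pid in pids))

streams = asyncio.run(main())
for lines in streams:
    for line in lines:
        print(line)
```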

File operations

await sandbox.write_file("/workspace/data.csv", b"name,score\nAlice,95\n")
content = await sandbox.read_file("/workspace/data.csv")
print(content.value.decode("utf-8"))

listing = await sandbox.list_directory("/workspace")
for entry in listing.value.entries:
    print(entry.name, entry.is_dir, entry.size)

Suspend, resume, and snapshot

Suspend and resume require a named sandbox — pass name= at creation time. checkpoint works on any sandbox, including ephemeral ones.
sandbox = await AsyncSandbox.create(name="my-env", cpus=1.0)
await sandbox.suspend()
await sandbox.resume()

snapshot = await sandbox.checkpoint()
restored = await AsyncSandbox.create(snapshot_id=snapshot.snapshot_id)

Learn more

SDK Reference

Full method list — applies to both sync and async APIs.

Lifecycle

State machine, suspend/resume, timeouts.

Process Management

Background processes, stdin, signals.

Snapshots

Capture and restore full VM state.