Harbor is a framework from the creators of Terminal-Bench for evaluating and optimizing agents and language models. With Harbor you can evaluate arbitrary agents (Claude Code, OpenHands, Codex CLI, and others) against curated datasets like Terminal-Bench, SWE-Bench, and Aider Polyglot, build and share your own benchmarks, run thousands of trials in parallel across cloud providers, and generate rollouts for RL optimization. Harbor abstracts the execution backend behind an `--env` flag. Tensorlake plugs in as one of those providers — alongside other sandboxes and local Docker — so the same Harbor commands run on Tensorlake sandboxes without changing your tasks, agents, or evaluators.
This guide focuses on running CLI-agent evaluations against benchmarks like Terminal-Bench. Harbor also supports generating rollouts for RL optimization — we’ll cover those workflows in follow-up guides.
Quick start
Get a Tensorlake API key
Grab one from the Tensorlake Dashboard. You’ll also need an API key for whichever agent provider you want to evaluate (e.g., Anthropic).
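A quick sketch of the setup, assuming you export both keys as shell environment variables; the variable names here are illustrative, and `--ae` (shown later) forwards whichever names your agent actually expects:

```bash
# Illustrative variable names; check each provider's docs for the exact ones
export TENSORLAKE_API_KEY="your-tensorlake-key"   # from the Tensorlake Dashboard
export ANTHROPIC_API_KEY="your-anthropic-key"     # for Claude Code as the agent
```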
Install Harbor with the Tensorlake provider
The `harbor[tensorlake]` extra installs the `TensorLakeEnvironment` provider alongside Harbor. You can install it with either uv or pip, as shown below.
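A minimal sketch, assuming Harbor is published under the package name `harbor`; adjust the name if your distribution differs:

```bash
# With uv
uv pip install "harbor[tensorlake]"

# With pip
pip install "harbor[tensorlake]"
```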
Run a Terminal-Bench task
Run a single Terminal-Bench task on Tensorlake with Claude Code as the agent, as in the sketch below.
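This sketch uses the `--env`, `--include-task-name`, and `--ae` flags described in this guide; the `--dataset` and `--agent` spellings and the task name are assumptions you may need to adjust for your Harbor version:

```bash
# --dataset/--agent spellings are assumed; --env, --include-task-name,
# and --ae are the flags this guide relies on.
harbor run \
  --env tensorlake \
  --agent claude-code \
  --dataset terminal-bench \
  --include-task-name gcode-to-text \
  --ae ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY"
```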
Drop `--include-task-name` to run the full Terminal-Bench 2.0 suite. `--ae KEY=VALUE` forwards an environment variable from your shell into the sandbox where the agent runs — add more `--ae` flags for any other secrets the agent needs.

Why Tensorlake for Harbor
Harbor’s value comes from running large fleets of environments in parallel and trusting the results. Tensorlake’s runtime is designed for exactly that workload:

- Per-trial sandboxes — each task starts on a clean machine and is destroyed at the end. No shared kernel state between trials, which matters for both eval reproducibility and RL reward integrity.
- Pre-warmed snapshots — environments with heavy `apt`/`pip` installs (PyTorch, CUDA toolchains, full Linux desktops) can be built once, snapshotted, and restored in under a second for every subsequent trial or rollout.
- Independent verification — Harbor’s test script runs inside the sandbox and writes `1.0` or `0.0` to `reward.txt`. The agent never sees or touches the verifier, so “the agent said it worked” is never confused with “the tests pass.”
- Parallel scale — Tensorlake schedules thousands of sandboxes concurrently, which is what RL rollout generation and full benchmark sweeps need.
Anatomy of a Harbor task
Harbor expects each task to be laid out like this - take `gcode-to-text` as an example (a sketch of the directory tree follows the list):

- `environment/Dockerfile` defines the base image and any setup steps.
- `instruction.md` is the prompt the agent receives.
- `solution/` is an oracle reference used to validate the environment itself.
- `tests/test.sh` runs after the agent finishes and produces `reward.txt`.
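Putting those paths together, an assumed sketch of the tree (exact nesting may differ; `task.toml` is covered in the next section):

```
gcode-to-text/
├── task.toml
├── instruction.md
├── environment/
│   └── Dockerfile
├── solution/
└── tests/
    └── test.sh
```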
Tune sandbox resources
Each task’s `task.toml` controls the sandbox Harbor provisions on Tensorlake. Set resources in the `[environment]` block.
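An illustrative `[environment]` block using the four fields from the table below; the values here are examples, not the defaults:

```toml
# task.toml (illustrative values)
[environment]
cpus = 2               # forwarded to Tensorlake as cpus
memory_mb = 4096       # 2048 MB per core, within the 1024-8192 MB/core limit
storage_mb = 20480     # forwarded as ephemeral_disk_mb
allow_internet = true  # forwarded as allow_internet_access
```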
| Field | Default | Forwarded to Tensorlake |
|---|---|---|
| `cpus` | 1 | `cpus` |
| `memory_mb` | 2048 | `memory_mb` |
| `storage_mb` | 10240 | `ephemeral_disk_mb` |
| `allow_internet` | true | `allow_internet_access` |
Tensorlake requires `memory_mb` to be between 1024 and 8192 MB per CPU core.

- Large or heavy images — if your `environment/Dockerfile` pulls in big toolchains (PyTorch, CUDA, full Linux desktops, large datasets), bump `cpus` and `memory_mb` so the build and runtime have headroom, and raise `storage_mb` past the image size plus working-set room. Underprovisioned sandboxes show up as build timeouts or OOMs mid-trial.
- Lock down `allow_internet` — set `allow_internet = false` to stop the agent from searching the web for answers. If the verifier needs network access, bake those dependencies into the Dockerfile. Per-host allowlists are coming soon, so you’ll be able to block search engines while leaving package mirrors reachable.
Interactive debugging
When a trial fails and you want to poke around the live environment, attach to the session.

Structured logs
Each trial produces structured artifacts, e.g.:

- The agent’s actions and outputs
- What the verifier checked
- Why the trial passed or failed
What to build next
- Snapshots — build an environment once, snapshot it, and restore in seconds for every trial.
- Reproducible RL Environments — use sandboxes as a deterministic reward oracle for RL training loops.