

Harbor is a framework from the creators of Terminal-Bench for evaluating and optimizing agents and language models. With Harbor you can evaluate arbitrary agents (Claude Code, OpenHands, Codex CLI, and others) against curated datasets like Terminal-Bench, SWE-Bench, and Aider Polyglot, build and share your own benchmarks, run thousands of trials in parallel across cloud providers, and generate rollouts for RL optimization. Harbor abstracts the execution backend behind an --env flag. Tensorlake plugs in as one of those providers — alongside other sandboxes and local Docker — so the same Harbor commands run on Tensorlake sandboxes without changing your tasks, agents, or evaluators.
This guide focuses on running CLI-agent evaluations against benchmarks like Terminal-Bench. Harbor also supports generating rollouts for RL optimization — we’ll cover those workflows in follow-up guides.
New to Tensorlake? Sign up at the dashboard — new accounts include free credits, enough to run a full Terminal-Bench sweep before you pay for anything.

Quick start

1. Get a Tensorlake API key

Grab one from the Tensorlake Dashboard. You’ll also need an API key for whichever agent provider you want to evaluate (e.g., Anthropic).
2. Install Harbor with the Tensorlake provider

The harbor[tensorlake] extra installs the TensorLakeEnvironment provider alongside Harbor.
uv pip install "harbor[tensorlake]"
3. Set your environment variables

export TENSORLAKE_API_KEY="tl_..."
export ANTHROPIC_API_KEY="sk-ant-..."   # or another agent provider
4. Run a Terminal-Bench task

Run a single Terminal-Bench task on Tensorlake with Claude Code as the agent:
harbor run --env tensorlake \
  --include-task-name pytorch-model-cli \
  --dataset terminal-bench@2.0 \
  --agent claude-code \
  --model anthropic/claude-sonnet-4-6 \
  --ae ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY
Drop --include-task-name to run the full Terminal-Bench 2.0 suite. --ae KEY=VALUE forwards an environment variable from your shell into the sandbox where the agent runs — add more --ae flags for any other secrets the agent needs.

Why Tensorlake for Harbor

Harbor’s value comes from running large fleets of environments in parallel and trusting the results. Tensorlake’s runtime is designed for exactly that workload:
  • Per-trial sandboxes — each task starts on a clean machine and is destroyed at the end. No shared kernel state between trials, which matters for both eval reproducibility and RL reward integrity.
  • Pre-warmed snapshots — environments with heavy apt/pip installs (PyTorch, CUDA toolchains, full Linux desktops) can be built once, snapshotted, and restored in under a second for every subsequent trial or rollout.
  • Independent verification — Harbor’s test script runs inside the sandbox and writes 1.0 or 0.0 to reward.txt. The agent never sees or touches the verifier, so “the agent said it worked” is never confused with “the tests pass.”
  • Parallel scale — Tensorlake schedules thousands of sandboxes concurrently, which is what RL rollout generation and full benchmark sweeps need.

Anatomy of a Harbor task

Harbor expects each task to be laid out like this; take gcode-to-text as an example:
gcode-to-text
├── environment
│   ├── Dockerfile
│   └── text.gcode.gz
├── instruction.md
├── solution
│   └── solve.sh
├── task.toml
└── tests
    ├── test_outputs.py
    └── test.sh
  • environment/Dockerfile defines the base image and any setup steps.
  • instruction.md is the prompt the agent receives.
  • task.toml holds task metadata and the sandbox resource settings covered in the next section.
  • solution/ is an oracle reference used to validate the environment itself.
  • tests/test.sh runs after the agent finishes and produces reward.txt.
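The reward contract is simple enough to sketch in a few lines of shell. Here run_tests is a hypothetical stand-in for the task's real test command (in practice test.sh would run something like the pytest suite in tests/); the only contract Harbor cares about is the 1.0 or 0.0 written to reward.txt:

```shell
#!/usr/bin/env bash
# run_tests is a hypothetical stand-in for the task's real test command
# (e.g. running tests/test_outputs.py); replace it with your own checks.
run_tests() {
  true  # exit 0 on pass, non-zero on fail
}

# Write the binary reward the harness reads after the agent finishes.
if run_tests; then
  echo "1.0" > reward.txt
else
  echo "0.0" > reward.txt
fi
cat reward.txt
```

Because the script runs inside the sandbox after the agent's session ends, the agent has no way to influence what lands in reward.txt.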

Tune sandbox resources

Each task’s task.toml controls the sandbox Harbor provisions on Tensorlake. Set resources in the [environment] block:
task.toml
[environment]
cpus = 2
memory_mb = 4096
storage_mb = 20480
allow_internet = true
Field           Default  Forwarded to Tensorlake
cpus            1        cpus
memory_mb       2048     memory_mb
storage_mb      10240    ephemeral_disk_mb
allow_internet  true     allow_internet_access
Tensorlake requires memory_mb to be between 1024 and 8192 MB per CPU core.
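As a quick sanity check before a run, you can verify a task's requested resources against that per-core bound. The values below are hardcoded for illustration; in practice you would parse them out of the task's task.toml:

```shell
# Sample values matching the task.toml above; in a real script,
# parse these from the [environment] block of task.toml.
cpus=2
memory_mb=4096

# Tensorlake's documented bound: 1024–8192 MB of memory per CPU core.
per_core=$(( memory_mb / cpus ))
if [ "$per_core" -ge 1024 ] && [ "$per_core" -le 8192 ]; then
  echo "ok: ${per_core} MB per core"
else
  echo "out of range: ${per_core} MB per core"
fi
```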
A few rules of thumb:
  • Large or heavy images — if your environment/Dockerfile pulls in big toolchains (PyTorch, CUDA, full Linux desktops, large datasets), bump cpus and memory_mb so the build and runtime have headroom, and raise storage_mb past the image size plus working-set room. Underprovisioned sandboxes show up as build timeouts or OOMs mid-trial.
  • Lock down allow_internet — set allow_internet = false to stop the agent from searching the web for answers. If the verifier needs network access, bake those dependencies into the Dockerfile. Per-host allowlists are coming soon, so you’ll be able to block search engines while leaving package mirrors reachable.

Interactive debugging

When a trial fails and you want to poke around the live environment, attach to the session:
harbor env attach <session_id>
This drops you directly into the running sandbox, where you can inspect state, rerun tests by hand, and confirm whether the failure came from the agent or the environment.

Structured logs

Each trial produces structured artifacts, e.g.:
gcode-to-text__UFALMLv
├── agent/
├── verifier/
├── result.json
└── trial.log
From these you can trace:
  • The agent’s actions and outputs
  • What the verifier checked
  • Why the trial passed or failed
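A quick way to triage a batch of trials is to sweep the output directories and print each trial's reward. The runs/ directory name and the verifier/reward.txt path inside each trial are assumptions based on the artifact tree above; the fixture lines stand in for a real run:

```shell
# Fixture: fake one trial directory so the loop has something to read.
# In a real run, point the glob at Harbor's actual output directory.
mkdir -p runs/gcode-to-text__UFALMLv/verifier
echo "1.0" > runs/gcode-to-text__UFALMLv/verifier/reward.txt

# Print "<trial>: <reward>" for every trial, flagging missing rewards.
for trial in runs/*/; do
  name=$(basename "$trial")
  reward=$(cat "${trial}verifier/reward.txt" 2>/dev/null || echo "missing")
  echo "$name: $reward"
done
```

Any trial printing "missing" failed before the verifier could write a reward, which usually points at an environment problem rather than the agent.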

What to build next

Snapshots

Build an environment once, snapshot it, and restore in seconds for every trial.

Reproducible RL Environments

Use sandboxes as a deterministic reward oracle for RL training loops.