Your First Workflow
Workflows in Tensorlake use futures to define function calls without executing them immediately. This allows Tensorlake to optimize execution by running independent steps in parallel. When you return a future from a function (called a tail call), the function completes immediately without blocking, and Tensorlake orchestrates the remaining work.

Here’s a simple workflow that processes and formats data from multiple sources:

- `enrich_record` starts and immediately returns (doesn’t block)
- `fetch_profile("rec_123")` and `fetch_history("rec_123")` run in parallel
- When both complete, `merge_data` runs with both results
- The final response contains the merged data
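Setting Tensorlake's own SDK syntax aside, the execution order described above can be sketched in plain Python with `concurrent.futures`. The function bodies here are made-up stand-ins; only the step names come from the walkthrough:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in steps; in a real workflow these would be Tensorlake functions.
def fetch_profile(record_id: str) -> dict:
    return {"id": record_id, "name": "Ada"}

def fetch_history(record_id: str) -> list:
    return [{"id": record_id, "event": "login"}]

def merge_data(profile: dict, history: list) -> dict:
    return {**profile, "history": history}

def enrich_record(record_id: str) -> dict:
    # The two independent fetches run in parallel; merge_data waits on both.
    with ThreadPoolExecutor() as pool:
        profile = pool.submit(fetch_profile, record_id)
        history = pool.submit(fetch_history, record_id)
        return merge_data(profile.result(), history.result())
```

Tensorlake infers the same dependency graph from the futures you return, without an explicit thread pool or any manual coordination.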
Key benefits:
- Parallel execution where possible (lower latency)
- No blocking — the orchestrator container is freed immediately
- Automatic dependency tracking — no manual coordination needed
- Built-in durability — failures resume from checkpoints
Each function in your workflow can be configured with retry policies. If a step fails, Tensorlake automatically retries it based on your retry configuration.
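The exact retry-policy fields belong to your Tensorlake configuration; purely as an illustration of the behavior (the parameter names `max_retries` and `base_delay` are hypothetical), a retry loop with exponential backoff looks like this:

```python
import time

def run_with_retries(step, *args, max_retries: int = 3, base_delay: float = 0.01):
    """Retry `step` up to max_retries times with exponential backoff.

    A plain-Python illustration of what the orchestrator does for you;
    the parameter names are hypothetical, not Tensorlake's actual config.
    """
    for attempt in range(max_retries + 1):
        try:
            return step(*args)
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the failure
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# A flaky step that fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```

Because execution is durable, a retried step resumes from the last checkpoint rather than re-running the whole workflow.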
Best Practices
Design for Parallelism
Identify steps that can run independently.

Use Tail Calls for Efficiency
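As a rough model of what a tail call buys you (this is a toy driver, not the Tensorlake runtime): a function returns a *description* of the next call instead of blocking on its result, and the orchestrator resolves it after the function has already finished:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Future:
    """A deferred call: recorded now, executed later by the driver."""
    fn: Callable
    args: tuple

def summarize(text: str) -> str:
    return text.upper()

def handle_request(text: str) -> Future:
    # Tail call: return the future immediately instead of waiting for
    # summarize to finish. This function's container is now free.
    return Future(summarize, (text,))

def drive(value: Any) -> Any:
    # Toy orchestrator: keep resolving futures until a plain value remains.
    while isinstance(value, Future):
        value = value.fn(*value.args)
    return value
```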
Return futures instead of blocking. When you return a future as a tail call, the current function’s container is freed immediately — you’re not paying for idle containers waiting for downstream results.

Process Lists with Map-Reduce
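Tensorlake supplies its own map-reduce primitives for this; purely as a plain-Python sketch of the shape (`score` and `total` are made-up steps), the pattern is a parallel map followed by a fold:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def score(item: int) -> int:
    # Map step: runs once per item, in parallel across the collection.
    return item * item

def total(acc: int, value: int) -> int:
    # Reduce step: folds the mapped results into a single value.
    return acc + value

def process(items: list[int]) -> int:
    with ThreadPoolExecutor() as pool:
        mapped = list(pool.map(score, items))  # parallel map
    return reduce(total, mapped, 0)            # sequential reduce
```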
For workflows that process collections of items, use map-reduce operations to parallelize the work.

Learn More
Futures
Deep dive on futures, tail calls, and parallel execution.
Async Functions
Async functions are another way to define workflows with parallel execution.
Durable Execution
How workflows recover from failures.