- Queues like Kafka and SQS
- Durable Execution engines like Temporal
- Kubernetes, Docker, Terraform, etc., for managing compute resources.
- Spark/Ray for distributed data processing.
Deploy your first application
1. Install the Tensorlake SDK
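The SDK is distributed on PyPI; the package name below is an assumption based on the SDK's name, so check the install docs if it differs:

```shell
# Install the Tensorlake SDK (assumed PyPI package name: tensorlake)
pip install tensorlake
```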
2. Authenticate with Tensorlake
3. Create an application
Applications are defined by Python functions. Let's start with a template that greets a user by name. This creates a file named `hello_world/hello_world.py` with the following content:
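The generated template is not reproduced here, but its core is an ordinary Python function. A minimal sketch of the greeting logic (the Tensorlake decorators that the real template attaches are omitted, since their exact names depend on the SDK version):

```python
# Sketch of the greeting logic in hello_world.py. In the real template this
# function also carries Tensorlake decorators that register it as an
# application; only the plain-Python core is shown here.
def greet(name: str) -> str:
    return f"Hello, {name}!"
```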
4. Deploy It
Deploy your application by referencing its source file.
Call Applications
Tensorlake gives you an HTTP endpoint for calling your application remotely.

1. Get an API Key
Fetch a key from the Tensorlake Dashboard and export it as an environment variable:
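For example (the variable name `TENSORLAKE_API_KEY` is an assumption; use the exact name shown in the dashboard):

```shell
# Assumed environment variable name; confirm it in the Tensorlake dashboard.
export TENSORLAKE_API_KEY="<your-api-key>"
```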
2. Make a request
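A sketch of what the call might look like using only the standard library. The endpoint URL and the `Authorization` header name here are placeholders, not the real Tensorlake API; copy the actual URL from your dashboard:

```python
import json
import os
import urllib.request

# Placeholder endpoint; substitute the URL Tensorlake gives you.
ENDPOINT = "https://example.invalid/applications/hello_world"

def build_request(payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request carrying a JSON payload."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # Header name is an assumption; check the API reference.
            "Authorization": f"Bearer {os.environ.get('TENSORLAKE_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_request({"name": "Ada"})
# urllib.request.urlopen(req)  # actually sends the request
```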
3. Check progress
Requests may run from seconds to hours depending on your workload. The `outcome` field will be `successful` or `failed` depending on whether the request completed successfully; it will be `null` while the request is still in progress.

4. Get the output
Testing Locally
Tensorlake Applications can run locally on your laptop. You can run them like regular Python scripts.
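Because the functions are plain Python, a local smoke test can be as simple as a `__main__` block (sketch only; the Tensorlake decorators from the template are omitted):

```python
# hello_world.py run as a regular script: `python hello_world.py Ada`
import sys

def greet(name: str) -> str:
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Runs locally with no Tensorlake infrastructure involved.
    print(greet(sys.argv[1] if len(sys.argv) > 1 else "world"))
```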
Complex Applications
If you are wondering what makes Tensorlake Applications different from other serverless function platforms, consider this example (`complex_app.py`):
On Lambda or Vercel, both functions would normally live in the same container. That means they share CPU, memory, dependencies, and storage — even if their workloads have nothing in common.
If one task needs lots of CPU and another barely uses any, you're stuck provisioning for the worst case. Costs go up, and performance tuning goes sideways.

To fix that, you'd split them into separate services. But now you have to wire them together using queues, retries, state passing, and whatever glue logic keeps them in sync. That's operational overhead, not application logic.

Tensorlake removes all of that. You define your functions together in one application, but each function can declare its own compute, storage, and dependency image. Tensorlake runs them in separate containers, scales them independently, and handles all communication between them.
- The functions `topk_words` and `rfc_topk` run in separate containers with their own dependencies and resource allocations.
- Data flows between them automatically: no RPC calls, no queues, no persistence layer, no orchestration code.
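For concreteness, the data-crunching half of such a pipeline can be ordinary Python. Here is a hypothetical stand-in for `topk_words` (the real `complex_app.py` also attaches per-function images and resource specs, which are omitted here):

```python
from collections import Counter

def topk_words(text: str, k: int = 3) -> list[tuple[str, int]]:
    """Return the k most common whitespace-separated words in `text`."""
    return Counter(text.lower().split()).most_common(k)
```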
Tensorlake also provides durability across function calls. If `topk_words` fails mid-pipeline, the request doesn't start over; `rfc_topk` resumes from the last successful step.
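Conceptually, step-level durability behaves like the toy sketch below. This illustrates the idea of resuming from checkpointed steps; it is not Tensorlake's implementation:

```python
def run_durably(steps, checkpoints):
    """Run named steps in order, skipping any whose result was already
    checkpointed by a previous (failed) attempt."""
    for name, fn in steps:
        if name not in checkpoints:   # only re-run incomplete steps
            checkpoints[name] = fn()  # persist the result before moving on
    return checkpoints

# If a later step raises after an earlier step's result is checkpointed,
# a retry with the same checkpoint store resumes where it left off.
```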
- Queues: Internal function-to-function communication is managed for you. Traffic spikes are buffered by Tensorlake.
- Durable Execution: State is persisted across steps, so failures don’t reset the whole workflow/application.
- Kubernetes/Terraform: Tensorlake manages the containers, autoscaling, and resource allocation.
Next Steps
Here are some of the next things to learn about:

- Programming Guide: Learn more about customizing functions and compute resources for applications.
- Dependency management: Learn how to add dependencies for your applications.
- Durable Execution: Learn how Tensorlake Applications provide durability across function calls.
- Secrets: Learn how to manage secrets that your applications access.
- Map-Reduce: Learn how to use map-reduce to process large datasets.
- Awaitables and tail calls: Learn how to use awaitables and tail calls to reduce cost and latency in your applications.
- Futures and parallel function calls: Learn how to run multiple function calls in parallel using Futures.