The Tensorlake SDK lets you write Python applications that process large amounts of data, run durable tasks, and orchestrate data workflows. Functions defined in your application are executed in a distributed manner, transparently: Tensorlake automatically spins up your functions as needed and scales them down to zero when they are not in use. Each function can specify different compute and storage resources based on the nature of the work it is doing. Tensorlake Applications are durable across function calls, so a failure in the middle of a request doesn’t require re-running previous function calls; execution resumes from the last successful function call. This guide walks you through creating a simple application that greets a user.

Defining your first application

This guide assumes that you have Python installed in your environment.
1. Install the Tensorlake SDK to build and deploy applications.

pip install tensorlake
2. Get an API key for your application

Fetch a key from the Tensorlake Dashboard and export it as an environment variable:
export TENSORLAKE_API_KEY=<API_KEY>
3. Check that the API key is working

tensorlake whoami
4. Create a new application

Applications are defined by Python functions. The entrypoint function must be decorated with the @application decorator, in addition to the @function decorator.
hello_world.py
from tensorlake.applications import application, function

@application()
@function()
def greet(name: str) -> str:
    return f"Hello, {name}!"
5. Deploy the application

tensorlake deploy hello_world.py
That’s all you need to build and deploy your first application. Let’s invoke it next!

Calling Applications

Once you have deployed your application, it’s automatically available as an HTTP endpoint; you don’t need to provision cloud resources or manage servers. The HTTP entrypoint for your application always starts with https://api.tensorlake.ai/applications/{application_name}. Every request to your application returns a Request ID, which you can use to track the request’s progress and fetch the output once the application has finished processing it.
1. Send a request to the application

curl -X POST https://api.tensorlake.ai/applications/hello_world \
-H "Authorization: Bearer "$TENSORLAKE_API_KEY"" \
-d 'John'

# {"id":"beae8736ece31ef9"}
2. Wait for the application to complete the request

You can then use the request ID to wait for the application to complete. The progress endpoint returns a stream of events as the application progresses. If you’re using the Python SDK, you can skip this step altogether.
curl -X GET https://api.tensorlake.ai/applications/hello_world/requests/{request_id}/progress \
-H "Authorization: Bearer "$TENSORLAKE_API_KEY"" \
-H "Accept: text/event-stream"
3. Get the output for your request

Once the application has completed your request, you can fetch the final output by requesting the output of the greet function.
curl -X GET https://api.tensorlake.ai/applications/hello_world/requests/{request_id}/output/greet \
-H "Authorization: Bearer "$TENSORLAKE_API_KEY"" \
If the function hasn’t produced any output, you will get an empty response with an HTTP status code of 204. The SDK returns a None value in this case. Generally, you should check that the request is complete before fetching the output.
You can also request the output of any intermediate function in your application via the HTTP endpoint. This is useful if you want to inspect the intermediate inputs and outputs of your application.
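
As a sketch, the same check can be done from Python with the requests library, treating a 204 response as “no output yet”, exactly as described above.

import os
import requests

api_key = os.environ["TENSORLAKE_API_KEY"]
request_id = "beae8736ece31ef9"  # the ID returned when the request was created

response = requests.get(
    f"https://api.tensorlake.ai/applications/hello_world/requests/{request_id}/output/greet",
    headers={"Authorization": f"Bearer {api_key}"},
)

if response.status_code == 204:
    # The greet function hasn't produced any output yet.
    print("No output yet")
else:
    response.raise_for_status()
    print(response.text)  # the greeting produced by greet, e.g. Hello, John!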

Testing Applications

Tensorlake Applications can run locally on your laptop. You can run them like regular Python scripts using the run_application function. Since we don’t specify the remote parameter, the application runs locally by default.
hello_world.py
from tensorlake.applications import run_application, Request

# greet is the function defined earlier in this file.
if __name__ == "__main__":
    request: Request = run_application(greet, 'John')
    output = request.output()
    print(output)
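
Once the application is deployed, the same entrypoint can also be invoked remotely. Below is a minimal sketch that replaces the __main__ block above; it assumes the remote parameter mentioned earlier is a boolean flag on run_application, so treat the exact argument as an assumption and check the SDK reference for the actual signature.

from tensorlake.applications import run_application, Request

if __name__ == "__main__":
    # remote=True is an assumption based on the "remote parameter" mentioned
    # above; it asks Tensorlake to run the deployed application instead of
    # running it locally.
    request: Request = run_application(greet, 'John', remote=True)
    print(request.output())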
You can build very complex and massively scalable near-real-time data applications with Tensorlake Applications.