Tensorlake Applications lets you write Python services that process data, run background jobs that don’t time out, and expose data orchestration APIs. Functions in an application are executed in a distributed manner transparently: the platform automatically spins up your functions as needed and scales them down when they are idle. Each function can request different compute and storage resources based on the nature of its work. Applications are durable across function calls, so a failure in the middle of a request won’t require re-running previously completed functions during retries.

Prerequisites

  1. Install the Tensorlake SDK to build and deploy applications:
pip install tensorlake
  2. Get an API key from the Tensorlake Dashboard and export it:
export TENSORLAKE_API_KEY=<API_KEY>
  3. Test whether the API key is working:
tensorlake auth status

Hello World

Applications are defined by writing Python functions. The entrypoint function has to be decorated with the @api decorator.
hello_world.py
from tensorlake.applications import api, function

@function()
def greet(name: str) -> str:
    return f"Hello, {name}!"

@api()
@function()
def say_hello(name: str) -> str:
    greeting = greet(name)
    return greeting

Deploy Application

tensorlake deploy hello_world.py
That’s it! Your application is now deployed on Tensorlake Cloud and available at https://api.tensorlake.ai/applications/hello_world.

Calling Applications

Once you have deployed your application, it’s available as an HTTP endpoint. Making a request to your application will return a Request ID. You can use this Request ID to track the status of the request and get the output when your application is done processing the request.
curl -X POST https://api.tensorlake.ai/applications/hello_world \
-H "Authorization: Bearer $TENSORLAKE_API_KEY" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{"name": "John"}'

# {"id":"beae8736ece31ef9"}
You can then use the request ID to get the status of the request.
curl -X GET https://api.tensorlake.ai/applications/hello_world/requests/{request_id} \
-H "Authorization: Bearer $TENSORLAKE_API_KEY" \
-H "Accept: application/json"
If the application is still running, you will get a response back with the request metadata and a status of “running”. At some point, the status will change to “complete”.
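The polling loop described above can be sketched in Python. This is an illustrative sketch, not part of the Tensorlake SDK: `poll_request` is a hypothetical helper, and the `"running"` status string comes from the description above.

```python
import time
from typing import Callable

def poll_request(fetch_status: Callable[[], dict],
                 interval: float = 1.0, max_attempts: int = 60) -> dict:
    # Repeatedly fetch the request metadata until it leaves the "running" state.
    for _ in range(max_attempts):
        meta = fetch_status()
        if meta.get("status") != "running":
            return meta
        time.sleep(interval)
    raise TimeoutError("request did not finish within the polling budget")

# With the HTTP endpoint above, fetch_status could wrap requests.get, e.g.:
# fetch_status = lambda: requests.get(
#     f"https://api.tensorlake.ai/applications/hello_world/requests/{request_id}",
#     headers={"Authorization": f"Bearer {api_key}"},
# ).json()
```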

Get the Output of an Application

Once the application is complete, you can get the output of any function in your application. This is useful because often the intermediate stages of an application might have useful data that you want to use, or you might want them for debugging purposes.
curl -X GET https://api.tensorlake.ai/applications/hello_world/requests/{request_id}/output/say_hello \
-H "Authorization: Bearer $TENSORLAKE_API_KEY" \
-H "Accept: application/json"
If the function hasn’t produced any output yet, you will get an empty response with an HTTP status code of 204. The SDK returns a None value in this case. Generally, you would want to check that the request is complete before getting the output.
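The 204-means-no-output convention can be handled with a small helper when calling the HTTP API directly. This is a sketch mirroring the SDK behaviour described above; `parse_output` is a hypothetical name, not an SDK function.

```python
import json
from typing import Any, Optional

def parse_output(status_code: int, body: bytes) -> Optional[Any]:
    # HTTP 204 means the function hasn't produced output yet; mirror the
    # SDK's behaviour by returning None in that case.
    if status_code == 204:
        return None
    return json.loads(body)
```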

Test Applications

Applications can run locally on your laptop, so you can test them before deploying them to the cloud.
hello_world.py
if __name__ == "__main__":
    request = say_hello(name="John")
    output = request.output()
    print(output)

Building a Structured Extraction Application

Most real-world applications require additional Python packages and access to secrets. For example, to build an application that extracts personal information from driving licenses using OpenAI’s structured outputs, you need the openai package and your OpenAI API key.
structured_extraction.py
import os
import base64

import requests
from pydantic import BaseModel
from tensorlake.applications import api, function, Image, RequestException

image = Image().run("pip install openai pydantic requests")

class DrivingLicense(BaseModel):
    name: str
    date_of_birth: str
    address: str
    license_number: str
    license_expiration_date: str

@api()
@function(
    image=image, secrets=["OPENAI_API_KEY"]
)
def extract_driving_license_data(url: str) -> DrivingLicense:
    from openai import OpenAI

    # Download image from URL
    response = requests.get(url, timeout=30)
    try:
        response.raise_for_status()
    except Exception as e:
        raise RequestException(f"Failed to download image from URL: {url}: {e}")

    # Encode image as base64
    image_base64 = base64.b64encode(response.content).decode("utf-8")

    # Determine image format from content type or URL
    content_type = response.headers.get("content-type", "")
    if "jpeg" in content_type or "jpg" in content_type:
        image_format = "jpeg"
    elif "png" in content_type:
        image_format = "png"
    else:
        # Default to jpeg if can't determine
        image_format = "jpeg"

    openai = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    response = openai.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Extract the personal information from the driving license image.",
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/{image_format};base64,{image_base64}"
                        },
                    }
                ],
            },
        ],
        response_format=DrivingLicense,
    )
    dl = response.choices[0].message.parsed
    return dl


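The image-encoding logic inside extract_driving_license_data can be factored into a standalone helper, which makes it easy to test in isolation. `to_data_url` is a hypothetical name; the fall-back-to-jpeg behaviour mirrors the code above.

```python
import base64

def to_data_url(content: bytes, content_type: str) -> str:
    # Detect the image format from the Content-Type header, defaulting to
    # jpeg when it can't be determined (same fallback as the function above).
    if "jpeg" in content_type or "jpg" in content_type:
        fmt = "jpeg"
    elif "png" in content_type:
        fmt = "png"
    else:
        fmt = "jpeg"
    b64 = base64.b64encode(content).decode("utf-8")
    return f"data:image/{fmt};base64,{b64}"
```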
This function uses a custom image with additional Python packages, and it uses the OPENAI_API_KEY secret to authenticate with OpenAI. Building custom images lets you install almost anything your function needs: the Image API can run arbitrary commands, much like the commands available in a Dockerfile. The secrets argument specifies which secrets are available to the function. Secrets have to be created before the functions that use them are deployed, so set the secret first: get your OpenAI API key and run the following command:
tensorlake secrets set OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
Now you can deploy the application.
tensorlake deploy structured_extraction.py
You should see the tensorlake CLI stream build logs as your image is being built. Once the image is built, you can invoke the application as before.
curl -N -X POST https://api.tensorlake.ai/applications/driving_license_extractor \
-H "Authorization: Bearer $TENSORLAKE_API_KEY" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{"url": "https://tlake.link/dl"}'
You can build very complex, massively scalable, near-real-time data applications with Tensorlake Applications.