In this introductory tutorial, we will:

  1. Create a Tensorlake Graph
  2. Test Locally
  3. Deploy to Tensorlake Serverless
  4. Invoke the Graph Remotely
  5. Troubleshoot Remote Executions

Prerequisites

Before proceeding, ensure you have the following:

  • Python Environment: Python 3.9 or higher installed.
  • Tensorlake Account: Sign up at Tensorlake.
  • API Key: After creating your account, generate an API key for the Tensorlake CLI and set it as an environment variable:
    export TENSORLAKE_API_KEY=<your-api-key>
    
  • Tensorlake SDK: Install the Tensorlake SDK using pip:
    pip install tensorlake
    
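To confirm the setup before moving on, here is a quick sanity check (a minimal sketch; it assumes only the package and environment variable configured above):

import os
import tensorlake  # fails here if the SDK is not installed

# Fails here if the API key was not exported.
assert os.environ.get("TENSORLAKE_API_KEY"), "TENSORLAKE_API_KEY is not set"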

Step 1: Create the Graph

We’ll create a graph that takes a name as input and returns a personalized greeting.

Define the Functions

In workflow.py, start by importing the necessary components and defining the functions:

from tensorlake import Graph, tensorlake_function

@tensorlake_function()
def hello_name(name: str) -> str:
    # First node: builds a greeting from the input name.
    return f"Hello {name}!"

@tensorlake_function()
def hello_world(sentence: str) -> str:
    # Second node: receives the previous node's output and extends it.
    return f"{sentence} Hello world!"

Construct the Graph

Next, construct the graph by specifying the nodes and their connections:

graph = Graph(name="hello-world", start_node=hello_name)
graph.add_edge(hello_name, hello_world)

Here, the graph consists of two nodes: hello_name and hello_world, with hello_name as the starting node and an edge directing the flow to hello_world.
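
Edges compose the same way as a workflow grows. For illustration only, a hypothetical third function (shout, not part of this tutorial) could be chained with another add_edge call:

@tensorlake_function()
def shout(sentence: str) -> str:
    # Hypothetical node: upper-cases whatever the previous node produced.
    return sentence.upper()

graph.add_edge(hello_world, shout)  # hello_world's output would flow into shout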

Step 2: Test Locally

To test the graph locally, add the following code:

if __name__ == "__main__":
    # Run the graph in-process and read the final node's output.
    invocation = graph.local().queue("Tensorlake")
    outputs = invocation.outputs("hello_world")
    print(outputs[0])

Running python workflow.py will execute the workflow locally and print the output.
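
Given the two functions above, hello_name produces "Hello Tensorlake!" and hello_world appends to it, so you should see:

$ python workflow.py
Hello Tensorlake! Hello world!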

Step 3: Deploy the Graph

Deploy the graph to the Tensorlake Serverless platform using the following command:

tensorlake deploy workflow.py

This command uploads your code to the Tensorlake cloud, making it available for remote invocations that can be distributed across multiple machines.

Step 4: Invoke the Graph Remotely

Once the graph is deployed, you can invoke it remotely by dropping the local() call from the main block:

if __name__ == "__main__":
    invocation = graph.queue("Tensorlake")
    outputs = invocation.outputs("hello_world")
    print(outputs[0])
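
Running python workflow.py now executes the workflow on Tensorlake Serverless; given the same input, the printed result matches the local run:

$ python workflow.py
Hello Tensorlake! Hello world!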

Alternatively, you can obtain a reference to the deployed graph and invoke it:

from tensorlake import TensorlakeClient

if __name__ == "__main__":
    client = TensorlakeClient()
    # Look up the deployed graph by the name it was created with.
    graph = client.get_graph("hello-world")
    invocation = graph.queue("Tensorlake")
    outputs = invocation.outputs("hello_world")
    print(outputs[0])

A graph is invoked with the input of its starting node. Here the start node is hello_name, so the value passed to queue becomes its name parameter.
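
For example, queuing a different value only changes what hello_name receives:

invocation = graph.queue("Ada")  # hello_name is called with name="Ada"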

The result of calling a graph is an Invocation. Since data applications can take a long time to complete, calling outputs on an invocation blocks until the invocation has finished.

In either case, the output of any individual function can be retrieved using the invocation ID and the function's name.
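
For instance, the intermediate result of hello_name can be read from the same invocation with the outputs call used above:

first = invocation.outputs("hello_name")  # results are keyed by function name
print(first[0])  # "Hello Tensorlake!" for the invocation above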

Step 5: Monitor and Troubleshoot Remote Executions

Monitor your graph’s invocations and logs using the Tensorlake CLI:

tensorlake invocations list
tensorlake invocations logs <invocation-id> --function-name <function-name>

These commands help you track executions and diagnose any issues that may arise during remote invocations.

Conclusion

By following this tutorial, you’ve successfully created, deployed, and invoked a simple “Hello World” workflow using Tensorlake Serverless. This foundation enables you to build more complex data-intensive AI workflows with ease.