Hello World
Running a Hello World workflow on Tensorlake Serverless.
In this introductory tutorial, we will:
- Create a Tensorlake Graph
- Test Locally
- Deploy to Tensorlake Serverless
- Invoke the Graph Remotely
- Troubleshoot Remote Executions
Prerequisites
Before proceeding, ensure you have the following:
- Python Environment: Python 3.9 or higher installed.
- Tensorlake Account: Sign up at Tensorlake.
- API Key: After creating your account, generate an API key for the Tensorlake CLI and set it as an environment variable:
- Tensorlake SDK: Install the Tensorlake SDK using pip:
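The two setup commands referenced above might look like the following. Note that the environment-variable name and the package name are assumptions based on the Tensorlake conventions; verify them against the current documentation:

```shell
# Set your API key so the SDK and CLI can authenticate
# (the variable name TENSORLAKE_API_KEY is an assumption)
export TENSORLAKE_API_KEY="your-api-key"

# Install the SDK (package name assumed to be `tensorlake`)
pip install tensorlake
```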
Step 1: Create the Graph
We’ll create a graph that takes a name as input and returns a personalized greeting.
Define the Functions
In workflow.py, start by importing the necessary components and defining the functions:
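A minimal sketch of the two functions, assuming the SDK exposes a tensorlake_function decorator and a Graph class (the import names and decorator signature are assumptions, not confirmed by this page):

```python
from tensorlake import Graph, tensorlake_function


@tensorlake_function()
def hello_name(name: str) -> str:
    # First node: build a personalized fragment from the input name
    return f"hello {name}"


@tensorlake_function()
def hello_world(greeting: str) -> str:
    # Second node: extend the output of the upstream node
    return f"{greeting}, welcome to Tensorlake!"
```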
Construct the Graph
Next, construct the graph by specifying the nodes and their connections:
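A sketch of the graph construction, assuming a Graph constructor that takes a name and a start node, plus an add_edge method (the exact parameter names are assumptions):

```python
# Assumes `hello_name` and `hello_world` are defined above in workflow.py
g = Graph(
    name="hello_world_graph",
    start_node=hello_name,
    description="A Hello World workflow",
)

# Route the output of hello_name into hello_world
g.add_edge(hello_name, hello_world)
```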
Here, the graph consists of two nodes, hello_name and hello_world, with hello_name as the starting node and an edge directing the flow to hello_world.
Step 2: Test Locally
To test the graph locally, add the following code:
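A hedged sketch of the local-test entry point, assuming the graph object exposes run and outputs methods (method names and arguments are assumptions):

```python
if __name__ == "__main__":
    # Run the whole graph in-process; the keyword argument is passed
    # as the input to the starting node (hello_name)
    invocation_id = g.run(name="Ada")

    # Retrieve the output of the terminal node by function name
    print(g.outputs(invocation_id, "hello_world"))
```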
Running python workflow.py will execute the workflow locally and print the output.
Step 3: Deploy the Graph
Deploy the graph to the Tensorlake Serverless platform using the following command:
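The deploy command might look like the following; the subcommand shown is an assumption about the Tensorlake CLI, so check its help output for the exact form:

```shell
# Upload workflow.py to Tensorlake Serverless
# (subcommand is an assumption -- see `tensorlake --help`)
tensorlake deploy workflow.py
```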
This command uploads your code to the Tensorlake cloud, making it ready for remote invocations distributed across multiple machines.
Step 4: Invoke the Graph Remotely
Once the graph is deployed, you can invoke it remotely by modifying the main code:
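A sketch of the modified main code, assuming the SDK provides a RemoteGraph class with deploy, run, and outputs methods (these names, and the block_until_done flag, are assumptions):

```python
from tensorlake import RemoteGraph

if __name__ == "__main__":
    # Deploy the graph defined above and get a remote handle to it
    remote = RemoteGraph.deploy(g)

    # Invoke remotely; block until the invocation completes
    invocation_id = remote.run(block_until_done=True, name="Ada")
    print(remote.outputs(invocation_id, "hello_world"))
```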
Alternatively, you can obtain a reference to the deployed graph and invoke it:
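Assuming the deployed graph can be looked up by name (a by_name constructor is an assumption, and "hello_world_graph" is a hypothetical graph name), this might look like:

```python
from tensorlake import RemoteGraph

# Look up the already-deployed graph instead of redeploying it
remote = RemoteGraph.by_name("hello_world_graph")

invocation_id = remote.run(block_until_done=True, name="Ada")
print(remote.outputs(invocation_id, "hello_world"))
```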
The graph is called with the input of its starting node, in this case hello_name, so the input to the graph is the name parameter.
The result of calling a graph is an Invocation. Since data applications can take a long time to complete, calling outputs on an invocation will wait for the invocation to complete.
In either case, the output of an individual function can be retrieved using the invocation ID and the name of the function.
Step 5: Monitor and Troubleshoot
Monitor your graph’s invocations and logs using the Tensorlake CLI:
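The monitoring commands might look like the following; these subcommands are illustrative assumptions about the Tensorlake CLI, so confirm the exact names with its help output:

```shell
# Illustrative only -- check `tensorlake --help` for the real subcommands

# List recent invocations of your deployed graphs
tensorlake invocations list

# Fetch the logs for a specific invocation
tensorlake invocations logs <invocation-id>
```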
These commands help you track executions and diagnose any issues that may arise during remote invocations.
Conclusion
By following this tutorial, you’ve successfully created, deployed, and invoked a simple “Hello World” workflow using Tensorlake Serverless. This foundation enables you to build more complex data-intensive AI workflows with ease.