Use Python async/await with Tensorlake async functions. Run them concurrently to optimize resource usage and reduce latency.
An async Tensorlake function behaves like a regular Python async function. Calling it returns a coroutine
that doesn’t run until it’s awaited or started with asyncio.create_task() or other asyncio module functions.
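This lazy-start behavior is plain Python semantics, not anything Tensorlake-specific; a minimal asyncio-only sketch (no Tensorlake involved) showing that calling an async function does not execute its body:

```python
import asyncio

ran: list[bool] = []

async def work() -> str:
    ran.append(True)
    return "done"

# Calling an async function only creates the coroutine; the body hasn't run yet.
coro = work()
print(ran)  # still empty

# Awaiting the coroutine (here via asyncio.run) actually executes the body.
result = asyncio.run(coro)
```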
```python
from tensorlake.applications import application, function

@function()
async def capitalize(text: str) -> str:
    return text.upper()

@application()
@function()
async def greet(name: str) -> str:
    # Calling the async Tensorlake function `capitalize` returns a coroutine.
    # `await` is available inside async Tensorlake functions like `greet`.
    # It starts the `capitalize` coroutine and waits for it to complete,
    # returning the result.
    capitalized: str = await capitalize(name)
    return f"Hello, {capitalized}!"
```
Coroutines returned by async Tensorlake functions behave almost the same way as Futures
used with sync Tensorlake functions.
Use asyncio.create_task() to run a coroutine in the background without blocking on it. This returns an asyncio.Task
that can be awaited later to get the result.
```python
import asyncio

from tensorlake.applications import application, function

@function()
async def double(x: int) -> int:
    return x * 2

@application()
@function()
async def my_app(x: int) -> int:
    coroutine = double(x)
    # Starts the coroutine in the background and returns an asyncio.Task.
    task: asyncio.Task = asyncio.create_task(coroutine)
    # Do something else and then await the task to get the result.
    return await task
```
Running coroutines in parallel with asyncio.gather
Use asyncio.gather() to run multiple coroutines in parallel and collect their results. This is the
standard Python way to run async functions concurrently.
```python
import asyncio

from tensorlake.applications import application, function

@function()
async def capitalize(text: str) -> str:
    return text.upper()

@function()
async def make_joke(name: str) -> str:
    return f"Why did {name} cross the road? To get to the other side!"

@application()
@function()
async def greet(name: str) -> str:
    # Start both function calls in parallel.
    capitalized, joke = await asyncio.gather(
        capitalize(name),
        make_joke(name),
    )
    return f"Hello, {capitalized}! {joke}"
```
Calling function.map(...) or function.reduce(...) on an async function returns a coroutine.
```python
from tensorlake.applications import application, function

@function()
async def double(x: int) -> int:
    return x * 2

@function()
async def add(a: int, b: int) -> int:
    return a + b

@application()
@function()
async def process_numbers(numbers: list[int]) -> int:
    # Calling .map() on an async function returns a coroutine.
    # `await` runs the map operation and blocks until all items are processed.
    doubled: list[int] = await double.map(numbers)
    # Calling .reduce() on an async function also returns a coroutine.
    total: int = await add.reduce(doubled)
    return total
```
The coroutines returned by function.map() or function.reduce() behave exactly the same as coroutines returned
by direct calls to async functions.
Coroutines returned from async Tensorlake functions and asyncio.Task objects created with asyncio.create_task() from
such coroutines can be passed as arguments to other function calls.
Tensorlake automatically runs the coroutines or asyncio.Task objects, waits for them to complete, and uses their results
as the argument values. This works exactly like passing Futures as inputs.
```python
from tensorlake.applications import application, function

@function()
async def double(x: int) -> int:
    return x * 2

@function()
async def add(a: int, b: int) -> int:
    return a + b

@application()
@function()
async def my_app(x: int) -> int:
    a = double(x)
    b = double(x + 1)
    # Pass coroutines as function call arguments. Tensorlake runs both in parallel,
    # waits for them to complete, and uses their results as the arguments for `add`.
    return await add(a, b)
```
All input coroutines that don’t depend on each other run in parallel, allowing Tensorlake to optimize resource usage and
reduce overall application latency. A function call or a map-reduce operation is blocked only while its input coroutines
are running. Once all input coroutines complete, Tensorlake automatically runs the function call or the map-reduce operation.
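The same principle applies to plain asyncio: coroutines that don’t depend on each other can overlap. A minimal asyncio-only sketch (no Tensorlake involved) showing that three independent coroutines finish in roughly the time of one, not the sum of all three:

```python
import asyncio
import time

async def work(x: int) -> int:
    # Simulates I/O-bound work; asyncio.sleep yields to the event loop.
    await asyncio.sleep(0.2)
    return x * 2

async def main() -> tuple[list[int], float]:
    start = time.monotonic()
    # The three coroutines don't depend on each other, so they overlap:
    # total wall time is ~0.2s rather than ~0.6s.
    results = await asyncio.gather(work(1), work(2), work(3))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
```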
Wrapping coroutines and asyncio.Tasks into Python objects is not allowed
When passing Tensorlake coroutines, or asyncio.Task objects created from them, as arguments to function calls,
or returning them as tail calls, they cannot be wrapped in other Python objects. For example, returning a list with a
coroutine inside is not allowed: Tensorlake will not recognize the coroutine wrapped in the list.
This is the same restriction as with Futures.
Map and reduce operations accept a Future, coroutine, or asyncio.Task, or a list, as input items.
If a list is passed, the Futures, coroutines, and asyncio.Task objects in the list are recognized by
Tensorlake and run automatically.
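Running a list of coroutines and collecting their results in order is conceptually similar to how plain asyncio.gather consumes a list of coroutines; a minimal asyncio-only analogy (no Tensorlake APIs):

```python
import asyncio

async def double(x: int) -> int:
    return x * 2

async def main() -> list[int]:
    # A plain list of coroutines, analogous to the input items of a map operation.
    items = [double(n) for n in [1, 2, 3]]
    # gather starts every coroutine in the list and collects results in input order.
    return await asyncio.gather(*items)

doubled = asyncio.run(main())
```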
Returning a Tensorlake function coroutine or its asyncio.Task makes a tail call.
The returning function completes immediately and its function container is freed to process the next request.
Tensorlake runs the returned coroutine or task and uses its result as the function’s return value.
This works exactly like returning a Future as a tail call.
```python
from tensorlake.applications import application, function

@function()
async def double(x: int) -> int:
    return x * 2

@application()
@function()
async def my_app(x: int) -> int:
    # Returns a coroutine as a tail call. The function completes immediately
    # and Tensorlake runs the coroutine in the background.
    return double(x)
```
Futures can also be returned as tail calls from async functions.
```python
from tensorlake.applications import application, function

@function()
def double(x: int) -> int:
    return x * 2

@application()
@function()
async def my_app(x: int) -> int:
    # Return a Future from an async function as a tail call.
    return double.future(x)
```
Sync Tensorlake functions can be called directly from async functions, but the call blocks the asyncio event loop
until the sync function completes. No other asyncio tasks can run while the event loop is blocked.
Because of this, calling sync Tensorlake functions directly is an anti-pattern and should be avoided.
Use function.future() to call sync functions without blocking the event loop. Call future.run() to start the Future
in the background, then use await future to wait for the Future to complete and get its result. If this doesn’t fit the use case,
use future.coroutine() to convert the Future into a coroutine that can be used the same way as any coroutine returned by
an async Tensorlake function.
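Plain asyncio has the same pitfall with blocking calls. One generic standard-library workaround is asyncio.to_thread, which moves the blocking call to a worker thread (this is plain asyncio, not the Tensorlake API — inside Tensorlake functions, use function.future() as described above):

```python
import asyncio
import time

def blocking_double(x: int) -> int:
    time.sleep(0.2)  # A blocking, synchronous call.
    return x * 2

async def main() -> tuple[int, int, float]:
    start = time.monotonic()
    # asyncio.to_thread runs each blocking call in a worker thread,
    # so the two calls overlap instead of serializing the event loop.
    a, b = await asyncio.gather(
        asyncio.to_thread(blocking_double, 1),
        asyncio.to_thread(blocking_double, 2),
    )
    return a, b, time.monotonic() - start

a, b, elapsed = asyncio.run(main())
```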
Sync functions cannot await coroutines. To call an async Tensorlake function from a sync function,
use function.future() to create a Future and call .result() to block until it completes.
```python
from tensorlake.applications import application, function

@function()
async def async_double(x: int) -> int:
    return x * 2

@function()
async def async_add(a: int, b: int) -> int:
    return a + b

@application()
@function()
def my_app(x: int) -> int:
    # Sync functions can't use `await`; block on the Future with .result().
    doubled: int = async_double.future(x).result()
    return async_add.future(x, doubled).result()
```