A Future object defines, runs, and tracks the execution of a function call or another operation such as map or reduce.
It is created with the `function.future` factory: calling `my_function.future(1, 2, 3)` returns a Future object
for the `my_function(1, 2, 3)` function call. The Future doesn't start running until it is started with its `.run()` or
`.result()` method, used as a function call argument, or returned from a function. The `.result()` method blocks until the
Future completes; it returns the value returned by the function call or raises an exception on failure.
```python
from tensorlake.applications import application, function, Future

@application()
@function()
def my_application(name: str) -> str:
    # Creates a Future object for the `capitalize(name)` function call and runs it immediately.
    # `Future.run()` blocks the calling function until the call starts, not until it finishes.
    capitalized_name_future: Future = capitalize.future(name).run()
    # `Future.result()` blocks until the `capitalize` function call completes.
    # It returns the value returned by the function call or raises an exception on failure.
    capitalized_name: str = capitalized_name_future.result()
    return f"Hello, {capitalized_name}!"

@function()
def capitalize(text: str) -> str:
    return text.upper()
```
The main purpose of Futures is to run multiple function calls in parallel and collect their results later.
This allows building applications that process multiple independent tasks concurrently, reducing overall latency.
The class method `Future.wait(futures: Iterable[Future])` can be used to wait for multiple Futures to complete.
See more details at waiting for multiple Futures to complete.
Example: Running multiple function calls in parallel
```python
from tensorlake.applications import application, function, Future, RETURN_WHEN

@function()
def capitalize(text: str) -> str:
    return text.upper()

@application()
@function()
def greet(name: str) -> str:
    # Start two function calls in parallel.
    capitalized_name: Future = capitalize.future(name).run()
    joke: Future = make_joke.future(name).run()
    # Wait for both function calls to complete.
    Future.wait([capitalized_name, joke], return_when=RETURN_WHEN.ALL_COMPLETED)
    # Call `say_hello_and_say_joke` with the values returned by both function calls.
    # Block until `say_hello_and_say_joke` completes and return its return value.
    return say_hello_and_say_joke(capitalized_name.result(), joke=joke.result())

@function()
def say_hello_and_say_joke(name: str, joke: str) -> str:
    return f"Hello, {name}! Here's a joke for you: {joke}"

@function()
def make_joke(name: str) -> str:
    return f"Why did {name} cross the road? To get to the other side!"
```
Example: Non-blocking map and reduce operations
Use `function.future.map(...)` and `function.future.reduce(...)` to create Futures for map and reduce operations.
These methods take the same arguments as `function.map(...)` and `function.reduce(...)`, described on the
Map-Reduce page.
```python
from tensorlake.applications import application, function, Future

@application()
@function()
def process_numbers(numbers: list[int]) -> int:
    # Start a map operation to double the numbers in parallel with another function call.
    doubled_numbers: Future = double_number.future.map(numbers).run()
    # Start another function call in parallel.
    log_processing_future: Future = log_processing.future(len(numbers)).run()
    # Wait for the map operation to complete and get the doubled numbers.
    doubled_numbers_result: list[int] = doubled_numbers.result()
    # Make sure that the log_processing call has completed.
    log_processing_future.result()
    # Start a reduce operation to sum the doubled numbers and return its result.
    return sum.future.reduce(doubled_numbers_result).result()

@function()
def double_number(number: int) -> int:
    return number * 2

@function()
def sum(a: int, b: int) -> int:
    return a + b

@function()
def log_processing(count: int) -> None:
    print(f"Processing {count} numbers")
```
Waiting for multiple Futures to complete
The `Future.wait` class method can be used to wait for multiple Futures to complete. It is inspired by the standard `concurrent.futures.wait` in Python.
Its full signature is:
```python
from tensorlake.applications import Future, RETURN_WHEN

Future.wait(
    futures: Iterable[Future],
    timeout: float | None = None,
    return_when=RETURN_WHEN.ALL_COMPLETED,
) -> tuple[list[Future], list[Future]]
```
- `futures`: An iterable of Future objects to wait for.
- `timeout`: An optional timeout in seconds. If specified, the method returns after the timeout even if not all Futures have completed.
- `return_when`: A flag indicating when to return. It can be one of the following values from the `RETURN_WHEN` enum:
  - `RETURN_WHEN.ALL_COMPLETED`: Wait until all Futures have completed.
  - `RETURN_WHEN.FIRST_COMPLETED`: Wait until at least one Future has completed.
  - `RETURN_WHEN.FIRST_EXCEPTION`: Wait until at least one Future has raised an exception, or until all have completed.

The method returns a tuple of two lists, `(done, not_done)`: `done` contains the Futures that have completed, and `not_done` contains the Futures that have not completed yet.
If a future is not running yet, it’s started automatically when passed to Future.wait.
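Because `Future.wait` mirrors the standard `concurrent.futures.wait`, its `(done, not_done)` return semantics and `return_when` flags can be tried out locally with the standard library. This sketch uses Python's `concurrent.futures`, not Tensorlake's own `Future`, so the behavior is only analogous:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED, FIRST_COMPLETED

def fast() -> str:
    return "fast"

def slow() -> str:
    time.sleep(0.2)
    return "slow"

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fast), pool.submit(slow)]

    # FIRST_COMPLETED returns as soon as at least one future is done;
    # the other future may still be running at that point.
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)

    # ALL_COMPLETED blocks until every future is done, so not_done is empty.
    done, not_done = wait(futures, return_when=ALL_COMPLETED)
    results = sorted(f.result() for f in done)
```

Note that the standard-library `wait` returns sets of futures rather than lists, but the `done` / `not_done` split works the same way.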
Future object
The Future object has the following methods and properties:

- `exception -> TensorlakeError | None`: If the function call or other operation associated with this Future failed, returns the exception associated with the failure. Otherwise returns None. If the operation is not yet complete, this property also returns None.
- `result(timeout: float | None = None) -> Any`: Blocks until the operation completes and returns its result (i.e. the value returned by the function call). If the operation fails, a `FunctionError` is raised; see more about error handling. An optional timeout in seconds can be specified. If the timeout is reached before the Future completes, a `TimeoutError` is raised.
- `done() -> bool`: Returns True if the operation has completed (either successfully or with an exception), otherwise returns False.
- `run() -> Future`: Starts the Future's operation. Returns the same Future object for chaining. A Future that hasn't been started with `.run()` is started automatically when passed as input to another operation or returned as a tail call.
- `__await__() -> Generator[Any]`: Allows awaiting the Future in async functions. This is equivalent to calling `.result()`, but the call does not block the async event loop.
- `coroutine() -> Coroutine`: Converts the Future into a coroutine that can be used the same way as any coroutine returned by an async Tensorlake function. Returns the same coroutine object if called multiple times on the same Future. Can only be called before the Future is started with `.run()`.
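The `result`, `done`, and exception accessors behave much like their namesakes on the standard `concurrent.futures.Future`, which can be exercised locally. One difference to keep in mind: Tensorlake's `exception` is a non-blocking property, while the standard library's `exception()` is a method that blocks until the future completes. A sketch using the standard library only:

```python
from concurrent.futures import ThreadPoolExecutor

def ok() -> int:
    return 42

def boom() -> None:
    raise ValueError("failed")

with ThreadPoolExecutor() as pool:
    good = pool.submit(ok)
    bad = pool.submit(boom)

    value = good.result()      # blocks until completion, returns the function's return value
    finished = good.done()     # True once the operation has completed
    error = bad.exception()    # returns the raised exception instead of raising it

# .result() on a failed future re-raises the stored exception.
try:
    bad.result()
except ValueError as e:
    caught = str(e)
```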
Futures can be passed as arguments to function calls. When a Future is passed this way, Tensorlake automatically runs it if it isn't running yet,
waits for it to complete, and uses its result as the function call argument value. This allows building applications that run
multiple function calls in parallel without blocking on their results until necessary.
```python
from tensorlake.applications import application, function, Future

@function()
def capitalize(text: str) -> str:
    return text.upper()

@function()
def make_joke(name: str) -> str:
    return f"Why did {name} cross the road? To get to the other side!"

@function()
def say_hello_and_say_joke(name: str, joke: str) -> str:
    return f"Hello, {name}! Here's a joke for you: {joke}"

@application()
@function()
def lazy_greet(name: str, should_capitalize: bool) -> str:
    # Create a Future for the `capitalize(name)` function call without running it.
    capitalized_name_future: Future = capitalize.future(name)
    name = capitalized_name_future if should_capitalize else name
    # Pass the name Future or str as an argument to the `say_hello_and_say_joke` function call.
    # Tensorlake will automatically run the Future, wait for it to complete, and use its result
    # as the argument value for the `say_hello_and_say_joke` and `make_joke` functions.
    return say_hello_and_say_joke(name, make_joke(name))
```
The same happens for inputs of map and reduce operations.
```python
from tensorlake.applications import application, function, Future

@function()
def double_number(number: int) -> int:
    return number * 2

@function()
def sum(a: int, b: int) -> int:
    return a + b

@application()
@function()
def lazy_sum(numbers: list[int], double: bool) -> int:
    # Create a map operation Future without running it.
    doubled_numbers_future: Future = double_number.future.map(numbers)
    numbers = doubled_numbers_future if double else numbers
    # Pass the numbers Future or list as input to the `sum` reduce operation.
    # If numbers is a Future, Tensorlake will automatically run its map operation,
    # wait for it to complete, and use its result as input for the reduce operation.
    return sum.reduce(numbers)
```
All input Futures that don't depend on each other run in parallel, allowing Tensorlake to optimize resource usage and
reduce overall application latency. A function call or a map-reduce operation is blocked only while its input Futures
are running. Once all input Futures complete, Tensorlake automatically runs the function call or the map-reduce operation.
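The "resolve input Futures, then invoke" semantics can be illustrated with the standard library. The `resolve_args` helper below is hypothetical (it is not part of the Tensorlake API), and standard `concurrent.futures` futures stand in for Tensorlake Futures; it only sketches the idea that independent input futures run in parallel and the dependent call blocks just until its inputs are ready:

```python
from concurrent.futures import Future as StdFuture, ThreadPoolExecutor

def resolve_args(*args):
    # Hypothetical helper: replace any Future among the arguments with its
    # result, blocking only while input futures are still running.
    return tuple(a.result() if isinstance(a, StdFuture) else a for a in args)

def say_hello(name: str, joke: str) -> str:
    return f"Hello, {name}! {joke}"

with ThreadPoolExecutor() as pool:
    # Both input futures run in parallel; neither depends on the other.
    name_future = pool.submit(str.upper, "ada")
    joke_future = pool.submit(lambda: "Why did Ada cross the road?")
    # The dependent call blocks only until both inputs have completed.
    greeting = say_hello(*resolve_args(name_future, joke_future))
```

Plain values pass through `resolve_args` untouched, mirroring how a function call can mix Futures and ordinary arguments.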
Wrapping Futures into Python objects is not allowed
When passing Futures as arguments to function calls, or returning them as tail calls,
the Futures cannot be wrapped in other Python objects. For example:
```python
from tensorlake.applications import application, function, Future

@function()
def capitalize(text: str) -> str:
    return text.upper()

@application()
@function()
def my_application(name: str) -> str:
    capitalized_name: Future = capitalize.future(name)
    names: list[str | Future] = [capitalized_name, name]
    # Passing a Python list with a Future inside as an argument here is not allowed.
    # Tensorlake will not recognize the Future wrapped in the list
    # and will not run it or wait for it to complete.
    return concat(names)

@function()
def concat(strings: list[str]) -> str:
    return "".join(strings)
```
Map and reduce operations expect iterables, so they recognize Futures that are wrapped into lists
and process them as expected.
Tail calls
When a Tensorlake function calls another Tensorlake function or calls `future.result()`, the calling function blocks until the
function call or the Future completes and returns its result.
Applications that make many such calls can face several challenges:
- Wasted Resources: While waiting for the result, the calling function container cannot perform other tasks but still consumes its compute resources.
- Higher Resource Usage: More function containers are required to handle the same number of concurrent application requests if each request blocks multiple function containers.
- Higher Latency: Sequential blocking function calls or `future.result()` calls increase overall latency, especially when many function calls are involved.
To address these challenges, Tensorlake introduces Tail Calls. A function makes a tail call when it returns a Future.
The result of the Future, when available, becomes the return value of the function. Once the Future is returned, it immediately
starts running and frees the calling function's container to process the next tasks. This allows building applications that run multiple
function calls in parallel without blocking on their results until necessary, significantly reducing overall latency and resource usage.
With tail calls, the example `greet(...)` application doesn't have to wait for any of its function calls to complete.
`greet(...)` returns almost immediately after telling Tensorlake what it needs to do for the request,
freeing its container to process another request while Tensorlake orchestrates the execution in the most efficient
way possible. Once all function calls complete, Tensorlake returns the final result to the user.
```python
from tensorlake.applications import application, function, Future

@function()
def capitalize(text: str) -> str:
    return text.upper()

@function()
def make_joke(name: str) -> str:
    return f"Why did {name} cross the road? To get to the other side!"

@function()
def say_hello_and_say_joke(name: str, joke: str) -> str:
    return f"Hello, {name}! Here's a joke for you: {joke}"

@application()
@function()
def greet(name: str) -> str:
    # Returns a Future for the `say_hello_and_say_joke(capitalize(name), make_joke(name))` call.
    # This is a tail call: `greet` doesn't block waiting for any of the function calls to complete.
    # Once it returns, Tensorlake runs the Future and uses the `say_hello_and_say_joke` return
    # value as the return value of `greet`. The `say_hello_and_say_joke` call runs as soon as
    # both its arguments are available; the arguments are computed in parallel because they
    # don't depend on each other.
    capitalized_name: Future = capitalize.future(name)
    joke: Future = make_joke.future(name)
    return say_hello_and_say_joke.future(capitalized_name, joke=joke)
```
As with input Futures, wrapping a Future returned from a function in another Python object is not allowed.
For example, returning a list with a Future inside is not allowed: Tensorlake will not recognize the Future wrapped in the list.
See Also