The `@tensorlake_function` decorator is the most basic way to express compute in Tensorlake Workflows. It allows you to specify the following attributes:
- `image` - The image to use for the function.
- `input_encoding` - The encoding to use for the input of the function.
- `output_encoding` - The encoding to use for the output of the function.
- `secrets` - The secrets to use for the function.
- `cpus` - The number of CPUs to use for the function.
- `memory` - The memory to use for the function.
- `timeout` - The timeout for the function.
- `retries` - The number of retries for the function.
- `disk` - The disk to use for the function.

`tensorlake_function` converts your functions into a `TensorlakeCompute` object in the runtime.
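As a sketch, a function constrained with some of these attributes might look like the following; the attribute values and their units are illustrative assumptions, and the `try`/`except` fallback only exists so the snippet runs without the SDK installed:

```python
# Sketch of a decorated function, assuming the `tensorlake` SDK.
try:
    from tensorlake import tensorlake_function
except ImportError:
    # No-op stand-in so the sketch runs without the SDK installed.
    def tensorlake_function(**_attrs):
        def wrap(fn):
            return fn
        return wrap

@tensorlake_function(
    cpus=1.0,      # number of CPUs reserved for the function
    memory=2.0,    # memory reservation (unit is an assumption)
    timeout=300,   # timeout (unit is an assumption)
    retries=2,     # number of retries on failure
)
def my_function(x: int) -> int:
    # An ordinary Python body; the decorator handles packaging at runtime.
    return x * 2
```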
All of the attributes the `tensorlake_function` decorator allows you to specify are also available on the `TensorlakeCompute` class.
Use `cloudpickle` if you want to pass complex Python objects between functions, such as Pandas dataframes, PyTorch tensors, PIL images, etc.
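For example, a value that JSON cannot represent (here a Python `set`, standing in for a dataframe or tensor) can be handed between functions by opting into cloudpickle on both sides. This is a sketch assuming the `tensorlake` SDK, with a no-op fallback so it runs standalone:

```python
try:
    from tensorlake import tensorlake_function
except ImportError:
    # No-op stand-in so the sketch runs without the SDK installed.
    def tensorlake_function(**_attrs):
        def wrap(fn):
            return fn
        return wrap

@tensorlake_function(output_encoding="cloudpickle")
def load_items(source: str) -> set:
    # A set is not JSON-serializable, so JSON encoding would fail here.
    return {"a", "b", "c"}

@tensorlake_function(input_encoding="cloudpickle")
def count_items(items: set) -> int:
    return len(items)
```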
The `input_encoding` and `output_encoding` attributes can be used to change the serialization format. Currently supported formats are:

- `json` - JSON serialization
- `cloudpickle` - Cloudpickle serialization

The `use_ctx` flag indicates that the function should be injected with the request context. It is always injected as the `ctx` variable in your function.
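A sketch of a context-aware function, assuming the `tensorlake` SDK (the fallback decorator lets it run standalone; what the context object exposes is not covered here):

```python
try:
    from tensorlake import tensorlake_function
except ImportError:
    # No-op stand-in so the sketch runs without the SDK installed.
    def tensorlake_function(**_attrs):
        def wrap(fn):
            return fn
        return wrap

@tensorlake_function(use_ctx=True)
def my_function(ctx, x: int) -> int:
    # `ctx` is the injected request context; the runtime supplies it,
    # so callers only pass `x` when invoking the workflow.
    return x + 1
```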
`my_function` is the start node of the workflow. The input to the workflow is passed to `my_function`.
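Wiring the start node into a workflow might look like this sketch; the `Graph` constructor arguments are assumptions, and the stand-ins let the snippet run without the SDK:

```python
try:
    from tensorlake import Graph, tensorlake_function
except ImportError:
    # Minimal stand-ins so the sketch runs without the SDK installed.
    def tensorlake_function(**_attrs):
        def wrap(fn):
            return fn
        return wrap

    class Graph:
        def __init__(self, name, start_node):
            self.name, self.start_node = name, start_node

@tensorlake_function()
def my_function(x: int) -> int:
    return x * 2

# The workflow's input is delivered to the start node, my_function.
graph = Graph(name="my-workflow", start_node=my_function)
```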
Workflows are exposed as HTTP endpoints. The body of the request will be passed to the start node of the workflow, in this case `my_function`.
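Invoking such a workflow over HTTP might then look like the following sketch; the host, port, and endpoint path are illustrative placeholders, not the documented API:

```shell
# POST the workflow input; the request body is delivered to the
# start node (my_function). The URL below is a placeholder.
curl -X POST http://localhost:8900/workflows/my-workflow/invoke \
  -H "Content-Type: application/json" \
  -d '{"x": 1}'
```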