# Progress API

## Getting the Request Context

First, get access to the request context in your function:
```python
from tensorlake.applications import RequestContext, function

@function()
def my_function(data: str) -> str:
    # Get the current request context
    ctx = RequestContext.get()
    # Now you can use ctx.progress.update()
    ctx.progress.update(1, 10, "Starting processing...")
    return "done"
```
## Method: `progress.update()`

Stream progress updates to monitoring systems and frontends.
```python
ctx.progress.update(
    current: int | float,
    total: int | float,
    message: str | None = None,
    attributes: dict[str, str] | None = None
)
```
Parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `current` | `int \| float` | ✅ Yes | Current step or percentage complete |
| `total` | `int \| float` | ✅ Yes | Total steps, or `100` for percentages |
| `message` | `str \| None` | ❌ No | Human-readable progress message |
| `attributes` | `dict[str, str] \| None` | ❌ No | Additional metadata (key-value pairs) |
## Basic Usage
Simple progress tracking:
```python
@function()
def process_items(items: list) -> dict:
    ctx = RequestContext.get()
    for i, item in enumerate(items):
        # Update progress: current step, total steps
        ctx.progress.update(i + 1, len(items))
        process(item)
    return {"processed": len(items)}
```
With a message:
```python
@function()
def multi_step_workflow() -> str:
    ctx = RequestContext.get()

    ctx.progress.update(1, 3, "Fetching data from API...")
    data = fetch_data()

    ctx.progress.update(2, 3, "Processing data...")
    processed = process_data(data)

    ctx.progress.update(3, 3, "Storing results...")
    store_results(processed)

    return "complete"
```
With additional metadata:
```python
@function()
def batch_processor(items: list) -> dict:
    ctx = RequestContext.get()
    errors = 0

    for i, item in enumerate(items):
        try:
            process(item)
        except Exception:
            errors += 1

        # Include metadata about the processing
        ctx.progress.update(
            current=i + 1,
            total=len(items),
            message=f"Processing item {i + 1}",
            attributes={
                "error_count": str(errors),
                "success_rate": f"{((i + 1 - errors) / (i + 1) * 100):.1f}%"
            }
        )

    return {"total": len(items), "errors": errors}
```
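For large batches, calling `ctx.progress.update()` on every item can flood the progress stream. One way to reduce the volume is to throttle updates. A minimal sketch of that idea — `ProgressThrottle` is a hypothetical helper, not part of the Tensorlake SDK — which forwards only every Nth update plus the final one:

```python
class ProgressThrottle:
    """Forward only every `step`-th update, plus the final one."""

    def __init__(self, emit, step: int = 100):
        self.emit = emit  # e.g. ctx.progress.update
        self.step = step

    def update(self, current, total, message=None):
        # Emit on round multiples of `step`, and always on the last item
        if current % self.step == 0 or current == total:
            self.emit(current, total, message)

# Usage inside a function body (ctx obtained via RequestContext.get()):
# throttle = ProgressThrottle(ctx.progress.update, step=100)
# for i, item in enumerate(items):
#     throttle.update(i + 1, len(items))
```

For a 250-item batch with `step=100`, this emits updates at items 100, 200, and 250 instead of 250 separate calls.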
## Using Percentages
You can use percentages instead of step counts:
```python
@function()
def long_operation() -> str:
    ctx = RequestContext.get()

    # 0-100 scale
    ctx.progress.update(0, 100, "Starting...")

    # 25% complete
    ctx.progress.update(25, 100, "Quarter way through...")

    # 50% complete
    ctx.progress.update(50, 100, "Halfway done...")

    # 100% complete
    ctx.progress.update(100, 100, "Finished!")

    return "done"
```
## Consuming Progress Streams
Progress updates are available through the Tensorlake API in real-time.
### Polling for Progress Updates
```shell
# Get progress updates for a specific request
curl -X GET \
  "https://api.tensorlake.ai/applications/{application}/requests/{request_id}/progress" \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY"
```
Response:
```json
{
  "current": 45,
  "total": 100,
  "message": "Processing batch 3 of 10",
  "attributes": {
    "batch_id": "batch_003",
    "records_processed": "4500"
  },
  "timestamp": 1704067200000
}
```
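On the client side, a payload in this shape can be rendered as a one-line status. A minimal formatting sketch — `format_progress` is a hypothetical helper, not part of any Tensorlake SDK, and it assumes the field names shown in the example response above:

```python
def format_progress(payload: dict) -> str:
    """Render a progress payload as a one-line status string."""
    current, total = payload["current"], payload["total"]
    # Guard against total == 0 to avoid a ZeroDivisionError
    pct = current / total * 100 if total else 0.0
    line = f"[{pct:5.1f}%] {current}/{total}"
    if payload.get("message"):
        line += f" - {payload['message']}"
    return line

# format_progress({"current": 45, "total": 100, "message": "Processing batch 3 of 10"})
# → "[ 45.0%] 45/100 - Processing batch 3 of 10"
```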
## Learn More

See the Request Context page for the full context API reference.