
Progress API

Getting the Request Context

First, get access to the request context in your function:
from tensorlake.applications import RequestContext, function

@function()
def my_function(data: str) -> str:
    # Get the current request context
    ctx = RequestContext.get()

    # Now you can use ctx.progress.update()
    ctx.progress.update(1, 10, "Starting processing...")

    return "done"

Method: progress.update()

Stream progress updates to monitoring systems and frontends.
ctx.progress.update(
    current: int | float,
    total: int | float,
    message: str | None = None,
    attributes: dict[str, str] | None = None
)
Parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `current` | `int \| float` | ✅ Yes | Current step or percentage complete |
| `total` | `int \| float` | ✅ Yes | Total steps, or `100` for percentage |
| `message` | `str \| None` | ❌ No | Human-readable progress message |
| `attributes` | `dict[str, str] \| None` | ❌ No | Additional metadata (key-value pairs) |
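Because `attributes` is typed `dict[str, str]`, numeric metadata must be stringified before it is passed. A minimal sketch (the helper name is illustrative, not part of the API):

```python
def as_str_attributes(**values) -> dict[str, str]:
    """Coerce arbitrary metadata values into the str -> str shape
    that the attributes parameter expects."""
    return {key: str(value) for key, value in values.items()}

# All values come out as plain strings, ready to pass as attributes=...
attrs = as_str_attributes(error_count=3, success_rate="98.5%")
```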

Basic Usage

Simple progress tracking:
@function()
def process_items(items: list) -> dict:
    ctx = RequestContext.get()

    for i, item in enumerate(items):
        # Update progress: current step, total steps
        ctx.progress.update(i + 1, len(items))
        process(item)

    return {"processed": len(items)}
With a message:
@function()
def multi_step_workflow() -> str:
    ctx = RequestContext.get()

    ctx.progress.update(1, 3, "Fetching data from API...")
    data = fetch_data()

    ctx.progress.update(2, 3, "Processing data...")
    processed = process_data(data)

    ctx.progress.update(3, 3, "Storing results...")
    store_results(processed)

    return "complete"
With additional metadata:
@function()
def batch_processor(items: list) -> dict:
    ctx = RequestContext.get()
    errors = 0

    for i, item in enumerate(items):
        try:
            process(item)
        except Exception:
            errors += 1

        # Include metadata about the processing
        ctx.progress.update(
            current=i + 1,
            total=len(items),
            message=f"Processing item {i + 1}",
            attributes={
                "error_count": str(errors),
                "success_rate": f"{((i + 1 - errors) / (i + 1) * 100):.1f}%"
            }
        )

    return {"total": len(items), "errors": errors}

Using Percentages

You can use percentages instead of step counts:
@function()
def long_operation() -> str:
    ctx = RequestContext.get()

    # 0-100 scale
    ctx.progress.update(0, 100, "Starting...")

    # 25% complete
    ctx.progress.update(25, 100, "Quarter way through...")

    # 50% complete
    ctx.progress.update(50, 100, "Halfway done...")

    # 100% complete
    ctx.progress.update(100, 100, "Finished!")

    return "done"

Common Use Cases & Examples

Now let’s see how to use progress tracking in real-world scenarios:

1. Multi-Step Agent Workflows

Show which step the agent is currently executing:
from tensorlake.applications import application, function, RequestContext

@application()
@function()
def research_agent(topic: str) -> dict:
    ctx = RequestContext.get()

    ctx.progress.update(1, 4, "Searching web sources...")
    web_results = search_web(topic)

    ctx.progress.update(2, 4, "Searching academic papers...")
    papers = search_papers(topic)

    ctx.progress.update(3, 4, "Analyzing results...")
    analysis = analyze_sources(web_results, papers)

    ctx.progress.update(4, 4, "Generating report...")
    report = generate_report(analysis)

    return {"report": report}

2. Batch Data Processing

Track progress through large datasets:
@function()
def process_documents(doc_urls: list[str]) -> dict:
    ctx = RequestContext.get()
    results = []

    for i, url in enumerate(doc_urls):
        ctx.progress.update(
            i + 1,
            len(doc_urls),
            f"Processing document {i + 1}/{len(doc_urls)}"
        )
        result = process_document(url)
        results.append(result)

    return {"processed": len(results), "results": results}

3. Iterative AI Agent Loops

Monitor agent iterations and tool calls:
@function()
def code_generation_agent(spec: str) -> str:
    ctx = RequestContext.get()
    code = initial_code_generation(spec)

    for iteration in range(10):
        ctx.progress.update(
            iteration + 1,
            10,
            f"Iteration {iteration + 1}: Reviewing and refining code...",
            attributes={
                "lines_of_code": str(len(code.split('\n'))),
                "iteration": str(iteration + 1)
            }
        )

        if is_code_complete(code):
            break

        code = refine_code(code, spec)

    return code

4. Data Pipeline Status

Show progress through multi-stage pipelines:
@function()
def etl_pipeline(source: str) -> dict:
    ctx = RequestContext.get()

    ctx.progress.update(1, 5, "Extracting data from source...")
    raw_data = extract(source)

    ctx.progress.update(2, 5, f"Transforming {len(raw_data)} records...")
    transformed = transform(raw_data)

    ctx.progress.update(3, 5, "Validating data quality...")
    validated = validate(transformed)

    ctx.progress.update(4, 5, "Enriching with external data...")
    enriched = enrich(validated)

    ctx.progress.update(5, 5, "Loading into destination...")
    load(enriched)

    return {"status": "complete", "records": len(enriched)}
Progress updates also reset function timeouts automatically. See Timeouts for details.

Consuming Progress Streams

Progress updates are available in real time through the Tensorlake API.

Polling for Progress Updates

# Get progress updates for a specific request
curl -X GET \
  "https://api.tensorlake.ai/applications/{application}/requests/{request_id}/progress" \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY"
Response:
{
  "current": 45,
  "total": 100,
  "message": "Processing batch 3 of 10",
  "attributes": {
    "batch_id": "batch_003",
    "records_processed": "4500"
  },
  "timestamp": 1704067200000
}
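On the client side, a small helper can turn that response body into a display string. A hedged sketch in Python (the field names follow the example response above; fetching and auth are omitted):

```python
def format_progress(body: dict) -> str:
    """Render a progress response like the JSON above as a
    one-line status string."""
    current, total = body["current"], body["total"]
    percent = current / total * 100 if total else 0.0
    message = body.get("message") or ""
    line = f"{current}/{total} ({percent:.0f}%)"
    return f"{line} - {message}" if message else line

print(format_progress({"current": 45, "total": 100,
                       "message": "Processing batch 3 of 10"}))
# -> 45/100 (45%) - Processing batch 3 of 10
```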

Server-Sent Events (SSE) - Coming Soon

Real-time streaming via SSE for frontend applications:
// Note: the browser-native EventSource API does not accept custom
// headers; a polyfill (e.g. event-source-polyfill) is needed to send
// the Authorization header shown here.
const eventSource = new EventSource(
  `https://api.tensorlake.ai/applications/${app}/requests/${reqId}/stream`,
  { headers: { Authorization: `Bearer ${apiKey}` } }
);

eventSource.addEventListener('progress', (event) => {
  const progress = JSON.parse(event.data);
  updateUI(progress.current, progress.total, progress.message);
});

Frontend Integration Examples

React Progress Bar

import { useEffect, useState } from 'react';

function TaskProgress({ applicationName, requestId, apiKey }) {
  const [progress, setProgress] = useState({ current: 0, total: 100, message: '' });

  useEffect(() => {
    const interval = setInterval(async () => {
      const response = await fetch(
        `https://api.tensorlake.ai/applications/${applicationName}/requests/${requestId}/progress`,
        { headers: { Authorization: `Bearer ${apiKey}` } }
      );
      const data = await response.json();
      setProgress(data);
    }, 1000); // Poll every second

    return () => clearInterval(interval);
  }, [applicationName, requestId, apiKey]);

  const percentage = progress.total ? (progress.current / progress.total) * 100 : 0;

  return (
    <div>
      <div className="progress-bar">
        <div style={{ width: `${percentage}%` }} />
      </div>
      <p>{progress.message}</p>
      <p>{percentage.toFixed(0)}% complete</p>
    </div>
  );
}

Vue.js Progress Tracker

<template>
  <div class="progress-container">
    <progress :value="progress.current" :max="progress.total" />
    <p>{{ progress.message }}</p>
    <span>{{ progressPercent }}% complete</span>
  </div>
</template>

<script>
export default {
  props: ['applicationName', 'requestId', 'apiKey'],
  data() {
    return {
      progress: { current: 0, total: 100, message: '' },
      pollInterval: null
    };
  },
  computed: {
    progressPercent() {
      return ((this.progress.current / this.progress.total) * 100).toFixed(0);
    }
  },
  async mounted() {
    this.pollInterval = setInterval(async () => {
      const response = await fetch(
        `https://api.tensorlake.ai/applications/${this.applicationName}/requests/${this.requestId}/progress`,
        { headers: { Authorization: `Bearer ${this.apiKey}` } }
      );
      this.progress = await response.json();
    }, 1000);
  },
  beforeUnmount() {
    clearInterval(this.pollInterval);
  }
};
</script>

Learn More