Every Tensorlake function call is automatically traced. You get execution timelines, logs, metrics, and error details without configuring any observability infrastructure.

Execution Timelines

When a request flows through your application, Tensorlake records every function call in an execution timeline. You can see:
  • Function call sequence — which functions ran and in what order
  • Timing — how long each function took, including cold start time
  • Dependencies — which function calls ran in parallel vs. sequentially
  • Status — success, failure, or retry for each function call
This is available in the Tensorlake Dashboard for every application request.
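For example, an application that chains several Tensorlake functions produces one timeline per request. The sketch below is illustrative and assumes functions can call each other directly; your application may compose them differently:
from tensorlake.applications import function

@function()
def clean(text: str) -> str:
    # Traced as its own timeline entry, with timing and status
    return text.strip().lower()

@function()
def summarize(text: str) -> str:
    # Also traced individually; a trivial stand-in for real work
    return text[:200]

@function()
def pipeline(text: str) -> str:
    # Each call is recorded in order, so the timeline shows
    # clean() and summarize() running within pipeline()
    return summarize(clean(text))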

Structured Logging

Use Python’s standard print() or logging module inside your functions. Logs are captured automatically and associated with the specific function call and request.
from tensorlake.applications import function
import logging

logger = logging.getLogger(__name__)

@function()
def process_data(data: str) -> str:
    logger.info(f"Processing {len(data)} characters")
    result = transform(data)
    logger.info(f"Transformation complete, output size: {len(result)}")
    return result
Logs are available in the dashboard; see the Logging guide for configuration details.
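Plain print() output is captured the same way and attached to the function call that produced it. A minimal sketch (the validation logic is illustrative):
from tensorlake.applications import function

@function()
def validate(record: dict) -> bool:
    # print() output is captured alongside this function call and request
    print(f"Validating record with {len(record)} fields")
    return all(value is not None for value in record.values())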

Custom Metrics

Record custom metrics using the request context. Metrics are available in the dashboard for monitoring.
import time
from tensorlake.applications import RequestContext, function

@function()
def my_agent(prompt: str) -> str:
    ctx = RequestContext.get()

    start = time.monotonic()
    result = call_llm(prompt)  # call_llm is a placeholder for your model client

    # Record the call's duration in seconds and count the call itself
    ctx.metrics.timer("llm_call_duration", time.monotonic() - start)
    ctx.metrics.counter("llm_calls")

    return result
  • metrics.timer(name, value) — Record a duration in seconds
  • metrics.counter(name, value) — Increment a counter (starts at 0)
See SDK Reference — Request Metrics for details.
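Counters can also be incremented by more than one per call by passing an explicit value. A small sketch assuming the same request context API shown above; the token counting is illustrative, not a real LLM client:
from tensorlake.applications import RequestContext, function

@function()
def summarize(prompt: str) -> str:
    ctx = RequestContext.get()

    # Stand-in for a real model call; replace with your LLM client
    summary = prompt[:100]
    tokens_used = len(prompt.split())

    # Pass a value to add more than 1 to the counter in a single call
    ctx.metrics.counter("tokens_used", tokens_used)
    return summary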

Learn More