Document Ingestion

Tensorlake helps you turn unstructured documents into structured, actionable data. This guide covers the essential concepts you’ll need to understand when parsing documents and extracting data with Tensorlake.


Document AI Client

What it is: The main entry point for interacting with Tensorlake. It provides methods for uploading documents, creating parsing jobs, and retrieving results.

Why it matters: This is where you configure your parsing options, upload files, and manage the parsing workflow.

from tensorlake.documentai import DocumentAI

API_KEY="tl__apiKey_xxxx"
doc_ai = DocumentAI(api_key=API_KEY)
Learn how to get your API key from Tensorlake Cloud.

Document Upload

What it is: The first step in any ingestion workflow. Tensorlake accepts PDFs, images, raw text, presentations, and more. Once your document (or data) is uploaded, it is treated as a file. Each file is assigned a file_id, which is used in parsing jobs.

Why it matters: Uploading documents enables asynchronous processing and orchestration.

file_id = doc_ai.upload(path="/path/to/file.pdf")

Parsing Jobs

What it is: A parsing job is the process Tensorlake uses to analyze a document and return structured output. It uses the configured ParsingOptions to determine how the document should be processed.

Why it matters: This is where you define behaviors like schema extraction, signature detection, table parsing, and more.

job_id = doc_ai.parse(file_id, ParsingOptions())
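
Parsing runs asynchronously, so the returned job_id is what you use to check progress and fetch results. A minimal polling sketch; the get_job method and the status values shown are assumptions based on common SDK patterns, so check the SDK reference for the exact names:

import time

# Poll until the parse job finishes. get_job and the status values are
# assumptions; consult the Tensorlake SDK reference for the exact API.
result = doc_ai.get_job(job_id)
while result.status in ("pending", "processing"):
    time.sleep(5)
    result = doc_ai.get_job(job_id)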

Parsing Options

What it is: Controls how Tensorlake parses the document. This includes chunking, table strategies, signature detection, OCR preferences, and more.

Why it matters: You can fine-tune performance and accuracy by customizing your parsing strategy.

options = ParsingOptions(
    page_range='1',
    chunk_strategy=ChunkingStrategy.NONE,
    table_parsing_strategy=TableParsingStrategy.TSR,
    table_output_mode=TableOutputMode.MARKDOWN,
    form_detection_mode=FormDetectionMode.VLM,
    table_summarization=True,
    extraction_option=ExtractionOptions(
        skip_ocr=True,
    )
)
Learn more about Parsing Options, including Signature Detection, Strikethrough Detection, and Table Parsing.
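
Putting the pieces together, a typical ingestion sketch uploads a file and then parses it with the options defined above:

# Upload the document, then parse it with the configured options.
file_id = doc_ai.upload(path="/path/to/file.pdf")
job_id = doc_ai.parse(file_id, options)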

Schemas

What it is: Schemas define what structured data you want extracted. They can include keys like buyer_name, coverage_type, or signature_status, and can be supplied as JSON or an inline string.

Why it matters: Schemas make Tensorlake deterministic. No fuzzy guesses, just structured fields mapped to your business logic.

signature_status_schema.json
{
  "buyer": {
    "buyer_name": "string",
    "buyer_signed": {
        "description": "Determine if the buyer signed the agreement",
        "type": "boolean"
    }
  },
  "seller": {
    "seller_name": "string",
    "seller_signed": {
        "description": "Determine if the seller signed the agreement",
        "type": "boolean"
    }
  }
}
Learn how to define schemas here.
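
Because schemas can be supplied as JSON or as an inline string, a common pattern is to keep the schema in a file and load it at parse time. A small sketch; the schema parameter on ExtractionOptions is an assumption here, so confirm the exact field name in the schema docs linked above:

# Load the schema from disk; it could also be defined as an inline string.
with open("signature_status_schema.json") as f:
    signature_schema = f.read()

# Attach the schema to the extraction configuration. The `schema` parameter
# name is an assumption; see the schema documentation for the exact field.
options = ParsingOptions(
    extraction_option=ExtractionOptions(schema=signature_schema),
)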

Structured Output

What it is: The output returned by Tensorlake after parsing. It includes a structured JSON representation of your document data, including bounding boxes, page numbers, and fragment types. If you provided a schema, the output also includes structured data that matches your schema.

Why it matters: This output is machine-readable, auditable, and easy to plug into downstream systems like LangGraph, Slack, or CRMs.

For example, here is an output snippet for a sample agreement, parsed with the schema example above.

{
  "id": "job-***",
  "status": "successful",
  "file_name": "file_name.pdf",
  "file_id": "tensorlake-***",
  "trace_id": "***",
  "createdAt": null,
  "updatedAt": null,
  "outputs": {
    "chunks": [
      {
        "page_number": 0,
        "content": "Full text of the document as markdown, broken down by page"
      }
    ],
    "document": {
      "pages": [
        {
          "page_number": 1,
          "page_fragments": [
            {...},
            {
              "fragment_type": "text",
              "content": {
                "content": "XXIV. GOVERNING LAW. This Agreement shall be interpreted in accordance with the laws in the state of California (\"Governing Law\")."
              },
              "reading_order": null,
              "page_number": null,
              "bbox": {
                "x1": 71.0,
                "x2": 527.0,
                "y1": 238.0,
                "y2": 264.0
              }
            },
            {...}
          ],
          "layout": {}
        }
      ]
    },
    "num_pages": 10,
    "structured_data": {
      "pages": [
        {
          "page_number": 1,
          "data": {
            "buyer": {
              "buyer_name": "Nova Ellison",
              "buyer_signature_date": "September 10, 2025",
              "buyer_signed": true
            },
            "seller": {
              "seller_name": "Juno Vega",
              "seller_signature_date": "September 10, 2025",
              "seller_signed": true
            }
          }
        }
      ]
    },
    "error_message": ""
  }
}
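
Because the output is plain JSON, downstream checks can be simple dictionary lookups. For example, assuming the parse result has been loaded into a result dict shaped like the snippet above, this sketch flags any page where either party has not signed:

# Walk the schema-aligned data and flag pages with missing signatures.
structured = result["outputs"]["structured_data"]
for page in structured["pages"]:
    data = page["data"]
    if not (data["buyer"]["buyer_signed"] and data["seller"]["seller_signed"]):
        print(f"Missing signature on page {page['page_number']}")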

Visual Layout & Bounding Boxes

What it is: Each extracted field includes optional layout metadata, such as its position on the page, size, and surrounding context.

Why it matters: Useful for visual validation, audit trails, redlining, and debugging extraction behavior.

You can see the bounding boxes highlighted in the Playground, and the location of each fragment's bounding box in the structured output:

{
    "fragment_type": "text",
    "content": {
        "content": "XXIV. GOVERNING LAW."
    },
    "reading_order": null,
    "page_number": null,
    "bbox": {
    "x1": 71.0,
    "x2": 527.0,
    "y1": 238.0,
    "y2": 264.0
    }
}
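
The bbox coordinates make visual validation straightforward. As a sketch, assuming you have rendered the page to an image at the same coordinate scale and have a fragment dict from the structured output, Pillow can draw the box for review:

from PIL import Image, ImageDraw

# page.png is a rendered page image at the same scale as the bbox coordinates.
image = Image.open("page.png")
draw = ImageDraw.Draw(image)

bbox = fragment["bbox"]  # fragment is one entry from page_fragments
draw.rectangle(
    [bbox["x1"], bbox["y1"], bbox["x2"], bbox["y2"]],
    outline="red",
    width=2,
)
image.save("page_annotated.png")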

Workflows

Tensorlake Workflows are a powerful way to automate and orchestrate complex tasks. They allow you to define a series of functions that can be executed in parallel or sequentially, depending on your needs.

Graphs

Workflows are created by connecting multiple functions together in a Graph.

A Graph contains:

  • Node: Represents a function that operates on data.
  • Start Node: The first function that is executed when the graph is invoked.
  • Edges: Represent data flow between functions.
  • Conditional Edge: Evaluates input data from the previous function and decides which edge to take, like an if-else statement in programming.

Graphs are workflows whose functions can be executed in parallel, while Pipelines are linear workflows that execute functions serially.

Functions

Functions are regular Python functions decorated with the @tensorlake_function() decorator.

Functions can be executed in a distributed manner, and their outputs are stored, so if a downstream function fails, the workflow can resume from the stored output instead of re-running the function.

The decorator also accepts various other parameters that configure retry behavior, placement constraints, and more.

Programming Model

Pipeline

A Pipeline transforms the input of the graph step by step: every node transforms the output of the previous node until the end node is reached.

from tensorlake import Graph, tensorlake_function

@tensorlake_function()
def node1(input: int) -> int:
    return input + 1

@tensorlake_function()
def node2(input2: int) -> int:
    return input2 + 2

@tensorlake_function()
def node3(input3: int) -> int:
    return input3 + 3

# Linear pipeline: node1 -> node2 -> node3
graph = Graph(name="pipeline", start_node=node1)
graph.add_edge(node1, node2)
graph.add_edge(node2, node3)
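
Invoking the graph and reading the final node's output typically looks like the sketch below; the run and output helper names are assumptions, so check the workflow SDK reference for the exact calls:

# Invoke the pipeline and read node3's output.
# run() and output() are assumed helper names, not confirmed API.
invocation_id = graph.run(input=5)
print(graph.output(invocation_id, "node3"))  # 5 + 1 + 2 + 3 = 11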

Use Cases: Transforming a video into text by first extracting the audio, and then doing Automatic Speech Recognition (ASR) on the extracted audio.

Parallel Branching

Parallel branching generates more than one output for the same graph input by running branches in parallel.

@tensorlake_function()
def start_node(input: int) -> int:
    return input + 1

@tensorlake_function()
def add_two(input: int) -> int:
    return input + 2

@tensorlake_function()
def is_even(input: int) -> bool:
    return input % 2 == 0

# Both add_two and is_even receive start_node's output and run in parallel.
graph = Graph(name="parallel_branching", start_node=start_node)
graph.add_edge(start_node, add_two)
graph.add_edge(start_node, is_even)

Use Cases: Extracting embeddings and structured data from the same unstructured data.

Map

Tensorlake automatically parallelizes a function across multiple machines when an upstream function returns a sequence and the downstream function accepts only a single element of that sequence.

import requests

@tensorlake_function()
def fetch_urls() -> list[str]:
    return [
        'https://example.com/page1',
        'https://example.com/page2',
        'https://example.com/page3',
    ]

# scrape_page is called in parallel for every element returned by fetch_urls,
# across many machines in a cluster or many worker processes on a machine
@tensorlake_function()
def scrape_page(url: str) -> str:
    content = requests.get(url).text
    return content

Use Cases: Generating embeddings from every chunk of a document.

Map Reduce - Reducing/Accumulating from Sequences

Reduce functions in Tensorlake Serverless aggregate outputs from one or more functions that return sequences. They operate with the following characteristics:

  • Lazy Evaluation: Reduce functions are invoked incrementally as elements become available for aggregation. This allows for efficient processing of large datasets or streams of data.
  • Stateful Aggregation: The aggregated value is persisted between invocations. Each time the Reduce function is called, it receives the current aggregated state along with the new element to be processed.

from pydantic import BaseModel

@tensorlake_function()
def fetch_numbers() -> list[int]:
    return [1, 2, 3, 4, 5]

# Total holds the running aggregate and is passed back into each invocation.
class Total(BaseModel):
    value: int = 0

@tensorlake_function(accumulate=Total)
def accumulate_total(total: Total, number: int) -> Total:
    total.value += number
    return total
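
Wiring the producer and the reducer into a graph follows the same pattern as the earlier examples: each number returned by fetch_numbers is routed to accumulate_total, which folds it into the running Total.

# accumulate_total receives each element of fetch_numbers' output and
# aggregates it into the persisted Total value.
graph = Graph(name="sum_numbers", start_node=fetch_numbers)
graph.add_edge(fetch_numbers, accumulate_total)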

Use Cases: Aggregating a summary from hundreds of web pages.

Dynamic Routing

Functions can route data to different nodes based on custom logic, enabling dynamic branching.

from typing import List, Union

@tensorlake_function()
def handle_error(text: str):
    # Logic to handle error messages
    pass

@tensorlake_function()
def handle_normal(text: str):
    # Logic to process normal text
    pass

# The router function sends data to handle_error or handle_normal based on
# its own logic.
@tensorlake_router()
def analyze_text(text: str) -> List[Union[handle_error, handle_normal]]:
    if 'error' in text.lower():
        return [handle_error]
    else:
        return [handle_normal]

Use Cases: Processing outputs differently based on classification results.