Datasets allow you to apply the same document parsing configuration to multiple documents. This makes it easy to version schemas and OCR configurations, and to apply them to new documents in your workflows.
All the code examples on this page use the Tensorlake Python SDK. For other languages, please consult our API Reference.

Quick Start

The example can be run in a Google Colab notebook.
1. Create a dataset

from tensorlake.documentai.client import DocumentAI

doc_ai = DocumentAI(api_key="YOUR_TENSORLAKE_API_KEY")

# Create a dataset. The only required argument for a dataset is its name.
# A dataset name may only contain alphanumeric characters, hyphens or underscores.
#
# Not specifying parsing or extraction options creates a dataset that parses
# documents with our recommended defaults.
dataset = doc_ai.create_dataset(
    name="your_dataset_name"
)
2. Parse a file

# Use a publicly accessible URL or upload a file to Tensorlake and use the file ID.
file_url = "https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/real-estate-purchase-all-signed.pdf"

parse_id = doc_ai.parse_dataset_file(
    dataset=dataset,
    file=file_url
)
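
If the file isn't publicly accessible, you can upload it to Tensorlake first and pass the returned file ID instead of a URL. A minimal sketch, assuming the SDK's upload method takes a local path and returns a file ID (verify the exact signature in the API Reference):

# Sketch: upload a local file, then parse it by file ID.
# The upload call below is an assumption -- confirm the method name
# and signature against the API Reference for your SDK version.
file_id = doc_ai.upload(path="/path/to/real-estate-purchase-all-signed.pdf")

parse_id = doc_ai.parse_dataset_file(
    dataset=dataset,
    file=file_id
)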
3. Retrieve outputs

# Retrieve the outputs of the parsing job.
result = doc_ai.wait_for_completion(parse_id)

# The result contains the parsed document and any extracted data.
print(result)
The Python SDK's wait_for_completion method blocks until the parsing job is complete and returns the result.
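
Rather than printing the whole result, you will usually want to read specific fields from it. A hedged sketch of inspecting the result, where the attribute names (status, chunks, content) are assumptions and may differ between SDK versions; print the result object or consult the API Reference to see its exact shape:

# Sketch only: attribute names below are assumptions, not confirmed
# SDK fields -- inspect the result object to verify them.
print(result.status)

for chunk in result.chunks:
    print(chunk.content)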
With datasets, you can ingest as many files as you want, and the same parsing configuration is applied to all of them. You can also create a dataset with structured extraction options, which lets you extract structured data from related documents, as sketched below.
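
For example, here is a sketch of creating a dataset that also performs structured extraction. The StructuredExtractionOptions import path, its parameter names, and the example schema are assumptions based on typical SDK usage; confirm the exact types and fields in the API Reference:

# Hypothetical sketch -- the import path and option parameters below
# are assumptions; verify them against the API Reference.
from tensorlake.documentai import DocumentAI, StructuredExtractionOptions

doc_ai = DocumentAI(api_key="YOUR_TENSORLAKE_API_KEY")

# A JSON Schema describing the fields to extract from each document
# (example schema for illustration only).
purchase_schema = {
    "title": "PurchaseAgreement",
    "type": "object",
    "properties": {
        "buyer_name": {"type": "string"},
        "seller_name": {"type": "string"},
        "purchase_price": {"type": "number"},
    },
}

dataset = doc_ai.create_dataset(
    name="purchase_agreements",
    structured_extraction_options=[
        StructuredExtractionOptions(
            schema_name="PurchaseAgreement",
            json_schema=purchase_schema,
        )
    ],
)

Every file ingested into this dataset would then be parsed and have the schema's fields extracted, so the extracted data stays consistent across related documents.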