Overview
Workflow nodes are the building blocks of document processing pipelines. Each node has specific inputs, outputs, and configuration options.

Node Categories
Nodes are organized into three categories:

| Category | Purpose |
|---|---|
| Core | Workflow entry/exit points and utilities |
| Tools | Document processing operations |
| Logic | Conditional flows and data transformations |
Core Nodes
Document (Start)
The entry point for your workflow. Upload documents here for processing.

Supported file types:
- PDF documents
- Images (PNG, JPG, JPEG, GIF, WebP, TIFF, BMP)
- Microsoft Word (.docx, .doc)
- Microsoft Excel (.xlsx, .xls)
- Microsoft PowerPoint (.pptx, .ppt)
Notes:
- Drag multiple Document nodes for workflows that combine multiple inputs
- Each Document node can receive one file per workflow run
- Files are automatically converted to PDF when required by downstream nodes
Webhook (End)
Send workflow outputs to an external HTTP endpoint.
| Setting | Description |
|---|---|
| Webhook URL | The HTTPS endpoint to receive the data |
| Headers | Custom HTTP headers (e.g., authentication tokens) |
Payload fields:
- `completion`: The extraction result with parsed data
- `file_payload`: Document metadata including filename and URL
- `user`: User email address (if authenticated)
- `metadata`: Additional workflow metadata
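A minimal sketch of how a receiving service might unpack this payload. The field names follow the list above; the helper name and example values are illustrative, not part of the product API.

```python
import json

def handle_webhook(raw_body: str) -> dict:
    """Parse a workflow webhook payload and pull out the documented fields."""
    payload = json.loads(raw_body)
    return {
        "data": payload.get("completion"),  # parsed extraction result
        "filename": payload.get("file_payload", {}).get("filename"),
        "submitted_by": payload.get("user"),  # present only if authenticated
        "metadata": payload.get("metadata", {}),
    }

# Example body shaped like the documented fields (values are made up)
body = json.dumps({
    "completion": {"invoice_number": "INV-42"},
    "file_payload": {"filename": "invoice.pdf", "url": "https://example.com/f/1"},
    "user": "ops@example.com",
    "metadata": {"workflow_id": "wf_123"},
})
result = handle_webhook(body)
```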
Note
Add comments and documentation to your workflow.
Outputs: None

Notes don't affect workflow execution; they're purely for documentation. Use them to:
- Explain complex logic
- Document configuration choices
- Leave instructions for teammates
Tools Nodes
Extract
Extract structured data from documents using a JSON schema.
| Setting | Description | Default |
|---|---|---|
| Schema | JSON Schema defining fields to extract | {} |
| Model | AI model for extraction | retab-small |
| Temperature | Randomness in extraction (0-1) | 0 |
| Image Resolution | DPI for document rendering | 150 |
| Consensus | Number of parallel extractions (1-10) | 1 |
| Reasoning Effort | How much the model “thinks” | minimal |
| Additional Inputs | Named inputs for context (text, JSON, or files) | [] |
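For illustration, a Schema value for invoice extraction might look like the following. The field names are hypothetical, not a required convention; any valid JSON Schema object works.

```python
import json

# Hypothetical JSON Schema for the Extract node's Schema setting.
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "price": {"type": "number"},
                },
            },
        },
    },
    "required": ["invoice_number", "total"],
}

# The schema is supplied to the node as JSON text
schema_json = json.dumps(invoice_schema, indent=2)
```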
When Consensus (`n_consensus`) is greater than 1, the node runs multiple extractions in parallel and returns:
- `data`: The consensus result
- `likelihoods`: Confidence scores for each field (0-1)
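Downstream code can use the likelihoods to flag uncertain fields for review. A small sketch; the 0.8 threshold is an illustrative choice, not a product default.

```python
def low_confidence_fields(likelihoods: dict, threshold: float = 0.8) -> list:
    """Return field names whose consensus confidence falls below the threshold."""
    return sorted(f for f, p in likelihoods.items() if p < threshold)

# Example likelihoods as returned alongside the consensus result
likelihoods = {"invoice_number": 0.99, "total": 0.95, "vendor_name": 0.62}
flags = low_confidence_fields(likelihoods)
```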
Additional Inputs can take three forms:
- Text inputs: Instructions or context as plain text
- JSON inputs: Structured data from other nodes
- File inputs: Additional reference documents
Parse
Convert documents to structured text/markdown using AI vision.
Outputs: File (parsed document), Text (extracted content)

Configuration:
| Setting | Description | Default |
|---|---|---|
| Model | AI model for parsing | retab-small |
| Image Resolution | DPI for document rendering | 150 |
Common uses:
- Pre-process documents before extraction
- Convert scanned PDFs to searchable text
- Extract text from images
Split
Split multi-page documents into separate PDFs by category.
Outputs: Multiple File outputs (one per category)

Configuration:
| Setting | Description |
|---|---|
| Categories | List of document categories with names and descriptions |
| Model | AI model for classification |
Non-PDF documents are automatically converted to PDF before splitting.
Classifier
Classify documents into one of the predefined categories.
Outputs: Multiple File outputs (one per category; only the matched category receives the document)

Configuration:
| Setting | Description |
|---|---|
| Categories | List of document categories with names and descriptions |
| Model | AI model for classification |
Split vs. Classifier:

| Feature | Split | Classifier |
|---|---|---|
| Input | Multi-page document | Single document |
| Output | Multiple PDFs (pages grouped by category) | Same document routed to one category |
| Use Case | Separating bundled documents | Routing different document types |
Agent Edit
Fill PDF forms using AI with natural language instructions.
Outputs: File (filled document)

Configuration:
| Setting | Description |
|---|---|
| Instructions | Natural language instructions for filling the form |
Template Edit
Fill documents using a pre-defined template.
| Setting | Description |
|---|---|
| Template ID | The template to use for filling |
Logic Nodes
Human in the Loop (HIL)
Pause workflow execution for human review and approval.
Outputs: JSON (verified data)

Configuration: None (inherits schema from connected source)

How It Works:
1. Connect to an Extract or Functions node (schema is automatically inherited)
2. When the workflow runs, it pauses at the HIL node
3. A reviewer sees the extracted data alongside the source document
4. The reviewer can approve, modify, or reject the data
5. After approval, the verified data continues through the workflow

Common uses:
- Validate critical extractions before sending to downstream systems
- Quality control for high-value documents
- Compliance requirements that mandate human oversight
The HIL node automatically inherits the JSON schema from the connected upstream node. It also preserves any computed fields from Functions nodes.
Functions
Add computed fields using Excel-like formulas.
Outputs: JSON (with computed fields)

Configuration:
| Setting | Description |
|---|---|
| Functions | List of computed fields with target paths and expressions |
Supported functions:

| Function | Description | Example |
|---|---|---|
| SUM | Sum of values | `SUM(items.*.price)` |
| AVERAGE | Average of values | `AVERAGE(scores.*)` |
| COUNT | Count of items | `COUNT(line_items.*)` |
| MIN / MAX | Minimum/maximum value | `MAX(items.*.quantity)` |
| IF | Conditional | `IF(total > 1000, "Large", "Small")` |
| CONCAT | Join strings | `CONCAT(first_name, " ", last_name)` |
| ROUND | Round number | `ROUND(amount, 2)` |
Functions are evaluated in dependency order. You can reference other computed fields in expressions.
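A rough sketch of dependency-ordered evaluation, using plain Python callables in place of the formula language. The helper names, the wildcard-path handling, and the 20% markup are all illustrative, not the node's actual implementation.

```python
def sum_path(record: dict, path: str) -> float:
    """Resolve a wildcard path like 'items.*.price' and sum the values."""
    key, _, rest = path.partition(".*.")
    return sum(item[rest] for item in record[key])

def evaluate(record: dict, functions: list) -> dict:
    """Evaluate computed fields in order, letting later expressions read
    earlier results -- a simple stand-in for dependency ordering."""
    out = dict(record)
    for target, expr in functions:
        out[target] = expr(out)
    return out

record = {"items": [{"price": 10.0}, {"price": 32.5}]}
functions = [
    ("subtotal", lambda r: sum_path(r, "items.*.price")),  # like SUM(items.*.price)
    ("total", lambda r: round(r["subtotal"] * 1.2, 2)),    # references a computed field
]
result = evaluate(record, functions)
```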
If / Else
Route data to different branches based on conditions.
Outputs: Multiple JSON outputs (one per branch: If, Else If, Else)

Configuration:
| Setting | Description |
|---|---|
| Conditions | List of conditions to evaluate in order |
| Has Else | Whether to include a default else branch (default: true) |
Supported condition types:

| Type | Operators |
|---|---|
| Existence | exists, does_not_exist, is_empty, is_not_empty |
| Comparison | is_equal_to, is_not_equal_to |
| String | contains, starts_with, ends_with, matches_regex |
| Number | is_greater_than, is_less_than, is_greater_than_or_equal_to, is_less_than_or_equal_to |
| Boolean | is_true, is_false |
| Array | length_equal_to, length_greater_than, length_less_than |
| Date | is_after, is_before, is_after_or_equal_to, is_before_or_equal_to |
Evaluation rules:
- Conditions are evaluated in order (If, Else If 1, Else If 2, …)
- The first matching condition determines the output branch
- Data is routed to exactly one branch
- If no conditions match and `has_else` is true, data goes to the Else branch
- Downstream nodes on non-matched branches are skipped
Common uses:
- Route high-value invoices for additional approval
- Process documents differently based on vendor country
- Flag incomplete extractions for review
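The first-match routing described above can be sketched as follows; the predicates and sample data are illustrative.

```python
def route(data: dict, conditions: list, has_else: bool = True):
    """Return the index of the first matching condition, or 'else' when
    none match and has_else is true (None otherwise)."""
    for i, predicate in enumerate(conditions):
        if predicate(data):
            return i
    return "else" if has_else else None

invoice = {"total": 2500, "country": "FR"}
conditions = [
    lambda d: d["total"] > 10000,    # If: very large invoices
    lambda d: d["country"] != "US",  # Else If 1: non-US vendors
]
branch = route(invoice, conditions)  # first match wins; later conditions are not checked
```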
Merge PDF
Combine multiple PDF documents into a single file.
Outputs: File (merged PDF)

Configuration:
| Setting | Description |
|---|---|
| Inputs | Named input slots for PDFs to merge |
API Call
Make HTTP requests to external APIs and use the response in your workflow.
| Setting | Description | Default |
|---|---|---|
| URL | The API endpoint URL | Required |
| Method | HTTP method (GET, POST, PUT, PATCH, DELETE) | POST |
| Headers | Custom HTTP headers (e.g., authentication) | {} |
| Body Template | JSON template with placeholders for input data | {} |
Common uses:
- Validate extracted data against external systems
- Enrich documents with data from your CRM or ERP
- Trigger actions in third-party services based on extraction results
- Look up additional information using extracted identifiers
Use `{{path.to.field}}` syntax in the Body Template to reference values from the input JSON:
- `{{data.invoice_number}}` → Inserts the invoice_number field
- `{{data.vendor.name}}` → Inserts nested fields
- `{{data.line_items[0].amount}}` → Inserts array elements
API Call nodes execute synchronously. For long-running operations, consider using webhooks to trigger external workflows asynchronously.
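The placeholder syntax can be approximated with a small resolver. This is an illustrative sketch of the substitution behavior, not the node's actual implementation, and the payload data is made up.

```python
import re

def resolve(data, path: str):
    """Walk a dotted path with optional [i] indexes, e.g. data.line_items[0].amount."""
    value = data
    for name, index in re.findall(r"([A-Za-z_]\w*)|\[(\d+)\]", path):
        value = value[name] if name else value[int(index)]
    return value

def render(template: str, data: dict) -> str:
    """Replace {{path.to.field}} placeholders with values from the input JSON."""
    return re.sub(
        r"\{\{([^}]+)\}\}",
        lambda m: str(resolve(data, m.group(1).strip())),
        template,
    )

payload = {"data": {"invoice_number": "INV-7",
                    "vendor": {"name": "Acme"},
                    "line_items": [{"amount": 99.5}]}}
body = render('{"ref": "{{data.invoice_number}}", "vendor": "{{data.vendor.name}}"}', payload)
```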
Merge JSON
Combine multiple JSON objects into a single structured object.
Outputs: JSON (merged object)

Configuration:
| Setting | Description |
|---|---|
| Inputs | Named input slots for JSON objects to merge |
Common uses:
- Combining extractions from multiple documents
- Aggregating data from parallel processing branches
- Creating comprehensive outputs from Split or Classifier workflows
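One simple merge strategy is to nest each input under its slot name so keys from different branches cannot collide. Whether the node nests or flat-merges keys is not specified here, so treat this as an assumption for illustration.

```python
def merge_json(**slots) -> dict:
    """Combine named JSON inputs into one object, nested under slot names
    (assumed nesting behavior; keys from different inputs never collide)."""
    return dict(slots)

# e.g. combining results from two branches of a Split workflow
merged = merge_json(
    invoice={"invoice_number": "INV-7", "total": 120.0},
    contract={"party": "Acme", "term_months": 12},
)
```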
Node I/O Types
Understanding input/output types helps you connect nodes correctly:

| Type | Color | Description |
|---|---|---|
| File | Blue (📎) | Documents, PDFs, images |
| JSON | Purple ({ }) | Structured data objects |
| Text | Cyan (📄) | Plain text strings |
Compatibility Matrix
| Source → Target | File | JSON | Text |
|---|---|---|---|
| File | ✅ | ❌ | ❌ |
| JSON | ❌ | ✅ | ✅ |
| Text | ❌ | ❌ | ✅ |
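The matrix above can be encoded as a small lookup table for programmatic validation. This is an illustrative helper, not part of the product API.

```python
# Source type -> set of target types it may connect to, per the matrix above
COMPATIBLE = {
    "File": {"File"},
    "JSON": {"JSON", "Text"},
    "Text": {"Text"},
}

def can_connect(source_type: str, target_type: str) -> bool:
    """Check whether a source output may feed a target input."""
    return target_type in COMPATIBLE.get(source_type, set())
```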
Tips for Building Workflows
Start with the output in mind
Identify what data you need at the end, then work backwards to determine which nodes you need.
Add Functions for calculations
Instead of computing values after receiving webhook data, use Functions nodes to add totals, percentages, and derived fields directly in the workflow.
Use Classifier for routing
When handling different document types, use Classifier to route each document to the appropriate extraction schema before processing.
Split before specialized processing
When handling mixed document bundles (e.g., invoice + contract in one PDF), use Split first, then apply specific Extract schemas to each category.
Combine results with Merge JSON
When processing multiple documents or branches, use Merge JSON to combine all extracted data into a single structured output.
Use If/Else for conditional logic
Route data based on extracted values—for example, send high-value invoices to a different webhook or flag certain conditions for review.
Test edge cases
Run your workflow with documents that have missing fields, poor scan quality, or unusual formats to ensure robust handling.