Destination nodes persist or deliver the results of your graph. They define where data lands, how conflicts resolve, and what happens to existing target data on each run.

Write

Write is the standard relational or warehouse table writer. For cloud warehouses (Snowflake, BigQuery, Redshift, Synapse, Databricks, Fabric), bulk loading uses the staging configuration from the warehouse connection automatically. Configuration:
  • Connection and target object: Database, schema, table (or equivalent).
  • Write mode: See Write modes below.
  • Key columns (for upserts): Primary or business keys used for merge semantics.
  • Loading method (warehouse targets): Auto, Bulk, or Standard. Bulk uses cloud staging + COPY INTO; staging credentials come from the connection.
  • Column map: Source to target names and casts.
  • Pre/post SQL (when supported): Run maintenance statements cautiously—document and review.
Typical use: Load curated fact and dimension tables consumed by BI tools or downstream pipelines.
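To make the merge semantics concrete, here is a minimal sketch of how key columns and a column list could drive an upsert statement. The function name, dialect, and table names are illustrative assumptions, not the tool's actual SQL generation.

```python
# Sketch (illustrative): compose a warehouse MERGE (upsert) from a column
# list and the key columns used for match semantics.
def build_merge_sql(target, source, columns, keys):
    """Return a MERGE statement; `keys` drive the ON clause, the
    remaining columns are updated on match and inserted otherwise."""
    on = " AND ".join(f"t.{k} = s.{k}" for k in keys)
    updates = ", ".join(f"t.{c} = s.{c}" for c in columns if c not in keys)
    col_list = ", ".join(columns)
    val_list = ", ".join(f"s.{c}" for c in columns)
    return (
        f"MERGE INTO {target} t USING {source} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {updates} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({val_list})"
    )

sql = build_merge_sql(
    target="analytics.orders",
    source="staging.orders",
    columns=["order_id", "amount"],
    keys=["order_id"],
)
```

Note how the key columns appear only in the ON clause: updating them would change row identity, which is why the node asks for them separately from the column map.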

Cloud Destination

Cloud Destination writes to cloud warehouses and object stores (Amazon S3, Google Cloud Storage, Azure Blob) with bulk-optimized loading and format options. For cloud warehouse targets (Snowflake, BigQuery, Redshift, Synapse, Databricks, Fabric), staging for bulk loading is configured at the connection level — not on the node. Edit the warehouse connection to set up staging provider, bucket, and credentials. See Staging configuration. Configuration:
  • Connection: Select a warehouse or cloud storage connection.
  • Write mode: Append, truncate and load, or merge (upsert).
  • Loading method: Bulk (cloud staging + COPY INTO) or standard INSERT. Some connectors are bulk-only.
  • Compute scaling: Override warehouse or cluster size for heavy loads (when the platform supports it).
  • Path template (object store targets): Partition folders (year=2025/month=03/) for query engines.
  • File format (object store targets): Parquet, CSV, JSON Lines, Avro—match consumers.
  • Compression (object store targets): Snappy, ZSTD, gzip—balance CPU vs storage.
Typical use: Large-scale warehouse loads using COPY INTO, or data lake landing zones feeding Iceberg or external tables.
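Path templates expand date parts into Hive-style partition folders. A small sketch of that expansion, assuming hypothetical `{yyyy}`/`{mm}` placeholders (the tool's actual placeholder syntax may differ):

```python
from datetime import date

def render_path(template, run_date):
    """Expand date placeholders into partition folders such as
    year=2025/month=03/ so query engines can prune partitions."""
    values = {
        "yyyy": f"{run_date.year:04d}",
        "mm": f"{run_date.month:02d}",
        "dd": f"{run_date.day:02d}",
    }
    return template.format(**values)

path = render_path("sales/year={yyyy}/month={mm}/", date(2025, 3, 14))
# path == "sales/year=2025/month=03/"
```

The `key=value` folder convention is what lets engines like Athena, Spark, and Iceberg external tables skip partitions at query time.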

Iceberg Destination (professional+)

Iceberg Destination commits Apache Iceberg snapshots with ACID properties. Records are converted to Parquet and uploaded to the warehouse path (S3, GCS, or local storage) configured on the Iceberg connection. Configuration:
  • Connection: Select a saved Iceberg connection with catalog and warehouse path.
  • Table identifier: Namespace and table name in the catalog.
  • Write mode: Append, overwrite, or merge (upsert with key columns).
  • Batch size: Records per Parquet file (default 10,000).
  • Partition columns: Optional partition layout for query performance.
  • Schema evolution: Coordinate with Schema Evolution nodes when columns change.
Typical use: Lakehouse marts where readers expect snapshot isolation and time travel.
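The batch size setting controls how records are grouped into Parquet data files before the snapshot commit. A minimal sketch of that chunking, assuming the default of 10,000 records per file:

```python
# Sketch (illustrative): split a record stream into fixed-size batches;
# each batch would become one Parquet data file in the Iceberg commit.
def batches(records, batch_size=10_000):
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

chunks = list(batches(range(25_000)))
# three files: 10,000 + 10,000 + 5,000 records
```

Smaller batches mean more, smaller files (more commit metadata); larger batches mean fewer files but higher memory per batch.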

Webhook Action

Webhook Action POSTs (or otherwise invokes) an HTTP endpoint with a payload built upstream—often from a JSON Builder. Configuration:
  • URL, method, headers: Include auth headers via secret references.
  • Body: Template bound to row batches or single aggregate payloads.
  • Batching: Rows per request to respect API limits.
  • Retry / timeout: Align with partner SLAs.
Typical use: Push alerts or small transactional updates to SaaS APIs that do not warrant a full reverse ETL sync.
Webhooks can duplicate on retries. Use idempotency keys or destination-side deduplication when the API supports it.
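One way to make retries safe is to derive the idempotency key from the batch contents, so a retried POST carries the same key and the API can deduplicate it. A sketch under that assumption (the header name and batching helper are illustrative):

```python
import hashlib
import json

def batch_payloads(rows, batch_size):
    """Group rows per request and attach a deterministic idempotency key:
    the same batch always hashes to the same key, so a retry is a no-op
    on an API that deduplicates by Idempotency-Key."""
    for i in range(0, len(rows), batch_size):
        body = json.dumps(rows[i:i + batch_size], sort_keys=True)
        key = hashlib.sha256(body.encode()).hexdigest()[:16]
        yield {"headers": {"Idempotency-Key": key}, "body": body}

reqs = list(batch_payloads([{"id": 1}, {"id": 2}, {"id": 3}], batch_size=2))
# two requests: rows 1-2, then row 3, each with a stable key
```

Content-derived keys only help if the API actually deduplicates on them; otherwise fall back to destination-side deduplication as noted above.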

Write modes

Insert appends new rows only. The run fails on unique-key violations if the target enforces constraints—good for append-only facts with surrogate keys generated upstream. Truncate and load clears the target and reloads it in full on each run. Merge (upsert) updates rows that match the configured key columns and inserts the rest.
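A toy in-memory model of insert, truncate-and-load, and merge semantics (illustrative only—the mode names and function are assumptions, not the engine's implementation):

```python
def apply_write(target, incoming, mode, key):
    """Toy model of write modes over lists of dict rows keyed by `key`."""
    existing = {row[key]: dict(row) for row in target}
    if mode == "truncate_and_load":
        existing = {}  # wipe the target before loading
    for row in incoming:
        k = row[key]
        if mode == "insert" and k in existing:
            # mirrors a unique-key violation on a constrained target
            raise ValueError(f"unique-key violation on {key}={k}")
        if mode == "merge":
            existing.setdefault(k, {}).update(row)  # update match, insert rest
        else:
            existing[k] = dict(row)
    return list(existing.values())

current = [{"id": 1, "amt": 10}]
merged = apply_write(current, [{"id": 1, "amt": 12}, {"id": 2, "amt": 5}],
                     mode="merge", key="id")
# merged == [{"id": 1, "amt": 12}, {"id": 2, "amt": 5}]
```

The same incoming rows under `insert` would fail on `id=1`, and under `truncate_and_load` would leave only the two incoming rows.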

Pre-flight checklist

1. Confirm environment: Verify variables and environments so you do not write to production with dev credentials—or the reverse.
2. Align grain and keys: Upsert keys must match the grain you intend (per line item vs per order). Test with duplicate-source scenarios.
3. Dry run or preview: Preview upstream nodes; for relational targets, run against staging schemas first.
4. Observe first production cycles: Watch row counts, reject files, and warehouse credit usage after go-live.

Sources

Read back what you wrote for reconciliation jobs.

Data quality

Validate before and after loads.