The OneLake destination writes data to Microsoft Fabric Lakehouses and Warehouses. It authenticates via a Service Principal, uploads Parquet files to OneLake storage, and commits Delta Lake transaction logs so data is immediately queryable through Fabric’s SQL analytics endpoint.
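Service-principal authentication uses the standard Microsoft Entra ID client-credentials flow. The sketch below builds (but does not send) such a token request; the helper name is illustrative, and the `storage.azure.com` scope is an assumption based on OneLake's compatibility with Azure Storage APIs, not this product's confirmed internals.

```python
# Sketch of an Entra ID client-credentials token request. The helper name
# and the chosen scope are illustrative assumptions, not the product's code.
from urllib.parse import urlencode


def build_token_request(tenant_id: str, client_id: str, client_secret: str) -> tuple[str, str]:
    """Return (url, form_body) for an Entra ID client-credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # OneLake storage accepts tokens with the Azure Storage audience (assumption).
        "scope": "https://storage.azure.com/.default",
    })
    return url, body


url, body = build_token_request("my-tenant", "my-client", "s3cret")
```

POSTing `body` to `url` with `Content-Type: application/x-www-form-urlencoded` returns a JSON response containing the `access_token`.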

Architecture

Write path by item type

| Item Type | Write Path | SQL Endpoint |
|---|---|---|
| Lakehouse | OneLake REST API → Delta Lake (Parquet + transaction log) | Read-only SQL analytics endpoint (1–3 min discovery lag) |
| Warehouse | TDS protocol → SQL bulk insert via service principal SQL token | Read-write SQL endpoint (immediate) |
Fabric Lakehouses expose a read-only SQL analytics endpoint. DDL operations (CREATE TABLE, ALTER TABLE) are not supported via SQL. The OneLake destination automatically detects this and uses the Delta Lake write path instead.

Prerequisites

Before creating the connection, set up a Service Principal in Azure:

1. Register an application. In the Azure Portal, navigate to Microsoft Entra ID → App registrations → New registration. Name it (e.g., planasonix-onelake) and register.
2. Create a client secret. Under Certificates & secrets → New client secret, create a secret and copy the Value immediately — it is only shown once. Note the expiration date.
3. Copy identifiers. From the app registration Overview page, copy the Application (client) ID and Directory (tenant) ID.
4. Enable Fabric API access. In the Fabric Admin Portal → Tenant settings, enable “Service principals can use Fabric APIs” for your security group or the entire organization.
5. Grant workspace access. In your Fabric Workspace, click Manage access and add the service principal as a Contributor or Member. This grants write access to OneLake storage.
6. Find workspace and item IDs. Open your Lakehouse or Warehouse in the Fabric portal. The URL contains both IDs: https://app.fabric.microsoft.com/groups/{workspaceId}/lakehouses/{itemId}
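Extracting the two GUIDs from the portal URL can be sketched like this (the function name is hypothetical; the URL pattern follows the format shown above, with `warehouses` in place of `lakehouses` for a Warehouse):

```python
# Illustrative helper: pull the workspace and item GUIDs out of a Fabric
# portal URL. The function name is hypothetical, not part of any product API.
import re

_FABRIC_URL = re.compile(
    r"/groups/(?P<workspace>[0-9a-fA-F-]{36})"
    r"/(?P<kind>lakehouses|warehouses)/(?P<item>[0-9a-fA-F-]{36})"
)


def parse_fabric_url(url: str) -> dict:
    m = _FABRIC_URL.search(url)
    if not m:
        raise ValueError("URL does not match the Fabric lakehouse/warehouse pattern")
    return {
        "workspaceId": m.group("workspace"),
        "itemId": m.group("item"),
        "itemType": m.group("kind").rstrip("s"),  # "lakehouse" or "warehouse"
    }
```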

Connection fields

| Field | Required | Description |
|---|---|---|
| Tenant ID | Yes | Microsoft Entra ID (Azure AD) tenant GUID |
| Client ID | Yes | Application (client) ID of the service principal |
| Client Secret | Yes | Client secret value (not the secret ID) |
| Workspace ID | Yes | Fabric workspace GUID |
| Item ID | Yes | Lakehouse or Warehouse item GUID |
| Item Type | Yes | `lakehouse` or `warehouse` |
Client secrets expire. Set a calendar reminder to rotate them before expiration. An expired secret silently breaks all pipelines using the connection.
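A pre-flight check on these fields might look like the sketch below; the dict keys mirror the table above, but the exact configuration format is an illustrative assumption.

```python
# Hedged sketch: validate the connection fields before use. Key names are
# illustrative, chosen to mirror the documented field table.
REQUIRED = ("tenantId", "clientId", "clientSecret", "workspaceId", "itemId", "itemType")


def validate_connection(config: dict) -> None:
    """Raise ValueError if any required field is missing or itemType is invalid."""
    missing = [k for k in REQUIRED if not config.get(k)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    if config["itemType"] not in ("lakehouse", "warehouse"):
        raise ValueError("itemType must be 'lakehouse' or 'warehouse'")
```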

Write modes

Append: new Parquet files and Delta log versions are added without modifying existing data. Each pipeline run produces a new version number. Best for: incremental loads, event streams, and CDC pipelines.

Schema management

  • Column types are inferred automatically from the first batch
  • Types are cached and reused for all subsequent batches in the same run
  • When auto schema migration is enabled, new columns in later batches trigger schema expansion
  • Supported types: string, long, double, boolean, timestamp, date
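First-batch inference onto the six supported types can be sketched as follows; the mapping and both function names are assumptions for illustration, not the product's actual implementation.

```python
# Illustrative sketch of first-batch type inference onto the six supported
# types (string, long, double, boolean, timestamp, date). The mapping and
# function names are assumptions, not the product's code.
from datetime import date, datetime


def infer_type(value) -> str:
    if isinstance(value, bool):      # check before int: bool is an int subclass
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "double"
    if isinstance(value, datetime):  # check before date: datetime subclasses date
        return "timestamp"
    if isinstance(value, date):
        return "date"
    return "string"


def infer_schema(first_batch: list) -> dict:
    """Infer column types from the first batch; later batches reuse the result."""
    schema = {}
    for row in first_batch:
        for col, val in row.items():
            schema.setdefault(col, infer_type(val))
    return schema
```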

Performance considerations

| Factor | Lakehouse | Warehouse |
|---|---|---|
| Write latency | Low (direct OneLake upload) | Low (SQL bulk insert) |
| SQL visibility | 1–3 min after write (endpoint discovery) | Immediate |
| DDL support | Delta Lake only (no SQL DDL) | Full SQL DDL |
| Optimal batch size | 10,000–50,000 rows | 10,000–50,000 rows |
The 1–3 minute SQL endpoint discovery delay is a Microsoft Fabric limitation. Data is physically present in OneLake immediately after the write — it just takes time for the SQL analytics endpoint to index the new Delta log version.
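Splitting a large load into batches within the recommended 10,000–50,000 row range might look like this sketch (the helper name and default are illustrative):

```python
# Illustrative batching helper for the recommended 10,000–50,000 row range.
# The function name and default batch size are assumptions.
from typing import Iterator


def batches(rows: list, batch_size: int = 50_000) -> Iterator[list]:
    """Yield consecutive slices of rows, each at most batch_size long."""
    if not 10_000 <= batch_size <= 50_000:
        raise ValueError("batch_size should be in the 10,000-50,000 range")
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]
```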

Troubleshooting

| Symptom | Likely cause | Fix |
|---|---|---|
| "Authentication failed" | Invalid or expired credentials | Verify tenant ID, client ID, and client secret in the Azure portal |
| "Workspace not found" | Incorrect workspace ID | Copy the workspace GUID from the Fabric portal URL |
| "Table not visible in SQL" | Fabric discovery lag | Wait 1–3 minutes; data is already in OneLake storage |
| "SQL DDL blocked" | Lakehouse read-only endpoint | Expected behavior — the destination uses Delta Lake writes automatically |
| "Upload failed to OneLake" | Missing workspace permissions | Grant the service principal Contributor role on the workspace |
| "Item not found" | Wrong item ID or item type | Verify the item GUID and that itemType matches (lakehouse vs warehouse) |

OneLake vs Fabric connector

The OneLake destination is functionally identical to the existing Fabric connector for write operations. It exists as a separate connection type to support independent configuration, future OneLake-specific optimizations, and clearer naming for OneLake-focused data engineering workflows.
| Aspect | OneLake Destination | Fabric Connector |
|---|---|---|
| Authentication | Service principal | Service principal |
| Write path (Lakehouse) | OneLake REST → Delta Lake | OneLake REST → Delta Lake |
| Write path (Warehouse) | TDS → SQL bulk insert | TDS → SQL bulk insert |
| Configuration | Separate connection | Separate connection |
| Use when | Building new OneLake pipelines | Maintaining existing Fabric pipelines |

Related

- Data warehouses: overview of all warehouse and lakehouse connections.
- Delta Lake destination: cloud-agnostic Delta Lake writes to S3, GCS, or Azure Blob.
- Credentials: securely store and rotate service principal secrets.
- Destination nodes: write modes, pre-flight checks, and all destination node types.