Overview
Planasonix provides a fully managed, spec-compliant Apache Iceberg REST Catalog, so you can connect any query engine — Snowflake, DuckDB, Spark, Trino — directly to tables managed by your pipelines, without configuring external catalogs like AWS Glue or Hive Metastore.

How It Works
- Pipeline writes — When a Managed Lakehouse pipeline writes Iceberg data, tables are automatically registered in the hosted catalog
- Query engines connect — Point any Iceberg-compatible engine to https://api.planasonix.com/v1 with your API key
- Credential vending — On each loadTable request, the catalog provides temporary, read-only storage credentials so engines can access data files directly
Authentication
Direct API Key
Pass your flx_ API key as a Bearer token in the Authorization header.
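A minimal sketch in Python of what that header looks like on the wire (the key value is a placeholder, and the helper name is ours, not part of the product):

```python
from urllib.request import Request, urlopen

API_KEY = "flx_..."  # placeholder; substitute your real key

def catalog_request(method: str, path: str) -> Request:
    """Build a catalog request carrying the API key as a Bearer token."""
    return Request(
        f"https://api.planasonix.com{path}",
        method=method,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

# Example: fetch the catalog configuration (not executed here):
# with urlopen(catalog_request("GET", "/v1/config")) as resp:
#     print(resp.read())
```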
OAuth2 Token Exchange
For engines that require the Iceberg REST spec’s OAuth2 flow (Spark, Trino), exchange credentials at the /v1/oauth/tokens endpoint.

API Endpoints
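A sketch of that exchange, using the client_credentials request shape defined by the Iceberg REST spec. How Planasonix maps your flx_ key onto client credentials is an assumption here, and in practice Spark and Trino perform this exchange themselves once configured; the sketch only shows the wire format:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def token_exchange_request(client_id: str, client_secret: str) -> Request:
    """Build the OAuth2 client_credentials exchange from the Iceberg REST
    spec: POST /v1/oauth/tokens with a form-encoded body."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "catalog",
    }).encode()
    return Request(
        "https://api.planasonix.com/v1/oauth/tokens",
        data=body,
        method="POST",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# The JSON response carries the bearer token, e.g.:
# {"access_token": "...", "token_type": "bearer", "expires_in": 3600}
```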
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/config | Catalog configuration |
| POST | /v1/oauth/tokens | OAuth2 token exchange |
| GET | /v1/namespaces | List namespaces |
| POST | /v1/namespaces | Create namespace |
| GET | /v1/namespaces/{ns} | Get namespace |
| DELETE | /v1/namespaces/{ns} | Drop namespace |
| POST | /v1/namespaces/{ns}/properties | Update namespace properties |
| GET | /v1/namespaces/{ns}/tables | List tables |
| POST | /v1/namespaces/{ns}/tables | Create table |
| GET | /v1/namespaces/{ns}/tables/{table} | Load table (with credentials) |
| POST | /v1/namespaces/{ns}/tables/{table} | Commit table updates |
| DELETE | /v1/namespaces/{ns}/tables/{table} | Drop table |
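As a concrete example of the request shapes above, a create-namespace call posts a JSON body whose fields come from the Iceberg REST spec's CreateNamespaceRequest (the namespace name and property here are illustrative):

```python
import json
from urllib.request import Request

def create_namespace_request(api_key: str, name: str) -> Request:
    """POST /v1/namespaces with an Iceberg REST CreateNamespaceRequest body."""
    body = json.dumps({
        "namespace": [name],  # Iceberg namespaces are multi-part name lists
        "properties": {"owner": "pipelines"},  # optional metadata
    }).encode()
    return Request(
        "https://api.planasonix.com/v1/namespaces",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```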
Credential Vending
When you load a table, the response includes temporary storage credentials in the config field:
AWS S3
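For an S3-backed table, the vended section of config might look like the following. Values are placeholders; the property names follow Iceberg's S3FileIO conventions:

```json
{
  "s3.access-key-id": "ASIA...",
  "s3.secret-access-key": "...",
  "s3.session-token": "...",
  "s3.region": "us-east-1"
}
```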
Google Cloud Storage
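For a GCS-backed table, an illustrative config fragment, with property names following Iceberg's GCSFileIO conventions (the expiry is epoch milliseconds):

```json
{
  "gcs.oauth2.token": "ya29....",
  "gcs.oauth2.token-expires-at": "1735689600000"
}
```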
Azure Blob Storage
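For an Azure-backed table, an illustrative fragment. The property name pattern follows Iceberg's ADLSFileIO convention of adls.sas-token.&lt;account&gt;; the account name and token are placeholders:

```json
{
  "adls.sas-token.myaccount.dfs.core.windows.net": "sv=...&sig=..."
}
```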
Tier Limits
| Tier | Max Tables | API Requests/Day |
|---|---|---|
| Professional | 10 | 10,000 |
| Premium | 50 | 100,000 |
| Enterprise | Unlimited | Unlimited |
Setup
Enable Managed Lakehouse
Create a Managed Lakehouse connection with the Hosted (Planasonix) catalog type.
Run a Pipeline
Configure a pipeline with a Managed Lakehouse destination node and run it. Tables are auto-registered.
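Once the pipeline has run, one quick way to confirm that tables were registered is to list a namespace's tables through the catalog API (a sketch; the key and namespace name are placeholders):

```python
from urllib.request import Request, urlopen

API_KEY = "flx_..."   # placeholder; use your real key
NAMESPACE = "analytics"  # placeholder namespace

req = Request(
    f"https://api.planasonix.com/v1/namespaces/{NAMESPACE}/tables",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# with urlopen(req) as resp:
#     print(resp.read())  # JSON listing of the registered tables
```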