MotherDuck

This connector materializes Estuary collections into tables in a MotherDuck database.

The connector uses a supported object storage service as a temporary staging area for materializing to MotherDuck tables: you can choose S3 (or S3-compatible storage), GCS, or Azure Blob Storage. Files written to the staging bucket are used only for temporary data storage and retrieval during materialization.

ghcr.io/estuary/materialize-motherduck:dev provides the latest connector image. You can also follow the link in your browser to see past image versions.

Prerequisites

To use this connector, you'll need:

  • A MotherDuck account and Service Token.
  • An object storage bucket for staging temporary files. You can use S3, GCS, or Azure Blob Storage; Cloudflare R2 can also be used via its S3-compatible API. If you use S3, a bucket in us-east-1 is recommended for best performance and cost, since MotherDuck is currently hosted in that region.

To use an S3 bucket for staging temporary files:

  • See this guide for instructions on setting up a new S3 bucket.
  • Create an AWS root or IAM user with read and write access to the S3 bucket. For this user, you'll need the access key ID and secret access key. See the AWS blog for help finding these credentials.

To use a Cloudflare R2 bucket for staging temporary files:

  • Create a new bucket following this guide.
  • Create an API token with read and write permission for the bucket. Take note of the credentials for S3 clients: you'll need the Access Key ID and Secret Access Key.
  • Configure the connector to use S3 for the staging bucket, and set the endpoint to the S3 API URL from the R2 object storage overview page. You can set the region to auto, as this value is not used by R2. A configuration sketch follows this list.
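
As a sketch, the stagingBucket portion of the endpoint configuration for R2 might look like the following. The bucket name, credentials, and account ID are placeholders; the endpoint shown follows R2's standard S3 API URL format, but you should copy the exact URL from your R2 overview page.

    stagingBucket:
      stagingBucketType: S3
      bucketS3: my-r2-bucket
      awsAccessKeyId: <r2_access_key_id>
      awsSecretAccessKey: <r2_secret_access_key>
      region: auto
      endpoint: https://<account_id>.r2.cloudflarestorage.com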

To use a GCS bucket for staging temporary files:

  • See this guide for instructions on setting up a new GCS bucket.
  • Create a Google Cloud service account, generate a JSON key file for it, and grant it roles/storage.objectAdmin on the GCS bucket you want to use (see the sketch after this list for where these values go).
  • Create an HMAC Key for the service account. You'll need the Access ID and Secret for the key you create.
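
A minimal sketch of the corresponding stagingBucket configuration for GCS (bucket name and credential values are placeholders; field names are described in the Staging Bucket reference below):

    stagingBucket:
      stagingBucketType: GCS
      bucketGCS: my-gcs-bucket
      credentialsJSON: <service_account_key_json>
      gcsHMACAccessID: <hmac_access_id>
      gcsHMACSecret: <hmac_secret>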

To use Azure Blob Storage for staging temporary files:

  • Create or select a storage account.
  • Create a blob container.
  • Use the access keys listed under "Security + networking" for authentication, as shown in the sketch below.
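
A minimal sketch of the corresponding stagingBucket configuration (all values are placeholders):

    stagingBucket:
      stagingBucketType: Azure
      storageAccountName: <storage_account_name>
      storageAccountKey: <storage_account_key>
      containerName: <container_name>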

Configuration

Use the properties below to configure the MotherDuck materialization, which will direct one or more of your Estuary collections to your desired tables in the database.

Properties

Endpoint

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /token | MotherDuck Service Token | Service token for authenticating with MotherDuck. | string | Required |
| /database | Database | The database to materialize to. | string | Required |
| /schema | Database Schema | Database schema for bound collection tables (unless overridden within the binding resource configuration) as well as associated materialization metadata tables. | string | Required |
| /hardDelete | Hard Delete | If enabled, items deleted in the source will also be deleted from the destination. | boolean | false |
| /stagingBucket | Staging Bucket | The type of staging bucket to use. | Staging Bucket | Required |

Staging Bucket

For S3 or S3-compatible storage:

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /stagingBucketType | Staging Bucket Type | Use S3 to stage files in S3 or compatible storage. | string | Required: S3 |
| /bucketS3 | S3 Staging Bucket | Name of the S3 bucket to use for staging data loads. Must not contain dots (.). | string | Required |
| /awsAccessKeyId | Access Key ID | AWS Access Key ID for reading and writing data to the S3 staging bucket. | string | Required |
| /awsSecretAccessKey | Secret Access Key | AWS Secret Access Key for reading and writing data to the S3 staging bucket. | string | Required |
| /region | S3 Bucket Region | Region of the S3 staging bucket. | string | Required |
| /bucketPathS3 | Bucket Path | A prefix that will be used to store objects in S3. | string | |
| /endpoint | Custom Endpoint | Custom endpoint for S3-compatible storage. | string | |

For GCS:

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /stagingBucketType | Staging Bucket Type | Use GCS to stage files in GCS. | string | Required: GCS |
| /bucketGCS | GCS Staging Bucket | Name of the GCS bucket to use for staging data loads. | string | Required |
| /credentialsJSON | Service Account JSON | The JSON credentials of the service account to use for authorizing to the staging bucket. | string | Required |
| /gcsHMACAccessID | HMAC Access ID | HMAC access ID for the service account. | string | Required |
| /gcsHMACSecret | HMAC Secret | HMAC secret for the service account. | string | Required |
| /bucketPathGCS | Bucket Path | An optional prefix that will be used to store objects in the GCS staging bucket. | string | |

For Azure Blob Storage:

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /stagingBucketType | Staging Bucket Type | Use Azure to stage files in Azure Blob Storage. | string | Required: Azure |
| /storageAccountName | Storage Account Name | Name of the Azure storage account. | string | Required |
| /storageAccountKey | Storage Account Key | Storage account key for authentication. | string | Required |
| /containerName | Container Name | Name of the Azure Blob container to use for staging data loads. | string | Required |
| /bucketPathAzure | Bucket Path | An optional prefix that will be used to store objects in the staging container. | string | |

Bindings

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /table | Table | Name of the database table. | string | Required |
| /delta_updates | Delta Update | Whether updates to this table should be done via delta updates. | boolean | |
| /schema | Alternative Schema | Alternative schema for this table (optional). | string | |
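
For example, a hypothetical binding that overrides the endpoint-level schema might look like the following sketch (the schema name other_schema is a placeholder):

    bindings:
      - resource:
          table: ${TABLE_NAME}
          schema: other_schema
        source: ${PREFIX}/${COLLECTION_NAME}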

Sample

    materializations:
      ${PREFIX}/${mat_name}:
        endpoint:
          connector:
            image: "ghcr.io/estuary/materialize-motherduck:dev"
            config:
              token: <motherduck_service_token>
              database: my_db
              schema: main
              stagingBucket:
                stagingBucketType: S3
                bucketS3: my_bucket
                awsAccessKeyId: <access_key_id>
                awsSecretAccessKey: <secret_access_key>
                region: us-east-1
        bindings:
          - resource:
              table: ${TABLE_NAME}
            source: ${PREFIX}/${COLLECTION_NAME}

Sync Schedule

This connector supports configuring a schedule for sync frequency. You can read about how to configure this here.

Delta updates

This connector supports both standard (merge) and delta updates. The default is to use standard updates.

Enabling delta updates will prevent Estuary from querying for documents in your MotherDuck table, which can reduce latency and costs for large datasets. If you're certain that all events will have unique keys, enabling delta updates is a simple way to improve performance with no effect on the output. However, enabling delta updates is not suitable for all workflows, as the resulting table in MotherDuck won't be fully reduced.

You can enable delta updates on a per-binding basis:

    bindings:
      - resource:
          table: ${table_name}
          delta_updates: true
        source: ${PREFIX}/${COLLECTION_NAME}