
Databricks

This connector materializes Flow collections into tables in a Databricks SQL Warehouse. It allows both standard and delta updates.

The connector first uploads data changes to a Databricks Unity Catalog Volume. From there, it transactionally applies the changes to the Databricks tables.

ghcr.io/estuary/materialize-databricks:dev provides the latest connector image. You can also follow the link in your browser to see past image versions.

Prerequisites

To use this connector, you'll need:

  • A Databricks account that includes:
    • A Unity Catalog
    • A SQL Warehouse
    • A schema — a logical grouping of tables in a catalog
    • A user or service principal with a role assigned that grants the appropriate access levels to these resources.
  • At least one Flow collection
tip

If you haven't yet captured your data from its external source, start at the beginning of the guide to create a dataflow. You'll be referred back to this connector-specific documentation at the appropriate steps.

Setup

If you don't already have a SQL Warehouse in your account, you first need to create one. See the Databricks documentation on configuring a Databricks SQL Warehouse. After creating a SQL Warehouse, you can find the details necessary for connecting to it under the Connection Details tab.

In order to save on costs, we recommend that you set the Auto Stop parameter for your SQL warehouse to the minimum available. Estuary's Databricks connector automatically delays updates to the destination according to the configured Sync Schedule (see configuration details below), with a default delay value of 30 minutes.

The connector also requires an access token for your user or service principal. See the respective Databricks documentation on user personal access tokens and service principal access tokens for how to create one. Note that, as of this writing, only service principals in the "admins" group can use a token.

To create an access token for your service principal:

  1. Make sure the service principal is part of the admins group: go to Settings -> Identity and access -> Groups -> admins -> Members and add the service principal.
  2. Find its "Application ID" under Settings -> Identity and access -> Service Principals.
  3. Create an access token on behalf of the service principal using the databricks CLI:

         databricks token-management create-obo-token <application id of service principal>

  4. Copy the `token_value` field from the JSON that the command returns.
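
As a sketch, the `token_value` field can be pulled out of the command's JSON response with `jq`. The response body below is illustrative only (the real output also carries additional token metadata), and the token value shown is a placeholder:

```shell
# Illustrative shape of the JSON emitted by:
#   databricks token-management create-obo-token <application id>
RESPONSE='{"token_value": "dapi-example-token", "token_info": {"comment": "flow-connector"}}'

# Extract just the token_value field for use in the connector config.
echo "$RESPONSE" | jq -r '.token_value'
# → dapi-example-token
```

Store the extracted value securely; it is the `personal_access_token` used in the endpoint configuration below.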

Configuration

To use this connector, begin with data in one or more Flow collections. Use the below properties to configure a Databricks materialization, which will direct one or more of your Flow collections to new Databricks tables.

Properties

Endpoint

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| `/address` | Address | Host and port of the SQL warehouse (in the form of `host[:port]`). Port 443 is used as the default if no specific port is provided. | string | Required |
| `/http_path` | HTTP Path | HTTP path of your SQL warehouse | string | Required |
| `/catalog_name` | Catalog Name | Name of your Unity Catalog | string | Required |
| `/schema_name` | Schema Name | Default schema to materialize to | string | `default` |
| `/credentials` | Credentials | Authentication credentials | object | Required |
| `/credentials/auth_type` | Authentication Type | Authentication type, set to `PAT` for personal access token | string | Required |
| `/credentials/personal_access_token` | Personal Access Token | Personal access token | string | Required |

Bindings

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| `/table` | Table | Table name | string | Required |
| `/schema` | Alternative Schema | Alternative schema for this table | string | Required |
| `/delta_updates` | Delta Updates | Whether to use standard or delta updates | boolean | `false` |

Sample


    materializations:
      ${PREFIX}/${mat_name}:
        endpoint:
          connector:
            config:
              address: dbc-abcdefgh-a12b.cloud.databricks.com
              catalog_name: main
              http_path: /sql/1.0/warehouses/abcd123efgh4567
              schema_name: default
              credentials:
                auth_type: PAT
                personal_access_token: secret
            image: ghcr.io/estuary/materialize-databricks:dev
        # If you have multiple collections you need to materialize, add a binding for each one
        # to ensure complete data flow-through
        bindings:
          - resource:
              table: ${table_name}
              schema: default
            source: ${PREFIX}/${source_collection}

Sync Schedule

This connector supports configuring a schedule for sync frequency. You can read about how to configure this here.

Delta updates

This connector supports both standard (merge) and delta updates. The default is to use standard updates.

Enabling delta updates will prevent Flow from querying for documents in your Databricks table, which can reduce latency and costs for large datasets. If you're certain that all events will have unique keys, enabling delta updates is a simple way to improve performance with no effect on the output. However, enabling delta updates is not suitable for all workflows, as the resulting table in Databricks won't be fully reduced.
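
Conceptually, standard updates merge each staged document into the target table on its collection key, while delta updates simply append staged documents without looking up existing rows. A hedged sketch of the difference in Databricks SQL (the table and column names here are hypothetical, not the connector's actual generated statements):

```sql
-- Standard updates: roughly a MERGE keyed on the collection key.
MERGE INTO target t
USING staged s
  ON t.key = s.key
WHEN MATCHED THEN UPDATE SET t.value = s.value
WHEN NOT MATCHED THEN INSERT (key, value) VALUES (s.key, s.value);

-- Delta updates: roughly a plain append, with no lookup of existing rows.
INSERT INTO target (key, value)
SELECT key, value FROM staged;
```

The append-only form avoids the per-transaction read of existing rows, which is where the latency and cost savings come from, at the price of possible duplicate keys in the target table.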

You can enable delta updates on a per-binding basis:

    bindings:
      - resource:
          table: ${table_name}
          schema: default
          delta_updates: true
        source: ${PREFIX}/${source_collection}

Reserved words

Databricks has a list of reserved words that must be quoted to be used as identifiers. Flow automatically quotes fields that appear in this list. You can find the list in the Databricks documentation and in the table below.

caution

In Databricks, objects created with quoted identifiers must always be referenced exactly as created, including the quotes. Otherwise, SQL statements and queries can result in errors. See the Databricks docs.
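
In practice, a reserved word used as an identifier must be backtick-quoted in Databricks SQL, both when the object is created and in every later reference. A minimal sketch (the schema, table, and column names are hypothetical):

```sql
-- `left` and `default` are reserved words, so they must be backtick-quoted.
CREATE TABLE demo_schema.demo_table (
  `left`    STRING,
  `default` BOOLEAN
);

-- Later queries must quote the identifiers the same way they were created.
SELECT `left`, `default` FROM demo_schema.demo_table;
```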

Reserved words
ANTI
CROSS
DEFAULT
EXCEPT
FALSE
FULL
INNER
INTERSECT
JOIN
LATERAL
LEFT
MINUS
NATURAL
NULL
ON
RIGHT
SEMI
TRUE
USING