This Flow connector materializes delta updates of Flow collections into Firebolt tables.
To interface between Flow and Firebolt, the connector uses Firebolt's method for loading data: First, it stores data as JSON documents in an S3 bucket. It then references the S3 bucket to create a Firebolt external table, which acts as a SQL interface between the JSON documents and the destination table in Firebolt.
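Conceptually, the load path resembles the Firebolt SQL below. This is only an illustrative sketch of the mechanism, which the connector manages on its own; the table names, columns, S3 URL, credentials, and external-table options are placeholders and should be checked against Firebolt's documentation rather than read as statements the connector emits verbatim.

```sql
-- Illustrative sketch only: the connector creates and manages these objects itself.
-- The external table acts as a SQL interface over the staged JSON documents in S3.
CREATE EXTERNAL TABLE my_table_external (
  id   INT,
  name TEXT
)
URL = 's3://my-bucket/my-prefix/'
CREDENTIALS = (AWS_KEY_ID = '<key id>' AWS_SECRET_KEY = '<secret key>')
OBJECT_PATTERN = '*.json'
TYPE = (JSON);

-- Staged documents are then loaded into the destination table.
INSERT INTO my_table
SELECT id, name FROM my_table_external;
```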
It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/materialize-firebolt:dev provides the latest version of the connector as a Docker image. You can also visit that URL in your browser to see past image versions.
To use this connector, you'll need:
- A Firebolt database with at least one engine
  - The engine must be started before creating the materialization.
  - The engine must stay up throughout the lifetime of the materialization. To ensure this, select Edit Engine on your engine and, in the engine settings, set Auto-stop engine after to Always On.
- An S3 bucket where JSON documents will be stored prior to loading
- At least one Flow collection
If you haven't yet captured your data from its external source, start at the beginning of the guide to create a dataflow. You'll be referred back to this connector-specific documentation at the appropriate steps.
For non-public buckets, you'll need to configure access in AWS IAM.
1. Follow the Firebolt documentation to set up an IAM policy and role, and add it to the external table definition.
2. Create a new IAM user. During setup:
   - Choose Programmatic (access key) access. This ensures that an access key ID and secret access key are generated. You'll use these to configure the connector.
   - On the Permissions page, choose Attach existing policies directly and attach the policy you created in step 1.
3. After creating the user, download the IAM credentials file. Take note of the access key ID and secret access key and use them to configure the connector. See the Amazon docs if you lose your credentials.
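The IAM policy for step 1 typically grants read access to the staging bucket. A minimal sketch follows, assuming a bucket named my-bucket; check Firebolt's documentation for the exact set of actions it requires.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```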
To use this connector, begin with data in one or more Flow collections. Use the below properties to configure a Firebolt materialization, which will direct Flow data to your desired Firebolt tables via an external table.
| Property | Description | Type | Required |
|---|---|---|---|
| AWS key ID | AWS access key ID for accessing the S3 bucket. | string | |
| AWS region | AWS region the bucket is in. | string | |
| AWS secret access key | AWS secret key for accessing the S3 bucket. | string | |
| Database | Name of the Firebolt database. | string | Required |
| Engine URL | Engine URL of the Firebolt database. | string | Required |
| S3 bucket | Name of the S3 bucket where the intermediate files for the external table will be stored. | string | Required |
| S3 prefix | A prefix for files stored in the bucket. | string | |
| Table | Name of the Firebolt table to store materialized results in. The external table will be named after this table with an added suffix. | string | Required |
| Table type | Type of the Firebolt table to store materialized results in. See the Firebolt docs for more details. | string | Required |
A few notes for configuring the materialization: for public S3 buckets, only the bucket name is required; the image path ghcr.io/estuary/materialize-firebolt:dev always points to the latest version of the connector; and if you have multiple collections you need to materialize, add a binding for each one to ensure complete data flow-through.
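Assembled from the properties above, a materialization specification for this connector might look like the following sketch. The prefixes, names, and values are illustrative placeholders, and the exact configuration field names should be confirmed against the connector's generated configuration schema.

```yaml
materializations:
  ${PREFIX}/${MATERIALIZATION_NAME}:
    endpoint:
      connector:
        # Path to the latest version of the connector, provided as a Docker image
        image: ghcr.io/estuary/materialize-firebolt:dev
        config:
          database: my-database
          engine_url: <your engine URL>
          aws_key_id: <your access key ID>
          aws_secret_key: <your secret access key>
          aws_region: us-east-1
          # For public S3 buckets, only the bucket name is required
          s3_bucket: my-bucket
    # If you have multiple collections you need to materialize, add a binding for each one
    # to ensure complete data flow-through
    bindings:
      - resource:
          table: my_table
          table_type: fact
        source: ${PREFIX}/${COLLECTION_NAME}
```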
Firebolt is an insert-only system; it doesn't support updates or deletes. Because of this, the Firebolt connector operates only in delta updates mode. Firebolt stores all deltas — the unmerged collection documents — directly.
In some cases, this will affect how materialized views look in Firebolt compared to other systems that use standard updates.
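To make the delta-updates behavior concrete, the following Python sketch (using hypothetical document shapes, not connector code) shows how several unmerged deltas for one key accumulate as separate rows, and how a query-time reduction can recover the latest value per key:

```python
# Hypothetical delta documents as they would land in Firebolt:
# each Flow collection document becomes its own stored row.
deltas = [
    {"id": 1, "count": 5, "_ts": 100},
    {"id": 1, "count": 7, "_ts": 200},
    {"id": 2, "count": 3, "_ts": 150},
]

def latest_by_key(rows):
    """Reduce unmerged delta rows to the most recent document per key,
    the kind of GROUP BY reduction you would otherwise run in SQL."""
    best = {}
    for row in rows:
        if row["id"] not in best or row["_ts"] > best[row["id"]]["_ts"]:
            best[row["id"]] = row
    return best

merged = latest_by_key(deltas)
print(merged[1]["count"])  # 7: the most recent delta wins
print(len(deltas))         # 3: all deltas remain stored; nothing is merged away
```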
Firebolt has a list of reserved words, which may not be used in identifiers. Field names in your collections that match a reserved word will automatically be quoted as part of a Firebolt materialization.