Google BigQuery

This Flow connector materializes Flow collections into tables within a Google BigQuery dataset. It allows both standard and delta updates.

The connector uses your Google Cloud service account to materialize to BigQuery tables by way of files in a Google Cloud Storage (GCS) bucket. The files in the bucket act as a temporary staging area for data storage and retrieval.

It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/materialize-bigquery:dev provides the latest version of the connector as a Docker image. You can also browse that image's registry to see past versions.

Prerequisites

To use this connector, you'll need:

  * A Google Cloud project with the BigQuery and Google Cloud Storage APIs enabled
  * A BigQuery dataset to materialize into
  * A Google Cloud Storage bucket in the same region as the BigQuery dataset, to use as a staging area
  * A Google Cloud service account with a JSON key file, configured as described in Setup below

tip

If you haven't yet captured your data from its external source, start at the beginning of the guide to create a dataflow. You'll be referred back to this connector-specific documentation at the appropriate steps.

Setup

To configure your service account, complete the following steps.

  1. Log into the Google Cloud console and create a service account. During account creation:

    1. Grant the user access to the project.
    2. Grant the user the roles/bigquery.dataEditor, roles/bigquery.jobUser, and roles/storage.objectAdmin roles.
    3. Click Done.
  2. Select the new service account from the list of service accounts. On the Keys tab, click Add key and create a new JSON key.

    The key is automatically downloaded. You'll use it to configure the connector.

Configuration

To use this connector, begin with data in one or more Flow collections. Use the properties below to configure a BigQuery materialization, which will direct one or more of your Flow collections to your desired tables within a BigQuery dataset.

A BigQuery dataset is the top-level container within a project, and comprises multiple tables. You can think of a dataset as somewhat analogous to a schema in a relational database. For a complete introduction to resource organization in BigQuery, see the BigQuery docs.

Properties

Endpoint

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| /project_id | Project ID | The project ID for the Google Cloud Storage bucket and BigQuery dataset. | string | Required |
| /credentials_json | Service Account JSON | The JSON credentials of the service account to use for authorization. | string | Required |
| /region | Region | The GCS region. | string | Required |
| /dataset | Dataset | BigQuery dataset for bound collection tables (unless overridden within the binding resource configuration) as well as associated materialization metadata tables. | string | Required |
| /bucket | Bucket | Name of the GCS bucket. | string | Required |
| /bucket_path | Bucket path | Base path within the GCS bucket. Also called "Folder" in the GCS console. | string | |
| /billing_project_id | Billing project ID | The project ID to which these operations are billed in BigQuery. Typically, you want this to be the same as project_id (the default). | string | Same as project_id |
| /advanced | Advanced Options | Options for advanced users. You should not typically need to modify these. | object | |
| /advanced/updateDelay | Update Delay | Potentially reduce compute time by increasing the delay between updates. Defaults to 30 minutes if unset. | string | |

To learn more about project billing, see the BigQuery docs.
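For example, if BigQuery jobs should be billed to a project other than the one that owns the dataset and bucket, the relevant endpoint config fields might look like the following sketch (the project names here are hypothetical placeholders):

config:
  project_id: data-project              # owns the GCS bucket and BigQuery dataset
  billing_project_id: billing-project   # BigQuery jobs are billed to this project instead
  dataset: materialized-data
  region: US
  bucket: our-gcs-bucket
  credentials_json: <secret>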

Bindings

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| /table | Table | Table in the BigQuery dataset to store materialized result in. | string | Required |
| /dataset | Table | Alternative dataset for this table. Must be located in the region set in the endpoint configuration. | string | |
| /delta_updates | Delta updates | Whether to use standard or delta updates. | boolean | false |
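For instance, a binding that writes its table to a dataset other than the endpoint default might look like the following sketch (all names are placeholders):

bindings:
  - resource:
      table: ${table_name}
      dataset: other_dataset   # must be in the same region as the endpoint configuration
    source: ${PREFIX}/${source_collection}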

Sample

materializations:
  ${PREFIX}/${mat_name}:
    endpoint:
      connector:
        config:
          project_id: our-bigquery-project
          dataset: materialized-data
          region: US
          bucket: our-gcs-bucket
          bucket_path: bucket-path/
          credentials_json: <secret>
        image: ghcr.io/estuary/materialize-bigquery:dev
    bindings:
      - resource:
          table: ${table_name}
        source: ${PREFIX}/${source_collection}

Update Delay

The Update Delay parameter in Estuary materializations offers a flexible approach to data ingestion scheduling. This advanced option allows users to control when the materialization or capture tasks pull in new data by specifying a delay period. By incorporating an update delay into your workflow, you can effectively manage and optimize your active warehouse time, leading to potentially lower costs and more efficient data processing.

An update delay is configured in the advanced settings of a materialization's configuration. It represents the amount of time the system will wait before it begins materializing the latest data. The delay is specified as a duration (for example, 30 minutes or 2 hours) and can be adjusted according to the needs of your data pipeline.

For example, if an update delay is set to 2 hours, the materialization task will pause for 2 hours before processing the latest available data. This delay ensures that data is not pulled in immediately after it becomes available, allowing for batching and other optimizations that can reduce warehouse load and processing time.

To configure an update delay, navigate to the Advanced Options section of the materialization's configuration and select a value from the drop-down. The default value for the update delay in Estuary materializations is 30 minutes.
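In the YAML spec, the same setting lives under the endpoint's advanced block. The snippet below is a minimal sketch, assuming the spec layout from the Sample above; the 2h value is only an example of the duration-string format:

endpoint:
  connector:
    config:
      # ...required settings (project_id, dataset, region, bucket, credentials_json)...
      advanced:
        updateDelay: 2h   # example duration; defaults to 30m if unset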

Delta updates

This connector supports both standard (merge) and delta updates. The default is to use standard updates.

Enabling delta updates will prevent Flow from querying for documents in your BigQuery table, which can reduce latency and costs for large datasets. If you're certain that all events will have unique keys, enabling delta updates is a simple way to improve performance with no effect on the output. However, enabling delta updates is not suitable for all workflows, as the resulting table in BigQuery won't be fully reduced.

You can enable delta updates on a per-binding basis:

bindings:
  - resource:
      table: ${table_name}
      delta_updates: true
    source: ${PREFIX}/${source_collection}

Table Partitioning

Tables are automatically created with clustering based on the Flow collection primary keys. Tables are not created with any other partitioning, but pre-existing partitioned tables can be materialized to.

It isn't possible to alter the partitioning of an existing table, but you can convert an existing table to one with partitioning by creating a new table and copying the existing table's data into it. This works for tables the connector is materializing to, as long as the materialization task is temporarily disabled while you do the conversion.

To convert an existing materialized table to one with different partitioning:

  1. Pause your materialization by disabling it from the UI or editing the task specification with the CLI (see the sketch after this list).
  2. Create a new table with the partitioning you want from the data in the existing table:

     create table <your_dataset>.<your_schema>.<your_table>_copy
     partition by <your_partitioning>
     as select * from <your_dataset>.<your_schema>.<your_table>;

  3. Verify that the data in <your_table>_copy looks good, then drop the original table:

     drop table <your_dataset>.<your_schema>.<your_table>;

  4. "Rename" <your_table>_copy back to <your_table> by copying it as a new table with the original name of <your_table>:

     create table <your_dataset>.<your_schema>.<your_table> copy <your_dataset>.<your_schema>.<your_table>_copy;

  5. Verify that the data in <your_table> looks good, then drop the <your_table>_copy table:

     drop table <your_dataset>.<your_schema>.<your_table>_copy;

  6. Re-enable the materialization to continue materializing data to the now-partitioned table.
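For step 1, if you manage the task specification with the CLI, one way to pause the materialization is to disable its shards in the spec. This is a sketch, assuming the standard shards setting in Flow task specs; set it back to false (or remove it) and re-publish to resume:

materializations:
  ${PREFIX}/${mat_name}:
    shards:
      disable: true   # pauses the materialization while you convert the table
    # ...endpoint and bindings unchanged...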