
CSV Files in Amazon S3

This connector materializes delta updates of Estuary collections into files in an S3 bucket per the CSV format described in RFC-4180. The CSV files are compressed using Gzip compression and written to S3 as .csv.gz files.

The delta updates are batched within Estuary, converted to CSV files, and then pushed to the S3 bucket at a time interval that you set. Files are limited to a configurable maximum size. Each materialized Estuary collection will produce many separate files.

Prerequisites

To use this connector, you'll need:

  • An S3 bucket to write files to. See this guide for instructions on setting up a new S3 bucket.

  • An AWS root user, IAM user, or IAM role with the s3:PutObject permission for the S3 bucket.

    When authenticating as a user, you'll need the access key ID and secret access key. See the AWS blog for help finding these credentials. When authenticating with a role, you'll need the region and the role ARN. Follow the steps in the AWS IAM guide to set up the role.
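For reference, a minimal IAM policy granting the required permission might look like the following sketch (the bucket name is a placeholder; your policy may scope the resource differently):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```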

Configuration

Use the properties below to configure the materialization, which will direct one or more of your Estuary collections to your bucket.

Properties

Endpoint

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /bucket | Bucket | Bucket to store materialized objects. | string | Required |
| /region | Region | Region of the bucket to write to. | string | Required |
| /uploadInterval | Upload Interval | Frequency at which files will be uploaded. | string | 5m |
| /credentials/auth_type | Auth Type | Method to use for authentication. Must be set to either AWSAccessKey or AWSIAM. | string | AWSAccessKey |
| /credentials/awsAccessKeyId | AWS Access Key ID | Access Key ID for writing data to the bucket. Required when using the AWSAccessKey auth type. | string | |
| /credentials/awsSecretAccessKey | AWS Secret Access Key | Secret Access Key for writing data to the bucket. Required when using the AWSAccessKey auth type. | string | |
| /credentials/aws_role_arn | AWS Role ARN | Role to assume for writing data to the bucket. Required when using the AWSIAM auth type. | string | |
| /credentials/aws_region | Region | Region of the bucket to write to. Required when using the AWSIAM auth type. | string | |
| /prefix | Prefix | Optional prefix that will be used to store objects. May contain date patterns. | string | |
| /fileSizeLimit | File Size Limit | Approximate maximum size of materialized files in bytes. Defaults to 10737418240 (10 GiB) if blank. | integer | |
| /endpoint | Custom S3 Endpoint | The S3 endpoint URI to connect to. Use if you're materializing to a compatible API that isn't provided by AWS. Should normally be left blank. | string | |
| /csvConfig/skipHeaders | Skip Headers | Do not write headers to files. | integer | |

Bindings

| Property | Title | Description | Type | Required/Default |
| --- | --- | --- | --- | --- |
| /path | Path | The path that objects will be materialized to. | string | Required |

Sample

```yaml
materializations:
  ${PREFIX}/${mat_name}:
    endpoint:
      connector:
        image: "ghcr.io/estuary/materialize-s3-csv:v1"
        config:
          bucket: bucket
          awsAccessKeyId: <access_key_id>
          awsSecretAccessKey: <secret_access_key>
          region: us-east-2
          uploadInterval: 5m
    bindings:
      - resource:
          path: ${COLLECTION_NAME}
        source: ${PREFIX}/${COLLECTION_NAME}
```

File Names

Materialized files are named with monotonically increasing integer values, padded with leading 0's so they remain lexically sortable. For example, a set of files may be materialized like this for a given collection:

bucket/prefix/path/v0000000000/00000000000000000000.csv.gz
bucket/prefix/path/v0000000000/00000000000000000001.csv.gz
bucket/prefix/path/v0000000000/00000000000000000002.csv.gz

Here the values for bucket and prefix are from your endpoint configuration, and path is specific to the binding configuration. v0000000000 represents the binding's current backfill counter; it is incremented if the binding is backfilled again, and the file sequence then restarts from 0.
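As a sketch (not the connector's actual code), the zero-padded, lexically sortable object keys in the example above can be formed like this — the helper name and the 10- and 20-digit widths are inferred from the example listing:

```python
def object_key(bucket: str, prefix: str, path: str, backfill: int, seq: int) -> str:
    """Build an object key with a v-prefixed 10-digit backfill counter
    and a 20-digit zero-padded file sequence number."""
    return f"{bucket}/{prefix}/{path}/v{backfill:010d}/{seq:020d}.csv.gz"

# Reproduces the listing above for backfill counter 0, sequence 0..2:
for n in range(3):
    print(object_key("bucket", "prefix", "path", 0, n))
```

Because the padding widths are fixed, sorting keys as plain strings yields the same order as sorting by sequence number.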

Date Patterns

The prefix option of the endpoint configuration can contain patterns that are expanded using the time of the start of the transaction.

The transaction time is always represented as UTC.

The following patterns are available:

  • %Y: The year as a 4-digit number. (2025, 2026)
  • %m: The month as a 2-digit number. (01, 02, ..., 12)
  • %d: The day as a 2-digit number. (01, 02, ..., 31)
  • %H: The hour as a 2-digit number on a 24-hour clock. (00, 01, ..., 23)
  • %M: The minute as a 2-digit number. (00, 01, ..., 59)
  • %S: The second as a 2-digit number. (00, 01, ..., 59)
  • %Z: The timezone abbreviation. (will always be UTC)
  • %z: The timezone as an HHMM offset. (will always be +0000)
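These codes behave like the standard strftime directives, so Python can illustrate how a prefix expands. In this sketch the prefix value and transaction start time are hypothetical; the connector performs the expansion internally:

```python
from datetime import datetime, timezone

# Hypothetical transaction start time, in UTC as the connector requires.
txn_start = datetime(2025, 3, 7, 14, 5, 9, tzinfo=timezone.utc)

# Hypothetical /prefix value containing date patterns.
prefix = "data/%Y/%m/%d/%H"

expanded = txn_start.strftime(prefix)
print(expanded)  # → data/2025/03/07/14
```

A prefix like this groups files into one folder per hour, which keeps listings manageable and makes time-ranged reads cheap.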

Multipart Upload Cleanup

This materialization uses S3 multipart uploads to ensure exactly-once semantics. If the materialization shard is interrupted while processing a transaction and the transaction must be re-started, there may be incomplete multipart uploads left behind.

As a best practice, add a lifecycle rule to the bucket that automatically removes incomplete multipart uploads. A delay of 1 day or more before removal is sufficient for the current transaction to complete.
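As a sketch, such a rule can be expressed in S3's lifecycle configuration JSON (the rule ID is arbitrary; apply the configuration with aws s3api put-bucket-lifecycle-configuration or in the S3 console):

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 1
      }
    }
  ]
}
```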