PostgreSQL

This connector materializes Flow collections into tables in a PostgreSQL database.

It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/materialize-postgres:dev provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions.

Prerequisites

To use this connector, you'll need:

  • A Postgres database to which to materialize, and user credentials. The connector will create new tables in the database per your specification. Tables created manually in advance are not supported.
  • At least one Flow collection

Setup

To meet these requirements, follow the steps for your hosting type.

In addition to standard PostgreSQL, this connector supports cloud-based PostgreSQL instances. Google Cloud Platform, Amazon Web Services, and Microsoft Azure are currently supported. You may use other cloud platforms, but Estuary doesn't guarantee performance.

To connect securely, you can either enable direct access for Flow's IP or use an SSH tunnel.

Configuration Tip

To configure the connector, you must specify the database address in the format host:port. (You can also supply the host only; the connector defaults to port 5432, which is correct in many cases. See the example following this list.) You can find the host and port in the following locations in each platform's console:

  • Amazon RDS and Amazon Aurora: host as Endpoint; port as Port.
  • Google Cloud SQL: host as Private IP Address; port is always 5432. You may need to configure private IP on your database.
  • Azure Database: host as Server Name; port under Connection Strings (usually 5432).
  • TimescaleDB: host as Host; port as Port.
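
For example, either of the following address values would work in the endpoint configuration; these are alternative values, not a single document, and the hostname is a placeholder:

address: db.example.com:5432   # explicit host and port
address: db.example.com        # host only; port defaults to 5432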

Azure Database for PostgreSQL

  • Connect Directly With Azure Database For PostgreSQL: Create a new firewall rule that grants access to the IP address 34.121.207.128.

  • Connect With SSH Tunneling: Follow the instructions for setting up an SSH connection to Azure Database.
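
As a rough sketch only: with SSH tunneling, the endpoint configuration gains a network tunnel section alongside the standard connection properties. The bastion address and private key below are placeholders, and the field names shown are assumptions to be confirmed against the SSH tunneling documentation linked above.

config:
  address: private-db-host:5432      # reachable from the bastion, not from Flow directly
  database: flow
  user: flow
  password: secret
  networkTunnel:
    sshForwarding:
      sshEndpoint: ssh://tunnel-user@bastion.example.com:22
      privateKey: ${SSH_PRIVATE_KEY}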

Configuration

To use this connector, begin with data in one or more Flow collections. Use the properties below to configure a Postgres materialization, which will direct one or more of your Flow collections to your desired tables, or views, in the database.

Properties

Endpoint

Property | Title | Description | Type | Required/Default
/database | Database | Name of the logical database to materialize to. | string |
/address | Address | Host and port of the database. If only the host is specified, port will default to 5432. | string | Required
/password | Password | Password for the specified database user. | string | Required
/schema | Database Schema | Database schema to use for materialized tables (unless overridden within the binding resource configuration) as well as associated materialization metadata tables. | string | "public"
/user | User | Database user to connect as. | string | Required
/advanced | Advanced Options | Options for advanced users. You should not typically need to modify these. | object |
/advanced/sslmode | SSL Mode | Overrides SSL connection behavior by setting the 'sslmode' parameter. | string |

Bindings

Property | Title | Description | Type | Required/Default
/additional_table_create_sql | Additional Table Create SQL | Additional SQL statement(s) to be run in the same transaction that creates the table. | string |
/delta_updates | Delta Update | Should updates to this table be done via delta updates. | boolean | false
/schema | Alternative Schema | Alternative schema for this table (optional). Overrides schema set in endpoint configuration. | string |
/table | Table | Table name to materialize to. It will be created by the connector, unless the connector has previously created it. | string | Required
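
For illustration, a binding can combine these fields to override the endpoint schema and run an extra statement when its table is created; the table and schema names below are placeholders:

bindings:
  - resource:
      table: my_table
      schema: analytics
      additional_table_create_sql: "COMMENT ON TABLE analytics.my_table IS 'Materialized by Flow';"
    source: ${PREFIX}/${COLLECTION_NAME}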

SSL Mode

Certain managed PostgreSQL implementations may require you to explicitly set the SSL Mode to connect with Flow. One example is Neon, which requires the setting verify-full. Check your managed PostgreSQL's documentation for details if you encounter errors related to the SSL mode configuration.
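
For example, to require full certificate verification, you would set the advanced SSL Mode option in the endpoint configuration; the connection values shown here are placeholders:

config:
  address: db.example.com:5432
  database: flow
  user: flow
  password: secret
  advanced:
    sslmode: verify-full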

Sample

materializations:
  ${PREFIX}/${mat_name}:
    endpoint:
      connector:
        image: ghcr.io/estuary/materialize-postgres:dev
        config:
          database: flow
          address: localhost:5432
          password: flow
          user: flow
    bindings:
      - resource:
          table: ${TABLE_NAME}
        source: ${PREFIX}/${COLLECTION_NAME}

Delta updates

This connector supports both standard (merge) and delta updates.

The default is to use standard updates.
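
To opt in for a particular binding, set delta_updates to true in its resource configuration, as in this fragment of the sample spec above:

bindings:
  - resource:
      table: ${TABLE_NAME}
      delta_updates: true
    source: ${PREFIX}/${COLLECTION_NAME}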

Reserved words

PostgreSQL has a list of reserved words that must be quoted in order to be used as an identifier. Flow considers all the reserved words that are marked as "reserved" in any of the columns in the official PostgreSQL documentation.

These reserved words are listed below. Flow automatically quotes fields that are in this list.

abs, current_transform_group_for_type, indicator, order, sqlexception
absolute, current_user, initial, out, sqlstate
acos, cursor, initially, outer, sqlwarning
action, cycle, inner, output, sqrt
add, datalink, inout, over, start
all, date, input, overlaps, static
allocate, day, insensitive, overlay, stddev_pop
alter, deallocate, insert, pad, stddev_samp
analyse, dec, int, parameter, submultiset
analyze, decfloat, integer, partial, subset
and, decimal, intersect, partition, substring
any, declare, intersection, pattern, substring_regex
are, default, interval, per, succeeds
array, deferrable, into, percent, sum
array_agg, deferred, is, percentile_cont, symmetric
array_max_cardinality, define, isnull, percentile_disc, system
as, delete, isolation, percent_rank, system_time
asc, dense_rank, join, period, system_user
asensitive, deref, json_array, permute, table
asin, desc, json_arrayagg, placing, tablesample
assertion, describe, json_exists, portion, tan
asymmetric, descriptor, json_object, position, tanh
at, deterministic, json_objectagg, position_regex, temporary
atan, diagnostics, json_query, power, then
atomic, disconnect, json_table, precedes, time
authorization, distinct, json_table_primitive, precision, timestamp
avg, dlnewcopy, json_value, prepare, timezone_hour
begin, dlpreviouscopy, key, preserve, timezone_minute
begin_frame, dlurlcomplete, lag, primary, to
begin_partition, dlurlcompleteonly, language, prior, trailing
between, dlurlcompletewrite, large, privileges, transaction
bigint, dlurlpath, last, procedure, translate
binary, dlurlpathonly, last_value, ptf, translate_regex
bit, dlurlpathwrite, lateral, public, translation
bit_length, dlurlscheme, lead, range, treat
blob, dlurlserver, leading, rank, trigger
boolean, dlvalue, left, read, trim
both, do, level, reads, trim_array
by, domain, like, real, true
call, double, like_regex, recursive, truncate
called, drop, limit, ref, uescape
cardinality, dynamic, listagg, references, union
cascade, each, ln, referencing, unique
cascaded, element, local, regr_avgx, unknown
case, else, localtime, regr_avgy, unmatched
cast, empty, localtimestamp, regr_count, unnest
catalog, end, log, regr_intercept, update
ceil, end-exec, log10, regr_r2, upper
ceiling, end_frame, lower, regr_slope, usage
char, end_partition, match, regr_sxx, user
character, equals, matches, regr_sxy, using
character_length, escape, match_number, regr_syy, value
char_length, every, match_recognize, relative, values
check, except, max, release, value_of
classifier, exception, measures, restrict, varbinary
clob, exec, member, result, varchar
close, execute, merge, return, variadic
coalesce, exists, method, returning, varying
collate, exp, min, returns, var_pop
collation, external, minute, revoke, var_samp
collect, extract, mod, right, verbose
column, false, modifies, rollback, versioning
commit, fetch, module, rollup, view
concurrently, filter, month, row, when
condition, first, multiset, rows, whenever
connect, first_value, names, row_number, where
connection, float, national, running, width_bucket
constraint, floor, natural, savepoint, window
constraints, for, nchar, schema, with
contains, foreign, nclob, scope, within
continue, found, new, scroll, without
convert, frame_row, next, search, work
copy, free, no, second, write
corr, freeze, none, section, xml
corresponding, from, normalize, seek, xmlagg
cos, full, not, select, xmlattributes
cosh, function, notnull, sensitive, xmlbinary
count, fusion, nth_value, session, xmlcast
covar_pop, get, ntile, session_user, xmlcomment
covar_samp, global, null, set, xmlconcat
create, go, nullif, show, xmldocument
cross, goto, numeric, similar, xmlelement
cube, grant, occurrences_regex, sin, xmlexists
cume_dist, group, octet_length, sinh, xmlforest
current, grouping, of, size, xmliterate
current_catalog, groups, offset, skip, xmlnamespaces
current_date, having, old, smallint, xmlparse
current_default_transform_group, hold, omit, some, xmlpi
current_path, hour, on, space, xmlquery
current_role, identity, one, specific, xmlserialize
current_row, ilike, only, specifictype, xmltable
current_schema, immediate, open, sql, xmltext
current_time, import, option, sqlcode, xmlvalidate
current_timestamp, in, or, sqlerror, year

Changelog

The changelog includes a list of breaking changes made to this connector. Backwards-compatible changes are not listed.

Proceed with caution when editing materializations created with previous versions of this connector; editing always upgrades your materialization to the latest connector version.

V4: 2022-11-30

This version includes breaking changes to materialized table columns. These provide more consistent column names and types, but tables created from previous versions of the connector may not be compatible with this version.

  • Capitalization is now preserved when fields in Flow are converted to Postgres column names. Previously, fields containing uppercase letters were converted to lowercase.

  • Field names and values of types date, duration, ipv4, ipv6, macaddr, macaddr8, and time are now converted into their corresponding Postgres types. Previously, only date-time was converted, and all others were materialized as strings.