
Google Cloud SQL for SQL Server

This connector materializes Flow collections into tables in a Microsoft SQL Server database.

It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/materialize-sqlserver:dev provides the latest version of the connector as a Docker image. You can also browse that image registry to see past versions.

Prerequisites

To use this connector, you'll need:

  • A SQL Server database to materialize to, and user credentials.
    • SQL Server 2017 and later are supported.
    • The connector will create new tables in the database per your specification, so the user credentials must include permission to create tables.
  • At least one Flow collection

Setup Google Cloud SQL for SQL Server

  1. Allow connections between the database and Estuary Flow. There are two ways to do this: by granting direct access to Flow's IP addresses or by creating an SSH tunnel.

     • To allow direct access, enable public IP on your database instance and grant access to Flow's IP addresses.

     • To allow secure connections via SSH tunneling, configure an SSH server that can reach the database and connect through it.

  2. In your SQL client, connect to your instance as the default sqlserver user and issue the following commands.

USE <database>;
-- Create user and password for use with the connector.
CREATE LOGIN flow_materialize WITH PASSWORD = 'secret';
CREATE USER flow_materialize FOR LOGIN flow_materialize;
-- Grant control on the database to flow_materialize
GRANT CONTROL ON DATABASE::<database> TO flow_materialize;

  3. In the Cloud Console, note the instance's host under Public IP Address. Its port will always be 1433. Together, you'll use the host:port as the address property when you configure the connector.
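As an optional sanity check, you can verify the grant while connected as the new login. HAS_PERMS_BY_NAME is a built-in SQL Server function; it returns 1 when the named permission is held on the securable:

```sql
-- Run while connected as flow_materialize.
-- Returns 1 if CONTROL is granted on the current database, 0 otherwise.
SELECT HAS_PERMS_BY_NAME(DB_NAME(), 'DATABASE', 'CONTROL') AS has_control;
```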


Configuration

To use this connector, begin with data in one or more Flow collections. Use the properties below to configure a SQL Server materialization, which will direct one or more of your Flow collections to your desired tables in the database.

Properties

Endpoint

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| `/database` | Database | Name of the logical database to materialize to. | string | Required |
| `/address` | Address | Host and port of the database. If only the host is specified, the port will default to 1433. | string | Required |
| `/password` | Password | Password for the specified database user. | string | Required |
| `/user` | User | Database user to connect as. | string | Required |
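Because the port defaults to 1433, the address property may be a bare hostname. A minimal config sketch, with an illustrative hostname and credentials:

```yaml
config:
  database: flow
  # No port specified, so 1433 is assumed.
  address: sqlserver.example.com
  user: flow_materialize
  password: secret
```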

Bindings

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| `/table` | Table | Table name to materialize to. It will be created by the connector, unless the connector has previously created it. | string | Required |
| `/delta_updates` | Delta Update | Whether updates to this table should be done via delta updates. | boolean | `false` |

Sample

materializations:
  ${PREFIX}/${mat_name}:
    endpoint:
      connector:
        image: ghcr.io/estuary/materialize-sqlserver:dev
        config:
          database: flow
          address: localhost:1433
          password: flow
          user: flow
    bindings:
      - resource:
          table: ${TABLE_NAME}
        source: ${PREFIX}/${COLLECTION_NAME}

Delta updates

This connector supports both standard (merge) and delta updates. The default is to use standard updates.
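Delta updates are enabled per binding through the delta_updates resource field described in the Bindings table. A sketch using the same illustrative placeholders as the sample above:

```yaml
bindings:
  - resource:
      table: ${TABLE_NAME}
      # Apply documents as delta updates rather than standard merges.
      delta_updates: true
    source: ${PREFIX}/${COLLECTION_NAME}
```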

Reserved words

SQL Server has a list of reserved words that must be quoted in order to be used as identifiers. Flow considers all the reserved words in the official SQL Server documentation.

Flow automatically quotes any field name that matches one of these reserved words, so you do not need to rename colliding collection fields.

Changelog

The changelog includes a list of breaking changes made to this connector. Backwards-compatible changes are not listed.

Proceed with caution when editing materializations created with previous versions of this connector; editing always upgrades your materialization to the latest connector version.

V1: 2023-09-01

  • First version