Flow documentation
Estuary Flow is a data movement and transformation platform for the whole data team.
With Flow, you build, test, and evolve streaming pipelines (called data flows in the Flow ecosystem) that continuously move data across all of your systems with optional in-flight transformations.
You work with Flow through its intuitive web application or its command-line interface. Business users and analysts can configure data flows to connect disparate systems in minutes, and engineers can then refine those data flows, troubleshoot, and configure complex transformations in their preferred environment.
Quick start
Want to get up and running ASAP?
Use the web app to sign up. (You can start for free.)
See the get started documentation.
Wondering if Flow is right for you?
If you're not sure whether Flow is the right solution for your data integration needs, you can read about its technical benefits and how it compares to similar systems you may already know.
Looking to understand the concepts behind Flow at a deeper level?
We recommend starting with a tutorial or guide to get acquainted with basic Flow concepts in action. After that, read the Concepts section to go deeper.
Real-time data and Flow
Flow synchronizes your systems – SaaS, databases, streaming, and more – around the same datasets, which it stores in the cloud and updates in milliseconds. It combines the easy cross-system integration of an ELT tool with a flexible streaming backbone, all while remaining aware of your data's complete history.
A few examples of what you can do with Flow:
- Perform change data capture from MySQL tables into PostgreSQL or a cloud analytics warehouse (see the sketch after this list)
- Fetch, transform, and load logs from content delivery networks (CDNs) into Elasticsearch or BigQuery
- Instrument real-time analytics over your business events, accessible from tools you already use, like PostgreSQL or even Google Sheets
- Capture and organize data from your SaaS vendors (like HubSpot or Facebook) into a Parquet data lake
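To make the first example concrete, a data flow can be expressed as a declarative catalog specification in YAML. The sketch below is illustrative only: the `source-mysql` and `materialize-postgres` connector images are real Estuary connectors, but the names, endpoint configuration files, and resource fields are assumed placeholders that you would adapt from the connector documentation.

```yaml
# flow.yaml (illustrative sketch): capture MySQL tables and materialize them to PostgreSQL.
captures:
  acmeCo/source-mysql:                # hypothetical capture name
    endpoint:
      connector:
        image: ghcr.io/estuary/source-mysql:dev
        config: mysql-config.yaml     # placeholder for your endpoint configuration
    bindings:
      - resource: { namespace: inventory, stream: orders }   # assumed source table location
        target: acmeCo/orders

collections:
  acmeCo/orders:                      # collection that backs the captured table
    schema: orders.schema.yaml        # JSON schema describing each document
    key: [/id]                        # documents are keyed on the table's primary key

materializations:
  acmeCo/materialize-postgres:        # hypothetical materialization name
    endpoint:
      connector:
        image: ghcr.io/estuary/materialize-postgres:dev
        config: postgres-config.yaml  # placeholder for your endpoint configuration
    bindings:
      - source: acmeCo/orders
        resource: { table: orders }   # destination table in PostgreSQL
```

In practice, you could author and publish a specification like this with the flowctl command-line interface, or build the equivalent data flow entirely in the web app.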
Under the hood, Flow comprises cloud-native streaming infrastructure, a powerful runtime for data processing, and an open-source ecosystem of pluggable connectors for integrating your existing data systems.