Estuary Flow

Connect your data, where you want it, in milliseconds

Capture data in real time from databases using CDC, or from SaaS APIs via streaming. Transform that data into views using stateful streaming SQL, and materialize the views where you want them. Run both streaming and batch data pipelines in one place for fresh data and cost savings.
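The capture → transform → materialize shape described above can be sketched as a toy in Python. This is a conceptual illustration only, not Estuary Flow's actual API (Flow pipelines are declared in catalog specs); every name below is made up for the sketch:

```python
# Conceptual sketch of a capture -> stateful transform -> materialize pipeline.
# NOT Estuary Flow's API; it only illustrates the dataflow shape.

def capture():
    """Pretend CDC feed: a stream of change events from a source table."""
    yield {"op": "insert", "user": "ada", "amount": 5}
    yield {"op": "insert", "user": "bob", "amount": 3}
    yield {"op": "insert", "user": "ada", "amount": 2}

def transform(events):
    """Stateful streaming aggregation: a running sum per user, no windowing."""
    totals = {}
    for ev in events:
        totals[ev["user"]] = totals.get(ev["user"], 0) + ev["amount"]
        yield ev["user"], totals[ev["user"]]

def materialize(view, updates):
    """Update a keyed view in place, rather than appending rows to a sink."""
    for user, total in updates:
        view[user] = total

view = {}
materialize(view, transform(capture()))
print(view)  # {'ada': 7, 'bob': 3}
```

The point of the shape: because the transform keeps state across the whole stream, the materialized view always reflects every event seen so far, with no window boundaries to configure.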

Top comment

The Pitch: 🚰 Estuary Flow is a real-time data platform for building reliable, no-code pipelines that don't require scheduling, and that support batch/streaming data and materialized views with millisecond latency. 📒 A free account with up to 10 GB/mo of data movement is available at www.estuary.dev

The Details: Estuary Flow is built on top of an open-source streaming framework (Gazette) that combines millisecond-latency pub/sub with native persistence to cloud storage. Basically, it's a real-time data lake. Beyond syncing data continuously between sources and destinations without configuring, say, Kafka, a UI built on top of this streaming framework brings a few specific benefits:

🗄️ Managed CDC. Simple, efficient change data capture from databases with minimal impact and latency. Seamless backfills – even over the very large tables that Debezium tends to choke on – and real-time streaming out of the box.

🧑‍💻 Streaming SQL transformations. A quite powerful transformation product that allows streaming SQL transforms without requiring windowing. Join historical data with real-time data without having to think about it. Flow also offers schema validation and first-class support for testing transformations, with continuous integration whenever you make changes.

💽 Collections instead of buffers. When a data source is captured – like Postgres CDC, Kinesis, or streaming Salesforce – the data is stored in your cloud storage as regular JSON files. Later, you can materialize all of that juicy history, plus ongoing updates, into a variety of different data systems. Create identical, up-to-date views of your data in multiple places, now or in the future.

📈 Continuous views instead of sinks. Materialized views update in place. Go beyond append-only sinks to build real-time fact tables that update with your captured data – even in systems not designed for it, like PostgreSQL or Google Sheets. Make any database a "real-time" database.

✅ Completely incremental, exactly-once. Flow uses a continuous processing model that propagates transactional data changes through your processing graph. This keeps costs low while maintaining exact copies across different systems.

⏩ Turnkey batch and streaming connectors. Both real-time and historical data are supported through one tool, with pre-built connectors to ~50 endpoints. For example, you can capture from the batch Stripe API, join it with data from Kafka, and push it all to Google Sheets – all without building a custom integration. Or, if you want, plug in your own connector through Flow's open protocol.
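One way to reason about the "completely incremental, exactly-once" point is transactional checkpointing: derived state and the offset of the last event it reflects commit together, so replaying events after a failure neither drops nor double-counts updates. A minimal Python sketch of that idea (hypothetical mechanics for illustration, not Gazette or Flow internals):

```python
# Hypothetical sketch of exactly-once processing via a committed checkpoint.
# State and the last-applied offset advance together, so replays are no-ops.

class CheckpointedView:
    def __init__(self):
        self.state = {}    # derived view: running sum per key
        self.offset = -1   # offset of the last event folded into state

    def apply(self, offset, key, delta):
        if offset <= self.offset:
            return  # already applied; a replayed event changes nothing
        self.state[key] = self.state.get(key, 0) + delta
        self.offset = offset  # in a real system, state + offset commit atomically

log = [(0, "a", 5), (1, "b", 3), (2, "a", 2)]
view = CheckpointedView()
for off, key, delta in log:
    view.apply(off, key, delta)

# Simulate a crash/restart that replays the whole log: the view is unchanged.
for off, key, delta in log:
    view.apply(off, key, delta)
print(view.state)  # {'a': 7, 'b': 3}
```

Because the checkpoint filters duplicates, the downstream copy stays exact even when the upstream delivers events more than once.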