Ingest and Transform Overview

This page introduces how kdb Insights Enterprise ingests and transforms both streaming and batch data using its Stream Processor, either through the Web UI (Import Wizard and Pipelines menus) or via the pipeline APIs and CLI.

kdb Insights Enterprise supports two ingestion modes, streaming and batch, both powered by the same building blocks of the kdb Insights Stream Processor.

kdb Insights Enterprise supports three methods for building pipelines to ingest, transform, and analyze data. Click on the links below to learn more:

  1. Stream Processor APIs: Build and package pipelines programmatically, then submit them using the kdb Insights CLI.

  2. Import Wizard: Create pipelines step-by-step through the Web Interface's Import Wizard.

  3. Pipelines menu: Define and manage pipelines directly from the Pipelines section of the Web Interface.
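To give a flavor of the programmatic approach, a minimal pipeline spec written with the Stream Processor's q API might look like the sketch below. The topic name and the JSON-decoding logic are illustrative assumptions, not taken from this page:

```q
// Minimal pipeline spec sketch (Stream Processor q API).
// Assumes a hypothetical Kafka topic named "trades" with JSON messages.
.qsp.run
  .qsp.read.fromKafka["trades"]     / subscribe to the topic
  .qsp.map[{.j.k "c"$x}]            / decode each message from JSON
  .qsp.write.toConsole[]            / print records for inspection
```

A spec like this is packaged and deployed with the kdb Insights CLI, as described in the Stream Processor APIs documentation linked above.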

Examples

Below are sample pipelines implemented via the pipeline API:

  • S3 - Import data from an S3 bucket

  • Kafka - Import data from a Kafka stream

  • PostgreSQL - Query data from PostgreSQL
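As a rough sketch of the first example above, a batch ingest from S3 expressed with the q API could take the following shape. The bucket path and table schema are placeholders; credentials and region are assumed to come from the deployment environment:

```q
// Sketch of a batch pipeline reading a CSV file from an S3 bucket.
// Bucket path and schema are hypothetical placeholders.
.qsp.run
  .qsp.read.fromAmazonS3["s3://examplebucket/data.csv"]
  .qsp.decode.csv[([] time:`timestamp$(); sym:`$(); price:`float$())]
  .qsp.write.toConsole[]
```

In a real deployment the console writer would typically be replaced by a writer targeting a kdb Insights database or stream.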

See the guided walkthroughs for examples of pipelines created using the Import Wizard and Pipelines menu in the Web Interface.