This page describes Dittofeed’s system architecture. It is useful for understanding how Dittofeed works, especially if you intend to self-host or contribute to Dittofeed.

Summary

Dittofeed comprises a collection of components that work together to ingest user events at scale and send personalized messages to those users. This is accomplished through the following steps.

Steps

  1. Clients send User Events to the API from their applications.

  2. These events are then written to ClickHouse. Optionally, they’re buffered in Kafka first.

  3. Dittofeed’s worker polls ClickHouse on a short interval for the following purposes:

    a. Process new events to update Segment and User Property assignments within ClickHouse.

    b. Retrieve Segment updates and signal subscribed Journeys.

    c. Retrieve Segment and User Property assignments from ClickHouse and persist them back into Postgres for rapid, row-wise lookups.

  4. The worker processes User Journeys, which progress in response to signals and the passage of time.

  5. Journeys issue requests to Messaging Services to send messages to end users on one or more Channels.
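Step 1 above can be sketched as a client constructing a request to the events API. The endpoint path, header name, and payload fields here are illustrative assumptions, not the canonical API contract:

```typescript
// Sketch of step 1: a client building a "track" event submission.
// The URL, header, and field names are assumptions for illustration.
interface TrackEvent {
  messageId: string; // unique id, lets the API deduplicate retried submissions
  userId: string;    // the end user this event belongs to
  event: string;     // event name, e.g. "BUTTON_CLICKED"
  timestamp: string; // ISO 8601
  properties?: Record<string, unknown>;
}

function buildTrackRequest(writeKey: string, event: TrackEvent) {
  return {
    url: "https://dittofeed.example.com/api/public/apps/track", // hypothetical host
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: writeKey, // workspace write key
    },
    body: JSON.stringify(event),
  };
}

// A client would then submit this with fetch(req.url, req) or similar.
const req = buildTrackRequest("Basic test-write-key", {
  messageId: "msg-0001",
  userId: "user-0001",
  event: "BUTTON_CLICKED",
  timestamp: "2024-01-01T00:00:00.000Z",
});
```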
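Step 3 can be sketched as a single polling iteration. In the real system the segment logic runs as ClickHouse SQL and the results are persisted to Postgres; this toy version uses in-memory stand-ins for both stores and evaluates one simple "performed event X" segment:

```typescript
// Sketch of step 3: one worker polling iteration, with in-memory stand-ins
// for the ClickHouse event log and the Postgres assignment table.
interface UserEvent {
  userId: string;
  event: string;
  processedAt: number; // stand-in for the event's ingestion time
}

function pollOnce(
  events: UserEvent[],                // stand-in for the ClickHouse event log
  cursor: number,                     // high-water mark from the previous poll
  segmentEvent: string,               // segment: users who performed this event
  assignments: Map<string, boolean>,  // stand-in for the Postgres assignment table
): { newCursor: number; changed: string[] } {
  // 3a. Process only events that arrived since the last poll.
  const fresh = events.filter((e) => e.processedAt > cursor);
  const changed: string[] = [];
  for (const e of fresh) {
    if (e.event === segmentEvent && assignments.get(e.userId) !== true) {
      // 3c. Persist the updated assignment for rapid row-wise lookup.
      assignments.set(e.userId, true);
      // 3b. Record the change so subscribed journeys can be signaled.
      changed.push(e.userId);
    }
  }
  const newCursor = fresh.reduce((m, e) => Math.max(m, e.processedAt), cursor);
  return { newCursor, changed };
}
```

Because each poll advances a cursor, re-running the loop after a crash reprocesses at most the events since the last committed high-water mark.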
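Steps 4 and 5 can be sketched as a journey state machine that advances on segment signals and the passage of time, then hands off to a messaging service. The node types and the channel interface here are illustrative, not Dittofeed's actual journey schema:

```typescript
// Sketch of steps 4-5: a journey as a small state machine. Node shapes and
// the send callback are assumptions for illustration.
type JourneyNode =
  | { type: "segmentEntry"; segment: string; next: string } // advances on a segment signal
  | { type: "wait"; seconds: number; next: string }         // advances with the passage of time
  | { type: "message"; channel: "email" | "sms"; template: string; next: string }
  | { type: "exit" };

// Advance the journey by one node, returning the new current node id.
function advance(
  graph: Record<string, JourneyNode>,
  current: string,
  now: number,       // current time, ms
  enteredAt: number, // when the user reached the current node, ms
  send: (channel: string, template: string) => void, // messaging service hand-off
): string {
  const node = graph[current];
  switch (node.type) {
    case "segmentEntry":
      return node.next; // a segment signal admitted the user
    case "wait":
      return now - enteredAt >= node.seconds * 1000 ? node.next : current;
    case "message":
      send(node.channel, node.template); // step 5: request to a messaging service
      return node.next;
    case "exit":
      return current; // terminal node
  }
}

// A hypothetical journey: enter on a segment, wait a minute, send an email.
const nodes: Record<string, JourneyNode> = {
  entry: { type: "segmentEntry", segment: "purchased", next: "wait1" },
  wait1: { type: "wait", seconds: 60, next: "welcome" },
  welcome: { type: "message", channel: "email", template: "welcome-email", next: "done" },
  done: { type: "exit" },
};
```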

Diagram