
Kafka prerequisites

EvidentSource has a built-in Kafka-compatible broker, Milena, backed by SlateDB. Every committed transaction is automatically available as a Kafka record on a per-database topic. For most deployments, that’s all you need.

This page is only relevant if you want to bridge EvidentSource’s event stream to an external Kafka cluster (MSK, Confluent Cloud, a self-hosted Kafka) — typically for integration with existing stream-processing infrastructure. Common reasons:

  • You have Kafka-consuming systems that can’t easily connect to Milena’s endpoint
  • You use Kafka Streams, ksqlDB, or Kafka Connect and want those to operate on EvidentSource events
  • You have organizational standards that require a specific Kafka vendor

EvidentSource emits to the built-in Milena stream regardless. A separate Kafka bridge process (provided as a container) consumes from Milena and produces to your external Kafka using standard producer config.

If you’re bridging to an external Kafka, you’ll need:

  • Bootstrap servers URL
  • Authentication credentials (SASL/SCRAM, mTLS, or IAM depending on vendor)
  • Topic naming convention — default is evidentsource.<database>.events
  • Schema registry (optional) — if your stream processors require one

You don’t need an external Kafka if:

  • You’re using the Sandbox
  • You’re running EvidentSource standalone — Milena is already there
  • Your consumers can speak the Kafka protocol against Milena’s endpoint directly

In all those cases, you don’t need an external Kafka cluster at all.
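If you script against the default topic naming convention (evidentsource.<database>.events), deriving the topic name is a one-liner. The helper below is a sketch for illustration — the name topic_for is mine, not part of any EvidentSource API:

```python
def topic_for(database: str) -> str:
    """Return the per-database topic name under the default
    evidentsource.<database>.events naming convention."""
    return f"evidentsource.{database}.events"

print(topic_for("orders"))  # evidentsource.orders.events
```

If you override the naming convention on the bridge, adjust the prefix and suffix accordingly.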

See the Kafka bridge container’s README for the specific environment variables. At minimum you’ll set:

  • BOOTSTRAP_SERVERS
  • SECURITY_PROTOCOL
  • SASL_MECHANISM / SASL_USERNAME / SASL_PASSWORD (or equivalent)
  • EVIDENTSOURCE_ENDPOINT (the Milena endpoint it’s reading from)
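As a rough sketch of how those variables fit together, a bridge invocation might look like the following. This is a config sketch, not a verbatim command: the image name (evidentsource/kafka-bridge), hostnames, and credential values are placeholders — the real image name and the authoritative variable list are in the bridge container’s README.

```shell
# Placeholder values throughout — substitute your own endpoints and credentials.
# EVIDENTSOURCE_ENDPOINT: the Milena endpoint the bridge consumes from.
# BOOTSTRAP_SERVERS and the SASL_* variables: standard Kafka producer settings
# for the external cluster the bridge produces to.
docker run -d \
  -e EVIDENTSOURCE_ENDPOINT="milena.internal:9092" \
  -e BOOTSTRAP_SERVERS="broker-1.example.com:9092" \
  -e SECURITY_PROTOCOL="SASL_SSL" \
  -e SASL_MECHANISM="SCRAM-SHA-512" \
  -e SASL_USERNAME="bridge" \
  -e SASL_PASSWORD="$KAFKA_BRIDGE_PASSWORD" \
  evidentsource/kafka-bridge
```

Passing the password from an environment variable (rather than inline) keeps it out of shell history; for production, prefer your orchestrator’s secret mechanism.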