# Kafka prerequisites
EvidentSource has a built-in Kafka-compatible broker — Milena — backed by SlateDB. Every committed transaction is automatically available as a Kafka record on a per-database topic. For most deployments, that’s all you need.
This page is only relevant if you want to bridge EvidentSource’s event stream to an external Kafka cluster (MSK, Confluent Cloud, a self-hosted Kafka) — typically for integration with existing stream-processing infrastructure.
## When to bridge to external Kafka

- You have Kafka-consuming systems that can’t easily connect to Milena’s endpoint
- You use Kafka Streams, ksqlDB, or Kafka Connect and want those to operate on EvidentSource events
- You have organizational standards that require a specific Kafka vendor
## How the bridge works

EvidentSource always emits to the built-in Milena stream. A separate Kafka bridge process (provided as a container) consumes from Milena and produces to your external Kafka cluster using standard producer configuration.
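The consume-and-produce step can be sketched as a pure mapping function. This is a hypothetical illustration only: the record shape, the `external_topic` and `forward` helpers, and the field names are assumptions, not the bridge's actual internals.

```python
# Hypothetical sketch of the bridge's forwarding step. The record layout
# and helper names here are assumptions for illustration only.

def external_topic(database: str, prefix: str = "evidentsource") -> str:
    """Build the destination topic name using the default convention."""
    return f"{prefix}.{database}.events"

def forward(record: dict) -> tuple[str, bytes, bytes]:
    """Map one Milena record to (topic, key, value) for the external producer."""
    topic = external_topic(record["database"])
    return topic, record["key"], record["value"]

# In the real bridge this runs in a loop, roughly:
#   for record in milena_consumer:              # consume from Milena
#       topic, key, value = forward(record)
#       external_producer.produce(topic, key=key, value=value)
```

Because the mapping is deterministic, each committed transaction lands on exactly one external topic derived from its database name.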
## What you need

If you’re bridging to an external Kafka cluster, gather the following:

- Bootstrap servers URL
- Authentication credentials (SASL/SCRAM, mTLS, or IAM, depending on vendor)
- Topic naming convention (the default is `evidentsource.<database>.events`)
- Schema registry (optional), if your stream processors require one
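For consumers on the external cluster, those pieces typically come together in a standard Kafka client configuration. The following is a hedged example for a SASL/SCRAM setup; the host, username, and password are placeholders, and your vendor may use mTLS or IAM instead.

```properties
# Example client config for stream processors consuming the bridged topic.
# All values are placeholders; substitute your cluster's details.
bootstrap.servers=broker1.example.com:9096
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="app-user" password="change-me";
# Subscribe to the bridged topic, per the default naming convention:
# evidentsource.<database>.events
```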
## When you don’t need this page

- You’re using the Sandbox
- You’re running EvidentSource standalone — Milena is already there
- Your consumers can speak Kafka protocol against Milena’s endpoint directly
In all those cases, you don’t need an external Kafka cluster at all.
## Configuration

See the Kafka bridge container’s README for the specific environment variables. At minimum you’ll set:

- `BOOTSTRAP_SERVERS`
- `SECURITY_PROTOCOL`
- `SASL_MECHANISM` / `SASL_USERNAME` / `SASL_PASSWORD` (or equivalent)
- `EVIDENTSOURCE_ENDPOINT` (the Milena endpoint the bridge reads from)
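Put together, a deployment might look like the following Compose fragment. This is a sketch under assumptions: the image name and endpoint values are illustrative, and the authoritative variable names are in the bridge container's README.

```yaml
# Hypothetical Compose service for the Kafka bridge. The image name and
# all values are assumptions; consult the bridge container's README.
services:
  kafka-bridge:
    image: evidentsource/kafka-bridge:latest   # assumed image name
    environment:
      EVIDENTSOURCE_ENDPOINT: "milena.internal:9092"   # Milena endpoint to read from
      BOOTSTRAP_SERVERS: "broker1.example.com:9096"    # external Kafka cluster
      SECURITY_PROTOCOL: "SASL_SSL"
      SASL_MECHANISM: "SCRAM-SHA-512"
      SASL_USERNAME: "bridge-user"
      SASL_PASSWORD: "change-me"
```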