# Operations: containers + S3
The operational story for EvidentSource is intentionally small: one container for the runtime, S3 for storage. No Kafka cluster to run, no relational database to tune, no stateful sharding layer to balance.
This page walks through what each piece does, why it’s enough, and how to run it.
## The pieces

### The EvidentSource container

A single Rust binary, packaged as a container image (`evidentsource/evidentsource`). It hosts:
- The API surface (gRPC on `:50051`, REST on `:3000`, optional MCP endpoint)
- The WebAssembly runtime that executes your `decide` and `evolve` components
- A built-in Kafka-compatible broker (Milena) backed by SlateDB
- Admission control, transaction coordination, and the event-store core
One container is the whole data plane. For production you run several of them behind a load balancer and let them share S3 state; for local development one is plenty.
### S3 (via SlateDB)

SlateDB is an LSM-tree storage engine that persists to object storage. It’s how EvidentSource gets durable, strongly consistent event-store writes without a separate database tier.
In production, “S3” means Amazon S3. For local development, anything that speaks the S3 API works — MinIO, LocalStack, a local filesystem adapter. The same SlateDB code runs against all of them.
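For example, a local stand-in can be as simple as running MinIO next to the container. This is a sketch: the `EVIDENTSOURCE_S3_*` environment variable names are illustrative assumptions, not documented configuration, so check the deployment reference for the real ones.

```shell
# Start MinIO as a local S3-compatible endpoint
docker run -d --name minio -p 9000:9000 \
  -e MINIO_ROOT_USER=dev -e MINIO_ROOT_PASSWORD=dev-secret \
  quay.io/minio/minio server /data

# Create a bucket for SlateDB state (mc is MinIO's CLI)
docker run --rm --network host --entrypoint sh quay.io/minio/mc -c \
  "mc alias set local http://localhost:9000 dev dev-secret && mc mb local/evidentsource"

# Point EvidentSource at it. The variable names below are placeholders.
docker run -p 3000:3000 -p 50051:50051 \
  -e EVIDENTSOURCE_S3_ENDPOINT=http://host.docker.internal:9000 \
  -e EVIDENTSOURCE_S3_BUCKET=evidentsource \
  -e AWS_ACCESS_KEY_ID=dev -e AWS_SECRET_ACCESS_KEY=dev-secret \
  evidentsource/evidentsource:latest
```

Because the same SlateDB code runs against any S3-compatible endpoint, nothing else changes between this setup and production.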
### That’s it

There is no required external:
- Relational database
- Kafka cluster (Milena is built in; if you want to bridge to an external Kafka, that’s optional)
- Cache, queue, or coordination service
- Zookeeper, etcd, or Consul
If your team can run a container and point it at an S3 bucket, you can run EvidentSource.
## Why this works

Two architectural choices make this possible.
SlateDB separates compute from storage. The container holds no durable state — all durable state lives in S3. You can kill any EvidentSource container at any time; a restart reads the relevant SlateDB files back from S3 and picks up where it left off. Horizontal scaling is a load-balancer config change, not a data migration.
Milena replaces an external event bus. Because the event log is already durable in S3, exposing it as a Kafka-compatible stream is a thin layer on top of SlateDB, not a separate cluster to operate. Consumers that already speak Kafka can read from EvidentSource unchanged.
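Concretely, any stock Kafka client should be able to tail the stream. A sketch using `kcat`, a common Kafka CLI client; the broker port (`9092`) and topic name (`orders`) are assumptions here, so substitute whatever Milena actually exposes in your deployment.

```shell
# Consume a topic from the beginning and exit at the end of the log.
# Broker address and topic name are illustrative.
kcat -b localhost:9092 -t orders -C -o beginning -e
```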
## Running it

### Locally

```shell
docker run -p 3000:3000 -p 50051:50051 evidentsource/evidentsource-dev:latest
```

The `-dev` image uses an embedded filesystem adapter instead of S3, so you don’t need any cloud setup. See AWS Marketplace deployment for production.
### On AWS

Deploy via the AWS Marketplace listing. The provided CloudFormation template stands up:
- An ECS service running the EvidentSource container (typically 2+ tasks behind an ALB)
- An S3 bucket for SlateDB state
- IAM roles scoped to the bucket
- Optional: CloudWatch metrics, X-Ray tracing, WAF
The whole stack is in one template — see AWS Marketplace deployment for the parameters.
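If you prefer the CLI over the Marketplace console, the template can be driven with `aws cloudformation deploy`. The template filename and parameter keys below are placeholders; the real names are listed on the AWS Marketplace deployment page.

```shell
# Deploy the Marketplace-provided stack from the CLI.
# Template file and parameter keys are illustrative placeholders.
aws cloudformation deploy \
  --stack-name evidentsource \
  --template-file evidentsource.template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides DesiredTaskCount=2 BucketName=my-evidentsource-state
```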
### Elsewhere

Anywhere you can run a container and reach an S3-compatible endpoint, EvidentSource runs:
- ECS, EKS, or Fargate on AWS
- GKE or Cloud Run against a GCS bucket (via an S3-compatible front-end)
- Kubernetes anywhere, with MinIO or Ceph for storage
- A single VM with Docker and a local object store
The runtime doesn’t care. Operationally, it’s your storage story plus one container.
## What you don’t have to do

- Run Kafka
- Manage schema migrations
- Tune a database for transactional workloads
- Operate Zookeeper
- Build a CDC pipeline to get events out
- Design shard keys
## What you still have to do

- Monitor the container (CPU, memory, request latency) — OpenTelemetry is wired in
- Size the container for your workload — see the sizing guide in AWS Marketplace deployment
- Back up or version your S3 bucket if your retention policy demands it
- Configure authentication and authorization — see Security
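For the backup point above: enabling S3 versioning on the state bucket is a one-time call with the standard AWS CLI. Only the bucket name is a placeholder here.

```shell
# Keep every object version so SlateDB state can be rolled back
aws s3api put-bucket-versioning \
  --bucket my-evidentsource-state \
  --versioning-configuration Status=Enabled

# Optionally expire old noncurrent versions after 30 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-evidentsource-state \
  --lifecycle-configuration '{"Rules":[{"ID":"expire-old-versions","Status":"Enabled","Filter":{},"NoncurrentVersionExpiration":{"NoncurrentDays":30}}]}'
```

Whether 30 days is right depends on your retention policy; the point is that backup is a bucket-level concern, not something the container participates in.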
## Further reading

- AWS Marketplace deployment — production CloudFormation parameters and sizing.
- Security — authentication, authorization, and token vending.
- Kafka prerequisites — read this if you want to bridge to an external Kafka.