Visualize and explore Apache Kafka topologies — topics, producers, consumers, streams, schemas, connectors, and ACLs — with an AI assistant that queries live broker metrics.
Watch the full setup and demo on YouTube
- Live topology visualization — Interactive graph of your Kafka cluster powered by React Flow
- Auto-discovery — Topics, consumer groups, producers, connectors, schemas, and ACLs are detected automatically from the live cluster
- Schema grouping — Schemas sharing the same Schema Registry ID are merged into a single node connected to all related topics
- Consumer lag — Click any consumer node to view per-partition lag
- Topic details — Click any topic node to view configuration, recent messages, and generate sample client code (Java / Python)
- Connector details — Click connector nodes to inspect configuration (sensitive values masked)
- Producer detection — Via JMX metrics or offset-change detection (automatic fallback chain)
- Search & navigation — Find nodes by name or type, auto-zoom to matches, keyboard navigation (Enter / Shift+Enter)
- Topic pagination — Large clusters load incrementally (connected topics first), with search across all topics
- Produce messages — Optionally produce messages from the UI (disabled by default, per-cluster opt-in)
- AI assistant (StreamPilot) — Ask questions about your topology and live broker metrics; highlights and zooms to relevant nodes. Supports OpenAI, Gemini, Anthropic, and Ollama
- Dark / light theme — Toggle between themes
Backend
```shell
cd server
uv sync
uv run uvicorn main:app --reload --port 5000
```

Always use `uv run` to ensure the correct environment is used.
Frontend
```shell
cd client
npm install
npm run dev
```

Open http://localhost:5173. The dev server proxies /api to the backend. Set VITE_API_URL if your backend runs on a different host/port.
```shell
docker build -f container/Dockerfile -t streamlens .
docker run -p 5000:5000 streamlens
```

Open http://localhost:5000. See container/README.md for volume mounts and environment variables.
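The repo also ships a Compose setup under container/ for deployment and testing. As a rough local sketch (the service name, build context, and everything except the Dockerfile path are assumptions; see container/README.md and the shipped Compose file for the real configuration):

```yaml
# Hypothetical compose sketch for local testing; not the file shipped in
# container/. Paths other than the Dockerfile location are assumptions.
services:
  streamlens:
    build:
      context: .
      dockerfile: container/Dockerfile
    ports:
      - "5000:5000"  # UI and API served on http://localhost:5000
```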
- client/ — React frontend (Vite, TypeScript, Tailwind, shadcn/ui)
- server/ — Python backend (FastAPI)
- container/ — Dockerfile and Docker Compose for deployment and testing
- docs/ — Additional documentation (AI setup, topology)
Clusters are stored in server/data/clusters.json (no database required). Add clusters through the UI, or edit the file directly and restart the server.
```json
{
  "clusters": [
    {
      "id": 1,
      "name": "My Cluster",
      "bootstrapServers": "localhost:9092",
      "schemaRegistryUrl": "http://localhost:8081",
      "connectUrl": "http://localhost:8083",
      "prometheusUrl": "http://prometheus:9090",
      "jmxHost": "localhost",
      "jmxPort": 9999
    },
    {
      "id": 2,
      "name": "oauthbearer_cluster",
      "bootstrapServers": "your_bootstrap_url",
      "securityProtocol": "SASL_SSL",
      "saslMechanism": "OAUTHBEARER",
      "saslOauthbearerClientId": "streamlens_clientId",
      "saslOauthbearerClientSecret": "streamlens_clientSecret",
      "saslOauthbearerTokenEndpointUrl": "your_keycloak_token_url",
      "sslTruststoreLocation": "your.truststore.jks",
      "sslTruststorePassword": "your_truststore_password",
      "sslEndpointIdentificationAlgorithm": "",
      "enableSslCertificateVerification": true
    },
    {
      "id": 3,
      "name": "scram_cluster",
      "bootstrapServers": "your_bootstrap_url",
      "clusterType": "Apache Kafka",
      "securityProtocol": "SASL_SSL",
      "saslMechanism": "SCRAM-SHA-256",
      "saslUsername": "streamlens_user",
      "saslPassword": "streamlens_password",
      "sslTruststoreLocation": "your.truststore.jks",
      "sslTruststorePassword": "your_truststore_password",
      "sslEndpointIdentificationAlgorithm": "",
      "enableSslCertificateVerification": true
    }
  ]
}
```

| Field | Required | Description |
|---|---|---|
| `bootstrapServers` | Yes | Kafka broker address(es) |
| `schemaRegistryUrl` | No | Schema Registry URL (enables schema nodes) |
| `connectUrl` | No | Kafka Connect REST URL (enables connector nodes) |
| `prometheusUrl` | No | Prometheus URL — enables AI-powered broker metric queries and producer detection per client ID. If set, `jmxHost`/`jmxPort` are not required. |
| `jmxHost` / `jmxPort` | No | Broker JMX endpoint — fallback for producer detection when Prometheus is not available |
| `enableKafkaEventProduceFromUi` | No | Allow producing messages from the UI (default: false) |
Override the file path with the CLUSTERS_JSON env var.
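As an illustration of the file-based setup, this standalone sketch (not StreamLens's actual loader) resolves the path from `CLUSTERS_JSON` with the documented default and keeps only entries that name a broker:

```python
import json
import os

# Default location as documented above; CLUSTERS_JSON overrides it.
DEFAULT_PATH = "server/data/clusters.json"

def load_clusters(env=os.environ):
    """Load cluster definitions, honoring the CLUSTERS_JSON override."""
    path = env.get("CLUSTERS_JSON", DEFAULT_PATH)
    with open(path) as f:
        data = json.load(f)
    # A usable entry needs at least bootstrapServers; everything else is optional.
    return [c for c in data.get("clusters", []) if "bootstrapServers" in c]
```

Because the file is the single source of truth, restarting the server after a direct edit is all that is needed to pick up changes.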
StreamLens currently supports SASL_SSL, PLAINTEXT, and SSL Kafka listener protocols.
For SSL and SASL_SSL connections, add these fields to the cluster object:
- `securityProtocol` — `"SASL_SSL"`, `"SSL"`, or `"PLAINTEXT"` (default)
- `saslMechanism` — `"OAUTHBEARER"`, `"SCRAM-SHA-512"`, `"SCRAM-SHA-256"`, or `"PLAIN"`
- SCRAM authentication: `saslUsername`, `saslPassword`
- OAUTHBEARER authentication: `saslOauthbearerMethod`, `saslOauthbearerClientId`, `saslOauthbearerClientSecret`, `saslOauthbearerTokenEndpointUrl`
- `sslEndpointIdentificationAlgorithm` — `""` to disable hostname verification (dev/self-signed)
- PEM paths: `sslCaLocation`, `sslCertificateLocation`, `sslKeyLocation`, `sslKeyPassword`
- Java truststore/keystore (auto-converted to PEM; requires `keytool` + `openssl`): `sslTruststoreLocation`, `sslTruststorePassword`, `sslKeystoreLocation`, `sslKeystoreType`, `sslKeystorePassword`, `sslKeyPassword`
- `enableSslCertificateVerification` — `false` to skip broker cert verification (insecure, dev only)
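For example, an SSL-only cluster using PEM files might look like this (the field names are the PEM options listed above; the values are placeholders):

```json
{
  "id": 4,
  "name": "ssl_pem_cluster",
  "bootstrapServers": "your_bootstrap_url",
  "securityProtocol": "SSL",
  "sslCaLocation": "/path/to/ca.pem",
  "sslCertificateLocation": "/path/to/client-cert.pem",
  "sslKeyLocation": "/path/to/client-key.pem",
  "sslKeyPassword": "your_key_password"
}
```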
If your cluster has ACLs enabled, grant StreamLens Read/Describe permissions:
```shell
kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:streamlens --allow-host streamlenshost \
  --operation Read --topic '*' \
  --operation Describe --topic '*' \
  --operation Describe --cluster \
  --operation Describe --group '*'
```

Replace `User:streamlens` with your actual principal.
StreamLens detects producers using an automatic fallback chain. Only the first source that returns results is used:
| Priority | Source | Granularity | Config field |
|---|---|---|---|
| 1 | Prometheus (client-side) | Per client ID + topic | prometheusUrl |
| 2 | Prometheus (broker-side) | Per topic (aggregate) | prometheusUrl |
| 3 | Broker JMX | Per topic (aggregate) | jmxHost + jmxPort |
| 4 | Offset change | Per topic (needs 2 syncs) | (automatic) |
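The chain above is a first-hit-wins loop, which can be sketched as follows (illustrative only, not the actual StreamLens implementation; the source names and return shapes are assumptions):

```python
def detect_producers(sources):
    """Return (source_name, results) from the first source that yields results.

    `sources` is an ordered list of (name, fetch) pairs, where fetch() returns
    a possibly empty list of detected producers.
    """
    for name, fetch in sources:
        try:
            results = fetch()
        except Exception:
            continue  # unreachable source: fall through to the next one
        if results:
            return name, results
    return None, []

# Priority order mirrors the table: client-side Prometheus first,
# offset-change detection last.
sources = [
    ("prometheus_client", lambda: []),          # no client-side metrics found
    ("prometheus_broker", lambda: []),          # broker-side also empty
    ("broker_jmx", lambda: ["orders-topic"]),   # first source with results
    ("offset_change", lambda: ["orders-topic"]),
]
```

Note that later sources are never consulted once an earlier one returns results, so a partially scraped Prometheus can mask the JMX fallback.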
Tip: If `prometheusUrl` is configured, `jmxHost` and `jmxPort` are not required. Prometheus covers both client-side (per producer client ID) and broker-side (per topic) detection, plus AI-powered broker metric queries.
Prometheus (recommended) — Run the JMX Exporter as a Java agent on the Kafka brokers (for broker-side producer detection and AI metric queries) and optionally on producer applications (for per-client-id detection). StreamLens first tries kafka_producer_topic_metrics_record_send_total grouped by client_id and topic. If no client-side metrics are found, it falls back to broker-side kafka_server_brokertopicmetrics_messagesinpersec per topic — no JMX port needed for either.
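Both lookups go through the Prometheus HTTP API; this sketch only builds the instant-query URLs (the exact PromQL that StreamLens issues may differ, and the rate window and grouping here are assumptions):

```python
from urllib.parse import urlencode

def instant_query_url(prometheus_url: str, promql: str) -> str:
    """Build a Prometheus instant-query URL (GET /api/v1/query)."""
    return f"{prometheus_url}/api/v1/query?" + urlencode({"query": promql})

# Tried first: per client ID + topic, from producer-side JMX Exporter metrics.
CLIENT_SIDE = (
    "sum by (client_id, topic) "
    "(rate(kafka_producer_topic_metrics_record_send_total[5m]))"
)
# Fallback: per-topic aggregate from broker-side metrics.
BROKER_SIDE = "sum by (topic) (kafka_server_brokertopicmetrics_messagesinpersec)"

url = instant_query_url("http://prometheus:9090", CLIENT_SIDE)
```

The `prometheus_url` argument corresponds to the `prometheusUrl` field in clusters.json.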
Broker JMX — Falls back to broker-side MessagesInPerSec metrics (one producer node per active topic, no client IDs). Only needed if Prometheus is not configured:
- Start Kafka with `JMX_PORT=9999`
- Set `jmxHost` and `jmxPort` in the cluster config
- Restart the backend and click Sync
See docs/AI_SETUP.md for configuring the StreamPilot AI chat (OpenAI, Gemini, Anthropic, or Ollama).
AI Broker Metrics — When prometheusUrl is configured and the JMX Exporter is running on the Kafka brokers, the AI assistant can answer questions about live broker metrics. Ask questions like:
- "What is the current message throughput?"
- "Are there any under-replicated partitions?"
- "What is the avg time to handle a produce request?"
- "How many partitions do we have?"
- "Show me all metrics"
StreamLens queries a curated set of 17 broker metrics from Prometheus across 5 categories:
| Category | Example metrics |
|---|---|
| Cluster Health | Under-replicated partitions, active controller count, offline partitions |
| Throughput | Messages in/sec (total and per topic), bytes in/out per sec |
| Request Performance | Produce/fetch request rate, produce/fetch avg latency |
| Broker Resources | Partition count, leader count, log size |
| Replication | ISR shrinks/expands per sec |
See server/src/kafka/metrics.py for the full catalog and PromQL queries.
| Variable | Description |
|---|---|
| `CLUSTERS_JSON` | Path to clusters file (default: server/data/clusters.json) |
| `TOPOLOGY_MAX_TOPICS` | Max topic nodes per snapshot (default: 2000) |
| `VITE_API_URL` | Frontend API target (default: http://localhost:5000) |
| `AI_PROVIDER` | AI provider: openai, gemini, anthropic, ollama |
See docs/AI_SETUP.md for AI-specific env vars.
If you find StreamLens useful, consider giving it a star on GitHub — it helps others discover the project and motivates continued development.
See CONTRIBUTING.md.
