Infrastructure Requirements
Minimum Resources
- 4 vCPUs
- 16 GB memory
- 400 GB storage for approximately 30 million events over 3 months, or use a managed serverless database (e.g., Aurora Serverless, Neon) for elastic storage
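The storage guideline above works out to roughly 13 KB per event, which you can scale to your own volume. A back-of-envelope estimate (the only inputs are the numbers from the list above; the helper function is illustrative, not part of MCPcat):

```python
# Back-of-envelope storage sizing from the guideline above:
# ~400 GB for ~30 million events over 3 months.
GUIDELINE_BYTES = 400 * 10**9    # 400 GB
GUIDELINE_EVENTS = 30 * 10**6    # 30 million events

bytes_per_event = GUIDELINE_BYTES / GUIDELINE_EVENTS  # ~13.3 KB/event

def storage_gb(events_per_day: int, retention_days: int) -> float:
    """Estimate storage (GB) for a given daily volume and retention window."""
    return events_per_day * retention_days * bytes_per_event / 10**9

# Example: 1M events/day retained for 90 days
print(round(storage_gb(1_000_000, 90)))  # → 1200 GB, i.e. ~1.2 TB
```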
Included Dependencies
The Docker Compose deployment includes all required services:
- Kafka — Event queue for reliable message processing
- PostgreSQL — Primary data store
- ClickHouse (optional) — High-performance analytics engine, recommended for deployments processing 1M+ events per day
- SMTP service — Your own email service for notifications and alerts
- LLM API key — Powers AI features like session summarization and intent detection. Built on LiteLLM, so any major LLM provider is supported, including OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, and more
If you use AWS Bedrock for your LLM provider, external network requirements are minimized since traffic stays within AWS.
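The dependency list above maps onto a Compose file along these lines. This is an illustrative sketch only — the service names, image tags, and environment variable names are placeholders, not MCPcat's actual configuration; consult the official Compose file shipped with your deployment:

```yaml
# Illustrative sketch — not MCPcat's real Compose file.
services:
  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
  kafka:
    image: apache/kafka:latest
  clickhouse:                      # optional: enable for 1M+ events/day
    image: clickhouse/clickhouse-server:latest
  api:
    image: mcpcat/api:latest       # placeholder image name
    environment:
      SMTP_URL: smtp://mail.internal.example:587   # your own SMTP service
      LLM_API_KEY: ${LLM_API_KEY}                  # any LiteLLM-supported provider
volumes:
  pgdata:
```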
Management CLI
Every self-hosted deployment includes a purpose-built CLI for managing your MCPcat instance:
- Health checks — Run diagnostics across all services and dependencies. Verify database connectivity, Kafka broker status, worker health, and end-to-end event pipeline flow.
- Log access — Stream or export logs from any running service. Filter by severity, time range, or service name to quickly isolate issues.
- Queue management — Monitor consumer lag and throughput across Kafka topics. Run targeted commands to rebalance partitions or flush stuck messages when queue pressure builds.
- Version upgrades — Pull and apply the latest MCPcat images with a single command. The CLI handles service orchestration, migration execution, and rollback if needed.
- Reporting — Generate reports on system health, event throughput, storage utilization, and service uptime for operational reviews.
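The CLI's exact commands aren't listed here, but the kind of connectivity its health checks verify can be sketched in a few lines. The hostnames and ports below are typical Compose defaults, not confirmed MCPcat settings — a minimal sketch, not the CLI's actual implementation:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoints — substitute the hosts/ports your
# Compose deployment actually exposes.
SERVICES = {
    "postgres": ("localhost", 5432),
    "kafka": ("localhost", 9092),
    "clickhouse": ("localhost", 8123),
}

def health_report() -> dict[str, bool]:
    """Connectivity snapshot across core dependencies."""
    return {name: check_tcp(host, port) for name, (host, port) in SERVICES.items()}
```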
Updates & Maintenance
New MCPcat versions are delivered as updated Docker images. The management CLI handles version upgrades, making it straightforward to stay current with the latest features and security patches.
Network & Security
Self-hosted MCPcat is designed to run in locked-down environments:
- No inbound external traffic required — The deployment does not need to be exposed to the public internet. Your agents and SDKs only need internal network access to the MCPcat API endpoint
- Runs behind your VPN or firewall — Full compatibility with private network configurations
- Minimal egress — Outbound traffic is limited to pulling container images and LLM API calls. Using AWS Bedrock further reduces egress by keeping LLM traffic within your VPC
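In fully air-gapped environments, a common pattern (not an MCPcat-specific feature) is to eliminate registry egress entirely by mirroring images to an internal registry and overriding the image references in Compose. The service name and registry hostname below are placeholders:

```yaml
# docker-compose.override.yml — pull images via an internal mirror.
# "api" and registry.internal.example are placeholders for your
# actual service name and registry hostname.
services:
  api:
    image: registry.internal.example/mcpcat/api:latest
```

With this override in place, the only remaining egress is LLM API traffic — which, as noted above, stays inside your VPC when using AWS Bedrock.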
Data Residency
Because you control the infrastructure, you choose where MCPcat runs. Deploy in any region or cloud provider that meets your compliance requirements — including on-premises data centers.
Backup & Recovery
Database backup and recovery follows your existing infrastructure practices. We recommend configuring automated backups for your PostgreSQL instance, especially if using an externally managed database service.
Forward Deployed Engineer
MCPcat offers an optional Forward Deployed Engineer (FDE) — a dedicated MCPcat engineer embedded with your team to own the operational side of your self-hosted deployment. Your FDE handles:
- Deployment & configuration — End-to-end setup tailored to your infrastructure and security requirements
- Ongoing operations — Proactive monitoring, upgrades, and capacity planning so your team doesn’t have to
- Performance tuning — Optimization for your specific event volumes and query patterns
- Incident response — Direct escalation path when issues arise
Book a demo
Talk to our team about self-hosted MCPcat for your organization.