# Generation Guide
This guide describes how to implement the generation logic inside the MCP server: which signals to extract from each file type, how to assign edge directions, how to use layers for environment comparison, and how to validate the output script before returning it.
## Extraction pipeline
The generator processes input files in four stages:
1. Parse — extract raw entities (services, ports, env vars, topics)
2. Resolve — match entities across files, deduplicate, assign IDs
3. Connect — infer edges from dependencies, env vars, and known patterns
4. Render — emit the cloud-arch script

## Stage 1: Parse
### docker-compose.yml
| Field | What to extract |
|---|---|
| `services.<name>` | One node per service |
| `services.<name>.image` | Infer technology (`postgres` → cylinder shape, `redis` → cylinder shape, `nginx` → default, `kafka` → cylinder + pulse edges) |
| `services.<name>.ports` | Exposed ports (used to infer protocols) |
| `services.<name>.environment` | Env var names containing `URL`, `HOST`, `BROKERS`, `DSN` → likely connections |
| `services.<name>.depends_on` | Startup ordering — infer as a potential connection (verify with env vars) |
| `services.<name>.networks` | Group services on the same network together |
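The environment heuristic above can be sketched as a small filter. The keyword list and the `ConnectionHint` record are illustrative names, not part of any fixed API:

```typescript
// Scan a service's environment map for variables that look like connection
// hints, per the docker-compose table above. Keyword list is an assumption.
const HINT_KEYWORDS = ['URL', 'HOST', 'BROKERS', 'DSN'];

interface ConnectionHint {
  envVar: string; // e.g. 'DATABASE_URL'
  value: string;  // e.g. 'postgres://pg:5432/db'
}

function extractConnectionHints(env: Record<string, string>): ConnectionHint[] {
  return Object.entries(env)
    .filter(([name]) => HINT_KEYWORDS.some((kw) => name.toUpperCase().includes(kw)))
    .map(([envVar, value]) => ({ envVar, value }));
}
```

Each hint's value then feeds the edge-inference rules in Stage 3.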
### Kubernetes manifests
| Resource | What to extract |
|---|---|
| `Deployment` | One node per deployment (`metadata.name`). `spec.replicas` → use replica shorthand |
| `Service` | Port and protocol info; ClusterIP vs NodePort vs LoadBalancer affects the external flag |
| `Ingress` | Rules define external entry points → create external client nodes |
| `ConfigMap` | Often contains `application.yml` or connection strings — parse recursively |
| `HorizontalPodAutoscaler` | `minReplicas`/`maxReplicas` → set replicas on the node |
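One way to combine the Deployment and HPA signals is sketched below. The types are simplified stand-ins, not the real Kubernetes API objects, and the "HPA wins" precedence is an assumption:

```typescript
// Derive a node's replica count from a Deployment and an optional HPA.
interface DeploymentLike { metadata: { name: string }; spec: { replicas?: number } }
interface HpaLike { spec: { minReplicas?: number; maxReplicas: number } }

function nodeReplicas(dep: DeploymentLike, hpa?: HpaLike): number {
  // When an HPA targets the deployment, its upper bound is the more
  // meaningful number to show on the diagram.
  if (hpa) return hpa.spec.maxReplicas;
  return dep.spec.replicas ?? 1; // Kubernetes defaults replicas to 1
}
```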
### application.yml / application.properties
Look for these key patterns:
```yaml
spring:
  datasource:
    url: jdbc:postgresql://pg-service:5432/mydb   # → connect('app', 'pg-service')
  redis:
    host: redis-service                           # → connect('app', 'redis-service')
  kafka:
    bootstrap-servers: kafka:9092                 # → connect('app', 'kafka')

kafka:
  consumer:
    topics: orders, payments       # → this service consumes these topics
  producer:
    topic: processed-events        # → this service produces to this topic
```

### Kafka topic configs
```yaml
# Confluent / custom topic manifest
topics:
  - name: orders
    partitions: 12
    replication-factor: 3
  - name: payments
    partitions: 6
    replication-factor: 3
```

Each topic becomes a cylinder node. Producers and consumers are connected to broker topic nodes, not to abstract topic names.
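A minimal sketch of the topic-to-node mapping. The `NodeSpec` record and the label format are assumptions about the generator's internal model, not the TopologyBuilder API:

```typescript
// Turn a parsed topic manifest (as above) into cylinder node records.
interface TopicConfig { name: string; partitions: number; 'replication-factor': number }
interface NodeSpec { id: string; label: string; shape: string }

function topicsToNodes(topics: TopicConfig[]): NodeSpec[] {
  return topics.map((t) => ({
    id: t.name,
    label: `${t.name} (${t.partitions}p)`, // surface partition count in the label
    shape: 'cylinder',
  }));
}
```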
## Stage 2: Resolve
After parsing all files, resolve references across files:
- Deduplicate services: the same service may appear in `docker-compose.yml` and `k8s/` — use the canonical name
- Normalize IDs: strip invalid characters, lowercase, replace spaces with dashes
- Group assignment: services sharing a Docker network or Kubernetes namespace go in the same group
- Technology detection: use the image name or dependency list to set `shape` and suggest `protocol`
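The ID normalization rule might look like the sketch below; the exact allowed character set (`[a-z0-9-]`) is an assumption:

```typescript
// Normalize a raw service name into a stable node ID:
// lowercase, spaces to dashes, then drop anything outside [a-z0-9-].
function normalizeId(raw: string): string {
  return raw
    .toLowerCase()
    .replace(/\s+/g, '-')
    .replace(/[^a-z0-9-]/g, '');
}
```

Running the same function over every parsed source makes cross-file deduplication a plain string comparison.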
### Technology → shape mapping
| Image / dependency keyword | shape | Suggested protocol |
|---|---|---|
| `postgres`, `postgresql` | cylinder | postgresql |
| `mysql`, `mariadb` | cylinder | mysql |
| `redis` | cylinder | redis |
| `kafka`, `confluent` | cylinder | kafka |
| `rabbitmq` | cylinder | rabbitmq |
| `elasticsearch`, `opensearch` | cylinder | http |
| `nginx`, `traefik`, `envoy` | default | http |
| `clickhouse` | cylinder | http |
| `mongodb` | cylinder | tcp |
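The table above can be encoded as a first-match lookup; substring matching against the image string is one plausible implementation, not the only one:

```typescript
// First-match technology detection over the image name.
interface TechInfo { shape: 'cylinder' | 'default'; protocol: string }

const TECH_MAP: Array<[string[], TechInfo]> = [
  [['postgres', 'postgresql'], { shape: 'cylinder', protocol: 'postgresql' }],
  [['mysql', 'mariadb'], { shape: 'cylinder', protocol: 'mysql' }],
  [['redis'], { shape: 'cylinder', protocol: 'redis' }],
  [['kafka', 'confluent'], { shape: 'cylinder', protocol: 'kafka' }],
  [['rabbitmq'], { shape: 'cylinder', protocol: 'rabbitmq' }],
  [['elasticsearch', 'opensearch'], { shape: 'cylinder', protocol: 'http' }],
  [['nginx', 'traefik', 'envoy'], { shape: 'default', protocol: 'http' }],
  [['clickhouse'], { shape: 'cylinder', protocol: 'http' }],
  [['mongodb'], { shape: 'cylinder', protocol: 'tcp' }],
];

function detectTech(image: string): TechInfo | undefined {
  const lower = image.toLowerCase();
  const hit = TECH_MAP.find(([keywords]) => keywords.some((k) => lower.includes(k)));
  return hit?.[1]; // undefined → fall back to the default shape
}
```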
## Stage 3: Connect
### Edge direction rules
Always infer `connect(A, B)` as "A opens the TCP connection to B." When in doubt, ask: which process runs `connect()` at startup?
Apply these rules in order:
- Environment variable URL: if service A has `DATABASE_URL=postgres://pg:5432/db`, then `connect('a', 'pg')` — A dials Postgres
- depends_on: if A has `depends_on: [B]`, A likely connects to B — confirm with env vars; if no env var is found, still add the edge but include a warning
- Kafka producer: if A has `kafka.producer.topic=orders`, then `connect('a', 'kafka-broker')` — A dials the broker to produce
- Kafka consumer: if A has `kafka.consumer.topics=orders`, then `connect('a', 'kafka-broker')` — A dials the broker to consume (never `connect('broker', 'a')`)
- Debezium / CDC: Debezium always dials the database — `connect('debezium', 'source-db')`
- Nginx / load balancer: the proxy dials its upstream, and the client dials the proxy — if nginx proxies to `api`, generate `connect('client', 'nginx')` and `connect('nginx', 'api')`
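The first rule (env-var URL → edge) can be sketched as below. The URL parsing is deliberately simplistic (scheme and host only), and the `Edge` record is the generator's internal shape, not the TopologyBuilder API:

```typescript
// Derive a directed edge from a connection-string env var value.
interface Edge { from: string; to: string; protocol?: string }

function edgeFromEnvUrl(serviceId: string, value: string): Edge | null {
  // 'postgres://pg:5432/db' → scheme 'postgres', host 'pg'
  const m = value.match(/^([a-z+]+):\/\/([^:/]+)/);
  if (!m) return null; // not URL-shaped; fall back to other heuristics
  return { from: serviceId, to: m[2], protocol: m[1] };
}
```

The host still needs to pass through Stage 2's ID normalization and be resolved against known node IDs before the edge is emitted.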
### Never generate reverse edges for responses
```js
// CORRECT: one edge, reverse animation handles responses
topology.connect('api', 'pg', { protocol: 'postgresql' });

// WRONG: never generate this
topology.connect('pg', 'api', { protocol: 'postgresql' });
```

## Stage 4: Render
### Script structure template
```js
// 1. TopologyBuilder
const topology = new TopologyBuilder(true);

// 2. Groups (one per namespace/network/team)
const group = topology.createGroup('group-id', { label: 'Group Name' });
group
  .addProcess({ id: 'svc-id', label: 'Service Name', shape: '...', state: 'running' })
  // ... more nodes
  .autoResize();

// 3. Connections (always after all groups/nodes)
topology.connect('source', 'target', { protocol: '...', label: '...' });

// 4. Apply
await topology.apply();

// 5. Basic animation (always generate at least one scenario)
const flow = new FlowBuilder();
flow.scenario('main', 'Main Flow', 'Typical request path');
// ... at least 3 hops
return flow;
```

### Generating animation scenarios
Always generate at least one animation scenario. Use the main data flow path — the most common user action — as the first scenario.
For a web API + database:
```js
flow.scenario('request', 'API Request', 'Client reads data');
flow
  .from('client').to('api')
  .showMessage('[GET] /api/resource')
  .to('pg')
  .showMessage('[QUERY] SELECT ...')
  .from('pg').to('api')
  .showMessage('[ROWS] Data returned')
  .from('api').to('client')
  .showMessage('[200 OK] Response delivered');
```

For a Kafka pipeline:
```js
flow.scenario('event', 'Event Published', 'Producer sends event, consumer receives');
flow
  .from('producer').to('kafka-broker')
  .showMessage('[PRODUCE] Event published to topic')
  .from('kafka-broker').to('consumer')
  .showMessage('[CONSUME] Consumer group receives event');
```

## Layer assignment
When the caller specifies `layers: ['v1', 'v2', 'v3']`, assign components to layers based on:
| Signal | Layer assignment logic |
|---|---|
| Component appears in all environments | No layer (always visible) |
| Component only in staging | `layer: 'staging'` |
| Component only added in v2 manifest diff | `layer: 'v2'` |
| Monitoring/observability stack | Usually `layer: 'v3'` or `layer: 'highload'` |
| Cache layer | Usually `layer: 'v2'` |
| Message broker | Usually `layer: 'v2'` or `layer: 'v3'` depending on maturity |
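One possible encoding of the first three rows, assuming the input per component is the list of environments or manifest versions it appears in. The "earliest layer wins" precedence is an assumption:

```typescript
// Pick a component's layer from the environments/versions it appears in.
function assignLayer(appearsIn: string[], allLayers: string[]): string | undefined {
  // Present everywhere → no layer (always visible)
  if (allLayers.every((l) => appearsIn.includes(l))) return undefined;
  // Otherwise: the earliest layer the component appears in
  return allLayers.find((l) => appearsIn.includes(l));
}
```

The heuristic rows (monitoring, cache, broker) are overrides applied after this base assignment.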
## Validation checklist
Before returning the generated script, verify:
- Every node ID is unique across all groups
- Every `connect(A, B)` references node IDs that exist in the topology
- No duplicate reverse edges (check: for every `connect(A, B)`, there is no `connect(B, A)` for the same service pair)
- Every group calls `.autoResize()`
- `await topology.apply()` is present
- At least one animation scenario is defined
- The last line is `return flow;` (or `return null;` if no flow)
- No TypeScript-specific syntax (no `as Type`, no interface declarations, no `import`/`export`)
- No emojis in `showMessage` or `showError` calls — use `[LABEL]` prefixes
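The reverse-edge item needs the parsed edge list rather than the script text. A sketch, assuming edges have already been extracted as `{ from, to }` pairs:

```typescript
// Report every service pair that has edges in both directions.
function findReverseEdges(edges: Array<{ from: string; to: string }>): string[] {
  const seen = new Set(edges.map((e) => `${e.from}->${e.to}`));
  const dupes: string[] = [];
  for (const e of edges) {
    // e.from < e.to ensures each offending pair is reported once
    if (seen.has(`${e.to}->${e.from}`) && e.from < e.to) {
      dupes.push(`${e.from} <-> ${e.to}`);
    }
  }
  return dupes;
}
```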
### Validation pseudocode
```ts
function validateScript(script: string): string[] {
  const warnings: string[] = [];
  if (!script.includes('await topology.apply()')) {
    warnings.push('Missing await topology.apply()');
  }
  if (!script.match(/return (flow|null)/)) {
    warnings.push('Missing return statement');
  }
  if (script.includes('export default') || script.includes('import ')) {
    warnings.push('Script contains import/export — remove them');
  }
  // Note: \p{Emoji} also matches plain digits and '#', so it would flag
  // nearly every script; \p{Extended_Pictographic} matches only pictographs.
  if (script.match(/\p{Extended_Pictographic}/u)) {
    warnings.push('Script contains emoji — replace with [LABEL] prefixes');
  }
  return warnings;
}
```

## Common generation mistakes
| Mistake | Fix |
|---|---|
| Generating `connect('kafka', 'consumer')` | Consumers dial the broker: `connect('consumer', 'kafka')` |
| Adding response edges | Remove them — reverse animation is automatic |
| Skipping `autoResize()` | Add it after every group's last `addProcess` |
| Using TypeScript casting (`as HTMLElement`) | Remove — scripts run as plain JS |
| Generating empty animation | Add at least one `flow.from(...).to(...)` chain |
| Generating a node for every replica | Use the `replicas: N` shorthand or explicit `id-1`, `id-2` naming |