
Proxy Pattern

The proxy pattern covers any component that sits between a client and a backend: Circuit Breakers, API Gateways, Load Balancers, and Sidecars. Getting the edge direction right is critical — a single incorrect edge will make the animation bypass the proxy and misrepresent how the system actually works.


The rule

⚠️

All client edges go TO the proxy. All backend edges go FROM the proxy. Responses travel BACK through the proxy using reverse animation on the same edges. The proxy is the ONLY node connected to both the client and the backend.

Never add a direct edge from a backend to a client that bypasses the proxy, even for responses.

CORRECT:   Client → Proxy → Backend
                     ↑ responses go back through proxy (reverse)

WRONG:     Client → Proxy → Backend

           Client ←───────── Backend  (bypasses proxy!)
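The rule is mechanical enough to check programmatically. A minimal sketch in plain JavaScript (independent of the topology DSL — findBypassEdges and the role sets are illustrative names, not part of TopologyBuilder):

```javascript
// Flags edges that connect a client node directly to a backend node,
// bypassing the proxy. The caller supplies which node ids play which role.
function findBypassEdges(edges, { clients, proxies, backends }) {
  return edges.filter(({ from, to }) => {
    const pair = new Set([from, to]);
    const touchesClient  = clients.some((c) => pair.has(c));
    const touchesBackend = backends.some((b) => pair.has(b));
    const touchesProxy   = proxies.some((p) => pair.has(p));
    // A bypass edge links a client and a backend without touching a proxy
    return touchesClient && touchesBackend && !touchesProxy;
  });
}

const roles = { clients: ['client'], proxies: ['cb'], backends: ['service'] };

const good = [
  { from: 'client', to: 'cb' },
  { from: 'cb',     to: 'service' },
];
const bad = [...good, { from: 'service', to: 'client' }]; // the bypass edge

console.log(findBypassEdges(good, roles).length); // 0
console.log(findBypassEdges(bad, roles).length);  // 1
```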

Circuit Breaker

A circuit breaker monitors failure rates and opens the circuit (stops forwarding requests) when a threshold is exceeded. All traffic — requests and responses — must pass through it.
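The open/closed logic itself is small. A concept sketch in plain JavaScript — separate from the topology DSL below, with an illustrative CircuitBreaker class that simplifies "failure rate" to consecutive failures:

```javascript
// Minimal circuit breaker: trips to OPEN after `threshold` consecutive
// failures, then short-circuits to the fallback until reset() is called.
class CircuitBreaker {
  constructor(threshold = 3) {
    this.threshold = threshold;
    this.failures = 0;
    this.state = 'CLOSED';
  }

  call(fn, fallback) {
    if (this.state === 'OPEN') return fallback(); // short-circuit, skip upstream
    try {
      const result = fn();
      this.failures = 0; // a success resets the failure count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.state = 'OPEN';
      return fallback();
    }
  }

  reset() { // e.g. after a successful half-open probe
    this.failures = 0;
    this.state = 'CLOSED';
  }
}

const cb = new CircuitBreaker(2);
const boom   = () => { throw new Error('upstream down'); };
const cached = () => 'cached response';

cb.call(boom, cached); // failure 1 — still CLOSED
cb.call(boom, cached); // failure 2 — trips to OPEN
console.log(cb.state);                      // 'OPEN'
console.log(cb.call(() => 'live', cached)); // 'cached response'
```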

Topology

const topology = new TopologyBuilder(true);
 
const clientGroup = topology.createGroup('clients', { label: 'Clients' });
clientGroup.addProcess({ id: 'client', label: 'Client', state: 'running' }).autoResize();
 
const cbGroup = topology.createGroup('resilience', { label: 'Resilience Layer' });
cbGroup
  .addProcess({ id: 'cb',       label: 'Circuit Breaker', state: 'running' })
  .addProcess({ id: 'cb-state', label: 'CB State: CLOSED', state: 'running' })
  .addProcess({ id: 'fallback', label: 'Fallback Handler', state: 'running' })
  .autoResize();
 
const serviceGroup = topology.createGroup('backend', { label: 'Backend' });
serviceGroup
  .addProcess({ id: 'service', label: 'Upstream Service', state: 'running' })
  .autoResize();
 
// Edges: client → cb → service
// NO direct edge between client and service, and NO reverse edges
topology.connect('client',  'cb',       { protocol: 'http', label: 'request' });
topology.connect('cb',      'service',  { protocol: 'http', label: 'forward' });
topology.connect('cb',      'fallback', { protocol: 'http', label: 'fallback path' });

Animation

const flow = new FlowBuilder();
 
// Scenario 1: Circuit CLOSED — normal request
flow.scenario('closed', 'Circuit Closed', 'Normal request flow');
flow
  .from('client').to('cb')
  .showMessage('[REQUEST] GET /api/data')
  .from('cb').to('service')
  .showMessage('[FORWARD] Circuit CLOSED — forwarding request')
  .from('service').to('cb')
  .showMessage('[200 OK] Response received')
  .from('cb').to('client')
  .showMessage('[200 OK] Delivered to client');
 
// Scenario 2: Circuit OPEN — fallback
flow.scenario('open', 'Circuit Open', 'Upstream unhealthy, fallback engaged');
flow
  .from('client').to('cb')
  .showMessage('[REQUEST] GET /api/data')
  .flashError('service')                      // show upstream as unhealthy
  .from('cb').to('fallback')
  .showMessage('[OPEN] Circuit OPEN — routing to fallback')
  .from('fallback').to('cb')
  .showMessage('[FALLBACK] Returning cached response')
  .from('cb').to('client')
  .showError('[200 OK] Degraded response (from cache)');
 
return flow;

API Gateway

An API Gateway authenticates requests, routes them to the appropriate microservice, and aggregates responses. All traffic — including error responses — flows through the gateway.
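The routing step is essentially a longest-prefix lookup over a route table. A plain-JavaScript sketch (routeRequest and the table shape are illustrative names, not part of the gateway shown below):

```javascript
// Longest-prefix route table: maps a request path to a backend service id.
const routes = [
  { prefix: '/orders',  service: 'orders' },
  { prefix: '/catalog', service: 'catalog' },
  { prefix: '/users',   service: 'users' },
];

function routeRequest(path, table = routes) {
  const match = table
    .filter((r) => path.startsWith(r.prefix))
    .sort((a, b) => b.prefix.length - a.prefix.length)[0]; // longest wins
  return match ? match.service : null; // null → the gateway itself returns 404
}

console.log(routeRequest('/orders/42')); // 'orders'
console.log(routeRequest('/health'));    // null
```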

Topology

const topology = new TopologyBuilder(true);
 
const clientGroup = topology.createGroup('external', { label: 'External' });
clientGroup.addProcess({ id: 'client', label: 'Mobile App', state: 'running' }).autoResize();
 
const gwGroup = topology.createGroup('gateway', { label: 'API Gateway' });
gwGroup
  .addProcess({ id: 'gw',   label: 'API Gateway', state: 'running' })
  .addProcess({ id: 'auth', label: 'Auth Service', state: 'running' })
  .autoResize();
 
const servicesGroup = topology.createGroup('microservices', { label: 'Microservices' });
servicesGroup
  .addProcess({ id: 'orders',  label: 'Orders Service',  state: 'running' })
  .addProcess({ id: 'catalog', label: 'Catalog Service', state: 'running' })
  .addProcess({ id: 'users',   label: 'Users Service',   state: 'running' })
  .autoResize();
 
// Client talks ONLY to gateway
topology.connect('client', 'gw',   { protocol: 'https', label: 'HTTPS' });
// Gateway talks to auth service
topology.connect('gw', 'auth',     { protocol: 'http',  label: 'verify token' });
// Gateway routes to microservices
topology.connect('gw', 'orders',   { protocol: 'http',  label: 'route' });
topology.connect('gw', 'catalog',  { protocol: 'http',  label: 'route' });
topology.connect('gw', 'users',    { protocol: 'http',  label: 'route' });

Common mistake

// WRONG — orders service responds directly to client, bypassing gateway
topology.connect('client',  'gw',     { protocol: 'https' });
topology.connect('gw',      'orders', { protocol: 'http' });
topology.connect('orders',  'client', { protocol: 'http' }); // bypass!
 
// CORRECT — responses go back through the gateway (reverse animation)
topology.connect('client',  'gw',     { protocol: 'https' });
topology.connect('gw',      'orders', { protocol: 'http' });
// Response: orders → gw (reverse) → client (reverse)

Load Balancer

A load balancer distributes incoming connections across multiple backend instances. Each backend has exactly one edge: from the load balancer.

Topology

const topology = new TopologyBuilder(true);
 
// The client must be declared before it can be connected below
const clientGroup = topology.createGroup('clients', { label: 'Clients' });
clientGroup.addProcess({ id: 'client', label: 'Client', state: 'running' }).autoResize();
 
const lbGroup = topology.createGroup('lb', { label: 'Load Balancer' });
lbGroup.addProcess({ id: 'lb', label: 'HAProxy', state: 'running' }).autoResize();
 
const apiGroup = topology.createGroup('api-pool', { label: 'API Pool' });
apiGroup
  .addProcess({ id: 'api-1', label: 'API Pod 1', state: 'running' })
  .addProcess({ id: 'api-2', label: 'API Pod 2', state: 'running' })
  .addProcess({ id: 'api-3', label: 'API Pod 3', state: 'running' })
  .autoResize();
 
topology.connect('client', 'lb',    { protocol: 'http', label: 'HTTP' });
topology.connect('lb',     'api-1', { protocol: 'http', label: 'backend' });
topology.connect('lb',     'api-2', { protocol: 'http', label: 'backend' });
topology.connect('lb',     'api-3', { protocol: 'http', label: 'backend' });

Animation with replica round-robin

Because api-1, api-2, and api-3 share the base id api with a numeric suffix, FlowBuilder treats them as replicas and automatically round-robins across them:

flow.scenario('round-robin', 'Round-Robin Distribution');
flow
  .from('client').to('lb')
  .showMessage('[REQUEST] Incoming HTTP request')
  .from('lb').to('api')        // FlowBuilder picks api-1, api-2, or api-3
  .showMessage('[ROUTED] Load balancer selects backend')
  .from('api').to('lb')        // response back to lb
  .showMessage('[200 OK] Response ready')
  .from('lb').to('client')
  .showMessage('[200 OK] Delivered');
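The replica selection FlowBuilder performs can be pictured as a simple stateful picker. An illustrative plain-JavaScript sketch (makeRoundRobin is a hypothetical name, not a FlowBuilder API):

```javascript
// Round-robin picker over replica ids that share a base name plus a
// numeric suffix — mirroring what happens when an edge targets 'api'.
function makeRoundRobin(replicas) {
  let next = 0;
  return () => replicas[next++ % replicas.length];
}

const pick = makeRoundRobin(['api-1', 'api-2', 'api-3']);
console.log(pick()); // 'api-1'
console.log(pick()); // 'api-2'
console.log(pick()); // 'api-3'
console.log(pick()); // 'api-1' — wraps around
```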

Sidecar (Service Mesh)

In a service mesh, each service pod is paired with a sidecar proxy (e.g., Envoy). The sidecar intercepts all inbound and outbound traffic. Model each service+sidecar pair as a group.

// Assumes an existing topology builder with a mesh ingress node ('mesh-ingress')
const svcGroup = topology.createGroup('service-pod', { label: 'Service Pod' });
svcGroup
  .addProcess({ id: 'envoy',   label: 'Envoy Sidecar', state: 'running' })
  .addProcess({ id: 'service', label: 'App Service',   state: 'running' })
  .autoResize();
 
// Inbound: traffic arrives at the envoy sidecar, which forwards it to the service
topology.connect('mesh-ingress', 'envoy',   { protocol: 'http', label: 'mTLS' });
topology.connect('envoy',        'service', { protocol: 'http', label: 'plain HTTP' });
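The outbound direction follows the same reverse-edge convention: the service hands traffic to its sidecar, which forwards it into the mesh, reusing the two edges above in reverse rather than adding new ones. A sketch, assuming a FlowBuilder instance named flow as in the earlier sections:

```javascript
// Outbound: reuses the inbound edges in reverse — no bypass, no extra edges
flow.scenario('outbound', 'Outbound Call', 'Service calls out through its sidecar');
flow
  .from('service').to('envoy')        // reverse of envoy → service
  .showMessage('[INTERCEPT] Outbound call captured by sidecar')
  .from('envoy').to('mesh-ingress')   // reverse of mesh-ingress → envoy
  .showMessage('[mTLS] Encrypted and forwarded into the mesh');
```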