Reference Physical Realization: Triad-HA Zone

Name

Triad-HA Physical Realization — Canonical host-level realization of one BSFG zone using three hosts, local quorum-backed durability, and active-passive controller failover.

Classification

  • Layer: Physical realization
  • Kind: Reference physical realization
  • Scope: One BSFG zone
  • Derived from: Reference Deployment Pattern: Triad-HA with Keepalived Failover
  • Audience: Infrastructure architects, platform engineers, delivery engineers
  • Purpose: Show where zone components live physically and how they are placed across hosts, storage, and networks

Intent

Defines the canonical host-level embodiment of a single BSFG zone.

This document answers:

  • which physical hosts exist
  • what runs on each host
  • where durable state lives
  • where the VIP lives
  • how storage is attached
  • how failover-related components are placed
  • what physical/network assumptions the deployment pattern relies on

This document is more concrete than the deployment pattern, but still reusable across environments.

Non-Goals

This document does not define:

  • cross-zone federation policy
  • environment-specific IP assignments
  • stream authorization policy
  • cursor initialization policy
  • commissioning procedures
  • operational drills
  • exact cloud or hypervisor implementation details

Those belong in:

  • environment deployment maps
  • federation relationship matrices
  • runbooks
  • checklists

Relationship to Other Documents

This document should be read alongside:

  • Reference Deployment Pattern: Triad-HA with Keepalived Failover
  • Runbook: Triad-HA Zone Deployment
  • Checklist: Triad-HA Commissioning

The deployment pattern defines what must be true.
This document defines how that pattern is physically embodied.

Physical Model Overview

A single zone is realized as three hosts:

  • Alpha
  • Beta
  • Gamma

Alpha and Beta are service-bearing hosts.
Gamma is a non-controller quorum host.

All three hosts participate in JetStream quorum.
Only Alpha and Beta participate in controller failover.

Host Roles

| Host | Role | Keepalived | BSFG Controller | JetStream | Artifact Storage |
|---|---|---|---|---|---|
| Alpha | Primary service-bearing host | Yes | Active when VIP held | Yes | Yes |
| Beta | Secondary service-bearing host | Yes | Standby; promotable | Yes | Yes |
| Gamma | Non-controller quorum host | No | No | Yes | No |

Physical Host Assumptions

Each host is assumed to be:

  • an independent failure domain at host level
  • on stable power and network
  • under zone-local operational ownership
  • reachable only according to approved network policy
  • configured with persistent local storage
  • time-synchronized with other hosts in the zone

Recommended assumption:

  • Alpha, Beta, and Gamma should not share the same single-host hypervisor failure domain unless explicitly accepted

Canonical Host Layout

Alpha

| Component | Placement | Notes |
|---|---|---|
| OS | Local system RAID | Standard host baseline |
| JetStream data | Dedicated NVMe mount | Zone-local durable state |
| Artifact storage | RAID-backed artifact mount | Active artifact location when Alpha is controller host |
| Keepalived | Local service | Competes for VIP ownership |
| BSFG controller | Local service/container | Runs only when promotion gates pass |
| Log agent | Local agent | Ships logs to approved aggregation path |
| Monitoring agent | Local agent | Exposes/ships host and service metrics |

Beta

| Component | Placement | Notes |
|---|---|---|
| OS | Local system RAID | Standard host baseline |
| JetStream data | Dedicated NVMe mount | Zone-local durable state |
| Artifact storage | RAID-backed artifact mount | Active artifact location when Beta is controller host |
| Keepalived | Local service | Competes for VIP ownership |
| BSFG controller | Local service/container | Normally stopped; started by promotion flow |
| Log agent | Local agent | Ships logs to approved aggregation path |
| Monitoring agent | Local agent | Exposes/ships host and service metrics |

Gamma

| Component | Placement | Notes |
|---|---|---|
| OS | Local system disk | Standard host baseline |
| JetStream data | Dedicated NVMe mount | Participates in quorum |
| Artifact storage | None | No controller role |
| Keepalived | None | Not part of VIP failover |
| BSFG controller | None | Never runs here |
| Log agent | Local agent | Ships logs to approved aggregation path |
| Monitoring agent | Local agent | Exposes/ships host and service metrics |

Network Realization

Intra-Zone Network Roles

| Network Function | Participants | Purpose |
|---|---|---|
| VIP failover path | Alpha, Beta | Keepalived coordination and VIP movement |
| JetStream cluster path | Alpha, Beta, Gamma | Route/cluster membership and quorum |
| Local service path | Active controller host | BSFG Connect RPC on zone VIP |
| Management/observability path | All hosts | Logging, metrics, operations access |

Canonical Bindings

| Service | Host(s) | Bind Target |
|---|---|---|
| BSFG Connect RPC | Active host (Alpha or Beta) | Zone VIP |
| Keepalived VRRP | Alpha, Beta | Zone-local failover segment |
| JetStream client port | All hosts | Localhost or restricted local bind |
| JetStream cluster ports | All hosts | Host IPs |
| JetStream monitoring | All hosts | Restricted management exposure |
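
As an illustration of the Keepalived VRRP binding on the zone-local failover segment, a minimal configuration fragment might look like the following. All values here are placeholders, not environment assignments: interface name, router ID, priorities, and the VIP belong in the environment deployment map, and the real configuration would also carry the promotion-gate health checks.

```conf
# Hypothetical keepalived fragment for Alpha (illustrative placeholders only).
vrrp_instance ZONE_VIP {
    # Both hosts start as BACKUP; priority decides who becomes MASTER.
    state BACKUP
    # Placeholder interface on the shared zone-local failover segment.
    interface eth0
    # Placeholder; must match on Alpha and Beta.
    virtual_router_id 51
    # Beta would carry a lower priority, e.g. 100.
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass CHANGEME
    }
    virtual_ipaddress {
        # Placeholder zone VIP (TEST-NET-1 documentation address).
        192.0.2.10/24
    }
}
```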

Network Assumptions

  • Alpha and Beta must share a network segment that allows VIP failover coordination
  • Alpha, Beta, and Gamma must have stable reachability for JetStream quorum
  • management access should be logically separated from cross-zone application traffic where possible
  • the VIP is a zone-local service identity, not a cross-zone durability mechanism

Storage Realization

Storage Classes

| Storage Class | Hosts | Purpose |
|---|---|---|
| System storage | Alpha, Beta, Gamma | OS and base services |
| JetStream storage | Alpha, Beta, Gamma | Durable message and state persistence |
| Artifact storage | Alpha, Beta | Local artifact availability for active controller host |

Canonical Mount Intent

| Mount | Hosts | Function |
|---|---|---|
| /data/jetstream | Alpha, Beta, Gamma | JetStream durable data |
| /artifacts | Alpha, Beta | Artifact storage for active controller host |
| /opt/bsfg | All hosts | Config, scripts, deployment artifacts |
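
The per-role mount intent can be expressed as a pre-start check. The mount paths below come from this document; the role-to-mount mapping and function name are an illustrative sketch, not a shipped BSFG tool.

```python
# Sketch: verify the canonical mounts for a host role are all present.
# Paths are from the Canonical Mount Intent table; everything else is
# a hypothetical helper for illustration.
REQUIRED_MOUNTS = {
    "alpha": ["/data/jetstream", "/artifacts", "/opt/bsfg"],
    "beta":  ["/data/jetstream", "/artifacts", "/opt/bsfg"],
    "gamma": ["/data/jetstream", "/opt/bsfg"],  # no artifact mount on Gamma
}

def missing_mounts(role: str, present: list[str]) -> list[str]:
    """Return required mounts for `role` that are absent from `present`."""
    present_set = set(present)
    return [m for m in REQUIRED_MOUNTS[role] if m not in present_set]

# Example: Beta reporting only the JetStream and deployment mounts
# would be flagged as missing /artifacts.
```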

Storage Principles Preserved

  • no shared storage between hosts
  • no SAN/NFS dependency required for correctness
  • artifact availability is local to the current active service-bearing host
  • artifact recovery after failover depends on prior replication or rehydrate policy, not shared block storage

Service Placement Model

Service Distribution

| Service | Alpha | Beta | Gamma |
|---|---|---|---|
| JetStream | Yes | Yes | Yes |
| Keepalived | Yes | Yes | No |
| BSFG controller | Conditional | Conditional | No |
| Log shipping | Yes | Yes | Yes |
| Metrics/monitoring | Yes | Yes | Yes |

Controller Placement Rule

The BSFG controller may run only on Alpha or Beta, and only when:

  • the host holds the VIP
  • local JetStream is reachable
  • quorum is available
  • artifact storage is mounted
  • local certificate-validity policy is satisfied
  • controller bind to VIP succeeds

Gamma is structurally incapable of becoming the active controller in the reference realization.
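
The placement rule above is a pure conjunction of gates plus a host restriction, which can be sketched as follows. The gate names mirror the bullets; the dataclass and function are hypothetical, not the actual BSFG promotion implementation.

```python
# Sketch of the controller placement rule: only Alpha/Beta qualify,
# and only when every promotion gate passes.
from dataclasses import dataclass

@dataclass
class GateState:
    holds_vip: bool            # host currently holds the zone VIP
    jetstream_reachable: bool  # local JetStream responds
    quorum_available: bool     # cluster quorum is intact
    artifacts_mounted: bool    # /artifacts mount requirement met
    certs_valid: bool          # local certificate-validity policy satisfied
    vip_bind_ok: bool          # controller bind to the VIP succeeded

def may_run_controller(host: str, gates: GateState) -> bool:
    """Controller may run only on Alpha or Beta, and only if all gates pass."""
    if host not in ("alpha", "beta"):  # Gamma (or anything else) never qualifies
        return False
    return all((
        gates.holds_vip,
        gates.jetstream_reachable,
        gates.quorum_available,
        gates.artifacts_mounted,
        gates.certs_valid,
        gates.vip_bind_ok,
    ))
```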

Canonical Runtime Substrate

The canonical runtime substrate is:

  • host OS
  • systemd for service lifecycle and resource control
  • Docker Compose or equivalent host-local container runner for packaged services
  • Keepalived for VIP failover on Alpha/Beta
  • local filesystems and RAID/NVMe mounts for durable state

This document does not require Kubernetes or cluster schedulers.
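
Under this substrate, the controller's lifecycle would typically be expressed as a systemd unit. The fragment below is a hypothetical sketch: the unit name, binary path, and resource limits are placeholders, and the unit is deliberately not enabled for boot-time start, since the promotion flow starts it.

```ini
# Hypothetical unit sketch for the BSFG controller (placeholder values).
[Unit]
Description=BSFG zone controller (started by promotion flow)
After=network-online.target
Wants=network-online.target
# Refuse to start unless the durable mounts from this document are present.
RequiresMountsFor=/artifacts /data/jetstream

[Service]
Type=simple
# Placeholder binary path under the /opt/bsfg deployment mount.
ExecStart=/opt/bsfg/bin/bsfg-controller
Restart=on-failure
# Resource control via systemd, per the substrate description (placeholders).
MemoryMax=4G
CPUQuota=200%
```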

Canonical Physical Diagram

```mermaid
flowchart LR
  subgraph ALPHA["Alpha Host"]
    A_OS["OS + systemd"]
    A_JS["JetStream"]
    A_KA["Keepalived"]
    A_BSFG["BSFG Controller (conditional)"]
    A_ART["/artifacts"]
    A_NVME["/data/jetstream"]
  end

  subgraph BETA["Beta Host"]
    B_OS["OS + systemd"]
    B_JS["JetStream"]
    B_KA["Keepalived"]
    B_BSFG["BSFG Controller (conditional)"]
    B_ART["/artifacts"]
    B_NVME["/data/jetstream"]
  end

  subgraph GAMMA["Gamma Host"]
    G_OS["OS + systemd"]
    G_JS["JetStream"]
    G_NVME["/data/jetstream"]
  end

  A_KA <-->|VIP failover| B_KA
  A_JS <-->|cluster/quorum| B_JS
  B_JS <-->|cluster/quorum| G_JS
  A_JS <-->|cluster/quorum| G_JS
```

Failure-Domain Assumptions

This realization assumes:

  • Alpha may fail independently
  • Beta may fail independently
  • Gamma may fail independently
  • any single host failure is tolerated by design, subject to durability tier and active controller placement
  • simultaneous Alpha+Beta loss renders the zone unavailable
  • loss of Gamma alone does not remove quorum if Alpha and Beta remain healthy
  • storage failure on artifact-bearing active host may block promotion if artifact mount requirements are not met
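
The single-host-loss tolerance follows from majority (Raft-style) quorum arithmetic, which a small sketch makes concrete:

```python
# Why three JetStream hosts tolerate one host loss: quorum requires
# a strict majority of the cluster.
def quorum_size(nodes: int) -> int:
    """Smallest strict majority of `nodes`."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """Hosts that can fail while a majority remains."""
    return nodes - quorum_size(nodes)

# Three hosts (Alpha, Beta, Gamma): quorum is 2, so any single host,
# including Gamma, can be lost without losing quorum; losing two
# (e.g. Alpha + Beta) loses both quorum and the controller role.
```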

Physical Security and Operations Assumptions

  • host access is controlled by zone-local operations policy
  • certificate material is stored only on authorized hosts
  • private keys are not shared outside approved host scope
  • logs remain zone-local first, then forwarded
  • time synchronization is mandatory for TLS validity and operational correlation
  • observability agents run on all hosts, not just the active controller host

Observability Placement

| Capability | Alpha | Beta | Gamma | Notes |
|---|---|---|---|---|
| Host metrics | Yes | Yes | Yes | CPU, memory, disk, network |
| JetStream metrics | Yes | Yes | Yes | Quorum, cluster health, lag |
| Keepalived state | Yes | Yes | No | MASTER/BACKUP state only on service-bearing hosts |
| BSFG controller health | Conditional | Conditional | No | Only where controller active |
| Log shipping | Yes | Yes | Yes | Zone-local first, forwarded second |

What Makes This a Reference Physical Realization

This realization is “reference” because it standardizes:

  • three-host host layout
  • two service-bearing hosts plus one quorum host
  • VIP only on Alpha/Beta
  • controller never on Gamma
  • JetStream on all three hosts
  • artifacts only on service-bearing hosts
  • no shared storage
  • host-based failover rather than orchestration-plane failover

Environment-specific docs should instantiate this realization, not redefine it.

Environment-Specific Derivation

An environment-specific deployment map derived from this realization must provide:

  • actual hostnames
  • actual IPs
  • actual VIP
  • actual certificate identities
  • owner assignment
  • retention specifics
  • storage capacity specifics
  • exact zone membership in the estate
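
One way to capture these items is a small structured skeleton that the environment map fills in. The shape below is a hypothetical illustration, not a mandated schema; keys mirror the required items above, and every value is a placeholder.

```yaml
# Hypothetical environment deployment map skeleton (all values placeholders).
zone: <zone-id-in-estate>
owner: <team-or-role>
vip: <zone-vip>
hosts:
  alpha: { hostname: <fqdn>, ip: <addr>, cert_identity: <subject> }
  beta:  { hostname: <fqdn>, ip: <addr>, cert_identity: <subject> }
  gamma: { hostname: <fqdn>, ip: <addr>, cert_identity: <subject> }
retention: <stream-retention-specifics>
storage:
  jetstream_capacity: <size>
  artifact_capacity: <size>
```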