## Context
BSFG exists to preserve autonomy across trust, network, and failure boundaries. If all zones share one common broker or one common persistence plane, the boundary collapses at the storage layer even if the application protocol appears separated.
The architecture therefore needs a storage topology that preserves:
- zone-local durability
- survival under partial disconnection
- independent retention and operations
- explicit cross-zone synchronization rather than implicit shared state
## Options Considered
| Option | Description | Benefits | Drawbacks |
|---|---|---|---|
| Single global JetStream estate | All zones publish into one shared JetStream domain or cluster. | Simple global visibility<br>Fewer broker estates to manage | Collapses the trust boundary at the storage layer<br>Shared failure domain<br>Weakened zone autonomy |
| Enterprise-only persistence | Plants and intermediary zones rely on enterprise-side durable storage. | Centralized operations<br>Simpler edge footprint | Weak local durability<br>Plants depend on remote availability<br>Poor fit for partition tolerance |
| Plant-only persistence | Durable logs exist only in plants; the enterprise remains mostly stateless. | Strong edge autonomy<br>Simple enterprise integration surface | Unbalanced architecture<br>Weak enterprise-side evidence retention<br>Awkward downstream replay |
| Zone-local log domain per zone (Selected) | Each zone owns its own JetStream domain and object storage; BSFG synchronizes facts across zones. | Preserves trust and failure boundaries<br>Local durability survives partition<br>Independent retention and lifecycle policy<br>Cross-zone movement remains explicit | More estates to operate<br>Replication semantics must be designed explicitly |
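To make the selected option concrete: a zone-local JetStream domain can be declared directly in the `nats-server` configuration. The sketch below is illustrative only; the server name, storage path, size limit, and leafnode URL are placeholder assumptions, not values from this decision.

```conf
# nats-server config for a hypothetical PlantA zone.
# Each zone runs its own server (or cluster) with a distinct JetStream
# domain, so streams and object stores are owned and persisted locally.
server_name: plant-a-1

jetstream {
  domain: "plant-a"             # zone-local JetStream domain
  store_dir: "/data/jetstream"  # zone-local durable storage
  max_file_store: 50GB          # retention sized per zone policy
}

# Leafnode link toward the IDMZ hub: connectivity across the boundary
# without sharing a storage plane.
leafnodes {
  remotes [
    { url: "nats-leaf://idmz.example.internal:7422" }
  ]
}
```

With distinct domains, a stream created in `plant-a` is invisible to and independent of streams in other zones; anything that crosses zones must be moved deliberately.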
## Decision
Each BSFG zone will own its own local persistence domain:
zone = BSFG service + JetStream domain + Object Store
Zones include, for example:
- Enterprise
- IDMZ
- PlantA
- PlantB
Cross-zone transfer occurs only through BSFG protocol operations:
- `AppendFact`
- `FetchFacts`
- `ConfirmReceipt`
- `PutObject`
No zone writes directly into another zone’s local durable log. Cross-zone synchronization is explicit and replayable.
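The flow implied by these operations can be sketched with an in-memory model. This is not the BSFG implementation; the `Zone` type and its fields are hypothetical stand-ins for a zone's JetStream domain and object store, used only to show that cross-zone movement is pull-based, explicit, and replayable.

```go
package main

import "fmt"

// Fact is an immutable record appended to a zone's local log.
type Fact struct {
	Seq     uint64
	Payload string
}

// Zone is an in-memory stand-in for "BSFG service + JetStream domain + Object Store".
type Zone struct {
	Name      string
	log       []Fact            // stand-in for the zone-local durable log
	objects   map[string][]byte // stand-in for the zone-local object store
	confirmed map[string]uint64 // highest sequence each peer zone has confirmed
}

func NewZone(name string) *Zone {
	return &Zone{Name: name, objects: map[string][]byte{}, confirmed: map[string]uint64{}}
}

// AppendFact writes only to this zone's own log; no remote zone is touched.
func (z *Zone) AppendFact(payload string) Fact {
	f := Fact{Seq: uint64(len(z.log)) + 1, Payload: payload}
	z.log = append(z.log, f)
	return f
}

// FetchFacts returns all facts after a given sequence, so a peer can
// replay from any point after a partition heals.
func (z *Zone) FetchFacts(afterSeq uint64) []Fact {
	if afterSeq >= uint64(len(z.log)) {
		return nil
	}
	return z.log[afterSeq:]
}

// ConfirmReceipt records how far a peer has durably received our facts.
func (z *Zone) ConfirmReceipt(peer string, seq uint64) {
	if seq > z.confirmed[peer] {
		z.confirmed[peer] = seq
	}
}

// PutObject stores a blob in this zone's local object store.
func (z *Zone) PutObject(key string, data []byte) { z.objects[key] = data }

func main() {
	plantA := NewZone("PlantA")
	enterprise := NewZone("Enterprise")

	// PlantA appends locally; Enterprise's log is untouched.
	plantA.AppendFact("batch-started")
	plantA.AppendFact("batch-completed")

	// Explicit sync: Enterprise pulls, appends into its OWN log, then confirms.
	var last uint64
	for _, f := range plantA.FetchFacts(0) {
		enterprise.AppendFact(f.Payload)
		last = f.Seq
	}
	plantA.ConfirmReceipt("Enterprise", last)

	fmt.Println(len(enterprise.log), plantA.confirmed["Enterprise"])
}
```

Note that the receiving zone appends into its own log rather than writing into the source zone's log, which is exactly the invariant the decision states.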
## Consequences
Benefits:
- zone-local operation can continue during remote outage
- storage topology aligns with trust boundaries
- retention, security, and object lifecycle can differ by zone
- boundary semantics remain visible rather than hidden inside shared infrastructure
Tradeoffs:
- more infrastructure estates must be monitored and upgraded
- replication lag and confirmation state become first-class operational concerns
- global analytics must be built from synchronized facts rather than assumed shared storage
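Because confirmation state is explicit, replication lag becomes directly measurable. A minimal sketch, assuming each zone tracks its log head and the highest sequence each peer has confirmed (the function and parameter names are hypothetical):

```go
package main

import "fmt"

// replicationLag derives per-peer lag from the local log head sequence and
// the highest sequence each peer has confirmed. Assumes confirmed sequences
// never exceed the head (confirmations refer to facts that exist).
func replicationLag(headSeq uint64, confirmed map[string]uint64) map[string]uint64 {
	lag := make(map[string]uint64, len(confirmed))
	for peer, seq := range confirmed {
		lag[peer] = headSeq - seq
	}
	return lag
}

func main() {
	// PlantA's log head is at sequence 120; Enterprise is caught up,
	// IDMZ has confirmed only through sequence 97.
	lag := replicationLag(120, map[string]uint64{"Enterprise": 120, "IDMZ": 97})
	fmt.Println(lag["Enterprise"], lag["IDMZ"]) // 0 23
}
```

Exporting a number like this per peer is one way to treat replication lag as the first-class operational signal the tradeoffs call for.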