Audience: Integrators, application engineers. Use: Implement emitter-side append, retry, and artifact publication behavior correctly.
Producer Role
A producer is any system that emits facts into BSFG by calling AppendFact (and optionally PutObject for large artifacts).
Producers are responsible for:
- Generating stable, deterministic message IDs
- Managing the two-step write sequence for artifact-bearing facts
- Implementing retry logic with the same message ID
- Understanding that AppendFact confirms only boundary ingress durability, not consumer delivery
Required Call Sequence
The canonical producer sequence is:
1. [OPTIONAL] PutObject(bucket, key, blob)
↓
[Wait for confirmation: {digest, size}]
↓
2. AppendFact({
envelope: {message_id, from_zone, to_zone, produced_at_unix_ms, ...},
fact: {subject, predicate, object_json}
})
↓
[Wait for confirmation: {offset}]
↓
3. On error: retry AppendFact with SAME message_id and SAME payload
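The sequence above can be sketched end to end in Python. `BoundaryStub` is an in-memory stand-in for the real boundary (its method names mirror PutObject/AppendFact but are not the actual client API), so the dedup and retry behavior can be seen without a live system:

```python
import hashlib
import time

class BoundaryStub:
    """In-memory stand-in for the BSFG boundary (illustrative, not the real client)."""
    def __init__(self):
        self.objects = {}
        self.log = {}            # message_id -> offset (putIfAbsent semantics)
        self.next_offset = 0

    def put_object(self, bucket, key, blob):
        self.objects[(bucket, key)] = blob
        return {"digest": "sha256:" + hashlib.sha256(blob).hexdigest(),
                "size": len(blob)}

    def append_fact(self, envelope, fact):
        mid = envelope["message_id"]
        if mid not in self.log:              # first writer wins; retries are no-ops
            self.log[mid] = self.next_offset
            self.next_offset += 1
        return {"offset": self.log[mid]}

def emit_fact(boundary, envelope, fact, artifact=None):
    """Canonical producer sequence: optional upload, then append, then retry."""
    if artifact is not None:
        bucket, key, blob = artifact
        ref = boundary.put_object(bucket, key, blob)   # step 1: wait for durability
        fact["object_json"].update(ref)                # reference digest/size in the fact
    backoff = 0.1
    for attempt in range(3):                           # step 3: retry, SAME id and payload
        try:
            return boundary.append_fact(envelope, fact)  # step 2
        except Exception:
            if attempt == 2:
                raise
            time.sleep(backoff)
            backoff *= 2
```

Because the stub keys the log by `message_id`, calling `emit_fact` twice with the same envelope returns the same offset, which is exactly the retry guarantee described below.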
Step 1: Upload Artifact (if needed)
If the fact references a large artifact, upload it first:
PutObject(
bucket: "batch-files",
key: "order-2026-03-06-001.pdf",
blob: <binary data>
) → {digest: "sha256:...", size: 2048576}
Wait for the operation to complete and durability to be acknowledged. Do not proceed to AppendFact until the artifact is durably stored.
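As a defensive sketch (assuming only the `{digest, size}` confirmation shape shown above), a producer can cross-check the boundary's digest against a locally computed SHA-256 before proceeding; `put_object` here is any callable with PutObject's shape:

```python
import hashlib

def upload_and_verify(put_object, bucket, key, blob):
    """Upload an artifact and confirm the boundary stored exactly these bytes.
    put_object blocks until durability is acknowledged and returns {digest, size}."""
    ref = put_object(bucket, key, blob)
    expected = "sha256:" + hashlib.sha256(blob).hexdigest()
    if ref["digest"] != expected or ref["size"] != len(blob):
        # Mismatch means the stored artifact is not what we intended to reference.
        raise RuntimeError("digest/size mismatch; do not append a fact for this artifact")
    return ref
```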
Step 2: Append Fact
Create and append the fact with the message ID and optional artifact reference:
AppendFact({
envelope: {
message_id: "<stable-id>",
from_zone: "Plant A",
to_zone: "Enterprise",
produced_at_unix_ms: 1741248600000,
correlation_id: "order:12345",
labels: {"priority": "high"}
},
fact: {
subject: "work_order:WO-2026-001",
predicate: "has_batch_attachment",
object_json: {
"bucket": "batch-files",
"key": "order-2026-03-06-001.pdf",
"digest": "sha256:...",
"size": 2048576,
"media_type": "application/pdf",
"file_name": "batch_order.pdf"
}
}
})
→ Confirmation: {offset: 42}
Step 3: Retry on Failure
If AppendFact times out or fails:
retry_count = 0
max_retries = 3
backoff = 100ms
while (retry_count < max_retries) {
  try {
    // Retry with SAME message_id and SAME payload
    result = AppendFact(same_message_id, same_payload)
    break  // success
  } catch (error) {
    retry_count++
    if (retry_count == max_retries) {
      throw error  // give up after max retries
    }
    wait(backoff)
    backoff *= 2  // exponential backoff: 100ms, 200ms, 400ms, ...
  }
}
Generating Stable Message IDs
The message_id must be deterministically derived from the business event. It is the idempotency key that prevents duplicates.
Good: Deterministic Derivation
- Hash of Business Key:
  message_id = SHA256(event_type + entity_kind + entity_id)
  Example: SHA256("work_order_created" + "WO" + "12345")
- Stable UUID:
  message_id = UUID(namespace, event_key)
  Example: UUID(v5, namespace="plant-a", name="WO:12345:created")
- Domain Function:
  message_id = f(entity_kind, entity_id, event_type, timestamp)
  Example: Concatenate and hash the tuple
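The first two derivations can be sketched in Python; the `"plant-a"` namespace string is illustrative, not a required value:

```python
import hashlib
import uuid

def id_from_business_key(event_type, entity_kind, entity_id):
    # Hash of the business key: the same event yields the same ID on every
    # retry and after every restart.
    key = f"{event_type}:{entity_kind}:{entity_id}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# Stable UUIDv5 in a per-zone namespace (namespace choice is illustrative).
ZONE_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "plant-a")

def id_from_event_key(event_key):
    # uuid5 is deterministic: same namespace + name always yields the same UUID.
    return str(uuid.uuid5(ZONE_NAMESPACE, event_key))
```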
Bad: Non-Deterministic IDs
- Random UUID:
  message_id = UUID.random()
  ❌ Each call generates a different ID; retry creates duplicates
- Wall-Clock Timestamp:
  message_id = System.currentTimeMillis()
  ❌ Restarts or clock adjustments change the ID
- Sequence Counter:
  message_id = ++counter
  ❌ Counter resets on restart; duplicate IDs possible
- Loosely Coupled Hash:
  message_id = SHA256(object_json)
  ❌ If object_json changes (e.g., a timestamp field), the ID changes; duplicates possible
Retry Safety
Retrying with the same message_id and payload is safe because:
- The forward buffer (IFB/EFB) uses putIfAbsent(message_id, payload)
- If the ID already exists, the insertion is rejected
- The boundary returns the same confirmation (offset) for repeated attempts
Example:
Attempt 1: AppendFact(message_id="X", payload="P") → offset: 100
Attempt 2: Network timeout, retry
Attempt 2: AppendFact(message_id="X", payload="P") → offset: 100 (same)
Attempt 3: AppendFact(message_id="X", payload="P") → offset: 100 (same)
Result: ONE fact at offset 100, not three.
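The putIfAbsent behavior in the trace above can be modelled with a small in-memory sketch (offsets start at 100 only to mirror the example; this is a toy model, not the real buffer):

```python
class ForwardBuffer:
    """Toy model of the IFB/EFB dedup behavior."""
    def __init__(self, first_offset=100):
        self.entries = {}                 # message_id -> (payload, offset)
        self.next_offset = first_offset

    def append(self, message_id, payload):
        if message_id not in self.entries:        # putIfAbsent: first writer wins
            self.entries[message_id] = (payload, self.next_offset)
            self.next_offset += 1
        return self.entries[message_id][1]        # identical offset on every retry
```

Three calls with the same ID store one entry and return one offset, which is why retrying a timed-out AppendFact cannot create duplicates.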
Artifact Obligations
If a fact references an artifact:
Before Appending the Fact
- Call PutObject and wait for durability confirmation
- If PutObject fails, do not append the fact
- If PutObject succeeds, the artifact is durable and immutable
After Appending the Fact
- Do not modify or delete the artifact
- If a correction is needed, upload a new artifact with a new key/digest
- Emit a new fact referencing the new artifact
- Optionally emit a correction fact linking the old and new artifacts
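The correction flow can be sketched as a pure fact-building step. The `supersedes_attachment` predicate and the helper itself are illustrative assumptions, not part of the BSFG surface; only the artifact-reference fields match the example above:

```python
import hashlib

def build_correction(subject, old_ref, new_key, new_blob):
    """Return a replacement fact plus a correction fact linking old and new
    artifacts. The old object is never modified or deleted."""
    new_ref = {
        "bucket": old_ref["bucket"],
        "key": new_key,                     # new key: never overwrite the old object
        "digest": "sha256:" + hashlib.sha256(new_blob).hexdigest(),
        "size": len(new_blob),
    }
    replacement = {"subject": subject,
                   "predicate": "has_batch_attachment",
                   "object_json": new_ref}
    correction = {"subject": subject,
                  "predicate": "supersedes_attachment",   # illustrative predicate
                  "object_json": {"old_digest": old_ref["digest"],
                                  "new_digest": new_ref["digest"]}}
    return replacement, correction
```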
Artifact Existence Guarantee
Appending a fact with an artifact reference is a producer guarantee that the artifact exists and is accessible. If the consumer later tries to retrieve the artifact and it is missing, that is a producer defect.
What AppendFact Confirms
AppendFact confirmation means:
- ✅ The fact was durably written to the store buffer (ISB or ESB)
- ✅ The offset is assigned and stable (a retry with the same message_id returns the same offset)
It does NOT mean:
- ❌ The fact was delivered to the forward buffer (IFB or EFB)
- ❌ The fact was delivered to any consumer
- ❌ The fact was processed or accepted by any consumer
- ❌ Business confirmation or acceptance occurred
Handling Producer Errors
Network Timeouts
If AppendFact times out (no response from boundary), retry with the same message ID and payload.
Application Crashes
If the producer crashes after AppendFact succeeds but before the application can record success:
- On restart, check if the message_id was already appended (query the boundary log)
- If found, skip re-appending
- If not found, retry the AppendFact call
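A restart-recovery sketch, assuming the boundary log can be queried by message_id; the `lookup` callable is hypothetical, standing in for whatever query mechanism the boundary exposes:

```python
def recover_and_append(lookup, append_fact, envelope, fact):
    """lookup(message_id) -> offset or None (hypothetical boundary-log query);
    append_fact has AppendFact's shape. Skip the append if the ID is already durable."""
    offset = lookup(envelope["message_id"])
    if offset is not None:
        return {"offset": offset}          # appended before the crash; do not re-append
    return append_fact(envelope, fact)     # not found; safe to retry the append
```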
Artifact Upload Failure
If PutObject fails:
- Do not proceed to AppendFact
- Retry PutObject with the same bucket, key, and blob
- If PutObject eventually succeeds, proceed with AppendFact
Performance Characteristics
Under normal operation with a healthy boundary:
- PutObject latency: 10–500ms (depending on artifact size and network)
- AppendFact latency: 1–10ms (local durability, no remote wait)
- Total E2E latency (artifact + fact): 10–510ms
Producers should not wait for consumer processing. Consumer latency is decoupled and may be much longer (hours or days in autonomous mode).