RFC-046: Consolidated Pattern Protocols with Backend Slot Requirements
Abstract
This RFC proposes consolidating each pattern's protocol into a single, complete gRPC service that presents a mandatory, simple interface to clients. Currently, patterns expose multiple backend-level services (e.g., KeyValue exposes 4 separate services: Basic, Batch, Scan, TTL), forcing clients to understand backend capabilities and handle optional operations. The proposed design eliminates client complexity by requiring patterns to satisfy their complete interface through backend slot requirements, emulation, or explicit configuration-time errors.
Key Benefits:
- Zero Client Complexity: Clients see complete, mandatory interfaces - no optional operations, no capability checks
- Configuration-Time Validation: Backend compatibility verified at pattern instantiation, not runtime
- Backend Slot Schema: Patterns declare required/optional backend capabilities through slot requirements
- Pattern-Level Abstraction: Patterns hide backend limitations through emulation or fallback slots
- Simple Client Code: No GetCapabilities(), no feature detection, just use the interface
Motivation
Current Architecture Problems
Problem 1: Protocol Leakage
The KeyValue pattern currently exposes 4 separate gRPC services directly to the proxy:
Current: Client must handle 4 separate services
┌─────────────────────────────────────────────────┐
│ KeyValue Pattern │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ BasicService │ │ TTLService │ ← 4 gRPC │
│ └──────────────┘ └──────────────┘ services│
│ ┌──────────────┐ ┌──────────────┐ │
│ │ ScanService │ │ BatchService │ ← Optional│
│ └──────────────┘ └──────────────┘ may not │
│ exist │
└─────────────────────────────────────────────────┘
Client code must:
- Import 4 different proto packages
- Check if ScanService exists before calling Scan()
- Handle "method not found" errors gracefully
- Understand backend capability variations
Problem 2: No Pattern-Level Abstraction
Multicast Registry pattern has sophisticated business logic (Register → Enumerate → Multicast with 3 backend slots) but no proto definition:
// patterns/multicast_registry/slots.go
// These are Go interfaces, not proto services!
type RegistryBackend interface {
Set(ctx context.Context, identity string, metadata map[string]interface{}, ttl time.Duration) error
Scan(ctx context.Context) ([]*Identity, error)
Enumerate(ctx context.Context, filter *Filter) ([]*Identity, error)
}
type MessagingBackend interface {
Publish(ctx context.Context, topic string, payload []byte) error
}
type DurabilityBackend interface { // Optional slot
Enqueue(ctx context.Context, queue string, payload []byte) error
}
Problems:
- Pattern business logic invisible in protocol layer
- Clients can't discover Multicast Registry via gRPC reflection
- No versioning for pattern operations
- Can't generate client SDKs from proto
Problem 3: Ad-Hoc Capability Detection
// patterns/keyvalue/grpc_server.go
if kv.SupportsScan() { // Runtime check!
scanService := &KeyValueScanService{kv: kv}
pb_kv.RegisterKeyValueScanInterfaceServer(grpcServer, scanService)
}
Issues:
- Capability check happens at server startup, not discoverable by clients
- No formal capability declaration mechanism
- Clients must use trial-and-error to discover features
- Error handling is application-specific
Problem 4: Backend Capability Gaps
Different backends support different operations:
| Backend | Basic | Batch | Scan | TTL | Transactions |
|---|---|---|---|---|---|
| Redis | ✅ | ✅ | ✅ | ✅ | ✅ |
| Postgres | ✅ | ✅ | ✅ | ❌ | ✅ |
| MemStore | ✅ | ❌ | ✅ | ✅ | ❌ |
| S3/MinIO | ✅ | ❌ | ❌ | ❌ | ❌ |
| DynamoDB | ✅ | ✅ | ❌ | ✅ | ✅ |
Current Approach: Expose optional services, let clients handle variations → Client complexity
Proposed Approach: Pattern satisfies complete interface OR fails at configuration time → Zero client complexity
Goals
- Mandatory Complete Interfaces: Every operation in pattern protocol MUST be available to clients
- Configuration-Time Validation: Backend compatibility checked at namespace creation, not runtime
- Backend Slot Requirements: Patterns declare what they need from backends through slot schemas
- Pattern-Level Emulation: Patterns implement missing backend features internally
- Zero Client Capability Checks: Clients never call GetCapabilities() or check for optional features
- Single Pattern Protocol: Each pattern exposes ONE consolidated gRPC service
- Clear Configuration Errors: If backend can't satisfy pattern interface, fail at config time with actionable message
Non-Goals
- Exposing Backend Capabilities to Clients: Clients don't see or care about backend limitations
- Optional Operations: All pattern operations are mandatory from client perspective
- Runtime Feature Detection: Capability checking happens at configuration/instantiation only
- Client-Side Emulation: Patterns handle all backend variations internally
- Dynamic Capability Changes: Pattern interface is fixed per namespace configuration
Design Principles
Principle 1: Patterns Are Semantic, Backends Are Mechanical
Pattern protocols expose "what you want to do":
- KeyValue: Store(), Retrieve(), Scan(), SetExpiration()
- Multicast Registry: Register(), Enumerate(), Multicast()
- Session Store: CreateSession(), GetSession(), ExtendSession()
Backend protocols expose "how to do it":
- Redis: SET, GET, SCAN, EXPIRE
- Kafka: Produce, Consume, CreateTopic
- NATS: Publish, Subscribe, Request
Client code operates at pattern level, never sees backends.
Principle 2: Complete Interfaces Are Mandatory
Every pattern presents a complete, mandatory interface:
service KeyValuePattern {
// ALL operations are mandatory and always available
// No GetCapabilities(), no optional operations
rpc Store(StoreRequest) returns (StoreResponse);
rpc Retrieve(RetrieveRequest) returns (RetrieveResponse);
rpc Delete(DeleteRequest) returns (DeleteResponse);
// These MUST work, even if backend doesn't natively support them
rpc Scan(ScanRequest) returns (stream ScanResponse);
rpc SetExpiration(SetExpirationRequest) returns (SetExpirationResponse);
rpc BatchStore(BatchStoreRequest) returns (BatchStoreResponse);
}
Client perspective: Every operation exists and works. No capability checks, no optional features.
Pattern responsibility: Satisfy complete interface through:
- Native backend support (best case)
- Emulation using other backend operations
- Secondary slot backend (e.g., Redis for scan when primary is S3)
- Configuration-time error with clear message: "S3 backend requires scan_slot for Scan() operation"
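From the client's side this means ordinary gRPC calls with no feature probing. The following minimal sketch in Go is illustrative only; the generated package path and stub names (pb.NewKeyValuePatternClient and friends) are assumptions based on standard protoc-gen-go-grpc naming, not existing code.
// Hypothetical client-side sketch: call the complete interface directly.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/prism/gen/prism/patterns/keyvalue" // hypothetical generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewKeyValuePatternClient(conn)

	// No GetCapabilities(), no feature flags: if the namespace was configured,
	// every operation in the pattern protocol is guaranteed to work.
	if _, err := client.Store(context.Background(), &pb.StoreRequest{Key: "user:123", Value: []byte("data")}); err != nil {
		log.Fatal(err)
	}

	stream, err := client.Scan(context.Background(), &pb.ScanRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for {
		page, err := stream.Recv()
		if err != nil {
			break // io.EOF once the scan stream completes
		}
		log.Println(page.Keys)
	}
}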
Principle 3: Backend Limitations Handled Internally
When backend doesn't natively support an operation, pattern handles it internally (clients never see this):
- Native Support (ideal): Backend provides the operation
- Example: Redis natively supports Scan → direct passthrough
- Emulation (acceptable): Pattern implements using other backend operations
- Example: Batch operations emulated as sequential calls for simple backends
- Fallback Slot (acceptable): Use secondary backend for missing capability
- Example: S3 for storage + Redis slot for Scan operations
- Configuration Error (explicit): Refuse to instantiate pattern if requirements not met
- Example: "KeyValue pattern requires backend with Scan support OR scan_slot configuration"
Client impact: None. Client code is identical regardless of which strategy pattern uses.
Never: Expose optional operations to clients, return "not supported" errors at runtime, or silently degrade behavior.
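As a concrete illustration of the first two strategies, the sketch below shows how a pattern might use a backend's native batch call when available and fall back to sequential emulation otherwise. The types (kvStore, nativeBatcher) are hypothetical and stand in for whatever backend driver interfaces the pattern actually binds.
// patterns/keyvalue/batch_strategy.go (hypothetical sketch)
package keyvalue

import "context"

// kvStore is the minimal operation set every primary backend provides.
type kvStore interface {
	Store(ctx context.Context, key string, value []byte) error
}

// nativeBatcher is implemented only by backends with true batch support (e.g. Redis MSET).
type nativeBatcher interface {
	StoreBatch(ctx context.Context, keys []string, values [][]byte) error
}

// batchStore always succeeds from the client's perspective: it takes the
// native path when the backend offers one and otherwise emulates the batch
// as sequential writes. Clients cannot observe which path was taken.
// keys and values are assumed to be the same length.
func batchStore(ctx context.Context, backend kvStore, keys []string, values [][]byte) error {
	if nb, ok := backend.(nativeBatcher); ok {
		return nb.StoreBatch(ctx, keys, values) // native path
	}
	for i, key := range keys { // emulated path: sequential writes
		if err := backend.Store(ctx, key, values[i]); err != nil {
			return err
		}
	}
	return nil
}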
Principle 4: Backend Slot Requirements Drive Validation
Patterns declare what they need from backends through slot schemas (pattern-internal, not exposed to clients):
# Example: KeyValue pattern with complete interface support
namespaces:
- name: user-sessions
pattern: keyvalue
pattern_version: v1
slots:
primary:
backend: redis
interfaces:
- KeyValueBasicInterface # Store, Retrieve, Delete
- KeyValueScanInterface # Scan operations
- KeyValueTTLInterface # Expiration
- KeyValueBatchInterface # Batch operations
config:
address: localhost:6379
Configuration-time validation (pattern checks slot requirements):
✅ Redis → Provides all required interfaces → Pattern instantiates
❌ S3 → Missing KeyValueScanInterface → Configuration error:
"KeyValue pattern requires KeyValueScanInterface for Scan() operations.
Backend 's3' does not implement this interface.
Options: Use Redis, Postgres, or MemStore backends."
Alternative: Multi-slot pattern (for backends with gaps):
slots:
primary:
backend: s3 # Storage only
interfaces:
- KeyValueBasicInterface
config:
bucket: my-data
scan_slot: # Scan optimization backend
backend: redis
interfaces:
- KeyValueScanInterface
config:
address: localhost:6379
Client code: Unchanged. Clients don't see slot configuration, just use the complete interface.
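A minimal sketch of the configuration-time check follows. SlotConfig, validateSlots, and the error wording are assumptions for illustration, not an existing API; the point is that the failure happens at namespace creation with an actionable message, never at request time.
// patterns/keyvalue/validate.go (hypothetical sketch)
package keyvalue

import "fmt"

// SlotConfig mirrors the namespace YAML: which backend fills a slot and
// which Layer 1 interfaces that backend declares.
type SlotConfig struct {
	Backend    string
	Interfaces []string
}

// validateSlots runs at pattern instantiation. It fails when the complete
// KeyValue interface cannot be satisfied by the primary slot plus any
// configured fallback slots.
func validateSlots(primary SlotConfig, scanSlot *SlotConfig) error {
	required := []string{"KeyValueBasicInterface", "KeyValueScanInterface"}
	for _, iface := range required {
		if has(primary.Interfaces, iface) {
			continue
		}
		// Scan can be satisfied by a dedicated scan_slot instead of the primary.
		if iface == "KeyValueScanInterface" && scanSlot != nil && has(scanSlot.Interfaces, iface) {
			continue
		}
		return fmt.Errorf(
			"KeyValue pattern requires %s for its mandatory interface; backend %q does not implement it "+
				"(options: choose a backend that does, or configure a scan_slot)", iface, primary.Backend)
	}
	return nil
}

func has(list []string, want string) bool {
	for _, v := range list {
		if v == want {
			return true
		}
	}
	return false
}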
Consolidated Pattern Protocols
Pattern 1: KeyValue Pattern
Current State: 4 separate services (Basic, Batch, Scan, TTL)
Proposed: Single consolidated service with semantic operations
// proto/prism/patterns/keyvalue/keyvalue_pattern.proto
syntax = "proto3";
package prism.patterns.keyvalue;
import "prism/common/types.proto";
// KeyValuePattern provides semantic key-value storage operations.
// This is the ONLY service exposed to the proxy for KeyValue patterns.
// Backend details (Redis, Postgres, S3) are hidden behind this interface.
//
// IMPORTANT: ALL operations are mandatory and must work.
// Pattern handles backend limitations through emulation or slot configuration.
// Clients NEVER check capabilities or handle missing operations.
service KeyValuePattern {
// === Core Operations ===
// Store a value (semantic operation, not backend-specific SET)
rpc Store(StoreRequest) returns (StoreResponse);
// Retrieve a value (semantic operation, not backend-specific GET)
rpc Retrieve(RetrieveRequest) returns (RetrieveResponse);
// Remove a value (semantic operation, not backend-specific DELETE)
rpc Remove(RemoveRequest) returns (RemoveResponse);
// Check if value exists
rpc Exists(ExistsRequest) returns (ExistsResponse);
// === Batch Operations ===
// MUST work (emulated as sequential if backend lacks batch support)
// Store multiple values
rpc StoreBatch(StoreBatchRequest) returns (StoreBatchResponse);
// Retrieve multiple values
rpc RetrieveBatch(RetrieveBatchRequest) returns (RetrieveBatchResponse);
// === Scan Operations ===
// MUST work (use scan_slot backend if primary lacks scan)
// Scan keys with pagination (streaming for large result sets)
rpc Scan(ScanRequest) returns (stream ScanResponse);
// List keys matching prefix (non-streaming for small sets)
rpc ListKeys(ListKeysRequest) returns (ListKeysResponse);
// Count keys matching prefix
rpc Count(CountRequest) returns (CountResponse);
// === Expiration Operations ===
// MUST work (emulated with background cleanup if backend lacks TTL)
// Set or update expiration time for a key
rpc SetExpiration(SetExpirationRequest) returns (SetExpirationResponse);
// Get remaining time-to-live
rpc GetExpiration(GetExpirationRequest) returns (GetExpirationResponse);
// Remove expiration (make key persistent)
rpc ClearExpiration(ClearExpirationRequest) returns (ClearExpirationResponse);
// === Transactional Operations ===
// MUST work (serialized if backend lacks transaction support)
// Execute multiple operations atomically or serially
rpc ExecuteTransaction(TransactionRequest) returns (TransactionResponse);
}
// NO GetCapabilities() RPC - interface is complete and mandatory
// NO Capabilities message - clients don't check capabilities
// NO optional operations - everything works or pattern fails at config time
enum BatchPerformance {
BATCH_NATIVE = 0; // Backend has native batch operations (Redis MGET/MSET)
BATCH_PIPELINE = 1; // Pattern pipelines individual operations
BATCH_UNAVAILABLE = 2;// Batch not meaningful for this backend
}
// === Request/Response Messages ===
message StoreRequest {
string key = 1;
bytes value = 2;
optional int64 expiration_seconds = 3; // Inline expiration for convenience
prism.common.Tags tags = 4;
}
message StoreResponse {
bool success = 1;
optional string error = 2;
}
message RetrieveRequest {
string key = 1;
}
message RetrieveResponse {
bool found = 1;
optional bytes value = 2;
optional string error = 3;
}
message RemoveRequest {
string key = 1;
}
message RemoveResponse {
bool success = 1;
bool key_existed = 2;
optional string error = 3;
}
message ExistsRequest {
string key = 1;
}
message ExistsResponse {
bool exists = 1;
optional string error = 2;
}
// Batch operations
message StoreBatchRequest {
repeated StoreRequest requests = 1;
}
message StoreBatchResponse {
repeated StoreResponse results = 1;
bool all_success = 2;
int32 success_count = 3;
}
message RetrieveBatchRequest {
repeated string keys = 1;
}
message RetrieveBatchResponse {
repeated RetrieveResponse results = 1;
int32 found_count = 2;
}
// Scan operations
message ScanRequest {
optional string prefix = 1; // Filter by key prefix
optional string cursor = 2; // Pagination cursor
optional int32 limit = 3; // Max keys per page
optional bool include_values = 4; // Whether to return values (default: false)
}
message ScanResponse {
repeated string keys = 1;
repeated bytes values = 2; // Only present if include_values=true
optional string next_cursor = 3; // Empty if no more results
bool has_more = 4;
}
message ListKeysRequest {
optional string prefix = 1;
optional int32 limit = 2;
}
message ListKeysResponse {
repeated string keys = 1;
bool truncated = 2;
int32 total_count = 3;
}
message CountRequest {
optional string prefix = 1;
}
message CountResponse {
int64 count = 1;
}
// Expiration operations
message SetExpirationRequest {
string key = 1;
int64 expiration_seconds = 2;
}
message SetExpirationResponse {
bool success = 1;
bool key_existed = 2;
optional string error = 3;
}
message GetExpirationRequest {
string key = 1;
}
message GetExpirationResponse {
int64 expiration_seconds = 1; // -1 = no expiration, -2 = key doesn't exist
optional string error = 2;
}
message ClearExpirationRequest {
string key = 1;
}
message ClearExpirationResponse {
bool success = 1;
bool expiration_removed = 2;
optional string error = 3;
}
// Transaction operations
message TransactionRequest {
repeated TransactionOp operations = 1;
}
message TransactionOp {
oneof op {
StoreRequest store = 1;
RemoveRequest remove = 2;
SetExpirationRequest set_expiration = 3;
}
}
message TransactionResponse {
bool success = 1;
repeated TransactionOpResult results = 2;
optional string error = 3;
}
message TransactionOpResult {
bool success = 1;
optional string error = 2;
}
Key Improvements:
- Single service instead of 4 separate services
- Semantic names: Store not Set, Retrieve not Get
- Complete, mandatory interface: no optional operations, no capability checks before calling
- Inline expiration: convenience feature for the common case
- Clear performance classification: NATIVE vs EMULATED vs UNAVAILABLE
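For contrast with the four conditional registrations shown under Problem 3, server wiring could reduce to a single unconditional registration plus internal routing. The sketch below is illustrative only: the generated identifiers (pb.RegisterKeyValuePatternServer, KeyValuePattern_ScanServer) follow standard protoc-gen-go-grpc naming, and the scanner interface and patternServer fields are assumptions; other handlers are omitted.
// patterns/keyvalue/pattern_service.go (hypothetical sketch)
package keyvalue

import (
	"google.golang.org/grpc"

	pb "example.com/prism/gen/prism/patterns/keyvalue" // hypothetical generated package
)

// scanner is the internal contract a slot must satisfy to serve Scan.
type scanner interface {
	ScanKeys(prefix, cursor string, limit int) (keys []string, next string, err error)
}

type patternServer struct {
	pb.UnimplementedKeyValuePatternServer
	primary  scanner // nil when the primary backend lacks native scan
	scanSlot scanner // validated fallback, bound at configuration time
}

// Register wires exactly one gRPC service; there is no conditional registration.
func Register(s *grpc.Server, srv *patternServer) {
	pb.RegisterKeyValuePatternServer(s, srv)
}

// Scan streams keys to the client. Whether the primary or the scan_slot
// backend answers is decided here and is invisible to the caller.
func (s *patternServer) Scan(req *pb.ScanRequest, stream pb.KeyValuePattern_ScanServer) error {
	src := s.primary
	if src == nil {
		src = s.scanSlot
	}
	cursor := req.GetCursor()
	for {
		keys, next, err := src.ScanKeys(req.GetPrefix(), cursor, int(req.GetLimit()))
		if err != nil {
			return err
		}
		resp := &pb.ScanResponse{Keys: keys, HasMore: next != ""}
		if next != "" {
			resp.NextCursor = &next // proto3 optional field maps to a pointer in generated Go
		}
		if err := stream.Send(resp); err != nil {
			return err
		}
		if next == "" {
			return nil
		}
		cursor = next
	}
}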
Pattern 2: Multicast Registry Pattern
Current State: No proto definition, only Go interfaces
Proposed: Comprehensive pattern protocol with slot schema
// proto/prism/patterns/multicast_registry/multicast_registry_pattern.proto
syntax = "proto3";
package prism.patterns.multicast_registry;
import "prism/common/types.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
// MulticastRegistryPattern provides identity registration with selective multicast.
// Combines registry (KeyValue + Scan) + messaging (PubSub) + optional durability (Queue).
service MulticastRegistryPattern {
// === Capability Discovery ===
rpc GetCapabilities(GetCapabilitiesRequest) returns (Capabilities);
// === Identity Management ===
// Register an identity with metadata and optional TTL
rpc Register(RegisterRequest) returns (RegisterResponse);
// Update identity metadata
rpc UpdateMetadata(UpdateMetadataRequest) returns (UpdateMetadataResponse);
// Deregister identity
rpc Deregister(DeregisterRequest) returns (DeregisterResponse);
// Refresh identity TTL (heartbeat)
rpc Heartbeat(HeartbeatRequest) returns (HeartbeatResponse);
// === Discovery ===
// Enumerate identities with optional filter
rpc Enumerate(EnumerateRequest) returns (EnumerateResponse);
// Get specific identity metadata
rpc GetIdentity(GetIdentityRequest) returns (GetIdentityResponse);
// === Multicast ===
// Send message to all identities matching filter
rpc Multicast(MulticastRequest) returns (MulticastResponse);
// === Streaming (if supported) ===
// Subscribe to receive multicasts for this identity
rpc Subscribe(SubscribeRequest) returns (stream MulticastMessage);
// Stream all registration events (admin/monitoring)
rpc StreamRegistrations(StreamRegistrationsRequest) returns (stream RegistrationEvent);
}
message GetCapabilitiesRequest {}
message Capabilities {
// Feature flags
bool supports_filtering = 1; // Backend-native filtering (vs client-side)
bool supports_durability = 2; // Messages persisted to queue
bool supports_streaming = 3; // Streaming subscriptions
bool supports_wildcard_filters = 4; // Complex filter expressions
// Slot configuration
SlotConfiguration slots = 5;
// Performance characteristics
FilterPerformance filter_performance = 6;
MulticastPerformance multicast_performance = 7;
// Limitations
optional int32 max_identities = 8;
optional int32 max_metadata_size = 9;
optional int32 max_multicast_batch = 10;
}
message SlotConfiguration {
// Registry slot (required)
string registry_backend_type = 1; // "redis", "postgres", etc.
repeated string registry_interfaces = 2; // ["KeyValueBasicInterface", "KeyValueScanInterface"]
// Messaging slot (required)
string messaging_backend_type = 3; // "nats", "kafka", "redis-pubsub"
repeated string messaging_interfaces = 4; // ["PubSubBasicInterface"]
// Durability slot (optional)
optional string durability_backend_type = 5;
repeated string durability_interfaces = 6;
}
enum FilterPerformance {
FILTER_NATIVE = 0; // Backend supports native filtering (Redis with Lua)
FILTER_CLIENT_SIDE = 1; // Pattern filters in-memory (MemStore, basic Redis)
FILTER_UNAVAILABLE = 2; // Only basic prefix matching
}
enum MulticastPerformance {
MULTICAST_NATIVE = 0; // Backend has native broadcast (Redis PUBLISH)
MULTICAST_FANOUT = 1; // Pattern fans out to individual topics
MULTICAST_QUEUED = 2; // Messages go through durable queue first
}
// === Identity Management Messages ===
message RegisterRequest {
string identity = 1; // Unique identifier
google.protobuf.Struct metadata = 2; // Arbitrary JSON metadata
optional google.protobuf.Duration ttl = 3; // Auto-expire after duration
repeated string tags = 4; // Simple tags for filtering
}
message RegisterResponse {
bool success = 1;
optional string error = 2;
string identity = 3;
}
message UpdateMetadataRequest {
string identity = 1;
google.protobuf.Struct metadata = 2; // Replaces existing metadata
bool merge = 3; // If true, merge with existing (vs replace)
}
message UpdateMetadataResponse {
bool success = 1;
optional string error = 2;
}
message DeregisterRequest {
string identity = 1;
}
message DeregisterResponse {
bool success = 1;
bool identity_existed = 2;
optional string error = 3;
}
message HeartbeatRequest {
string identity = 1;
optional google.protobuf.Duration extend_ttl = 2; // Extend by duration
}
message HeartbeatResponse {
bool success = 1;
optional string error = 2;
}
// === Discovery Messages ===
message EnumerateRequest {
optional FilterExpression filter = 1;
optional int32 limit = 2;
optional string cursor = 3; // For pagination
}
message FilterExpression {
oneof expr {
TagFilter tag_filter = 1; // Simple: identity.tags contains "production"
MetadataFilter metadata_filter = 2; // Complex: identity.metadata.status == "healthy"
string raw_expression = 3; // Advanced: "status=='healthy' AND region=='us-west'"
}
}
message TagFilter {
repeated string required_tags = 1; // AND logic
repeated string any_tags = 2; // OR logic
}
message MetadataFilter {
map<string, string> equals = 1; // metadata.key == value
map<string, string> not_equals = 2; // metadata.key != value
map<string, string> contains = 3; // metadata.key contains value
}
message EnumerateResponse {
repeated Identity identities = 1;
int32 total_count = 2;
optional string next_cursor = 3;
bool has_more = 4;
}
message Identity {
string identity = 1;
google.protobuf.Struct metadata = 2;
repeated string tags = 3;
int64 registered_at = 4; // Unix timestamp
optional int64 expires_at = 5;
}
message GetIdentityRequest {
string identity = 1;
}
message GetIdentityResponse {
bool found = 1;
optional Identity identity = 2;
optional string error = 3;
}
// === Multicast Messages ===
message MulticastRequest {
optional FilterExpression filter = 1; // If empty, send to all identities
bytes payload = 2;
map<string, string> metadata = 3;
optional int32 timeout_ms = 4; // Max time to wait for all deliveries
}
message MulticastResponse {
bool success = 1;
int32 target_count = 2; // How many identities matched filter
int32 delivered_count = 3; // How many messages delivered
int32 failed_count = 4; // How many failures
repeated DeliveryResult results = 5;
optional string error = 6;
}
message DeliveryResult {
string identity = 1;
DeliveryStatus status = 2;
optional string error = 3;
int32 latency_ms = 4;
}
enum DeliveryStatus {
DELIVERED = 0;
PENDING = 1; // Queued for later delivery
FAILED = 2;
TIMEOUT = 3;
}
// === Streaming Messages ===
message SubscribeRequest {
string identity = 1;
bool include_history = 2; // Replay missed messages (if durability enabled)
}
message MulticastMessage {
string from_identity = 1;
bytes payload = 2;
map<string, string> metadata = 3;
int64 timestamp = 4;
string message_id = 5;
}
message StreamRegistrationsRequest {
optional FilterExpression filter = 1;
}
message RegistrationEvent {
enum EventType {
REGISTERED = 0;
UPDATED = 1;
DEREGISTERED = 2;
EXPIRED = 3;
}
EventType type = 1;
Identity identity = 2;
int64 timestamp = 3;
}
Key Improvements:
- Pattern now has formal proto definition (previously only Go interfaces)
- Semantic operations: Register, Enumerate, Multicast vs backend primitives
- Flexible filtering: From simple tags to complex expressions
- Slot schema formally defined in Capabilities message
- Performance metadata: Clients know if filtering is native or client-side
- Streaming support: Subscribe to multicasts and registration events
Pattern 3: Session Store Pattern
Current State: Not yet implemented (mentioned in RFC-024)
Proposed: Complete pattern protocol
// proto/prism/patterns/session_store/session_store_pattern.proto
syntax = "proto3";
package prism.patterns.session_store;
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
// SessionStorePattern provides distributed session management with automatic
// expiration, replication, and conflict resolution.
service SessionStorePattern {
// === Capability Discovery ===
rpc GetCapabilities(GetCapabilitiesRequest) returns (Capabilities);
// === Session Lifecycle ===
// Create new session
rpc CreateSession(CreateSessionRequest) returns (SessionResponse);
// Get session data
rpc GetSession(GetSessionRequest) returns (SessionResponse);
// Update session data
rpc UpdateSession(UpdateSessionRequest) returns (SessionResponse);
// Extend session TTL (keep-alive)
rpc ExtendSession(ExtendSessionRequest) returns (SessionResponse);
// Invalidate session
rpc InvalidateSession(InvalidateSessionRequest) returns (InvalidateResponse);
// === Batch Operations ===
// Get multiple sessions
rpc GetSessionBatch(GetSessionBatchRequest) returns (GetSessionBatchResponse);
// === Admin Operations ===
// List active sessions
rpc ListSessions(ListSessionsRequest) returns (ListSessionsResponse);
// Count active sessions
rpc CountSessions(CountSessionsRequest) returns (CountSessionsResponse);
}
message GetCapabilitiesRequest {}
message Capabilities {
bool supports_replication = 1; // Multi-region replication
bool supports_sticky = 2; // Sticky routing hints
bool supports_batch = 3; // Batch get operations
bool supports_conflict_resolution = 4; // Automatic conflict resolution
SlotConfiguration slots = 5;
optional int32 max_session_size = 6;
optional int32 default_ttl_seconds = 7;
optional int32 max_ttl_seconds = 8;
}
message SlotConfiguration {
// Primary storage slot
string storage_backend_type = 1;
repeated string storage_interfaces = 2;
// Optional replication slot (for multi-region)
optional string replication_backend_type = 3;
}
// === Request/Response Messages ===
message CreateSessionRequest {
optional string session_id = 1; // If empty, pattern generates UUID
google.protobuf.Struct data = 2;
optional google.protobuf.Duration ttl = 3;
repeated string tags = 4;
}
message SessionResponse {
bool success = 1;
optional string error = 2;
string session_id = 3;
google.protobuf.Struct data = 4;
int64 created_at = 5;
int64 expires_at = 6;
int32 version = 7; // For optimistic locking
}
message GetSessionRequest {
string session_id = 1;
bool extend_ttl = 2; // Automatically extend on read
}
message UpdateSessionRequest {
string session_id = 1;
google.protobuf.Struct data = 2;
bool merge = 3; // Merge with existing vs replace
optional int32 expected_version = 4; // Optimistic locking
}
message ExtendSessionRequest {
string session_id = 1;
google.protobuf.Duration extend_by = 2;
}
message InvalidateSessionRequest {
string session_id = 1;
}
message InvalidateResponse {
bool success = 1;
bool session_existed = 2;
optional string error = 3;
}
message GetSessionBatchRequest {
repeated string session_ids = 1;
}
message GetSessionBatchResponse {
repeated SessionResponse sessions = 1;
int32 found_count = 2;
}
message ListSessionsRequest {
optional string tag_filter = 1;
optional int32 limit = 2;
optional string cursor = 3;
}
message ListSessionsResponse {
repeated string session_ids = 1;
optional string next_cursor = 2;
bool has_more = 3;
}
message CountSessionsRequest {
optional string tag_filter = 1;
}
message CountSessionsResponse {
int64 count = 1;
}
Pattern 4: Producer/Consumer Patterns
Current State: Wrappers around backend producers/consumers
Proposed: Semantic messaging operations
// proto/prism/patterns/producer/producer_pattern.proto
syntax = "proto3";
package prism.patterns.producer;
// ProducerPattern provides semantic message publishing with guarantees.
service ProducerPattern {
rpc GetCapabilities(GetCapabilitiesRequest) returns (Capabilities);
// Publish single message
rpc Publish(PublishRequest) returns (PublishResponse);
// Publish batch of messages
rpc PublishBatch(PublishBatchRequest) returns (PublishBatchResponse);
// Flush pending messages
rpc Flush(FlushRequest) returns (FlushResponse);
}
message GetCapabilitiesRequest {}
message Capabilities {
bool supports_ordering = 1; // Guaranteed order within partition/topic
bool supports_batching = 2; // Native batch support
bool supports_transactions = 3; // Atomic multi-message publish
bool supports_compression = 4; // Message compression
DeliveryGuarantee delivery_guarantee = 5;
SlotConfiguration slots = 6;
}
enum DeliveryGuarantee {
AT_MOST_ONCE = 0; // Fire and forget
AT_LEAST_ONCE = 1; // May duplicate on retry
EXACTLY_ONCE = 2; // Idempotent delivery
}
message SlotConfiguration {
string messaging_backend_type = 1; // "kafka", "nats", "redis-streams"
repeated string messaging_interfaces = 2;
}
message PublishRequest {
string topic = 1;
bytes payload = 2;
optional string key = 3; // For partitioning/ordering
map<string, string> metadata = 4;
optional int32 timeout_ms = 5;
}
message PublishResponse {
bool success = 1;
optional string error = 2;
string message_id = 3;
int64 offset = 4; // Kafka offset or NATS sequence
int64 timestamp = 5;
}
message PublishBatchRequest {
repeated PublishRequest messages = 1;
}
message PublishBatchResponse {
repeated PublishResponse results = 1;
bool all_success = 2;
}
message FlushRequest {
optional int32 timeout_ms = 1;
}
message FlushResponse {
bool success = 1;
int32 flushed_count = 2;
}
// proto/prism/patterns/consumer/consumer_pattern.proto
syntax = "proto3";
package prism.patterns.consumer;
// ConsumerPattern provides semantic message consumption with acknowledgment.
service ConsumerPattern {
rpc GetCapabilities(GetCapabilitiesRequest) returns (Capabilities);
// Subscribe to topic(s)
rpc Subscribe(SubscribeRequest) returns (stream Message);
// Acknowledge message(s)
rpc Acknowledge(AcknowledgeRequest) returns (AcknowledgeResponse);
// Negative acknowledge (retry later)
rpc Nack(NackRequest) returns (NackResponse);
// Seek to specific offset/timestamp
rpc Seek(SeekRequest) returns (SeekResponse);
// Get consumer position
rpc GetPosition(GetPositionRequest) returns (GetPositionResponse);
}
message GetCapabilitiesRequest {}
message Capabilities {
bool supports_consumer_groups = 1; // Shared consumption across instances
bool supports_replay = 2; // Seek to past messages
bool supports_deadletter = 3; // Failed messages go to DLQ
bool supports_filtering = 4; // Server-side message filtering
DeliveryGuarantee delivery_guarantee = 5;
SlotConfiguration slots = 6;
}
enum DeliveryGuarantee {
AT_MOST_ONCE = 0;
AT_LEAST_ONCE = 1;
EXACTLY_ONCE = 2;
}
message SlotConfiguration {
string messaging_backend_type = 1;
repeated string messaging_interfaces = 2;
optional string deadletter_backend_type = 3; // For failed messages
}
message SubscribeRequest {
repeated string topics = 1;
optional string consumer_group = 2;
optional string consumer_id = 3;
optional FilterExpression filter = 4;
optional SeekPosition seek = 5;
}
message FilterExpression {
map<string, string> metadata_equals = 1;
repeated string metadata_contains = 2;
}
message SeekPosition {
oneof position {
int64 offset = 1;
int64 timestamp = 2;
string beginning = 3; // "beginning" or "end"
}
}
message Message {
string message_id = 1;
string topic = 2;
bytes payload = 3;
map<string, string> metadata = 4;
int64 offset = 5;
int64 timestamp = 6;
optional string key = 7;
int32 retry_count = 8;
}
message AcknowledgeRequest {
repeated string message_ids = 1;
}
message AcknowledgeResponse {
bool success = 1;
int32 acked_count = 2;
}
message NackRequest {
repeated string message_ids = 1;
optional int32 retry_delay_ms = 2;
}
message NackResponse {
bool success = 1;
int32 nacked_count = 2;
}
message SeekRequest {
SeekPosition position = 1;
}
message SeekResponse {
bool success = 1;
int64 new_offset = 2;
}
message GetPositionRequest {}
message GetPositionResponse {
int64 current_offset = 1;
int64 latest_offset = 2;
int64 lag = 3;
}
Capability Negotiation Strategies
Strategy 1: GetCapabilities() RPC
Every pattern service exposes GetCapabilities() that returns:
- Feature flags (supports_scan, supports_batch, etc.)
- Slot configuration (which backends are bound)
- Performance characteristics (native vs emulated)
- Limitations (max sizes, timeouts)
Client usage:
# Initialize client
client = KeyValuePatternClient("localhost:50051")
# Discover capabilities once at startup
caps = client.get_capabilities()
if caps.supports_scan:
# Use scan efficiently
for key in client.scan(prefix="user:"):
process(key)
else:
# Fallback to list (may be slower or limited)
keys = client.list_keys(prefix="user:", limit=1000)
Strategy 2: Graceful Degradation Matrix
Pattern decides how to handle missing capabilities:
| Operation | Backend Missing | Pattern Behavior |
|---|---|---|
| Scan on S3 | No SCAN support | Return error: "Scan unavailable: S3 list too expensive" |
| Scan on DynamoDB | No SCAN, has Query | Emulate via paginated Query (warn: EMULATED) |
| TTL on Postgres | No native TTL | Use secondary Redis backend for expiration tracking |
| Batch on MemStore | No batch impl | Pipeline individual operations (warn: PIPELINE) |
| Transactions on S3 | No transactions | Return error: "Transactions unavailable" |
| Filter on basic Redis | No Lua filtering | Fetch all, filter client-side (warn: CLIENT_SIDE) |
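As one concrete instance of the matrix (the TTL row), here is a hedged sketch of expiration emulation with a background sweeper. The ttlSweeper type and its fields are hypothetical; a production version would persist deadlines (or delegate to a secondary Redis slot) rather than hold them in memory, but the client-visible behaviour is the same: SetExpiration always works.
// patterns/keyvalue/ttl_sweeper.go (hypothetical sketch)
package keyvalue

import (
	"context"
	"sync"
	"time"
)

// ttlSweeper emulates SetExpiration for backends without native TTL by
// tracking deadlines and deleting keys once they pass.
type ttlSweeper struct {
	mu        sync.Mutex
	deadlines map[string]time.Time
	remove    func(ctx context.Context, key string) error // backend delete operation
}

func (t *ttlSweeper) SetExpiration(key string, ttl time.Duration) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.deadlines[key] = time.Now().Add(ttl)
}

// run is started once per pattern instance and periodically removes expired keys.
func (t *ttlSweeper) run(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case now := <-ticker.C:
			t.mu.Lock()
			for key, deadline := range t.deadlines {
				if now.After(deadline) {
					_ = t.remove(ctx, key) // best-effort cleanup
					delete(t.deadlines, key)
				}
			}
			t.mu.Unlock()
		}
	}
}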
Strategy 3: Multi-Slot Patterns
Multicast Registry demonstrates slot composition:
# Configuration with 3 slots
pattern: multicast-registry
slots:
registry:
backend: redis-cluster
interfaces:
- KeyValueBasicInterface
- KeyValueScanInterface # Required for Enumerate()
- KeyValueTTLInterface # Required for identity TTL
messaging:
backend: nats-jetstream
interfaces:
- PubSubBasicInterface # Required for Multicast()
- PubSubPersistentInterface # Optional for durability
durability: # Optional slot
backend: kafka
interfaces:
- StreamBasicInterface # For message queuing
Pattern validates at initialization:
- Registry slot MUST provide Scan capability (required for Enumerate)
- If registry backend doesn't support Scan, fail fast with clear error
- Durability slot is optional - if missing, supports_durability = false in capabilities
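A sketch of how the pattern might derive these flags from its bound slots at initialization. The Go types here (boundSlot, capabilities, buildCapabilities) are assumptions for illustration; only the interface names come from the configuration above.
// patterns/multicast_registry/capabilities.go (hypothetical sketch)
package multicastregistry

import "fmt"

type boundSlot struct {
	Backend    string
	Interfaces []string
}

type capabilities struct {
	SupportsDurability bool
	SupportsStreaming  bool
}

// buildCapabilities runs once at pattern initialization: hard requirements
// fail fast with an actionable error, optional slots only toggle flags.
func buildCapabilities(registry, messaging boundSlot, durability *boundSlot) (capabilities, error) {
	if !contains(registry.Interfaces, "KeyValueScanInterface") {
		return capabilities{}, fmt.Errorf(
			"multicast-registry requires KeyValueScanInterface on the registry slot; backend %q does not provide it",
			registry.Backend)
	}
	if !contains(messaging.Interfaces, "PubSubBasicInterface") {
		return capabilities{}, fmt.Errorf(
			"multicast-registry requires PubSubBasicInterface on the messaging slot; backend %q does not provide it",
			messaging.Backend)
	}
	return capabilities{
		SupportsDurability: durability != nil, // optional slot absent => false
		SupportsStreaming:  true,              // Subscribe is served by the messaging slot
	}, nil
}

func contains(list []string, want string) bool {
	for _, v := range list {
		if v == want {
			return true
		}
	}
	return false
}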
Strategy 4: Performance Metadata
Capabilities expose HOW operation is implemented:
message Capabilities {
bool supports_scan = 1;
ScanPerformance scan_performance = 2;
}
enum ScanPerformance {
SCAN_NATIVE = 0; // O(N) cursor-based iteration (Redis SCAN)
SCAN_EMULATED = 1; // O(N*log(N)) via paginated queries (DynamoDB)
SCAN_UNAVAILABLE = 2; // Would require O(N*M) list operations (S3)
}
Client can make informed decisions:
caps = client.get_capabilities()
if caps.scan_performance == ScanPerformance.SCAN_NATIVE:
# Scan entire keyspace, it's efficient
scan_all_keys()
elif caps.scan_performance == ScanPerformance.SCAN_EMULATED:
# Limit scan to smaller prefix ranges
for prefix in shard_prefixes:
scan_prefix(prefix, limit=1000)
else:
# Don't scan at all, use alternative approach
use_prebuilt_index()
Backend Slot Schema
Slot Schema Definition
Patterns declare their slot requirements in proto:
// proto/prism/patterns/schemas/slot_schema.proto
syntax = "proto3";
package prism.patterns.schemas;
// Slot schema defines backend requirements for a pattern
message SlotSchema {
repeated SlotRequirement required_slots = 1;
repeated SlotRequirement optional_slots = 2;
}
message SlotRequirement {
string slot_name = 1; // "primary", "registry", "messaging", etc.
string slot_purpose = 2; // Human-readable description
// Interface requirements (from Layer 1 backend interfaces)
repeated string required_interfaces = 3; // MUST implement these
repeated string preferred_interfaces = 4; // SHOULD implement these
repeated string optional_interfaces = 5; // MAY implement these
// Performance requirements
PerformanceRequirements performance = 6;
// Compatibility constraints
repeated string incompatible_backends = 7; // Known incompatibilities
}
message PerformanceRequirements {
optional LatencyRequirement latency = 1;
optional ThroughputRequirement throughput = 2;
optional DurabilityRequirement durability = 3;
}
message LatencyRequirement {
enum LatencyClass {
LOW = 0; // <1ms (in-memory)
MEDIUM = 1; // <10ms (local network)
HIGH = 2; // <100ms (remote network)
ANY = 3; // No requirement
}
LatencyClass read_latency = 1;
LatencyClass write_latency = 2;
}
message ThroughputRequirement {
enum ThroughputClass {
HIGH = 0; // >100k ops/sec
MEDIUM = 1; // >10k ops/sec
LOW = 2; // >1k ops/sec
ANY = 3;
}
ThroughputClass operations_per_second = 1;
}
message DurabilityRequirement {
enum DurabilityClass {
NONE = 0; // In-memory, data loss acceptable
EVENTUAL = 1; // Async replication, may lose recent writes
STRONG = 2; // Sync replication, no data loss
}
DurabilityClass requirement = 1;
}
Example: KeyValue Pattern Slot Schema
# Embedded in proto or separate YAML
slot_schema:
required_slots:
- slot_name: primary
slot_purpose: "Primary key-value storage backend"
required_interfaces:
- KeyValueBasicInterface # MUST support Get/Set/Delete
preferred_interfaces:
- KeyValueBatchInterface # SHOULD support batch ops
- KeyValueScanInterface # SHOULD support scanning
optional_interfaces:
- KeyValueTTLInterface # MAY support expiration
performance:
read_latency: MEDIUM # <10ms
write_latency: MEDIUM
throughput: MEDIUM # >10k ops/sec
durability: EVENTUAL
incompatible_backends:
- s3 # Too slow for primary storage
optional_slots:
- slot_name: scan_backend
slot_purpose: "Dedicated backend for scan operations (if primary doesn't support)"
required_interfaces:
- KeyValueBasicInterface
- KeyValueScanInterface # MUST support scan
performance:
read_latency: HIGH # Scan can be slower
- slot_name: expiration_backend
slot_purpose: "Dedicated backend for TTL tracking (if primary doesn't support)"
required_interfaces:
- KeyValueBasicInterface
- KeyValueTTLInterface
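To show how a pattern could consume this schema, here is a sketch that evaluates one SlotRequirement against a backend's declared interfaces, treating required gaps as fatal and preferred/optional gaps as degradation notes. The type names mirror the proto above but are hand-written assumptions, not generated code.
// patterns/schemas/match.go (hypothetical sketch)
package schemas

import "fmt"

// SlotRequirement mirrors the proto message of the same name.
type SlotRequirement struct {
	SlotName            string
	RequiredInterfaces  []string
	PreferredInterfaces []string
	OptionalInterfaces  []string
}

// MatchResult reports how well a backend satisfies a slot.
type MatchResult struct {
	MissingPreferred []string // pattern must emulate or degrade these
	EnabledOptional  []string // extra features the pattern can expose
}

// Match returns an error only when a required interface is missing;
// preferred and optional gaps are reported, not fatal.
func Match(req SlotRequirement, backendInterfaces []string) (MatchResult, error) {
	have := make(map[string]bool, len(backendInterfaces))
	for _, i := range backendInterfaces {
		have[i] = true
	}
	for _, i := range req.RequiredInterfaces {
		if !have[i] {
			return MatchResult{}, fmt.Errorf("slot %q: backend is missing required interface %s", req.SlotName, i)
		}
	}
	var res MatchResult
	for _, i := range req.PreferredInterfaces {
		if !have[i] {
			res.MissingPreferred = append(res.MissingPreferred, i)
		}
	}
	for _, i := range req.OptionalInterfaces {
		if have[i] {
			res.EnabledOptional = append(res.EnabledOptional, i)
		}
	}
	return res, nil
}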
Example: Multicast Registry Slot Schema
slot_schema:
required_slots:
- slot_name: registry
slot_purpose: "Identity storage with metadata and scanning"
required_interfaces:
- KeyValueBasicInterface # Store identities
- KeyValueScanInterface # Enumerate all identities (critical!)
preferred_interfaces:
- KeyValueTTLInterface # Auto-expire identities
performance:
read_latency: LOW # Fast lookups
write_latency: MEDIUM
durability: EVENTUAL # Registry can be rebuilt
- slot_name: messaging
slot_purpose: "Message delivery for multicast operations"
required_interfaces:
- PubSubBasicInterface # Publish messages
preferred_interfaces:
- PubSubPersistentInterface # Message persistence
performance:
write_latency: LOW # Fast message delivery
throughput: HIGH # Handle burst multicasts
optional_slots:
- slot_name: durability
slot_purpose: "Persistent queue for guaranteed message delivery"
required_interfaces:
- StreamBasicInterface # Durable message queue
performance:
durability: STRONG # No message loss
Implementation Strategy
Phase 1: Proto Definitions (Week 1)
Tasks:
- Create proto/prism/patterns/ directory structure
- Write consolidated proto for each pattern:
  - keyvalue/keyvalue_pattern.proto
  - multicast_registry/multicast_registry_pattern.proto
  - session_store/session_store_pattern.proto
  - producer/producer_pattern.proto
  - consumer/consumer_pattern.proto
- Define slot schema proto: patterns/schemas/slot_schema.proto
- Generate Go code: make generate-proto
Validation:
- Proto files compile without errors
- Generated Go code builds
- gRPC services can be registered
Phase 2: KeyValue Pattern Migration (Week 2-3)
Tasks:
- Implement KeyValuePattern gRPC service in patterns/keyvalue/pattern_service.go
- Implement GetCapabilities() based on backend detection
- Adapt existing KeyValue struct to use the new service interface
- Implement graceful degradation:
- Scan emulation for DynamoDB (if needed)
- Error messages for S3 scan attempts
- Batch pipelining for MemStore
- Update unit tests to test new service
- Maintain backward compatibility: Keep old services running alongside new one
Validation:
- Integration tests pass with Redis backend
- Integration tests pass with Postgres backend
- Integration tests pass with MemStore backend
- Capability detection correctly identifies backend features
Phase 3: Multicast Registry Pattern Migration (Week 4-5)
Tasks:
- Create proto/prism/patterns/multicast_registry/multicast_registry_pattern.proto
- Implement MulticastRegistryPattern gRPC service
- Implement GetCapabilities() based on 3-slot configuration
- Implement filter performance detection:
- Native filtering for Redis with Lua
- Client-side filtering for basic Redis/MemStore
- Implement graceful degradation:
- Fail if registry backend doesn't support Scan
- Degrade multicast if durability slot missing
- Update integration tests
Validation:
- Tests pass with Redis (registry) + NATS (messaging)
- Tests pass with Postgres (registry) + Kafka (messaging)
- Capability detection exposes correct slot configuration
- Filter performance correctly identified
Phase 4: Client Integration Testing (Week 6)
Tasks:
- Update Rust proxy client code to use consolidated pattern services
- Remove references to backend-level services (KeyValueBasicInterface, etc.)
- Implement capability discovery in client initialization
- Update client integration tests:
- Test capability discovery
- Test graceful degradation (call unsupported operation, verify error)
- Test performance metadata usage
- Update client SDK generation
Validation:
- Rust proxy connects to patterns using new services
- Clients handle missing capabilities gracefully
- Integration tests cover all capability combinations
- Generated client SDKs are idiomatic and ergonomic
Phase 5: Producer/Consumer Migration (Week 7)
Tasks:
- Create pattern protos for Producer and Consumer
- Implement pattern services
- Implement capability detection (ordering, batching, transactions)
- Update integration tests
Validation:
- Kafka backend exposes correct capabilities
- NATS backend exposes correct capabilities
- Clients use new pattern services
Phase 6: Documentation and Deprecation (Week 8)
Tasks:
- Update documentation to reference pattern services, not backend interfaces
- Mark old backend interfaces as deprecated in proto
- Add migration guide for existing clients
- Update ADRs and RFCs
- Generate updated client SDKs for all languages
- Announce deprecation timeline for direct backend interface access
Validation:
- Documentation is complete and accurate
- Migration guide tested with real client migrations
- Deprecation warnings visible in logs
Phase 7: Cleanup (Week 9-10)
Tasks:
- Remove deprecated backend interface registrations from patterns
- Remove backward compatibility shims
- Delete unused proto files (if backend interfaces only used internally)
- Performance benchmarking to ensure no regressions
Validation:
- All tests pass without deprecated code
- Performance benchmarks within 5% of previous implementation
- Binary size unchanged or reduced
Migration Path for Clients
Before (Current Architecture)
Python Client:
from prism.interfaces.keyvalue import KeyValueBasicInterface, KeyValueScanInterface
# Must import multiple services
basic_client = KeyValueBasicInterface("localhost:50051")
scan_client = KeyValueScanInterface("localhost:50051") # May not exist!
# Basic operations
basic_client.Set(key="user:123", value=b"data")
value = basic_client.Get(key="user:123")
# Scan operations (may fail if backend doesn't support)
try:
results = scan_client.Scan(prefix="user:")
except grpc.RpcError as e:
if e.code() == grpc.StatusCode.UNIMPLEMENTED:
print("Scan not supported by this backend")
# Now what? Client must implement workaround
Problems:
- Client must know about 4 different services
- Try/catch for every optional operation
- No way to discover capabilities ahead of time
After (Proposed Architecture)
Python Client:
from prism.patterns.keyvalue import KeyValuePatternClient
# Single client, single service
client = KeyValuePatternClient("localhost:50051")
# Discover capabilities once at startup
caps = client.get_capabilities()
print(f"Backend: {caps.slots.primary_backend_type}")
print(f"Supports scan: {caps.supports_scan}")
print(f"Scan performance: {caps.scan_performance}")
# Core operations (always available)
client.store(key="user:123", value=b"data")
value = client.retrieve(key="user:123")
# Optional operations (check capabilities first)
if caps.supports_scan:
if caps.scan_performance == ScanPerformance.SCAN_NATIVE:
# Scan is efficient, use it freely
for key in client.scan(prefix="user:"):
process(key)
elif caps.scan_performance == ScanPerformance.SCAN_EMULATED:
# Scan is slow, limit usage
keys = client.list_keys(prefix="user:", limit=1000)
else:
# Scan unavailable, use alternative
print("Scan not available, using index")
else:
# Clear fallback path
keys = load_from_index()
Benefits:
- Single import, single client
- Capability discovery is explicit
- Performance characteristics visible
- Clear fallback paths
Backward Compatibility Period
Transition Strategy:
- Phase 1 (Months 1-3): Both old and new services available
  - Patterns register both KeyValueBasicInterfaceServer AND KeyValuePatternServer
  - Clients can use either
  - Emit deprecation warnings in logs when old services are used
- Phase 2 (Months 4-6): Deprecation period
  - Update documentation to show new patterns only
  - Provide migration guide
  - Add loud warnings to clients using old services
- Phase 3 (Months 7+): Old services removed
  - Patterns only register new consolidated services
  - Old proto files moved to proto/deprecated/
Open Questions and Decisions
Question 1: How aggressive should feature emulation be?
Options:
- Conservative: Only emulate if performance acceptable, else fail fast
- Aggressive: Always try to emulate, even if slow
- Configurable: Let admin choose in pattern configuration
Recommendation: Conservative - Fail fast with clear error rather than silently degrade performance. Example:
- ✅ Emulate Scan on DynamoDB using Query (acceptable performance)
- ❌ Don't emulate Scan on S3 via List (too expensive)
Question 2: Should GetCapabilities() be cached or dynamic?
Options:
- Static: Capabilities determined at pattern creation, never change
- Dynamic: Capabilities can change at runtime (backend failover, etc.)
Recommendation: Static - Capabilities fixed at pattern instantiation. If backend changes, pattern restarts with new capabilities. Simpler mental model for clients.
Question 3: How to handle partial slot support?
Example: Registry backend (Redis) supports Scan, but Messaging backend (NATS) doesn't support persistent subscriptions.
Options:
- Fail fast: Reject configuration if any slot missing preferred interfaces
- Degrade: Allow configuration, expose limitations in capabilities
- Fallback: Use secondary slot for missing features
Recommendation: Degrade - Allow configuration, but clearly expose limitations. Example:
capabilities:
supports_filtering: true # Registry supports it
supports_durability: false # Messaging doesn't support it
filter_performance: NATIVE
multicast_performance: FANOUT # Can't use durable queue
Question 4: Should pattern protos import backend interface protos?
Options:
- Yes: Pattern proto imports backend interface proto for slot definitions
- No: Pattern proto is fully independent, only references backend types as strings
Recommendation: No - Keep pattern protos independent. Use string references for backend types and interfaces. Prevents circular dependencies and allows patterns to evolve independently.
Question 5: How to version pattern protocols?
Options:
- Proto package versioning: prism.patterns.keyvalue.v1, prism.patterns.keyvalue.v2
- In-place evolution: Add new fields, never remove (protobuf compatibility)
- Semantic versioning in capabilities: Return protocol version in GetCapabilities()
Recommendation: In-place evolution with semantic versioning metadata. Example:
message Capabilities {
string protocol_version = 1; // "1.2.0"
// Always add new fields, never remove
}
Success Criteria
- Protocol Consolidation: Each pattern has ONE gRPC service, not 4+
- Capability Discovery: 100% of clients use GetCapabilities() before calling optional operations
- Graceful Degradation: Zero "method not found" errors in production
- Backend Flexibility: Same client code works with 3+ different backend combinations
- Performance: No regression compared to current architecture
- Client Experience: Client code is 50% simpler (fewer imports, clearer error handling)
- Test Coverage: Integration tests cover all backend capability combinations
Related Documents
- RFC-008: Proxy Plugin Architecture
- RFC-014: Layered Data Access Patterns
- RFC-017: Multicast Registry Pattern
- RFC-025: Pattern SDK Architecture
- MEMO-006: Backend Interface Decomposition
- ADR-003: Protobuf Single Source of Truth
Appendix: Complete Example Configurations
Example 1: KeyValue with Redis (Full Features)
pattern: keyvalue
backend:
type: redis-cluster
interfaces:
- KeyValueBasicInterface
- KeyValueBatchInterface
- KeyValueScanInterface
- KeyValueTTLInterface
# Resulting capabilities:
capabilities:
supports_batch: true
supports_scan: true
supports_expiration: true
supports_transactions: true
scan_performance: SCAN_NATIVE
batch_performance: BATCH_NATIVE
Example 2: KeyValue with S3 (Minimal Features)
pattern: keyvalue
backend:
type: s3
interfaces:
- KeyValueBasicInterface # Only basic Get/Set/Delete
# Resulting capabilities:
capabilities:
supports_batch: false
supports_scan: false # Too expensive
supports_expiration: false # Would need S3 lifecycle policies
supports_transactions: false
scan_performance: SCAN_UNAVAILABLE
Example 3: KeyValue with Postgres + Redis (Hybrid)
pattern: keyvalue
slots:
primary:
backend: postgres
interfaces:
- KeyValueBasicInterface
- KeyValueBatchInterface
- KeyValueScanInterface
# Missing: TTL support
expiration: # Secondary slot for TTL
backend: redis
interfaces:
- KeyValueBasicInterface
- KeyValueTTLInterface
# Resulting capabilities:
capabilities:
supports_batch: true
supports_scan: true
supports_expiration: true # Via secondary Redis slot
supports_transactions: true
scan_performance: SCAN_NATIVE
slots:
primary_backend_type: "postgres"
expiration_backend_type: "redis" # Visible to client!
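A sketch of how the hybrid configuration above could be wired inside the pattern, with data writes going to the primary (Postgres) slot and expiration bookkeeping to the Redis slot. hybridKV and the two slot interfaces are hypothetical names; the client sees only the single Store/SetExpiration API.
// patterns/keyvalue/expiration_routing.go (hypothetical sketch)
package keyvalue

import (
	"context"
	"time"
)

type valueStore interface {
	Store(ctx context.Context, key string, value []byte) error
}

type ttlStore interface {
	SetExpiration(ctx context.Context, key string, ttl time.Duration) error
}

// hybridKV routes data operations to the primary slot and expiration
// bookkeeping to the expiration slot. The split is invisible to clients.
type hybridKV struct {
	primary    valueStore // e.g. Postgres slot
	expiration ttlStore   // e.g. Redis slot
}

func (h *hybridKV) StoreWithTTL(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	if err := h.primary.Store(ctx, key, value); err != nil {
		return err
	}
	if ttl <= 0 {
		return nil // no expiration requested
	}
	return h.expiration.SetExpiration(ctx, key, ttl)
}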
Example 4: Multicast Registry with Full Durability
pattern: multicast-registry
slots:
registry:
backend: redis-cluster
interfaces:
- KeyValueBasicInterface
- KeyValueScanInterface
- KeyValueTTLInterface
messaging:
backend: nats-jetstream
interfaces:
- PubSubBasicInterface
- PubSubPersistentInterface
durability: # Optional slot for guaranteed delivery
backend: kafka
interfaces:
- StreamBasicInterface
# Resulting capabilities:
capabilities:
supports_filtering: true
supports_durability: true # Kafka slot provides this
supports_streaming: true
filter_performance: FILTER_CLIENT_SIDE # Redis doesn't have native filtering
multicast_performance: MULTICAST_QUEUED # Goes through Kafka
slots:
registry_backend_type: "redis"
messaging_backend_type: "nats"
durability_backend_type: "kafka"
Example 5: Multicast Registry (Minimal, No Durability)
pattern: multicast-registry
slots:
registry:
backend: memstore # In-memory for development
interfaces:
- KeyValueBasicInterface
- KeyValueScanInterface
messaging:
backend: nats-core # Ephemeral pub/sub
interfaces:
- PubSubBasicInterface
# No durability slot
# Resulting capabilities:
capabilities:
supports_filtering: false # MemStore doesn't have filtering
supports_durability: false # No durability slot
supports_streaming: true
filter_performance: FILTER_CLIENT_SIDE
multicast_performance: MULTICAST_FANOUT
slots:
registry_backend_type: "memstore"
messaging_backend_type: "nats"
max_identities: 10000 # MemStore limitation