Multicast message routing for P2P publish/subscribe


We present a design for a multicast message routing infrastructure to be used together with decentralized publish/subscribe protocols in a two-tier P2P system composed of a stable core network and intermittently connected edge networks. Asynchronous communication is made possible by core nodes that provide store-and-forward service to edge nodes with end-to-end security guarantees.


In the context of the DREAM project [1], we aim to enable offline-first, decentralized group collaboration.

UPSYCLE [2] is a decentralized pub/sub protocol suite based on P2P gossip-based clustering and dissemination protocols, augmented with a multicast routing protocol that provides a pub/sub interface to decentralized, local-first applications. It is used for the decentralized synchronization of mergeable, replicated data structures with causal ordering and eventual consistency properties.

DMC [3] is a synchronization protocol based on operation-based CRDTs [4]. Conflict-Free Replicated Data Types (CRDTs) enable merging operations without conflicts and without coordination between the replicas. DMC uses pub/sub to disseminate operations to all group members, which apply the operations to their local replica. It relies on causality information embedded in pub/sub messages to establish a partial ordering of CRDT operations.

UPSYCLE consists of two components: UPSYCLE node, the P2P node that implements the gossip-based protocols specified in [2], and UPSYCLE router, the P2P multicast routing service specified in this document that provides a pub/sub interface to local applications.

The UPSYCLE router can connect both to the local UPSYCLE node, to transport messages via the gossip-based protocols, and to other routers directly, via peer advertisements exchanged between routers through a discovery protocol or out-of-band. A node can run either or both components depending on its requirements. A core node would typically run both: the node to participate in the gossip-based core network protocols, and the router to provide access to the pub/sub service for local and remote applications. An edge node with more resources, such as a desktop or laptop computer, would also run both components, to participate in the gossip-based edge network protocols and to connect to one or more core nodes. A low-resource, battery-powered edge node (e.g. a mobile phone) would typically run only the router component to save battery: it connects to a core node to receive incoming messages, and establishes direct connections to nearby nodes when requested, e.g. via a direct peer advertisement shared through an out-of-band link or QR code exchange.

In the following we present the design for the public key-addressed routing protocol with unicast and multicast routing algorithms, the necessary transport mechanisms, message formats, message type definitions, and the protocol exposed to services and applications that enables the transport of pub/sub messages between subscribers.

Design overview

The main requirement for the system is to enable asynchronous communication even on resource-constrained, mobile end-user devices. To this end we adopt a two-tier P2P architecture, as proposed in [2], that consists of a stable, always-online P2P core network along with several ephemeral edge networks. The core and edge networks run distinct P2P dissemination protocols, and edge nodes reach remote nodes via a core node that provides a multicast routing service with store-and-forward message queues.

The incentives for core nodes to provide such a service are based on a traditional service provider model with a mutual agreement between a core and an edge node (service provider and client, respectively), as opposed to the open relay model of many P2P systems where everyone provides service to everyone. Core nodes provide store-and-forward message routing service for edge nodes, who can then rely on a core node for incoming messages from remote nodes, and don't have to participate in the core P2P network, thereby reducing resource usage. This design allows edge nodes to freely move between core nodes (service providers) or even use multiple ones simultaneously for redundancy, because their identity is not tied to a specific service provider unlike in federated systems like XMPP [9] or Matrix [10].

This is made possible by relying on decentralized, public key addressing based on cryptography, instead of centralized, DNS-based addressing, and by using a decentralized pub/sub protocol in the core network, which is described in [2]. A P2P pub/sub topic has a long-term public key identity, and uses a public key-addressed multicast group to disseminate messages to its current members; message routers forward these messages to their subscribers, which are either local services or edge nodes they provide service to. Group members who possess the private key corresponding to the public key group identity are authorized to send messages to the group, and must sign messages with this private key for them to be accepted and routed in the network. When permission to receive or send messages needs to be revoked from a member, the multicast group associated with the pub/sub topic changes: the new public key group address is communicated to the remaining members, and the new signing key to the authorized senders.

Public key addressing makes it possible for multicast routers to forward only signed messages sent by authorized senders. This is necessary to prevent denial-of-service attacks that try to overload the multicast dissemination infrastructure, and it also enables stateless message routing, where message routers do not have to maintain group membership information.

Software architecture

The software architecture running on each P2P node is based on the actor model where software components, both local and remote, communicate with each other via message passing.

This enables a composable and modular system where message type definitions establish a clear interface between services. It allows flexibility in organizing and connecting the infrastructure to suit deployment in different environments, and makes it possible for implementations in different languages to interface with each other. This architecture also enables security and fault isolation of components, provided services run in securely sandboxed environments.

Unikernels are specialized, single-address-space machine images constructed by library operating systems that can be run as lightweight virtual machines or as sandboxed processes. Running services as unikernels offers security and fault isolation, and reduces the number of software dependencies, and thus the trusted computing base (TCB), thereby making the system more secure and robust. We intend to use the MirageOS library operating system, which constructs unikernels using the OCaml language.


Each node runs a single instance of a message router and a set of other services such as peer sampling, pub/sub, DMC database, and on edge nodes end-user applications as well. Applications are considered services as well and interface with the rest of the system the same way, except they might not be running all the time and might have a user interface.

The message router service is responsible for public key-based unicast and multicast message routing both among local services and between local and remote services.

Each service is addressed by a public key, which allows end-to-end authenticated and encrypted messaging between components.

Services can be run in various ways depending on the deployment environment: as lightweight VMs when running the system on dedicated servers, while end-user devices would run sandboxed processes instead.

Authentication and encryption

Messages over the network between remote nodes and services are sent via encrypted transport channels with forward secrecy using TLS 1.3, with client authentication to ensure mutual authentication between the two ends of a connection.

Messages are addressed using public keys and authenticated by a signature. Services authenticate each other using public keys that they either have stored in their configuration or learned via P2P protocols; consequently, there is no need for a public key infrastructure with certificate authorities.

End-to-end security for groups

Message payloads sent to the group are encrypted using a decentralized group encryption scheme, such as the one proposed in [11], which we intend to use; other encryption schemes are possible depending on the requirements of the group.

The group encryption scheme in [11] relies on a causal broadcast primitive, which the underlying P2P pub/sub infrastructure provides, and implements a decentralized membership management protocol.

Each pub/sub topic uses a multicast group for message dissemination shared by the current members of the group. Whenever a membership change involves revoking access, a new multicast group is set up without the excluded members, using an end-to-end encrypted message to the remaining group members, who then instruct their local node, and any core nodes they rely on, to join the new multicast group.

This design ensures that core nodes only provide the necessary pub/sub infrastructure and do not have access to message contents, and it also minimizes the amount of metadata they have access to. The only information core nodes learn about a pub/sub topic they participate in is the identity of the other core nodes that participate in the P2P dissemination protocol. Core nodes that provide service to edge nodes also learn which of the edge nodes they serve subscribe to which groups.

User and node identities are decoupled, so nodes that only participate in the dissemination protocols do not learn user identities.

Message Router

Message routers of edge nodes (circled) & core nodes (double-circled); services (boxed).

The message router service is responsible for public key-based routing of unicast and multicast messages between services.

It maintains a routing table and a pool of open connections to remote nodes. This ensures that only a single connection is necessary to each remote node, which is torn down after a period of inactivity. This reduces network overhead, since multiple services can reuse the same connection when sending messages to the same remote node.
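The pooling behaviour described above can be sketched as follows; `ConnectionPool`, `idle_timeout`, and the `connect` callback are illustrative names under assumed semantics, not part of the specification:

```python
import time

class ConnectionPool:
    """Sketch: at most one connection per remote router, reused by all
    local services and torn down after a period of inactivity."""

    def __init__(self, idle_timeout=60.0):
        self.idle_timeout = idle_timeout
        self.conns = {}  # remote router public key -> (connection, last_used)

    def get(self, router_key, connect):
        """Return the open connection to router_key, creating it on demand
        via the connect callback, and refresh its last-used timestamp."""
        now = time.monotonic()
        if router_key in self.conns:
            conn, _ = self.conns[router_key]
        else:
            conn = connect(router_key)
        self.conns[router_key] = (conn, now)
        return conn

    def prune(self):
        """Tear down connections that have been idle longer than idle_timeout."""
        now = time.monotonic()
        for key, (conn, last_used) in list(self.conns.items()):
            if now - last_used > self.idle_timeout:
                del self.conns[key]
```

A caller would invoke `prune` periodically; the sketch omits the actual transport setup and teardown.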

Remote nodes connect to the message router via a secure connection that provides forward secrecy. The message router is designed with multiple transports in mind, such as TLS 1.3 for remote nodes and unikernels, Unix sockets for local processes, and WebSockets over TLS for web applications.

Unicast & multicast messages

Unicast messages are sent between local services and between services of directly connected remote nodes. The message router is responsible for forwarding incoming unicast messages to their destination according to its routing table.

Multicast messages are sent from one sender to many subscribers, and the message router forwards each incoming multicast message to all local subscribers of the multicast group the message is addressed to.

For example, the pub/sub service uses multicast to deliver messages to locally subscribed applications, while the peer discovery service sends multicast updates about discovered peers that the message router and pub/sub service subscribe to.

Both unicast and multicast messages are public key-addressed and signed. A valid signature is necessary in both cases for the message to be accepted and forwarded. In case of unicast, this is the signature by the sender, while multicast messages are signed by the private key that corresponds to the group's public key address, which is thus shared among authorized senders of the group.

Decentralized publish-subscribe is implemented by the pub/sub service on each node. It is responsible for the dissemination of multicast messages according to a P2P dissemination protocol. In order to implement P2P dissemination, the pub/sub service forwards each message received to a number of directly connected remote subscribers according to the P2P protocol used, by encapsulating the multicast message signed by the publisher in a unicast message addressed to the pub/sub service of the other nodes. See [2] for more details about the P2P protocols we are going to use.
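The encapsulation step can be sketched like this; the dict-based message representation and field names are illustrative stand-ins for the CBOR wire format:

```python
def encapsulate(signed_multicast_msg, local_pubsub_key, remote_pubsub_key):
    """Wrap a publisher-signed multicast message in a unicast message
    addressed to the pub/sub service of another node, leaving the inner
    message and its group signature intact."""
    return {
        'src': local_pubsub_key,    # sender: the local pub/sub service
        'dst': remote_pubsub_key,   # destination: the remote pub/sub service
        'body': signed_multicast_msg,
    }
```

The receiving message router unwraps the body and forwards the inner multicast message to its local subscribers.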

The message router accepts requests to join and leave multicast groups from local services, and it also provides service to authorized remote nodes. In the two-tier P2P model we use, core nodes provide this service to edge nodes.

Connection establishment

Direct connections between two services can be established over different transport mechanisms; initially we define TLS 1.3 over TCP/IP for this purpose.

We use TLS 1.3 with mutual public key authentication. The public key used for the TLS connection is the same as the public key address of the service. Since we use public-key addressing, keys can be directly verified upon connection establishment, even though services present a self-signed certificate.

When the message router is contacted by another service, it checks whether the public key of the connecting service is in the list of local services. If so, it flags the connection as local, otherwise it's considered remote.

The message router then adds an entry to its routing table, and delivers unicast messages destined to the registered address while the connection remains open. If the connection is closed, the message router queues messages for a limited amount of time, and delivers them when the connection is back up.

Storage backend

The message router uses a key-value store as storage backend for two purposes: to store its state (unicast routing table and multicast subscriptions), and to store queued messages.


Message format

Messages between services use the CBOR [12] serialization format with CDDL [13] specifications.

CDDL is both human and machine-readable, and thus allows both documentation and machine verification of CBOR data structures.

Messages have the following CDDL specification.

; Message types with their IDs
message =
  [ 1, UNICAST ] /
  [ 2, MULTICAST ]

; Unicast message
UNICAST = {
  source,
  destination,
  ? via: VIA,
  ttl,
  expiry,
  seen,
  body: BODY,
  sig: SIG
}

; Multicast message
MULTICAST = {
  group,
  ? via: VIA,
  ttl,
  expiry,
  seen,
  body: BODY,
  sig: SIG
}

; Message body
BODY = bstr

; Signature over the header and body
SIG = sig

; Last hop public key the message arrived via
VIA = pubkey

; Source public key address
source = (src: pubkey)

; Destination public key address
destination = (dst: pubkey)

; Group public key address
group = (grp: pubkey)

; Time-to-live: duration the message can be queued relative to its arrival, in seconds
ttl = (ttl: uint .size 32)

; Expiry time: absolute time the message expires at and should be deleted from queues,
;              in minutes since 2020-01-01 00:00 UTC
expiry = (exp: uint .size 32)

; List of message IDs recently seen by the sender
seen = ( sn: [ 0*16 msg-id ])

; Message ID: BLAKE3 hash over the message type, header, body, and signature.
msg-id = hash

; 256-bit BLAKE3 hash
hash = bytes .size 32

; Curve25519 public key
pubkey = bytes .size 32

; EdDSA signature over the message type, header, and body.
sig = bytes .size 64

Message type

We define two message types that can be sent between services: unicast, and multicast. Both unicast and multicast messages are addressed using 256-bit Curve25519 public keys.


The source, destination, and VIA fields each contain the Curve25519 public key unicast address of a service, while the group field contains a Curve25519 public key multicast group address. The optional VIA field holds either the last-hop or the next-hop router address: when receiving a message from a remote node, the local message router sets it to the remote message router's address, and when sending a message, a local service may set it to a remote message router's address to indicate source routing.

Time-to-live and expiry

The ttl and expiry fields allow limiting the lifetime of a message in the network. The ttl field specifies the maximum time in seconds a unicast or multicast message can be kept in a message queue by a message router after reception. When set to 0, the message is dropped right away if the recipient is unavailable.

Thus the ttl field contains a relative time in seconds after reception, while the expiry field is an absolute time. Whichever of the two comes sooner is used as the expiration time.

Using both a relative and an absolute expiry is necessary because clocks in the network are not synchronized. A relative time after reception allows using only the local clock, while the coarser absolute expiry time specifies an upper limit for the message lifetime.
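The expiration rule can be sketched as follows, assuming a `datetime`-based local clock; `expiration_time` is an illustrative helper, not part of the specification:

```python
from datetime import datetime, timedelta, timezone

# Epoch for the absolute expiry field: 2020-01-01 00:00 UTC
EPOCH = datetime(2020, 1, 1, tzinfo=timezone.utc)

def expiration_time(received_at, ttl_seconds, exp_minutes):
    """Effective expiry of a queued message: the sooner of the relative
    ttl (which needs only the local clock) and the coarse absolute expiry
    (minutes since the 2020 epoch)."""
    relative = received_at + timedelta(seconds=ttl_seconds)
    absolute = EPOCH + timedelta(minutes=exp_minutes)
    return min(relative, absolute)
```

A message router would delete the queued message once its local clock passes the returned time.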

Message ID

The message ID for both unicast and multicast messages is computed as the 256-bit BLAKE3 hash of the message, but not present in the header itself.

All message fields except VIA are used for the hash computation: the message type, the unicast or multicast header without the VIA field, the message body, and the signature. The hash is computed over the concatenation of the CBOR-encoded fields of the message:

  • H(TYPE+UNICAST_HEADER+BODY+SIG)
  • H(TYPE+MULTICAST_HEADER+BODY+SIG)

The seen field contains the message IDs recently seen by the sender at the time of sending the message. This allows nodes participating in the P2P dissemination of a pub/sub topic to detect missed messages without being able to decrypt message payloads.

Relay nodes in the core network only have access to the message header, and can only use the seen header field for missed message detection, while group members who are able to decrypt message payloads also process causality information.
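A sketch of the message ID computation, with `hashlib.blake2b` standing in for BLAKE3 (which is not in the Python standard library) and the CBOR-encoded fields passed in as opaque byte strings:

```python
import hashlib

def message_id(msg_type: bytes, header: bytes, body: bytes, sig: bytes) -> bytes:
    """Message ID: a 256-bit hash over the message type, header, body, and
    signature. The VIA field is excluded from the header bytes, so the ID
    stays stable as the message crosses hops. blake2b is a stand-in here
    for the BLAKE3 hash required by the specification."""
    h = hashlib.blake2b(digest_size=32)
    for part in (msg_type, header, body, sig):
        h.update(part)
    return h.digest()
```

Because the ID is recomputable from the message itself, it does not need to be carried in the header.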

Message body

The message body in BODY contains an application-layer message payload that services send to each other. When end-user data is sent, applications should encrypt the data before sending it over the network, to ensure end-to-end encryption.


Signature

The signature in SIG authenticates the message, proving that it originated from the holder of the private key corresponding to the source public key. It is computed as the EdDSA signature over the message type, the unicast or multicast header, and the message body, signing the concatenation of the respective CBOR-encoded message fields:

  • S(source_priv_key, TYPE+UNICAST_HEADER+BODY)
  • S(group_priv_key, TYPE+MULTICAST_HEADER+BODY)

Message interface

The CDDL interface specification for the message router follows.

; Message types with their IDs
message =
  [ 1, JOIN ] /
  [ 2, JOIN_ACK ] /
  [ 3, LEAVE ] /
  [ 4, LEAVE_ACK ] /
  [ 5, PULL ]

; Message types

; Join a multicast group
JOIN = {
  group_addr,
  local: bool
}

; Join acknowledgement for a multicast group
JOIN_ACK = {
  group_addr,
  result
}

; Leave a multicast group
LEAVE = {
  group_addr
}

; Leave acknowledgement for a multicast group
LEAVE_ACK = {
  group_addr,
  result
}

; Request a missed message sent to a multicast group
PULL = {
  group_addr,
  msg_id
}

; Public key group address
group_addr = ( addr: pubkey )

; Curve25519 public key
pubkey = bytes .size 32

; Message ID
msg_id = (id: hash)

; 256-bit BLAKE3 hash
hash = bytes .size 32

; Result codes
result = ( result: success / failure )
success = 0
failure = 1

Unicast routing

Unicast messages are sent between local services and between services of directly connected nodes.

When the destination is not a local service, the message router looks up the destination in its routing table to determine the remote message router to forward the message to. When the VIA field is set by a local service, it specifies the next-hop router address, which is then used directly instead of a routing table lookup.

The message router is responsible for forwarding each incoming unicast message according to its unicast routing table.

The unicast routing table is updated in either of the following ways:

  1. Incoming connections from local services and remote nodes: once an incoming connection is established, the message router adds the public key of the local service or remote message router as a directly connected entry in the routing table.

  2. Incoming messages from remote nodes: when the message router receives a message from a remote service, it adds the public key of the service to its routing table associated with the remote message router's public key, to ensure replies can be sent to the remote service address.

  3. Peer advertisements: the message router receives peer advertisements by joining the multicast group of the peer discovery service. Peer advertisements contain the public key and transport address of a node. See the Peer Discovery Service section for more details.

The message router is configured with a list of public keys that belong to local services, in order to identify whether a message originated locally or remotely, and it uses this list to authorize relaying messages from local services to remote nodes.

Unicast routing algorithm

The message router makes unicast routing decisions based on the source and destination header fields, and when present, the VIA field.

It handles an incoming unicast message M the following way.

  1. It checks whether the source_signature is a valid signature corresponding to the source public key. If the signature is invalid, it drops the message.

  2. If the message arrived via a remote connection, then it sets the VIA field to the public key of the source message router, and adds the public key of the source service to the routing table.

  3. It checks the public key D in the destination field.

    1. If the message is addressed to the message router X itself, i.e. D = X, then it processes the message.

    2. If there's a local service registered with public key D, then it forwards the message to D.

    3. It checks whether source S is a local service. If so, it proceeds with the following:

      1. If VIA is set to V, it forwards the message to the message router V.

      2. Otherwise it looks up D in the routing table and forwards the message to the message router associated with D.

  4. If none of the above lead to a forwarding decision, it drops the message.

When forwarding a message to a remote message router R, the message router first checks if a connection is already open to R. If not, it creates a new connection to R and queues the message, for at most the duration specified by the ttl and expiry fields, until the connection is set up. In case of a local destination, the message router waits for the service to connect; it does not actively initiate a connection.
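The algorithm above can be sketched as follows; the `Router` class, the dict-based messages, and the `sig_ok` stub are illustrative assumptions, with real EdDSA signature verification elided:

```python
from dataclasses import dataclass, field

@dataclass
class Router:
    key: str
    local_services: set = field(default_factory=set)
    routing_table: dict = field(default_factory=dict)  # dst -> next-hop router

def verify_signature(msg):
    # Stand-in for EdDSA verification against the source public key.
    return msg.get('sig_ok', True)

def route_unicast(router, msg, arrived_remote_from=None):
    """Forwarding decision for one incoming unicast message (steps 1-4)."""
    if not verify_signature(msg):               # 1. drop invalid signatures
        return ('drop', None)
    if arrived_remote_from is not None:         # 2. record reverse path for replies
        msg['via'] = arrived_remote_from
        router.routing_table[msg['src']] = arrived_remote_from
    dst = msg['dst']
    if dst == router.key:                       # 3.1 addressed to the router itself
        return ('process', dst)
    if dst in router.local_services:            # 3.2 deliver to a local service
        return ('deliver', dst)
    if msg['src'] in router.local_services:     # 3.3 relay only for local senders
        if 'via' in msg:                        # 3.3.1 source routing via VIA
            return ('forward', msg['via'])
        if dst in router.routing_table:         # 3.3.2 routing table lookup
            return ('forward', router.routing_table[dst])
    return ('drop', None)                       # 4. no decision: drop
```

The returned action pair stands in for the actual delivery or forwarding over a connection from the pool.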

  1. Example

    When a local service PSx wants to send a message to a remote service PSy via the local router X and remote router Y, the following steps are taken:

    1. PSx creates a unicast message with destination PSy and an application payload.
    2. PSx sends the encapsulated message to X
    3. X verifies that PSx is a local service
    4. X looks up PSy in its routing table, and finds an entry to a remote node Y
      • If a connection is open to Y, it reuses that
      • Otherwise it creates a new connection to Y and queues the message, for at most the duration specified by the ttl and expiry fields, until the connection is set up.
    5. X forwards the message to Y
    6. Y verifies the message signature
    7. Y looks up the destination, PSy, in its routing table, and continues processing since it's a local service
    8. Y sets the VIA field to X, to indicate where the message arrived from
    9. Y adds an entry to its routing table to indicate the source service PSx is reachable via the message router X
    10. If PSy is connected, Y forwards the message to PSy, otherwise it queues the message for a limited time specified by ttl and expiry fields
    11. PSy delivers the message locally
    12. PSy can reply by addressing a message to the source address (PSx)

Multicast routing

The message router performs one-to-many multicast routing to its direct subscribers who have requested to join multicast groups.

Services join a multicast group by sending a JOIN message to the message router, while leaving is done by sending a LEAVE message.

The message router thus maintains persistent subscription state, which is restored after a restart.

JOIN & LEAVE messages

JOIN & LEAVE messages are unicast messages sent to the message router to request joining or leaving a multicast group.

These are typically restricted to local services and to edge nodes the message router provides service to, whose public keys are part of the message router configuration. However, in certain cases a node may choose to accept JOIN requests from any node, for instance when it intends to provide service to all nodes on an edge network.

The JOIN message contains the multicast group address of the group to join. The message router adds the requestor as a local subscriber of the group, and starts forwarding multicast messages sent to the group to the requestor.

The local flag in the JOIN request indicates the multicast group is local to the node. If it is not set, the message router sends a SUB request to the local pub/sub service to request joining the P2P dissemination overlay for the respective P2P pub/sub topic with the same public key address.

Upon receipt of a LEAVE message, the message router removes the requestor from the local subscription list of the group and stops forwarding messages for this group to the requestor. Once there are no subscribers left for a group, it also sends an UNSUB request to the pub/sub service, since there is no longer a need to participate in the P2P pub/sub dissemination overlay.

To acknowledge the request, a JOIN_ACK or LEAVE_ACK message is sent in response with a success or failure result.
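The JOIN/LEAVE handling can be sketched like this; the `sent` list stands in for the SUB/UNSUB requests to the local pub/sub service, and the result code 0 follows the `success` value from the interface specification:

```python
def handle_join(subscribers, sent, requestor, group, local=False):
    """JOIN: register the requestor as a subscriber of the group; on the
    first subscription to a non-local group, request the pub/sub service
    (via `sent`, an illustrative stand-in) to join the P2P overlay."""
    first = not subscribers.get(group)
    subscribers.setdefault(group, set()).add(requestor)
    if first and not local:
        sent.append(('SUB', group))
    return ('JOIN_ACK', 0)   # 0 = success

def handle_leave(subscribers, sent, requestor, group):
    """LEAVE: unregister the requestor; once no subscribers remain for the
    group, request the pub/sub service to leave the dissemination overlay."""
    subscribers.get(group, set()).discard(requestor)
    if not subscribers.get(group):
        subscribers.pop(group, None)
        sent.append(('UNSUB', group))
    return ('LEAVE_ACK', 0)
```

The sketch keeps subscription state in a plain dict; a real router would persist it in the key-value store so it survives a restart.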

PULL message

A PULL message requests a specific message by ID from another group member when it was missed during the push-based dissemination phase, as detected via the seen routing header or the dependencies header of the pub/sub message payload.

Multicast routing algorithm

The message router makes multicast routing decisions based on the group header field, which contains the group's public key address G.

It saves incoming messages in a key-value store for a limited time in order to serve disconnected clients and PULL requests from remote nodes.

When the message router receives a multicast message, or a unicast message with a multicast message payload, it processes it according to the following.

  1. It verifies the group_signature in the header, to make sure it matches the group public key. If the verification fails, it drops the message.

  2. If VIA is set to V, it forwards the message to the remote message router V, and stops processing.

  3. It checks the group address G in the header. If there are no local subscribers for G, it drops the message.

  4. It checks whether the message has been received before by looking up the message ID in the list of recently received messages. If found, it drops the message, since it has been processed before.

  5. It adds the message ID to the list of recently received messages. The list size is bounded by a local configuration parameter.

  6. It saves the message in the key-value store with the message ID as key.

  7. If the message arrived from a remote node R, it sets the VIA field to R.

  8. It forwards the message to all local subscribers.

    If a subscriber is not connected, it queues the message for a limited time, as specified by the ttl and expiry fields, by adding the message ID to the message queue of the recipient.

  1. Example

    It takes the following steps to route a multicast message M addressed to group G from the PSx pub/sub service on node X to the PSy pub/sub service on node Y and to the edge nodes A and C.

    1. PSx sets VIA to Y
    2. PSx sends the message to X
    3. X forwards the message to Y
    4. Y verifies the message signature
    5. Y checks if it has any subscribers for group G, and finds PSy, A, C
    6. Y checks if this message has been received before by looking up the message ID in the key-value store: if found, it drops the message
    7. Y sets VIA to X
    8. Y sends the message to all connected subscribers of group G (PSy, C)
    9. Y queues the message for disconnected subscribers (A)
    10. PSy receives the message and delivers it locally. The pub/sub service would forward the message to a limited number of group members other than X according to a P2P pub/sub protocol.
    11. C receives the message and delivers it to its local subscribers.
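The routing steps can be sketched as follows; the dict-based message representation is illustrative, the group signature check (step 1) is elided, and `dedupe_id` is a stand-in for the specified BLAKE3/CBOR message ID:

```python
import hashlib
import json

def dedupe_id(msg):
    """Illustrative message ID: a hash over all fields except VIA, so the
    same message arriving via different hops is recognized as a duplicate."""
    data = json.dumps({k: v for k, v in msg.items() if k != 'via'},
                      sort_keys=True)
    return hashlib.blake2b(data.encode(), digest_size=32).hexdigest()

def route_multicast(subscribers, seen, store, msg, arrived_from=None):
    """Multicast forwarding steps 2-8 above; returns forwarding/delivery
    actions. Assumes a VIA set on a locally injected message means source
    routing (step 2)."""
    if 'via' in msg and arrived_from is None:    # 2. source routing: forward
        return [('forward', msg['via'])]         #    to V and stop
    subs = subscribers.get(msg['grp'], set())    # 3. no local subscribers: drop
    if not subs:
        return []
    mid = dedupe_id(msg)                         # 4. seen before: drop duplicate
    if mid in seen:
        return []
    seen.add(mid)                                # 5. remember the message ID
    store[mid] = msg                             # 6. keep for queues and PULLs
    if arrived_from is not None:                 # 7. record the last hop
        msg['via'] = arrived_from
    return [('deliver', s) for s in sorted(subs)]  # 8. fan out to subscribers
```

Queuing for disconnected subscribers (step 8's note) would replace the `deliver` action with an enqueue bounded by the ttl and expiry fields.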

Peer Discovery Service

The peer discovery service is responsible for the discovery of other nodes in the network. The underlying discovery protocol depends on the type of the network the node is connected to.

Edge nodes run a peer discovery protocol on the LAN where each node on the network periodically announces itself via a signed PEER_ADVERTISEMENT message sent to a well-known IP multicast address.

While there are service discovery protocols available for LANs, they are request-response based, which is better suited to static services available on networks. In a P2P setting, periodic announcements are a simpler and more efficient way to gather a list of currently available nodes on the LAN.

Core nodes run a random peer sampling protocol instead: starting from an initial list of well-known nodes, they maintain a partial view of the nodes participating in the network, and exchange their views via a gossip protocol. An entry in the view is a signed peer advertisement.

In addition to discovery protocols, peer advertisement messages can also be shared out-of-band, e.g. via a link or QR code.

Peer advertisements contain an entry with the public key and transport address of the node's message router service, signed by the private key of the message router. The peer discovery service announces peer advertisements it receives via a multicast message that other local services can subscribe to. The message router joins this multicast group and adds received records to its routing table.

Peer advertisements may also contain a list of services publicly available on the node, such as the peer discovery service or the pub/sub service.

A list of public pub/sub groups may be also part of the advertisement, to facilitate discovery of public groups on edge networks.

A peer advertisement is valid for a limited time, specified in the ttl and expiry fields, after which the message router deletes the record from its routing table.

These fields behave the same way as the similarly named fields of the unicast & multicast routing headers. The ttl field contains a relative time in seconds after reception, while the expiry field is an absolute time. Whichever of the two comes sooner is used as the expiration time.

Message interface

The CDDL interface specification for the peer discovery service follows.

; Message types with their IDs
message =
  [ 1, PEER_ADVERTISEMENT ]

; Signed peer advertisement announcing a node
PEER_ADVERTISEMENT = {
  node_id,
  transport_addr,
  ? services,
  ? groups,
  revision,
  ttl,
  expiry,
  signature
}


; Public key address of the message router service of the node
node_id = ( id: pubkey )

; Transport address
transport_addr = (addr: addr)

; Services: list of public services available on the node
services = (svcs: {* service_type => service_addr})

; Groups: list of public pub/sub groups available on the node
groups = (grps: [* group_addr])

; Public key group address
group_addr = ( addr: pubkey )

; Revision number of the advertisement
revision =  (rev: uint)

; Signature by the private key that belongs to node_id
signature = (sig: sig)

; Time-to-live: duration the message can be queued relative to its arrival, in seconds
ttl = (ttl: uint .size 32)

; Expiry time: absolute time the message expires at and should be deleted from queues,
;              in minutes since 2020-01-01 00:00 UTC
expiry = (exp: uint .size 32)

; Curve25519 public key
pubkey = bytes .size 32

; EdDSA signature
sig = bytes .size 64

addr =
  [ 1, ip, port ] / ; TCP
  [ 2, ip, port ]   ; UDP

ip = ip4 / ip6
ip4 = bytes .size 4
ip6 = bytes .size 16

port = uint .size 2

service_type =
  1 / ; peer discovery
  2   ; pub/sub

service_addr = pubkey

Publish-Subscribe Service

The publish-subscribe service on each node is responsible for managing the subscriptions of the node to pub/sub topics, and participating in P2P pub/sub protocols for subscribed topics. The P2P gossip-based pub/sub protocols we use are described in [2].

Since the message router takes care of store-and-forward multicast routing and subscriptions for local clients, the pub/sub service does not need to deal with these concerns, only with the P2P protocols between nodes.

The pub/sub service accepts SUB and UNSUB requests from the message router only, thus it does not directly interact with local subscribers.

Upon receiving a SUB request, the pub/sub service joins the P2P dissemination overlay of the topic, and also sends a JOIN request to the message router for the same group in order to receive multicast messages for the group sent by other nodes.

When forwarding a multicast message to another node in the group, the pub/sub service encapsulates it in a unicast message addressed to the message router of that node. This allows the message router to forward it to its local subscribers, which include the pub/sub service itself.

Upon receiving an UNSUB request, the pub/sub service stops participating in the P2P dissemination overlay and sends a LEAVE request to the message router.

As an acknowledgement, the pub/sub service sends a SUB_ACK or UNSUB_ACK message in response to a SUB or UNSUB request, respectively.

Message interface

The CDDL interface specification for the pub/sub service follows.

; Message types with their IDs
message =
  [ 1, SUB ] /
  [ 2, SUB_ACK ] /
  [ 3, UNSUB ] /
  [ 4, UNSUB_ACK ]

; Subscribe to a pub/sub topic
SUB = {
  addr
}

; Subscribe acknowledgement
SUB_ACK = {
  addr,
  result
}

; Unsubscribe from a pub/sub topic
UNSUB = {
  addr
}

; Unsubscribe acknowledgement
UNSUB_ACK = {
  addr,
  result
}

; Public key address of the pub/sub topic
addr = ( addr: pubkey )

; Curve25519 public key
pubkey = bytes .size 32

; Result codes
result = ( result: success / failure )
success = 0
failure = 1

Application messages

A message sent by an application to a pub/sub topic has the following CDDL specification.

; Multicast message
message = [ header, body ]

; Message header
header = {
  dependencies,
  from,
  to,
  seq-no,
  signature
}

; Causal dependencies: list of message IDs this message directly depends on
dependencies = (deps: [* msg-id ])

; Sender: member's public key address
from = (from: pubkey)

; Recipient: group's public key address
to = (to: pubkey)

; Per-sender sequence number
seq-no = (seq: uint)

; Sender's signature over the rest of the message
signature = (sig: sig)

; Message ID
msg-id = hash

; 256-bit BLAKE3 hash
hash = bytes .size 32

; Curve25519 public key
pubkey = bytes .size 32

; Curve25519 private key
privkey = bytes .size 32

; EdDSA signature
sig = bytes .size 64

; Message body
body =
  [ 0, ACK ] /
  [ 1, UNICAST ] /
  [ 2, MULTICAST ] /
  [ 11, MEMBER_ADD ] /
  [ 12, MEMBER_REMOVE ]

ACK = {}

; Unicast message to one or more group members
UNICAST = bytes

; Multicast message to the whole group
MULTICAST = bytes

; Add members to the group
MEMBER_ADD = {
  members
}

; Remove members from the group
MEMBER_REMOVE = {
  members
}

; Members: list of public keys
members = (mem: [+ pubkey])

References

[1] “Distributed replicated edge agency machine.” [Online].
[2] T. G. x Thoth, “UPSYCLE: Ubiquitous publish-subscribe infrastructure for collaboration on edge networks,” 2021. [Online].
[3] “Distributed mutable containers,” 2021. [Online].
[4] C. Baquero, P. S. Almeida, and A. Shoker, “Making operation-based CRDTs operation-based,” in IFIP International Conference on Distributed Applications and Interoperable Systems, 2014, pp. 126–140.
[5] “AMQP v1.0.” [Online].
[6] “libp2p - a modular network stack.” [Online].
[7] J. Benet, “IPFS - content addressed, versioned, P2P file system,” 2014. [Online].
[9] “Extensible messaging and presence protocol (XMPP).” [Online].
[10] “Matrix specification.” [Online].
[11] M. Weidner, M. Kleppmann, D. Hugenroth, and A. R. Beresford, “Key agreement for decentralized secure group messaging with strong security guarantees,” Cryptology ePrint Archive, Report 2020/1281, 2020. [Online].
[12] C. Bormann and P. Hoffman, “Concise binary object representation (CBOR),” 2020. [Online].
[13] H. Birkholz, C. Vigano, and C. Bormann, “Concise data definition language (CDDL): A notational convention to express concise binary object representation (CBOR) and JSON data structures,” 2019. [Online].