We are pleased to announce an initial reference software implementation of UPSYCLE Router.
Introduction
As part of DREAM we are researching and developing decentralized communication protocols that facilitate peer-to-peer group collaboration with strong privacy and scalability guarantees. The component of DREAM which implements this is called UPSYCLE.
An integral part of the UPSYCLE architecture, upsycle-router is the message router and service component that runs on every device in the system — a phone, a laptop, a core node in a data center — and handles authentication, encryption, and translation of cryptographic public keys into routing information. Services connect to the nearest message router, and message routers connect to each other. In this way the message routers form the backbone of the system, and all parts of the system can communicate with each other. Once connected, services can publish updates to a topic (a multicast group identified by its public key) or send a message directly to another service or router (whose address is also a public key), and the message router makes the routing, sending, and receiving transparent.
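To make the flow concrete, here is a minimal sketch of how a service might interact with its local message router. The Router module, its functions (connect, publish, send) and the key values are hypothetical illustrations of the behaviour described above, not the actual upsycle-router API.

```ocaml
(* Hypothetical sketch of a service talking to its local message router.
   The Router module, its functions and the key values are illustrative
   only, not the actual upsycle-router API. *)
let run () =
  (* Connect to the nearest message router. *)
  Lwt.bind (Router.connect ~addr:"127.0.0.1:4433") (fun router ->
    (* Topics and peers are both addressed by public keys. *)
    let topic = Router.pubkey_of_string "<topic public key>" in
    let peer  = Router.pubkey_of_string "<peer public key>" in
    (* Publish an update to the topic (multicast)... *)
    Lwt.bind (Router.publish router ~topic ~payload:"hello, group") (fun () ->
      (* ...and send a message directly to another service (unicast). *)
      Router.send router ~dest:peer ~payload:"hello, peer"))

let () = Lwt_main.run (run ())
```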
The message router also contains a cache of recently seen messages and a temporary queue for messages which should be retransmitted later; it also maintains a subscription registry of which nodes in the system are subscribed to which topics. Applications using UPSYCLE pub/sub should implement end-to-end encryption so that an intermediate message router can only read the outermost layer of a message to know how to handle it, but does not have access to the payload.
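As a rough mental model, the per-router state described above can be pictured as a record along the following lines. The type and field names are hypothetical and simplified; the actual implementation differs.

```ocaml
(* Hypothetical, simplified sketch of the state a message router keeps.
   Type and field names are illustrative, not the actual implementation. *)
type pubkey = string            (* public key identifying a topic, service or router *)
type message = { id : string; payload : bytes }

type router_state = {
  seen_cache    : (string, message) Hashtbl.t;   (* recently seen messages, keyed by id *)
  retransmit_q  : message Queue.t;               (* messages to retransmit later *)
  subscriptions : (pubkey, pubkey list) Hashtbl.t;
  (* topic public key -> nodes subscribed to that topic *)
}
```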
As part of UPSYCLE there is also ongoing research and development into peer-to-peer protocols including peer discovery, clustering, and dissemination, which form another major part of the system. Once these are ready they can be connected to this implementation, for example by creating a service which understands these protocols and communicates through its local message router.
See the technical specification for upsycle-router.
Implementation
The repository is hosted at our GitLab and the API docs are available here.
This release is tagged 0.1.0-alpha1.
We provide library and application code in OCaml for running one or more message routers and one or more local services associated with each message router. The library code does all the work of setting up the services and message routers and connecting them to each other. The various message types have all been implemented, and we provide high-level functions for encapsulating messages in unicast/multicast wrappers and encoding/decoding them to/from the CBOR binary format, as well as high-level functions for sending the various kinds of messages with arbitrary payloads.
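As a rough illustration of the wrapping and encoding step, the sketch below shows a simplified unicast/multicast wrapper type and how it might be serialized with the CBOR.Simple module from the ocaml-cbor (opam "cbor") package. The types, constructors, and field names are hypothetical; the actual message formats are defined in the specification.

```ocaml
(* Hypothetical, simplified wrapper type and CBOR encoding.
   Assumes the "cbor" opam package; names are illustrative only. *)
type envelope =
  | Unicast   of { dest : string; payload : string }   (* dest is a public key *)
  | Multicast of { topic : string; payload : string }  (* topic is a public key *)

let to_cbor (e : envelope) : CBOR.Simple.t =
  match e with
  | Unicast { dest; payload } ->
      `Map [ `Text "type", `Text "unicast";
             `Text "dest", `Bytes dest;
             `Text "payload", `Bytes payload ]
  | Multicast { topic; payload } ->
      `Map [ `Text "type", `Text "multicast";
             `Text "topic", `Bytes topic;
             `Text "payload", `Bytes payload ]

(* Encode to the binary CBOR representation. *)
let encode (e : envelope) : string = CBOR.Simple.encode (to_cbor e)
```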
The application code sets up a console-based front end that launches the code, allows various customizations through the associated YAML files, and lets you query and manipulate the state. Commands can also optionally be sent using the keyboard or a control interface, which is useful for scripting, testing, and demo purposes. The applications are loosely coupled to the library code, and interested developers are encouraged to write their own.
We also provide an example scenario using three message routers and four services, which you are encouraged to experiment with.
The general architecture is based on the Lwt ("lightweight threads") library. The word "thread" is used a bit loosely here — Lwt threads are better thought of as cooperative asynchronous promises, which greatly reduce the challenges of classical threading such as deadlocks and race conditions. A server thread running a streaming parser places incoming messages on an internal queue, from which they are handled by a handler thread, and the promises can be composed using functional programming techniques (the Lwt promise type forms a so-called monad). This design allows very high throughput and very high uptime, and makes the system easy to reason about, including in the presence of errors.
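The following is a minimal, self-contained sketch of that pattern, independent of the upsycle-router code: one Lwt thread produces messages onto an Lwt_stream acting as the internal queue, and a handler thread consumes them. The message contents are placeholders.

```ocaml
(* Minimal Lwt sketch of the queue-and-handler pattern described above.
   Not upsycle-router code; the messages are placeholders. *)
let () =
  (* The internal queue: push adds a message, the stream yields them in order. *)
  let queue, push = Lwt_stream.create () in

  (* "Server" thread: pretend to parse incoming messages and enqueue them. *)
  let server =
    let rec loop i =
      if i > 3 then (push None; Lwt.return_unit)  (* close the stream *)
      else begin
        push (Some (Printf.sprintf "message %d" i));
        Lwt.bind (Lwt_unix.sleep 0.1) (fun () -> loop (i + 1))
      end
    in
    loop 1
  in

  (* Handler thread: process messages as they become available. *)
  let handler =
    Lwt_stream.iter_s (fun msg -> Lwt_io.printlf "handled: %s" msg) queue
  in

  Lwt_main.run (Lwt.join [ server; handler ])
```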
Try it out!
See the README for instructions for configuring and running the software and trying the examples, and also for a more in-depth discussion of various topics.
What’s next
The code has been designed with the goal of eventually running in a container, for example a MirageOS unikernel. This is done by clearly separating I/O from pure code and by using module functors to abstract away the Unix dependency where possible. However, at the moment it still depends on Unix in a few places, in particular the streaming Angstrom parser, which uses angstrom-lwt-unix, and the TLS code, which uses conduit-lwt-unix. Conduit is part of the MirageOS ecosystem and we are using the pure OCaml TLS stack, so the Unix-specific parts are expected to be easy to abstract away. (The Mirage version of Conduit provides abstractions for input and output which are not necessarily tied to Unix file descriptors.) The rest will require some effort in a future stage, and the various C libraries bound to by the OCaml dependencies also need to be audited for compatibility.
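For readers unfamiliar with the technique, the sketch below shows the general shape of abstracting I/O behind a module signature with a functor, so that a Unix implementation can later be swapped for a MirageOS one. The signature and functor names are hypothetical and much simpler than the real code.

```ocaml
(* Hypothetical sketch of abstracting I/O behind a functor, so the core
   logic does not depend on Unix directly. Names are illustrative only. *)

(* The I/O operations the core code needs, independent of Unix. *)
module type IO = sig
  type flow
  val read  : flow -> string Lwt.t
  val write : flow -> string -> unit Lwt.t
end

(* Core logic, parameterized over the I/O implementation. *)
module Make (Io : IO) = struct
  let echo flow =
    Lwt.bind (Io.read flow) (fun msg -> Io.write flow msg)
end

(* A Unix-backed or Mirage-backed implementation can then be plugged in:
   module Unix_io : IO = struct ... end
   module Router_core = Make (Unix_io) *)
```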
Discussion
You’re welcome to discuss the software with the DREAM Catchers!
This is experimental software made for P2Pcollab within the DREAM project.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme within the framework of the NGI-POINTER Project funded under grant agreement No 871528.