tl;dr: 2021-05-17 – DREAM releases D1.3, the SHRUTHI Service Prototype Release, closing the second Milestone of our one-year plan (WP1: Design).
Introduction
SHRUTHI is the Unikernel hosting component of the DREAM project. The acronym stands for Self-Hosted Robust Unikernel Testing & Hosting Infrastructure.
As part of achieving DREAM’s goal “to enable the convergence of distributed P2P networks and linked data models, within social solidarity economy and organize among trusted groups”, we believe that providing a practical and deterministic way for system administrators and collectives to self-host our components is an important endeavor.
To be able to sustainably install and maintain the DREAM software components in their server configuration – in which they take the form of Mirage Unikernels – we have extended the libvirt-based Rhyzome Provisioner, adding support for the deployment of such unikernels.
Unikernel?
Unikernels are specialized, sealed, fixed-purpose software images that make use of minimal computing resources to accomplish specific tasks, running as lightweight virtual machines or as sandboxed processes. Because they’re not conceived as general purpose computers, they present a number of interesting properties: they are efficient, robust, minimalist and thus well-suited for running self-organized, self-managing, and self-healing systems.
SHRUTHI is an effort to realize a unikernel-based testing & hosting infrastructure that can orchestrate large-scale tests for the evaluation of P2P protocols, and allow users and hosting providers to host P2P nodes that provide storage and proxy services.
Why SHRUTHI?
As already noted, SHRUTHI was developed to lower the threshold to self-hosting DREAM, but the question may still stand as to why we didn’t opt for a more widely used runtime orchestration platform like Docker, or even one more specific to Unikernels such as Albatross. We based our decision on the following principles:
- Stability | libvirt is a well-tested and mature project with solid documentation, making it a stable base to build upon.
- Simplicity | Docker, systemd-nspawn, OpenStack, etc., are all widely used tools for orchestration, but with more batteries and complexity than we need. The simpler the tool, the less can go wrong, and the easier it is to maintain.
- Ubiquity | Since libvirt and Go are both widely available on many OSes and distributions, we have a straightforward installation path from small boxen at home, in your association or local hackerspace, up to cloud providers.
- Portability | Using something like Albatross, while an excellent choice for MirageOS, would unfortunately limit our OS and architecture support, which goes against our striving for broad usability and dispersion[1].
Provisioning
For information on building and installing Rhyzome, please see the README.
Demonstration
In this screencast, three steps to the deployment of a unikernel are shown; the demo first verifies that the Rhyzome service is running – it was started with the command ./rhyzomesrv from the Rhyzome repository – and then proceeds to build a new HTTPS service unikernel from the MirageOS examples; once the sample unikernel is ready – and it only takes seconds on a standard laptop – Rhyzome can provision it, passing kernel parameters to declare an IP address and its local network gateway. In less time than it took to build the unikernel, a specific instance is deployed and can now serve its purpose, as demonstrated in the second part of the demo, which shows on the left a view from within the unikernel and on the right the client view querying the HTTPS service.
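For reference, building such a sample unikernel from the mirage-skeleton examples roughly follows the usual MirageOS workflow. This is only a sketch; the exact example directory and flags depend on your MirageOS version:

# from a MirageOS example directory, e.g. an HTTPS sample in mirage-skeleton
mirage configure -t virtio   # target the virtio backend, producing a .virtio image
make depend                  # install the required opam dependencies
make                         # build the unikernel image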
Running the Daemon
As a prerequisite, the user running the daemon should be in the libvirt group.
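On most distributions this amounts to something like the following (the group name may differ, e.g. libvirtd on some systems; log out and back in afterwards for the change to take effect):

# add the current user to the libvirt group
sudo usermod -aG libvirt "$USER"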
Configuration
Rhyzome will look for a configuration file named rhyzome.conf in the current directory.
For a basic example config, we are assuming the libvirt bridge interface is named virbr0, and you are running the server on localhost:
{
  "BridgeInterface": "virbr0",
  "Hostname": "localhost",
  "Bind": ":8080"
}
Once the configuration is in place, we can start the daemon by simply running:
./rhyzomesrv
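To confirm the daemon came up and is listening on the configured Bind address (here the :8080 from the example config), a quick check is:

# the daemon should appear as a TCP listener on the configured port
ss -ltn | grep 8080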
Deploying the UPSYCLE Message Router
Now that the server is running on our localhost, we can deploy some unikernels!
With our client we will tell the server four important pieces of information:
- That we want to create a new instance of a unikernel
- What to name this instance
- The path to the unikernel
- Any configuration parameters to pass to the unikernel at boot (unikernels are stateless!)
./rhyzomectl --server http://127.0.0.1:8080 instance create \
--name msgrouter \
--kernel /path/to/kernel/on/server/msgrouter.virtio \
--kernel-parameters "--ipv4-gateway 192.168.100.1 --ipv4 192.168.100.43/24"
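Since Rhyzome drives libvirt under the hood, the result can also be cross-checked with libvirt's own tooling; how the new domain is named depends on how Rhyzome labels the instance:

# the freshly provisioned unikernel should appear as a running libvirt domain
virsh list --all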
Previous Work
Development on SHRUTHI builds upon the Rhyzome Provisioner and other prior work from the Entanglement.Garden project.
Outlook
Deterministic unikernel provisioning, as researched within DREAM and deployed by Entanglement.Garden, remains experimental and requires further research:
- how to split DREAM software components most efficiently so as to keep a performance-complexity ratio as high as possible?
- what economic models can be set up to facilitate both core node hosting and bespoke unikernel definition?
- how does resource consumption compare with traditional cloud hosting? This is especially important because, intuitively, the energy gain would be tremendous, but the exponential growth in the number of unikernels might offset such gains (see Jevons Paradox), especially if more, smaller hardware were used, leading to fewer economies of scale.
As the DREAM project moves on to implement its design in the #dream:wp2-software work package and demonstrate it in #dream:wp3-community, this work will inform and nourish those deliverables, and we expect D1.4 – DREAM Specification to answer some of these questions by the end of the year. We will be using SHRUTHI to explore the first two questions set forth in this outlook.
You may follow updates in the related software repositories.
We welcome feedback and criticism! Our forum is open for friendly cooperation among #dream-catchers. Do not hesitate to contact us and contribute to the code: our research is made to improve the digital commons.
- Previous in-house experience with “decentralized” independent hosting demonstrated that more than 80% of independent ISPs in Europe were using the same data centers, effectively centralizing most alternative hosting at a single cloud provider. This led us to consider the concept of dispersion, since decentralization alone seems not to be enough. ↩︎