D1.3 – SHRUTHI Service Prototype Release

tl;dr: 2021-05-17 – DREAM releases D1.3, the SHRUTHI Service Prototype Release, closing the second Milestone of our one-year plan (WP1: Design).

Introduction

SHRUTHI is the Unikernel hosting component of the DREAM project. The acronym stands for Self-Hosted Robust Unikernel Testing & Hosting Infrastructure.

As part of achieving DREAM’s goal “to enable the convergence of distributed P2P networks and linked data models, within social solidary economy and organize among trusted groups”, we believe that providing a practical and deterministic way for system administrators and collectives to self-host our components is an important endeavor.

To sustainably install and maintain the DREAM software components in their server configuration – in which they take the form of MirageOS unikernels – we have extended the libvirt-based Rhyzome Provisioner, adding support for the deployment of such unikernels.

Unikernel?

Unikernels are specialized, sealed, fixed-purpose software images that make use of minimal computing resources to accomplish specific tasks, running as lightweight virtual machines or as sandboxed processes. Because they’re not conceived as general purpose computers, they present a number of interesting properties: they are efficient, robust, minimalist and thus well-suited for running self-organized, self-managing, and self-healing systems.

SHRUTHI is an effort to realize a unikernel-based testing & hosting infrastructure that can orchestrate large-scale tests for the evaluation of P2P protocols, and allow users and hosting providers to host P2P nodes that provide storage and proxy services.

Why SHRUTHI?

As already noted, SHRUTHI was developed to lower the threshold to self-hosting DREAM, but the question remains why we didn’t opt for a more widely used runtime orchestration platform like Docker, or even one more specific to unikernels such as Albatross. We based our decision on the following principles:

  • Stability | libvirt is a well tested, and mature project with solid documentation, making it a stable base to build upon.

  • Simplicity | Docker, systemd-nspawn, OpenStack, etc., are all widely used tools for orchestration, but with more batteries and complexity than we need. The simpler the system, the less can go wrong, and thus the easier it is to maintain.

  • Ubiquity | Since libvirt and Go are both widely available on many OSes and distributions, we have a straightforward installation path from small boxen at home, in your association or local hackerspace, up to cloud providers.

  • Portability | Using something like Albatross, while an excellent choice for MirageOS, would unfortunately limit our OS and architecture support, which goes against our striving for broad usability and dispersion[1].

Provisioning

For information on building and installing Rhyzome, please see the README.

Demonstration

[Screencast: deploying a unikernel with Rhyzome]

This screencast shows the three steps to deploying a unikernel. The demo first verifies that the Rhyzome service is running – it was started with the command ./rhyzomesrv from the Rhyzome repository – and then builds a new HTTPS service unikernel from the Mirage examples. Once the sample unikernel is ready – which only takes seconds on a standard laptop – Rhyzome can provision it, passing kernel parameters to declare an IP address and its local network gateway. Even faster than the build, a specific instance is deployed and can now serve its purpose, as demonstrated in the second part of the demo: on the left, a view from within the unikernel; on the right, the client view querying the HTTPS service.
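For readers who want to reproduce the build step shown in the screencast, the following is a rough sketch based on the upstream mirage-skeleton examples; the repository URL, example directory, and virtio target are assumptions about your setup, not part of this release:

# hypothetical build of an HTTPS example unikernel from mirage-skeleton
git clone https://github.com/mirage/mirage-skeleton
cd mirage-skeleton/applications/static_website_tls
mirage configure -t virtio
make depend
make
# the resulting .virtio image can then be provisioned with rhyzomectl as shown below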

Running the Daemon

As a prerequisite, the user running the daemon should be a member of the libvirt group.
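On most distributions this can be done with usermod (the group may be named differently, e.g. libvirtd on some systems, and you will need to log out and back in for it to take effect):

sudo usermod -aG libvirt $USER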

Configuration

Rhyzome will look for a configuration file named rhyzome.conf in the current directory.

For a basic example config, we are assuming the libvirt bridge interface is named virbr0, and you are running the server on localhost:

{
	"BridgeInterface": "virbr0",
	"Hostname": "localhost",
	"Bind": ":8080"
}
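If you are unsure of the bridge name on your host, you can inspect libvirt’s default network (assuming that is the network you use):

virsh net-info default
# the "Bridge:" line of the output shows the interface name to use for BridgeInterface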

Once the configuration is in place, we can start the daemon by simply running:

./rhyzomesrv

Deploying the UPSYCLE Message Router

Now that the server is running on our localhost, we can deploy some unikernels!

With our client we will tell the server four important pieces of information:

  • That we want to create a new instance of a unikernel
  • What to name this instance
  • The path to the unikernel
  • Any configuration parameters to pass to the unikernel at boot (Unikernels are stateless!)

./rhyzomectl --server http://127.0.0.1:8080 instance create \
--name msgrouter \
--kernel /path/to/kernel/on/server/msgrouter.virtio \
--kernel-parameters "--ipv4-gateway 192.168.100.1 --ipv4 192.168.100.43/24"
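Once the instance has been created, it should show up in the client’s instance list output (the exact output format of this prototype may change):

./rhyzomectl --server http://127.0.0.1:8080 instance list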

Previous Work

Development on SHRUTHI builds upon the Rhyzome Provisioner and other prior work from the Entanglement.Garden project.

Outlook

Deterministic unikernel provisioning such as the one researched within DREAM and deployed by the Entanglement.Garden remains experimental and requires further research:

  • how to split DREAM software components most efficiently so as to keep a performance-complexity ratio as high as possible?
  • what economic models can be set up to facilitate both core node hosting and bespoke unikernel definition?
  • how does resource consumption compare with traditional cloud providing? This is especially important because, intuitively, the energy gain would be tremendous, but the exponential growth in the number of unikernels might offset such gains (see Jevons Paradox), especially if many smaller machines were used, reducing economies of scale.

As the DREAM project moves on to implement its design in the #dream:wp2-software and demonstrate it in the #dream:wp3-community work packages, it will inform and nourish these deliverables, and we expect D1.4 – DREAM Specification to answer some of these questions by the end of the year. We will be using SHRUTHI to explore the first two questions set forth in this outlook section.

You may follow updates in the related software repositories.

We welcome feedback and criticism! Our forum is open for friendly cooperation among #dream-catchers. Do not hesitate to contact us and contribute to the code: our research is made to improve the digital commons.


  1. Previous in-house experience with “decentralized” independent hosting demonstrated that more than 80% of independent ISPs in Europe were using the same data centers, effectively centralizing most alternative hosting at a single cloud provider. This led us to consider the concept of dispersion, since decentralization alone seems not to be enough. ↩︎

@dvn, we should refine the description as the current one belongs to the proposal, before we realized it would not be a spec. I guess we can still publish documentation there, but it should still reflect the work you’re doing.


OK, I’ve made updates. Will think about it more, to see how I can expand it.

I think the https://dream.public.cat/pub/dream-hosting-spec should point to a different thread.

It’s the URL written in the contract as a means of verification that the milestone is complete.

Are you suggesting we open another topic dedicated to SHRUTHI documentation and point this URL to that topic?

Alternatively, the first post can evolve into a presentation of how SHRUTHI works, how to use the prototype release, what’s the approach to provisioning unikernels. Maybe we could redirect the original URL to something more appropriate for this topic, e.g., /pub/unikernel-provisioning.

But it seems to me that ‘dream hosting spec’ could as well be a README to help third parties set up a similar infrastructure.

What do you think?


Sounds good to me :slight_smile:

OK, so I will note this.


Taken from the pad for D1.2, I think it can be used in D1.3:

  • Limitations | Beyond the limitations considered in the scope, maybe mention the technical trade-offs between using unikernels and using UNIX sockets or processes in terms of resource usage and performance.
    • MirageOS unikernels can be run either as VMs or as sandboxed UNIX processes. Running them as VMs provides additional isolation, but requires an additional orchestration system (SHRUTHI) for resource allocation and management of the VMs, and is thus reserved for server deployments; for desktop systems we use processes with seccomp syscall filters (solo5 only needs a handful of syscalls). A rough sketch of the process variant follows this list.
    • OCaml (and thus unikernels based on OCaml) is limited to a single core (multicore OCaml has been in the works for ages); this can be mitigated in server environments by horizontal scaling, launching a unikernel per user or group of users.
    • Message transport and serialization overhead between unikernels can be mitigated by packaging components that need to talk to each other a lot into a single unikernel.
    • Outlook: Caramel (OCaml on the BEAM Erlang VM), which has lighter-weight processes with supervision, may be an interesting alternative to Mirage unikernels once it matures.
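Purely as an illustration of the sandboxed-process variant mentioned above (spt is the standard solo5 target name, but the exact commands depend on your Mirage setup and are not part of this release):

# configure the same unikernel for the solo5 spt (sandboxed process tender) target
mirage configure -t spt
make depend
make
# the resulting .spt image is then run by the solo5-spt tender under seccomp,
# without needing libvirt or SHRUTHI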

This page is an evolving document. The intention is for it to eventually describe how to set up and use SHRUTHI.

Commitment

Implement & document the first stage of a unikernel provisioning system architecture. The design phase jump-starts a minimal implementation of the following features:

  • Provisioning of Unikernels
    • Must be able to create and start Unikernels
    • Must allow for passing configuration [meta]data to unikernels – e.g. key material, network info

Thus providing a starting point for the software and documentation that will evolve during the project.

Outcome

Due date: 2021-05-28


@tg-x could you help with answering these questions?

After pasting it out of my editor where it’s been sitting over the weekend, I feel like it’s obvious the “Why SHRUTHI?” section should come before “Provisioning DREAM Software”… What do you think @how?

I moved “Why SHRUTHI”, and added some more to that section.

I would like some more info about how our unikernels should work together, so that I can properly fill out the “Provisioning DREAM Software” section to be more specific to our components. @tg-x could we have a chat?
I also plan to add a short section on what to do after the unikernel has been provisioned, tomorrow.

@pukkamustard @misterfish @natacha @Nemael @arie with that said, I would appreciate any feedback on the current state of the top post. :slight_smile:


Yes, I’m going to do a pass on the economic model / incentives and try to provide some less technical context, expanding on the Docker, etc. stuff.

I’m also preparing an email to Mirko et al. for them to review the D1.* and validate the M2.

they should be able to talk to each other over IP, and the message router should also be able to talk to remote nodes on the internet.
we can talk about it today

I don’t have many comments, @dvn, just that, beyond fixing a part on the unikernel split within DREAM, we should probably add something about the economic model, and why not a screencast of setting up unikernels as a demo – since I don’t think it would make any sense to ‘demo’ a unikernel deployment as it would only show the result, not the process.

Maybe other team members have useful comments!


We’re thinking alike - I’m working on this


Good sell of Unikernels. I usually explain Unikernels by differentiating from Monolithic Kernels (like Linux) and Microkernels. I like the direct explanation.

Nice four bullet points.

What might be nice is to pick this up again (or maybe in another section, “How does SHRUTHI help?”).

What are the things that would have to be done manually if one does not use SHRUTHI? What exactly are the things that SHRUTHI does that make hosting DREAM stuff (or other stuff) easier?

It would also be nice to highlight the extent of SHRUTHI and what SHRUTHI expects as given. E.g. SHRUTHI is not a build system: you will need a way to build the unikernels before SHRUTHI can provision them.

I guess a complete example also solves my point above.

I can compile both server and client (Go makes this easy) and start the server using the configuration given in the top post. :tada: Is there further documentation on Rhyzome? It was unclear to me what the ImageHost and CloudInit configurations do. Both options are present in the Readme, but not in the top post (good choice of configuration in the top post, btw).

To deploy some unikernels, you must first ~~invent the universe~~ build some unikernels. I guess we need to do more work to prepare DREAM unikernels and document them (Crafting Unikernels). Should the top post use the mirage-skeleton example in order to have an end-to-end running example?

Are there plans to encode this information in a file (e.g. JSON)? That would allow the parameters used to start a unikernel to be checked into a Git repo.
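Purely as an illustration of what I have in mind (none of these keys exist in Rhyzome today; they simply mirror the CLI flags from the top post):

{
	"Name": "msgrouter",
	"Kernel": "/path/to/kernel/on/server/msgrouter.virtio",
	"KernelParameters": "--ipv4-gateway 192.168.100.1 --ipv4 192.168.100.43/24"
}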

Related to the question above (what does SHRUTHI do): does Rhyzome restart a unikernel if it crashes?

I see that there is a rhyzomectl instance list command. What are other planned commands? Will there be a stop command?

Btw, while running the command ./rhyzomectl --server http://127.0.0.1:8080 instance list I get the following panic:

2021/05/18 17:28:24 Error fetching instance list from  http://127.0.0.1:8080
panic: invalid character 'v' looking for beginning of value

I share this feeling. Great to have this in the considerations!

All in all, looks fantastic. Great work!

I’ll try and get the mirage-skeleton running to have something really running on Rhyzome. Is there something else I could use to easily test it? Maybe something fun from the qemu advent calendar or so?


I’ve added the top-post to a new git repo, as a markdown file. Easier for me to work on it this way, and keep track of the edits.

Really nice work, @dvn. And great comments by @pukkamustard.

Which branch should we be on when trying Rhyzome? I’m on boot-parameters because I clicked on the README and that’s the default.

I made a rhyzome.conf (and got a message saying it’s being successfully read) but I’m not sure the values are really being used? If I change the bind port it still appears to use 8080.

Doesn’t crash on mine. Maybe it’s because of the branch. Output:

Good point; hopefully we can develop a feeling for this as the months go by and/or work out some back-of-the-envelope intuitions. Could one run a core node on a Raspberry Pi, for example? If so, how many topics / subscriptions / connected edge nodes?

Aside from the electricity, some other things we could think about: how many unikernels (let’s say, each using a small amount of memory) could a modern server concurrently execute using this system, as an order-of-magnitude estimate? And does each one in this system get a static amount of memory and then crash if it tries to use more, or can it request more (and release it)?


Since people seem to appreciate the reference to “Jevons Paradox”, I wanted to point out that the addition of that section was all @how (see commit 36c8615c, “Integrate additions and changes from @hellekin”, in DREAM / SHRUTHI / Documentation and Specs on GitLab).

That’s the correct branch for now. I set it as the default branch on the gitlab, so it should go there when you navigate to the project page.

That is odd, and sounds like a bug. I will investigate, thanks.

@pukkamustard you have excellent points, and I will integrate my answers into the next updates to the top post. I think a lot of it will be answered via a more detailed and elaborate installation example.