DREAM OFFDEM 2022 Demo Session

Here’s a pad:


Distributed Replicated Edge Agency Machine

It started at the first OFFDEM. Devon and …??

Hey look, we have this project, maybe we can do something with P2P and Linked Data. These communities don’t speak to each other; that’s the opportunity. DREAM is an exploration of what we can do between P2P and Linked Data.

Transmitting data and working offline as groups.

We started with 3 vectors of research :

  • Dromadar, renamed DMC (Distributed Mutable Containers)
    • Store data in something that is not a blockchain; it is commutative
    • Content at rest: you don’t need internet
    • One main use of CRDTs is ensuring you don’t get editing conflicts in a document. We don’t use them for communication; we use them for merging data sets after the fact.
  • Upcycle: the router
  • SHRUTHI: Self… routing and hosting infrastructure

Sync & search demo

Insert slide here

The ability to retrieve just part of the data is very important; not everyone has the capacity to store terabytes of data.

We focus on human aspects of the project more than technical problems. After a long time of asking people what’s wrong with the project, I still don’t know what’s wrong.

In hacker communities, I’ve seen that a lot. We don’t easily share emotions.

Legally, under the rules that apply to European funding, you can’t fund the same thing twice. We had this problem.

100% of some funding went to the code; 100% of NGI funding to the design.

++DREAM has 3 components++:

  • sharing / addressing of data
  • P2P network: most P2P systems expect you to be online; not ours
  • SHRUTHI: the deployment. It became Rhizome. Its purpose is to automate the deployment of networks. It allows you to host separate networks, which is why it could be interesting for hosters.

The data structure guarantees that there is no conflict.
Observed-Remove Set, one of the CRDT structures. All operations on the set (addition, removal) are possible, unlike with a blockchain.
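To make the Observed-Remove Set idea concrete, here is a minimal illustrative sketch in Python. This is not DMC’s actual implementation; the class name and tagging scheme are assumptions for illustration. The key property is that merge is a set union — commutative, associative, and idempotent — so replicas converge no matter the order in which they exchange state.

```python
import uuid

class ORSet:
    """Illustrative Observed-Remove Set (OR-Set) CRDT sketch.

    Each add is tagged with a unique id; remove only deletes the tags
    it has observed, so a concurrent add always survives a remove.
    Merging is a union of both add and remove sets, so replicas
    converge regardless of the order of merges.
    """
    def __init__(self):
        self.adds = set()      # (element, unique_tag) pairs
        self.removes = set()   # observed (element, tag) pairs removed

    def add(self, element):
        self.adds.add((element, uuid.uuid4().hex))

    def remove(self, element):
        # Remove only the tags observed locally at this moment.
        self.removes |= {p for p in self.adds if p[0] == element}

    def value(self):
        return {e for (e, tag) in self.adds
                if (e, tag) not in self.removes}

    def merge(self, other):
        self.adds |= other.adds
        self.removes |= other.removes

# Two replicas diverge offline, then merge "after the fact".
a, b = ORSet(), ORSet()
a.add("doc1")
b.add("doc2")
a.merge(b)
b.merge(a)
assert a.value() == b.value() == {"doc1", "doc2"}
```

This matches the usage described above: the structure is not used for live communication but for merging data sets once peers reconnect.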


That’s the network component of DREAM, a P2P system. It has interesting properties like privacy and security. There are Core Nodes (always on) and Edge Nodes (your phone or laptop). All nodes are publicly addressed. Everything is content-addressed.
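Content addressing, as mentioned above, means an item’s address is derived from its bytes rather than from its location. A minimal sketch, assuming SHA-256 as the hash (the actual hash used by DREAM is not specified here), with a plain dict standing in for the network store:

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is the hash of the bytes themselves: any node holding
    # identical content serves it under the same address, and a fetcher
    # can verify integrity by re-hashing what it received.
    return hashlib.sha256(data).hexdigest()

store = {}  # stand-in for the distributed store

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    assert content_address(data) == addr  # tamper check
    return data

addr = put(b"hello, DREAM")
assert get(addr) == b"hello, DREAM"
```

Because the address is self-verifying, it does not matter whether a Core Node or an Edge Node serves the content.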

HyperCore can be compared to it; one of the developers of Upcycle is personally working on it.

By mid 2022, we will have the v1.0 of these specifications.

It seems to me that it revolves more around HTTP.

HyperCore is another protocol. It is peer-to-peer and based on hashes, so that’s your address space. Merging basically happens when the peers are online: peer-to-peer without intermediaries.

HyperCore is peer-to-peer, but with a discovery node which connects peers: basically, when you try to sync something you need to connect with a peer who has it. So if you’re offline for a week it doesn’t really matter, as long as when you come online there’s another peer online. If there are multiple versions, it will sync to the latest available version. The addressing is based on hashes.

Another point of comparison would be how the data is transmitted or encoded. With aeres it’s either 1 KiB or 32 KiB blocks, which align with a number of devices and also some filesystems, for optimization, especially on smaller hardware. All this is encoded using CBOR. And there was a plan, though we didn’t reach that point yet (so if you’re interested in the topic, we are too), to use HDT (Header, Dictionary, Triples). It’s a way to encode RDF as binary data that can be searched without having to decompress or decode things.

With DREAM, we touched a lot of very interesting computer science research topics. Unfortunately we didn’t have much capacity to bring the community aboard and make this a bit more fun. After this we’re going to an event, meeting some OCaml developers in VR to talk about this. If there’s interest we’re probably going to push for a bit more research, but this time with much more community activation, because I’m a bit bummed out by this project.
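The fixed block sizes above can be sketched as a simple chunking scheme: content is split into 32 KiB blocks, each content-addressed by its hash, so a peer can fetch and verify just the blocks it needs rather than the whole data set. This is an illustrative sketch only, not the actual aeres wire format (which additionally encodes blocks as CBOR).

```python
import hashlib

BLOCK_SIZE = 32 * 1024  # 32 KiB, one of the two block sizes mentioned

def chunk(data: bytes):
    """Split content into fixed-size, content-addressed blocks."""
    blocks = {}   # address -> block bytes
    order = []    # addresses in original order, for reassembly
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        addr = hashlib.sha256(block).hexdigest()
        blocks[addr] = block
        order.append(addr)
    return order, blocks

data = b"x" * (70 * 1024)   # 70 KiB of content
order, blocks = chunk(data)
assert len(order) == 3      # 32 KiB + 32 KiB + 6 KiB
assert b"".join(blocks[a] for a in order) == data
```

Fixed-size blocks are what make partial retrieval practical: a node that cannot store terabytes can still hold and serve the individual blocks it cares about.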

A recurring conversation in our meetings: it was clear what the project was about technically, and the necessity was always clear for everyone, but “what was it for” was never possible to clarify. We knew it was necessary, but not what for. We finally came up with the sci-hub proposition. (/s GitHub is not a proposition, it’s a trap.) This is one of the oldest issues of software: dealing with publications, availability of content, and access to knowledge. That’s really a long, long history. We came back to our own community issues with a proposition, because sci-hub is a big risk; you cannot just put it on a stick and that’s it. I wonder if… it brings me back to the whole question of who these things are for. It does not only apply to the DREAM issue. We know it’s necessary, but who’s going to use it, how can they use it, and how can we make it usable? It’s not about making the interface easy; it’s more practical: how can we use it?

One thing that changed in recent years is that it became very easy to get money for technical stuff. But it’s not the case for everything else that counts: documentation, community building, and so on. These aspects are overlooked by all of the funding institutions. This is also part of why we wanted to have this format for OFFDEM, of having a single track, where everyone can share what they have on their minds and we can work together. Because there are so many aspects that we never talk about, and that need to be on the table. So, I could talk endlessly about DREAM and how it became a nightmare (you can lose sleep over DREAM, and stuff like this), but I’d rather keep all of this in mind and have Anis introduce the part we couldn’t do ??, that; so, DREAM is an NGI Pointer project (NGI: Next Generation Internet; Pointer: uh… “internet architects of tomorrow”? etc. etc.). You need to plug the [I need 5 minutes]. So, Anis’s work also got NGI Pointer funding, which was fun because then we were happy with the prospect of working together, but it never happened. But here we are. I don’t know if you’re familiar with MirageOS, but it reminds me that Florence was not familiar with sci-hub. For the record, sci-hub is an open science project by Alexandra Elbakyan,

Russian or Ukrainian… Ukrainian scientist.

I thought she was Kazakh. Yes. (Get your USSR out of here!)

She started sharing articles that were behind paywalls; it grew very fast and it became like a “terrorist organisation”. SciHub is DNS-blocked in France and Belgium as a result. We come from a sharing culture: sharing is caring.