This workshop is a work in progress. Contributions are greatly appreciated!
Have an idea for a holochain app?
Follow along with this codelab to produce a first draft of a design for your hApp.
You'll learn to:
It is highly recommended to do this workshop after:
This way you will already know the basic primitives and building blocks available to us.
A crucial aspect of Holochain's DHT architecture is that every hApp creates its own network, or DHT. These networks can be public, private, or permissioned, or can impose certain conditions an agent must meet in order to enter.
This makes architecting your network topology a big first step when designing a hApp.
Any Holochain application can be described as a set of high-level DHT components. For the purposes of this workshop, you can think of a DHT as a shared space in which every agent can see every piece of data published to it.
These rules don't cover all the cases, but can guide you towards the appropriate solution.
Holochain applications are composed of different modules or zomes. Each zome is an aggregation of:
Zomes don't actually specify which entries they are going to store; rather, they define which types of entries are supported. In a traditional database system, this would be akin to a schema specification.
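To make this concrete, here is a hypothetical sketch in plain Rust (not real HDK code; the `EntryType` variants and `validate` function are illustrative names) of a zome declaring the entry types it supports, together with per-type validation, much like a schema declares tables rather than rows:

```rust
// Hypothetical sketch: a zome declares which entry *types* it supports,
// like a database schema declares tables, not rows.
// `Post`, `Comment` and `validate` are illustrative, not real HDK items.

#[derive(Debug, PartialEq)]
enum EntryType {
    Post,
    Comment,
}

struct Entry {
    entry_type: EntryType,
    content: String,
}

// The zome ships one validation rule per declared entry type;
// entries of undeclared types simply cannot exist in this zome.
fn validate(entry: &Entry) -> Result<(), String> {
    match entry.entry_type {
        EntryType::Post => {
            if entry.content.is_empty() {
                Err("posts cannot be empty".into())
            } else {
                Ok(())
            }
        }
        EntryType::Comment => {
            if entry.content.len() > 280 {
                Err("comments are limited to 280 characters".into())
            } else {
                Ok(())
            }
        }
    }
}

fn main() {
    let post = Entry { entry_type: EntryType::Post, content: "hello".into() };
    assert!(validate(&post).is_ok());
}
```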
Different technical aspects apply to the relationship between multiple zomes:
`valid_members` property, while a "mutual-credit" zome may require a
`credit_limit` property. All properties must be provided for all zomes to work.
However, by far the most important guideline when breaking functionality down into different zomes is the Single Responsibility Principle, which states that a single module should be responsible for only one autonomous part of the functionality. Here, knowledge and experience from other kinds of systems are very useful and much needed.
In Holochain, the only way an agent can modify state is by creating entries. It does so by appending the new entry and its header to its own source chain, advancing the top of that chain to the latest header. BUT: that entry might be of different types:
After that, Holochain calculates which DHT operation transforms each of those entries should produce, and where in the DHT those transforms should happen.
For example, creating a normal entry produces at least 3 DHT operations: one publishes the entry at the entry's hash location, one publishes the entry's header at the header's hash location, and one publishes an internal system link at the agent's address pointing to the new header's hash.
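The fan-out of one entry creation into operations routed to different DHT addresses can be sketched as follows. This is a conceptual toy, not Holochain internals: the operation names are loosely inspired by Holochain's, and the "addresses" are plain `u64` hashes from Rust's standard hasher rather than real DHT hashes.

```rust
// Hypothetical sketch of the three DHT operations produced by a normal
// entry creation. Each operation is routed to a different DHT "basis"
// address: the neighborhood responsible for holding it.
// Hashes are stand-ins (u64 via DefaultHasher), not real Holochain hashes.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn address_of<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

#[derive(Debug, PartialEq)]
enum DhtOp {
    StoreEntry,            // held at the entry's hash
    StoreHeader,           // held at the header's hash
    RegisterAgentActivity, // system link held at the agent's address
}

/// Returns (basis address, operation) pairs for one entry creation.
fn ops_for_create(agent: &str, header: &str, entry: &str) -> Vec<(u64, DhtOp)> {
    vec![
        (address_of(&entry), DhtOp::StoreEntry),
        (address_of(&header), DhtOp::StoreHeader),
        (address_of(&agent), DhtOp::RegisterAgentActivity),
    ]
}

fn main() {
    let ops = ops_for_create("alice", "header-1", "hello world");
    assert_eq!(ops.len(), 3);
    // Each operation lands in a different neighborhood of the DHT.
    assert_ne!(ops[0].0, ops[1].0);
}
```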
This gives us two very different sets of data storage and retrieval systems:
Local state, relative to each agent. Here the local order of events is automatically guaranteed by the chain of headers, and can be scanned in validation rules, when countersigning, etc.
Global shared state, which arrives at its final state through eventual consistency. Relying only on the built-in mechanisms, we cannot assume that we are seeing all the actions from all the agents in the network (and there are extreme edge cases as well, like network partitions).
BUT, we can be sure that what we already see did happen and is valid. All entries accessible from the DHT have already been validated by the neighborhood they live in, so we can assume they are valid. We can also be sure that the content of an entry won't change, although its metadata CAN change (whether it has been updated or deleted, or which links are attached to it).
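The local-ordering guarantee mentioned above (each agent's source chain is a hash-linked sequence of headers) can be sketched as follows. Again a hypothetical toy with stand-in `u64` hashes, not Holochain's actual data structures:

```rust
// Hypothetical sketch of an agent's source chain: each header commits to
// the hash of the previous header, so the local order of an agent's own
// actions is tamper-evident and can be walked during validation.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash)]
struct Header {
    prev_header: Option<u64>, // hash of the previous header; None at genesis
    seq: u64,                 // position in this agent's chain
    entry_hash: u64,
}

fn hash_header(h: &Header) -> u64 {
    let mut s = DefaultHasher::new();
    h.hash(&mut s);
    s.finish()
}

struct SourceChain {
    headers: Vec<Header>,
}

impl SourceChain {
    fn new() -> Self {
        SourceChain { headers: vec![] }
    }

    // Appending advances the "top" of the chain to the new header.
    fn append(&mut self, entry_hash: u64) {
        let prev = self.headers.last().map(hash_header);
        let seq = self.headers.len() as u64;
        self.headers.push(Header { prev_header: prev, seq, entry_hash });
    }

    // Local order is guaranteed: every header links to its predecessor.
    fn is_well_ordered(&self) -> bool {
        self.headers.windows(2).all(|w| {
            w[1].prev_header == Some(hash_header(&w[0])) && w[1].seq == w[0].seq + 1
        })
    }
}

fn main() {
    let mut chain = SourceChain::new();
    chain.append(1);
    chain.append(2);
    assert!(chain.is_well_ordered());
}
```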
Lastly, Holochain's DHT works like a big CRDT: the global order of events cannot influence the final state of the data. This means, for example, that if Alice creates entry A, Bob then removes entry A, and Alice creates entry A again, entry A will stay removed forever. Holochain's internal mechanisms have no way of discerning the global order in which those events happened (does global time even exist?), so it resolves the dispute by giving the remove operation priority over all others.
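This "remove wins" convergence can be sketched in a few lines of plain Rust. Note this is a simplified model of the behavior described above, not Holochain's actual merge logic: the point is only that the same set of operations yields the same final state in any order, with deletions taking priority.

```rust
// Hypothetical sketch of "remove wins" convergence: because no global
// order of events exists, merging the same set of operations in any
// order must reach the same state, and a delete takes priority over
// any create of the same entry.

use std::collections::HashSet;

enum Op<'a> {
    Create(&'a str),
    Delete(&'a str),
}

/// Order-independent merge: an entry is live only if it was created
/// and never deleted by anyone.
fn merge<'a>(ops: &[Op<'a>]) -> HashSet<&'a str> {
    let deleted: HashSet<&str> = ops
        .iter()
        .filter_map(|op| match op {
            Op::Delete(e) => Some(*e),
            _ => None,
        })
        .collect();
    ops.iter()
        .filter_map(|op| match op {
            Op::Create(e) if !deleted.contains(e) => Some(*e),
            _ => None,
        })
        .collect()
}

fn main() {
    // Alice creates A, Bob deletes A, Alice creates A again:
    let history1 = [Op::Create("A"), Op::Delete("A"), Op::Create("A")];
    // The same operations seen in a different order:
    let history2 = [Op::Create("A"), Op::Create("A"), Op::Delete("A")];
    // Both converge, and A stays removed forever.
    assert!(merge(&history1).is_empty());
    assert_eq!(merge(&history1), merge(&history2));
}
```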
All these mechanisms and primitives influence our validation rule design, and we have to keep them in mind at all times in order not to write bad validation rules. Validation rules must be deterministic: for a given entry or link, every validation that any agent might run at ANY POINT IN TIME MUST yield the same result.
This gives us some constraints when designing validation rules:
Example: when validating whether an agent has some role assigned, we must check whether that agent had the role assigned when they committed the entry, not at the current point in time.
Example: suppose I write a validation rule that checks that the number of links from a given entry is less than 2. This will be true at some point in time and from some perspectives, but as links are added to the entry, the validation rule will change its result.
Example: suppose we are validating a countersigned transaction, in which Alice and Bob each sign and commit the transaction separately. No matter "whose commit" we are validating, the validation should give the same result from Alice's point of view as from Bob's.
Example: if I'm validating a transaction in a mutual credit application, I can make that transaction valid only if both parties have the appropriate headers published to the DHT. If the validation rules fail to find the headers during some execution, they will pause and retry later.
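The first example above (checking a role as of commit time rather than "now") can be sketched as follows. This is a hypothetical model, not HDK code: role grants and entries are tagged with chain sequence numbers standing in for "when they were committed".

```rust
// Hypothetical sketch of the determinism constraint: validation must
// depend only on state that existed when the entry was committed (here,
// role assignments granted *before* the entry's chain position), never
// on the DHT's current, still-growing state.

struct RoleAssignment {
    agent: String,
    role: String,
    assigned_at_seq: u64, // chain position where the role was granted
}

struct Entry {
    author: String,
    committed_at_seq: u64,
}

/// Deterministic: looks only at assignments made before the commit.
/// Every agent validating this entry at any point in time sees the
/// same slice of history, so they all reach the same verdict.
fn validate_role(entry: &Entry, roles: &[RoleAssignment], required: &str) -> bool {
    roles.iter().any(|r| {
        r.agent == entry.author
            && r.role == required
            && r.assigned_at_seq < entry.committed_at_seq
    })
}

fn main() {
    let roles = vec![RoleAssignment {
        agent: "alice".into(),
        role: "editor".into(),
        assigned_at_seq: 5,
    }];
    // Committed after the role was granted: valid.
    let later = Entry { author: "alice".into(), committed_at_seq: 7 };
    assert!(validate_role(&later, &roles, "editor"));
    // Committed before the role was granted: invalid, forever.
    let earlier = Entry { author: "alice".into(), committed_at_seq: 3 };
    assert!(!validate_role(&earlier, &roles, "editor"));
}
```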
By complex information flow we mean any piece of information or protocol that goes beyond a simple
`get_links` call and involves multiple parties or components.