Threshold Consistency Protocols

A trivial intuition that one can derive from the construction of PBFT algorithms is that if the number of peers (n) in a system is known, or can be fixed, you can solve a lot of problems by defining models that require input from a threshold of n peers before taking an action.

We assume that such a system is immune to Sybil attacks because n is fixed and known - either new identities cannot be created, or the validation of new identities is outside the scope of consideration.
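To make the intuition concrete, here is a minimal sketch of such a gate (the peer names, the threshold, and the interface are illustrative assumptions rather than a concrete protocol):

```python
# A minimal sketch of a threshold gate over a fixed, known peer set.
PEERS = {"alice", "bob", "carol", "dave", "eve"}  # n = 5, fixed and known
THRESHOLD = 3                                     # act once 3 of 5 peers concur

class ThresholdGate:
    def __init__(self, peers, threshold):
        self.peers = peers
        self.threshold = threshold
        self.votes = set()  # distinct peers only; duplicate input is ignored

    def submit(self, peer_id):
        # Input from identities outside the fixed set is rejected outright,
        # which is what makes the fixed-n assumption Sybil-resistant.
        if peer_id not in self.peers:
            return False
        self.votes.add(peer_id)
        return len(self.votes) >= self.threshold

gate = ThresholdGate(PEERS, THRESHOLD)
gate.submit("mallory")                  # unknown identity, no effect
for peer in ("alice", "bob", "carol"):
    triggered = gate.submit(peer)
print(triggered)                        # True: 3 of 5 known peers concur
```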

If you combine such a threshold mechanism with cryptographically verifiable pseudonyms (and perhaps historical peer trust ratings), you can construct protocols that make determinations about events within a group without necessarily reaching for consensus.
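A sketch of what that combination might look like, assuming Ed25519 public keys as the pseudonyms (via the `cryptography` package; the class and message format here are hypothetical):

```python
# A sketch of the same gate where peers are known only by Ed25519 public
# keys, and only correctly signed input counts toward the threshold.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

class SignedThresholdGate:
    def __init__(self, peer_pubkeys, threshold):
        self.peer_pubkeys = peer_pubkeys  # raw 32-byte Ed25519 public keys
        self.threshold = threshold
        self.signers = set()

    def submit(self, pubkey, message, signature):
        if pubkey not in self.peer_pubkeys:
            return False                  # unknown pseudonym
        try:
            Ed25519PublicKey.from_public_bytes(pubkey).verify(signature, message)
        except InvalidSignature:
            return False                  # forged or corrupted input
        self.signers.add(pubkey)
        return len(self.signers) >= self.threshold

# Usage: three of five pseudonymous peers sign the same event.
keys = [Ed25519PrivateKey.generate() for _ in range(5)]
raw = lambda k: k.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
gate = SignedThresholdGate({raw(k) for k in keys}, threshold=3)
event = b"event:alarm"
for k in keys[:3]:
    fired = gate.submit(raw(k), event, k.sign(event))
print(fired)  # True
```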

(By consensus I refer to traditional PBFT algorithms in which more than two-thirds of peers need to agree that more than two-thirds of peers agree on a particular event.)

I think that there probably exists a set of problems that can be solved in a metadata resistant system without an independent observer and without consensus, i.e. problems that require knowing some state, but where not every peer needs to know the same state (or agree on which state).

(see metadata resistant consensus for a definition of independent observer and the inspiration for this note)

To construct a trivial example, borrowing from CRDTs, we can imagine an alarm flag represented by a single boolean value. The flag is false if the alarm has never been tripped, and true if it has been tripped. Once tripped, the flag cannot be untripped. Any single peer within a group can sign a message and trip the alarm. Consensus is not needed. Which peer trips the alarm is also unimportant.
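A minimal sketch of that flag, reusing the hypothetical SignedThresholdGate from above with a threshold of 1; the merge function is what makes it CRDT-like:

```python
# The alarm flag: a one-way boolean tripped by any single known peer.
class AlarmFlag:
    def __init__(self, peer_pubkeys):
        self.gate = SignedThresholdGate(peer_pubkeys, threshold=1)
        self.tripped = False

    def trip(self, pubkey, message, signature):
        # Any one valid signature from a known peer is sufficient;
        # which peer it was is unimportant.
        if self.gate.submit(pubkey, message, signature):
            self.tripped = True
        return self.tripped

    def merge(self, other):
        # Logical OR is the join: state only ever moves false -> true, so
        # replicas converge regardless of message order or duplication.
        self.tripped = self.tripped or other.tripped
```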

The above would be an example of a protocol requiring a threshold of 1/n. At 2/3 of peers you reach problems that require PBFT consensus. I think there are some useful solutions sitting in between.

Those problems have certain attributes:

  1. They cannot require strict ordering of events - we are interested in systems without independent observers.
  2. They cannot require consensus between nodes - we are interested in thresholds below 2/3n; see the literature for impossibility results on achieving consensus with fewer than that threshold of honest nodes.
  3. They should not rely on peer correctness - while we assume Sybils are not a problem, we do not assume that any single peer can be trusted without additional information.

Our tripwire example might be seen as failing on point 3: a malicious peer could trip the alarm when no alarm should have been tripped. As such, the nature of the alarm matters. We can get around this for now by stating that our alarm indicates the presence of a malicious actor within the group - a falsely tripped alarm is then itself evidence of exactly that.

By introducing asymmetric trust into our protocols we can start to build more complex operations without relying on group consensus. If a peer trusts another peer, then they can weight that peer's input higher than that of other peers, and thus reduce the threshold needed to reach a decision.

At the extreme end of the trust spectrum, if a peer trusts every other peer, then they will believe everything they are told; if they trust no peers, then they must wait for group consensus.

In between exists a world of chaotic trust whereby individual peers may make distinct determinations about the world. This is obviously not a desirable property for many kinds of actions within a decentralized group, but might be acceptable for some kinds of protocols.
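A sketch of how purely local trust weights might produce exactly those distinct determinations (the weights and thresholds here are invented for illustration):

```python
# Each peer keeps its own private trust weights and makes its own local
# determination; nothing here is globally agreed.
class WeightedDecision:
    def __init__(self, weights, threshold):
        self.weights = weights       # this peer's private trust ratings
        self.threshold = threshold   # total weight needed to accept an event
        self.reports = set()

    def report(self, peer_id):
        if peer_id in self.weights:
            self.reports.add(peer_id)
        return sum(self.weights[p] for p in self.reports) >= self.threshold

# alice trusts bob heavily, so a single report from bob crosses her threshold...
alice = WeightedDecision({"bob": 3.0, "carol": 1.0, "dave": 1.0}, threshold=3.0)
print(alice.report("bob"))   # True: one trusted report suffices
# ...while carol weights everyone equally and must effectively wait for more.
carol = WeightedDecision({"bob": 1.0, "alice": 1.0, "dave": 1.0}, threshold=3.0)
print(carol.report("bob"))   # False: carol and alice now disagree on the world
```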

My instincts tell me that there are ways of driving such protocols towards eventual consistency, but that eventual consistency is not necessarily a goal of such protocols.

As I write this I am drawn to make comparisons to alarm pheromones; indeed, I think that there might be a strong analogy here, and that one example of a threshold consistency protocol with realistic application would be file storage among a group.

Bad actors within the group might attempt to gain the benefit of file access without contributing storage or bandwidth. Peers that do take on that burden might use a threshold consistency protocol / alarm pheromone to alert other peers to that untrustworthiness, or to signify that they need to offload storage burden.
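A very rough sketch of what such a pheromone might look like locally (the ratio, peer names, and alarm format are all hypothetical):

```python
# Peers track how much each neighbour consumes versus contributes, and
# emit a pheromone when a locally chosen ratio is exceeded.
from collections import defaultdict

class StoragePheromone:
    def __init__(self, max_ratio=4.0):
        self.max_ratio = max_ratio           # consumed/contributed before alarming
        self.consumed = defaultdict(int)     # bytes we served to each peer
        self.contributed = defaultdict(int)  # bytes each peer stores for us

    def record(self, peer_id, served=0, stored=0):
        self.consumed[peer_id] += served
        self.contributed[peer_id] += stored

    def pheromones(self):
        # Each pheromone is a local, subjective judgement; receivers would
        # weight it by their trust in the sender, as in the sketch above.
        for peer_id in self.consumed:
            ratio = self.consumed[peer_id] / max(1, self.contributed[peer_id])
            if ratio > self.max_ratio:
                yield ("freeload-alarm", peer_id)

monitor = StoragePheromone()
monitor.record("frank", served=10_000_000, stored=0)
print(list(monitor.pheromones()))  # [('freeload-alarm', 'frank')]
```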

Such a protocol has not been well thought out and is but the seed of an idea that I might be tempted to explore further in the future.