This fieldnote documents my notes on a characterization of systems I’ve been calling “optimal privacy”, which intersects and overlaps with many other concepts in the literature, e.g. “privacy and computational complexity” or “approximate privacy”. Start with an observation: privacy-preserving protocols are less efficient than non-privacy-preserving protocols. That is to say, it seems that one must always communicate more to say less.
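The "communicate more to say less" observation can be made concrete with the trivial private information retrieval (PIR) scheme (a standard illustration, not drawn from any particular protocol in these notes): to hide which record a client wants, the server sends every record.

```python
# Toy sketch: non-private retrieval reveals the query but costs 1 record
# of communication; the trivially private version hides the query entirely
# but costs the whole database. Saying less requires communicating more.

def fetch_public(db, i):
    # Non-private fetch: the server learns i; one record transferred.
    return db[i]

def fetch_private_trivial(db):
    # Trivially private fetch: the server learns nothing about which
    # record was wanted, but must transfer all len(db) records.
    return list(db)

db = ["rec-%d" % k for k in range(8)]
assert fetch_public(db, 3) == "rec-3"
assert fetch_private_trivial(db)[3] == "rec-3"
assert len(fetch_private_trivial(db)) == len(db)  # full download
```

Real PIR schemes beat this linear bound with cleverer encodings, but the intuition stands: privacy is purchased with communication.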
If the safety/operations/security of a system do not rely on trust, does centralization within the system matter? This is a question I’ve been considering lately as I start to think through the consequences of Cwtch server deployments. There are ways of using Cwtch that promote heavily centralized topologies e.g. only using a small number of Cwtch servers. The servers themselves though are untrusted - we assume they will try to do bad things and design the protocol around subverting or, at a minimum, detecting bad behavior.
Decentralization is important because building systems that distribute power is important. Building systems that resist abuses of power is important. In its simplest definition, decentralization is the degree to which an entity within the system can resist coercion and still function as part of the system. Be careful: coercion doesn’t mean force; it means negative incentives to align with an authority. Note how nothing in the above definition explicitly references the distribution or the ownership of entities.
A trivial intuition that one can derive from the construction of PBFT algorithms is that if the number of peers (n) in a system is known or can be fixed, you can solve a lot of problems by defining models that require input from a threshold of n before taking an action. We assume that such a system is immune to Sybil attacks because n is fixed and known - new identities cannot be created, or the validation of new entities is outside the scope of consideration.
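The threshold idea above can be sketched in a few lines. Assuming the classical Byzantine fault-tolerance bound n ≥ 3f + 1, an action is only taken once 2f + 1 distinct peers have endorsed it (the function names here are illustrative, not from PBFT itself):

```python
# With fixed, known membership n, a Sybil attacker cannot inflate the
# peer count, so a simple vote-counting threshold suffices.

def quorum_size(n):
    f = (n - 1) // 3          # max Byzantine peers tolerated under n >= 3f+1
    return 2 * f + 1          # distinct endorsements needed before acting

def can_commit(votes, n):
    # votes: collection of distinct peer ids endorsing the action
    return len(set(votes)) >= quorum_size(n)

n = 7                          # fixed membership
assert quorum_size(n) == 5     # f = 2, so 2f + 1 = 5
assert can_commit({1, 2, 3, 4, 5}, n)
assert not can_commit({1, 2, 3, 4}, n)
```

The whole construction collapses the moment n stops being fixed: if identities are free, the threshold can be met by one attacker wearing 2f + 1 masks.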
How do group participants in a decentralized, metadata-resistant, asynchronous environment come to a consensus on transcript consistency? This is a fundamental problem when designing such systems, and one that we need to solve in order to advance the discipline and build usable tools that people can rely on. Cwtch presents a partial solution to the problem through the introduction of a concept called “Untrusted Infrastructure”. All participants in a group transcript relay their messages through a Cwtch server; the metadata-resistant properties of the system mean that the server gains no information with which to manipulate the transcript in a targeted fashion, and as such peers can have some assurance that the transcript they ultimately see reflects reality.
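One common building block for transcript consistency is to have each message commit to a hash of the transcript so far, so two peers can compare a single digest to detect a diverging view. This is a sketch of the idea only, not Cwtch's actual wire format:

```python
import hashlib

# Each appended message folds the previous transcript digest into a new
# one, so the final digest commits to the entire ordered history.

def append(transcript_hash, message):
    h = hashlib.sha256()
    h.update(transcript_hash)
    h.update(message.encode())
    return h.digest()

GENESIS = b"\x00" * 32
a = append(append(GENESIS, "hello"), "world")
b = append(append(GENESIS, "hello"), "world")
c = append(append(GENESIS, "hello"), "w0rld")   # a tampered transcript

assert a == b   # peers with consistent transcripts agree on one digest
assert a != c   # any divergence shows up when digests are compared
```

The hard part, as the note says, is the asynchronous setting: peers must still reconcile digests when messages arrive in different orders or not at all, which is why the server-side guarantees matter.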
I have a long history of infatuation with the mechanisms at work in colonies of ants and other eusocial insects. The book that got me started down that path was Turtles, Termites and Traffic Jams by Mitchel Resnick, which I read when I was 15 and which I think deserves its own fieldnote. The idea that a system can be devoid of any leader but still fully functional will forever be an inspiration for my work.
There is a principle of Defensive Decentralization: when besieged, a well-constructed decentralized system will further decentralize. The corollary: a well-constructed decentralized system will identify and attack emergent centralization. A problem I’ve been considering recently is the tendency for decentralized systems to develop emergent centralization in response to non-adversarial conditions, e.g. to improve scalability. Some opportunistic centralization is great, even necessary for an effective system - but far too many protocols leave such behavior unchecked.
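"Identify & attack emergent centralization" implies the system is actually measuring it. A hypothetical monitor might track what share of relayed traffic flows through the busiest node and trigger rebalancing past a threshold (both the metric and the threshold here are illustrative choices, not from any deployed protocol):

```python
from collections import Counter

def centralization_score(relay_log):
    # relay_log: list of relay node ids, one entry per routed message.
    # Score is the fraction of traffic handled by the single busiest relay.
    counts = Counter(relay_log)
    return max(counts.values()) / len(relay_log)

def should_rebalance(relay_log, threshold=0.5):
    # Flag emergent centralization so the system can route around it.
    return centralization_score(relay_log) > threshold

balanced = ["a", "b", "c", "d"] * 10
skewed = ["a"] * 30 + ["b", "c", "d"] * 3

assert not should_rebalance(balanced)   # 25% through any one relay
assert should_rebalance(skewed)         # ~77% through relay "a"
```

The interesting design question is what "attack" means once the flag fires: deprioritizing the dominant relay, spinning up alternatives, or surfacing the skew to users.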
Decentralization makes certain kinds of attacks on privacy more difficult. The truth is that it’s a spectrum of attacks and adversaries. On the centralized cash side, you are vulnerable to single-point interception and targeted censorship. On the decentralized side, you are vulnerable to small anonymity sets and, in many cases still, large-scale correlation attacks (and in many, many cases, a complete lack of privacy at all). If your adversary is a state, you might be in trouble either way (depending on your threat model), but many of your centralized options are vulnerable to jurisdictional pressure.
In a modern P2P protocol white paper, under a section titled Network Privacy, there is a passage that reads: “There is an inherent tradeoff in peer to peer systems of source discovery vs. user privacy.” I disagree with the statement and with the impact the resulting design decisions have on privacy. The system defines source discovery as finding the IP:Port pairing of a peer that has access to the data you want.
The threat model and economics of federated systems devolve into concentrating trust in the hands of a few, while missing out on the scale advantages of purely centralized solutions. Federation subjects users’ data to the whims of the owner of the federated instance. Administrators can see correspondence and trivially derive social graphs. They are also in a position to selectively censor inter-instance communication. All of this while gaining none of the benefits of scale and discovery that centralized systems provide.