Demarchy is a political system built on randomness. Demarchy has been used to select political leaders in ancient Greece and in France during the French Revolution. In many countries, jury selection is random. On Slashdot, the comment moderation system relies on randomly selected members: as you use the system, you increase your (digital) karma, which increases the probability that you will become a moderator. Demarchy has been depicted by several science-fiction authors, including Arthur C. Clarke, Kim Stanley Robinson, Ken MacLeod and Alastair Reynolds.

Democracy is deterministic: we elect the candidate with the most votes. Demarchy is probabilistic: we need a source of random numbers to operate it. Unfortunately, probabilities are less intuitive than simple statistical measures like the most frequent item. It is also much more difficult to implement demarchy, at least without a computer. Meanwhile, most software uses probabilistic algorithms:

  • Databases use hash tables to recover values in expected constant time;
  • Cryptography is almost entirely about probabilities;
  • Most of Machine Learning is based on probabilistic models.

In fact, probabilistic algorithms work better than deterministic algorithms.
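
To make this concrete, here is a minimal sketch (an illustration of my own, with made-up data): quickselect picks its pivot at random and finds the k-th smallest element in expected linear time, no matter how the input is ordered.

```python
import random

def quickselect(values, k):
    """Return the k-th smallest element (0-based) in expected O(n) time.

    The randomly chosen pivot is what makes the expected running time
    linear on every input: there are no bad inputs, only bad luck."""
    values = list(values)
    while True:
        pivot = random.choice(values)
        smaller = [v for v in values if v < pivot]
        equal_count = sum(1 for v in values if v == pivot)
        if k < len(smaller):
            values = smaller
        elif k < len(smaller) + equal_count:
            return pivot
        else:
            k -= len(smaller) + equal_count
            values = [v for v in values if v > pivot]

print(quickselect([9, 1, 7, 3, 5], 2))  # the median: 5
```

A deterministic variant that always picks the first element degrades to quadratic time on already-sorted input.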

Some common objections to demarchy:

  • Not everyone is equally qualified to run a country or a city. My counter-argument: Demarchy does not imply that everyone has an equal chance of being among the leaders. Just as in jury selection, some people are disqualified. Moreover, just as on Slashdot, there is no reason to believe that everyone would get an equal chance. Active participants in political life might receive higher “digital karma”. Of course, from time to time, incompetent leaders would be selected by random chance. But we also get incompetent leaders in our democratic system.
  • There is too much to learn: the average Joe would be unable to cope. My counter-argument: Demarchy does not imply that a dictator is nominated for life, or that the leaders can make up all the rules. In the justice system, jury duty covers a short period of time and a narrow scope (one case), is closely supervised by an arbitrator (the judge), and typically involves several nominees (a dozen in Canada). And if qualifications are what matters, then why don’t we elect leaders based on their objective results on some tests? The truth is that many great leaders did not do so well on tests. Most notably, Churchill was terrible at mathematics. Moreover, experts such as professional politicians are often overrated: it has been claimed that picking stocks at random is better than hiring an expert.
  • Demarchy nominees would be less moral than democratically elected leaders. My counter-argument: I think the opposite would happen. Democratic leaders often feel entitled to their special status. After all, they worked hard for it. Studies show that white-collar criminals are no less moral than the rest of us; they just think that the rules don’t apply to them because they are special. Politics often attracts highly ambitious (and highly entitled) people. Meanwhile, demarchy leaders would be under no illusion as to their status. Most people would see the nomination as a duty to fulfill. And it is much easier for lobbyists to target professional politicians than short-lived demarchy nominees.
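
As a sketch of how karma-weighted selection could work (the names, weights and function here are all made up for illustration), in Python:

```python
import random

def pick_leaders(karma, count, seed=None):
    """Draw `count` distinct leaders, with odds proportional to karma.

    A disqualified person simply has karma zero. Sampling is without
    replacement, so the same person cannot be picked twice."""
    rng = random.Random(seed)
    names = [n for n in karma if karma[n] > 0]
    weights = [karma[n] for n in names]
    chosen = []
    for _ in range(count):
        name = rng.choices(names, weights=weights, k=1)[0]
        i = names.index(name)
        names.pop(i)
        weights.pop(i)
        chosen.append(name)
    return chosen

# Made-up citizens: active participants have higher "digital karma".
citizens = {"Alice": 5, "Bob": 1, "Carol": 3, "Dave": 0}
print(pick_leaders(citizens, 2, seed=42))
```

With these weights, Alice is five times as likely as Bob to be drawn first, and Dave (karma zero) can never be drawn, which mirrors disqualification in jury selection.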

Further reading: Brian Martin’s writings on demarchy and democracy

34 Comments

  1. The thing is, the idea of having somebody run things, however they are selected, is itself problematic.

    Politics has always focused on how to select and support leaders. Information age politics has to be about how to do away with the need for them altogether.

    Comment by Stephen Downes — 11/1/2011 @ 15:02

  2. “probabilistic algorithms work better than deterministic algorithms”

    Whoa. I disagree. Hash tables are deterministic. Cryptography is about complexity, not probabilities. Machine learning is over-rated. For argument’s sake: for every probabilistic algorithm, there is a deterministic one that is at least as good.

    Comment by beza1e1 — 11/1/2011 @ 16:22

  3. Sounds like the base for a scalable solution.

    I suspect that there is merit to making very local selection of (potential) representatives, and this made Congress work better when this country was first formed. Wrote about this a ways back.

    http://bannister.us/weblog/2006/11/19/making-politics-more-local/
    http://bannister.us/weblog/2007/05/12/are-political-parties-the-problem/

    The problem with local perceived-merit selection is that it does not scale well to large populations. You want to keep the number of voters somewhat small per selected representative. This allows voters to know more about their representative, and limits the value of subverting the selection process for any one representative.

    Add random selection from the pool of local merit-selected candidates, and you have a scalable solution.

    Comment by Preston L. Bannister — 11/1/2011 @ 16:23

  4. Daniel, the notion of ‘approximation resistance’ is interesting here: basically, cases where the best approximation algorithm is one generated at random !

    http://cstheory.stackexchange.com/questions/3656/what-are-the-problems-with-the-best-approximation-ratio-achieved-by-algorithm-ret

    @beza1e1 that’s not at all true: while we don’t know if randomization has strictly more power, there are many instances (including hashing) where randomization gives extra power.

    Comment by Suresh — 11/1/2011 @ 18:03

  5. @Suresh

    I like this… approximation resistance. There is a lot of approximation resistance in politics.

    Of course, with demarchy, you don’t have to rely on uniform distributions… For example, we could designate journal editors based on how many papers they have published in the journal. It would actually make a lot of sense.

    Comment by Daniel Lemire — 11/1/2011 @ 19:01

    As a machine learning researcher, I just need to reject the idea of random decisions performing better than directed ones :) Randomness may help you explore a new search space (decision or candidate space) or reduce time, but search and final acceptance (in the machine learning case) should be performed in a smart way.

    Comment by paberlo — 12/1/2011 @ 2:57

  7. Directed decisions only work better when the algorithm used for decisions is better than random. Very often – especially in the political space – the algorithm is lousy.

    Given perfect knowledge, directed decisions should always be better. If knowledge is less than perfect … then the result is far less certain.

    Five centuries back Machiavelli was trying to understand why democracies sometimes yield a better result (centuries further back) than an expert (a well-educated prince). Often, when a prince (an expert) adopts an inappropriate algorithm, the results can be a disaster. Experts are occasionally prone to adopt algorithms (beliefs) that are far from optimal.

    Random selection has a place when knowledge is imperfect.

    Comment by Preston L. Bannister — 12/1/2011 @ 3:20

  8. I’ve been interested in demarchy ever since I read about it in Arthur C. Clarke’s “Songs of Distant Earth”.

    What bothers me most about this system is that when society is fractured (for example along ethnic, economic, or religious lines), then this system may fail to adequately represent some of the subgroups of society. In cases where there is strong animus between the various groups, this may lead to very harsh results.

    I live in Israel. Taking into account all the different communities here, and how they hate each other (and this is definitely not restricted to Arabs and non-Arabs – each group is fractured within itself to mutually hating sub-communities), the thought of randomly choosing a few people, which might turn out to misrepresent some communities, is very frightening.

    Comment by U — 12/1/2011 @ 6:03

  9. @U

    Jury selection often deals with race and gender. Surely, there are criminal courts in Israel. How does jury selection work?

    Moreover, we too often expect democratic leaders to somehow represent everyone. They are supposedly able to abstract out all of their own biases and interests over an entire career. Of course, this is an illusion. There is no reason to believe that democratic leaders are any more enlightened than others. In fact, because they are necessarily very ambitious people, the opposite might be true.

    Any rule you put in place to protect communities can be translated into a demarchy.

    Comment by Daniel Lemire — 12/1/2011 @ 7:47

  10. @paberlo

    As a machine learning researcher, I just need to reject the idea of random decisions performing better than directed ones

    No, plain no. I am not a researcher of any kind, and that may be why I am aware that some instances of search (the traveling salesman problem or spin glasses, for instance) are definitely out of reach of directed algorithms; only simulated annealing can approach (and only approach) the optimal solution, and anything else gets stuck in local optima of the fitness landscape.

    Comment by jld — 12/1/2011 @ 12:33

  11. @Daniel
    Israel, like many other countries in the world, has no jury system. See this [http://en.wikipedia.org/wiki/Jury#India] for example.
    In fact, the historical reason there are no juries in Israel is exactly the one I gave. Israel (Palestine) was ruled by the British from 1917 to 1948, and much of the legal system is based on the common law; even so, the British decided a jury system would not work in such a fractured society.

    I do not believe for one moment that democratic leaders represent everyone. However, they have to take account of the political power embedded in the representatives of all the groups. If power is, by virtue of some random selection, conferred on some groups and not others, the fear is that they will use this power unjustly. I do not trust my fellow citizens not to abuse this power if they have it, because emotions run so incredibly high. I believe this can happen anywhere, but particularly in places where the dividing issues run deep.

    Comment by U — 12/1/2011 @ 14:23

  12. It seems to me that the fundamental problem here is the false equivalence stated or implied between government and algorithms. Randomized algorithms, for certain problems, can be superior to (known) deterministic ones. But governance is not an algorithm.

    I don’t know how you would disqualify people from governance; some propose to do so based on lack of ownership of property. It seems unlikely that it would be based on an understanding of the world around us. That certainly isn’t the case for democracy, either.

    Comment by Mike Stiber — 12/1/2011 @ 17:26

  13. @U

    Fair enough. I don’t know enough about Israel.

    @Stiber

    Nobody knows how well demarchy would work… except that whenever it is used, it seems to work well.

    Shouldn’t you be concerned by the fact that (according to you) the average citizen is unable to understand the world? How can they be trusted to vote in the first place?

    Comment by Daniel Lemire — 12/1/2011 @ 21:34

  14. @jld

    Of course, I agree. By random algorithms I meant purely random algorithms, and by directed I meant both deterministic and randomized-but-directed algorithms.

    Comment by paberlo — 13/1/2011 @ 2:14

  15. Hash tables are deterministic!!!

    +10^100 to beza1e1

    Comment by Itman — 13/1/2011 @ 14:38

  16. @Itman and @beza1e1

    I didn’t write that hash tables were non-deterministic, though they certainly can be. However, pick up any textbook introduction on hash tables (say Cormen et al.), and you’ll get a probabilistic analysis.

    Comment by Daniel Lemire — 13/1/2011 @ 16:44

  17. Preamble: it looks like my previous comment did disappear.

    Sorry, Daniel, you did:
    Meanwhile, most software uses probabilistic algorithms:

    Databases use hash tables to recover values in expected constant time;

    Classic hashing algorithms are deterministic. That is why one cannot safely say that a hash table is a probabilistic algorithm. Perfect hashing is randomized (at least in some cases).

    The argument about probabilistic analysis is not good either. For example, random binary trees have a probabilistic analysis, e.g., to conclude that their average height is O(log N).

    Yet, the randomness comes from the data, not from the algorithm itself.

    Comment by Itman — 13/1/2011 @ 23:35

  18. @Itman

    Most random number generators and most hashing are actually deterministic. But they emulate randomness. Down this path you could get into philosophical questions… Does randomness really exist, or is it just a convenient abstraction? Then we could have a debate as to whether “God plays dice”. And so on.

    What matters to me is that the hash value of an object “looks random”. We could get into what this statement exactly means… but we both know intuitively what it means.

    Comment by Daniel Lemire — 14/1/2011 @ 9:23

  19. @Daniel.
    It is not quite correct, because the seed is usually random. Hashing should be deterministic; this is the point. Otherwise, you will have trouble finding where you put elements.

    Comment by Itman — 14/1/2011 @ 9:44

  20. @Itman

    Regarding hash tables, it only matters that for the duration of the life of the hash table, the same object gets the same hash value. But you should use a different (randomly picked) hash function for each new hash table. And it is not just a theoretical possibility. One problem with having a fixed hash function, like we do in Java, is that it poses a security threat. If I know the hash function, I can choose the objects so that the performance of your hash table will degenerate to linear complexity, by maximizing collisions. Thus, you could potentially crash a server by exploiting your knowledge of its hash function. Of course, generating new hash functions at random is slightly inconvenient, so we don’t tend to do it often…
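
    A minimal sketch of such random hashing (a Carter-Wegman style family; the constants here are illustrative):

```python
import random

def random_hash(m, seed=None):
    """Pick a hash function at random from a universal family.

    h(x) = ((a*x + b) mod p) mod m, with p prime and a, b drawn at
    random per hash table. An adversary who does not know a and b
    cannot reliably force collisions."""
    p = (1 << 61) - 1  # a Mersenne prime larger than any key we expect
    rng = random.Random(seed)
    a = rng.randrange(1, p)
    b = rng.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = random_hash(m=1024, seed=1)
print(h(12345))  # stable for this table, since a and b are fixed at creation
```

    Each new hash table would call random_hash once, so an attacker who cannot observe a and b cannot precompute colliding keys.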

    Comment by Daniel Lemire — 14/1/2011 @ 10:09

  21. Of course, generating new hash functions at random is slightly inconvenient, so we don’t tend to do it often…

    This is an interesting theoretical possibility, but I have never seen it implemented.

    Comment by Itman — 14/1/2011 @ 10:14

  22. @Itman

    Both Ruby and Perl use random hashing: Ruby systematically so, and Perl when needed. Universal hashing is a real thing.

    In case you think I might make this up as I go, let me quote wikipedia:

    Using universal hashing refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.

    http://en.wikipedia.org/wiki/Universal_hash

    Comment by Daniel Lemire — 14/1/2011 @ 10:51

  23. I did not doubt it exists in theory.

    In addition, another of your points is that probabilistic algorithms work (you imply: always) better than deterministic ones.

    What about tries vs. hashing? Tries can be as efficient as hashing (compact tries are slower, but by a small factor) and have additional functionality. They are not affected by adversarial insertion sequences either! How does that make them worse than “probabilistic” hashing?

    Comment by Itman — 14/1/2011 @ 11:07

  24. @Itman

    Randomized tries (assuming there were such a thing???) might have benefits over deterministic tries.

    Most of the code I write is deterministic. But, at least on paper, random algorithms have very nice properties compared to deterministic ones.

    The statement in my blog post was rhetorical, not scientific.

    Comment by Daniel Lemire — 14/1/2011 @ 12:15

  25. This discussion has drifted far from Daniel’s initial observation about political systems. Within political systems we see many ways in which the deterministic algorithms in use can be subverted, and otherwise malfunction.

    From the work on algorithms for computers, we have learned that sometimes (often?) adding a bit of randomness can improve the result. Sorting is the classic example. When the deterministic algorithm is perfect for the incoming data, then there is no advantage to shuffling. Partially ordered data could make the sort perform badly. A bit of shuffling makes the partially ordered case fantastically unlikely, and may yield better overall performance.
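
    In code, the shuffling idea looks like this (my sketch, using a deliberately naive fixed-pivot quicksort):

```python
import random
import sys

sys.setrecursionlimit(10000)  # the fixed pivot recurses n deep on sorted input

def naive_quicksort(values):
    """Quicksort with a fixed first-element pivot: O(n^2) on sorted input."""
    if len(values) <= 1:
        return values
    pivot, rest = values[0], values[1:]
    return (naive_quicksort([v for v in rest if v < pivot])
            + [pivot]
            + naive_quicksort([v for v in rest if v >= pivot]))

def randomized_sort(values):
    """Shuffle first: the partially ordered worst case becomes fantastically unlikely."""
    values = list(values)
    random.shuffle(values)
    return naive_quicksort(values)

data = list(range(2000))  # worst case for the fixed pivot, harmless after a shuffle
print(randomized_sort(data) == data)  # True
```

    On already-sorted input the fixed pivot recurses n times; after shuffling, the expected recursion depth is logarithmic.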

    Our political algorithms are considerably less precise. With highly motivated parties interested in altering the incoming data, we get failures.

    To Daniel’s arguments I will add the observation that local selection – when the number of voters is limited compared to the number of candidates – seems to work very well. When the candidate is sufficiently local, voters can meet the candidate first-hand, and attempting to subvert all the candidates is impractical.

    The problem comes when scaling up to huge populations of voters. The selection-by-voting algorithm does not scale.

    If representatives are selected by local voters, and roles are then assigned randomly from the total pool of representatives, the combined algorithm might work better than anything we have at present. Once elected, a representative would get randomly assigned a role at the city, county, state, or national level. This gets us both the generally-best folk in the pool and the least chance of subversion in the selection process.

    Comment by Preston L. Bannister — 14/1/2011 @ 13:19

  26. @Bannister

    Very creative.

    You could also reverse this and put the randomness first. From each community, someone is picked at random. Then people vote the “randomly selected people” (based on their public profiles) into various positions.

    Comment by Daniel Lemire — 14/1/2011 @ 13:47

  27. Absolutely not.

    As the links I posted above should indicate, I am a great believer in local selection. You want merit-based selection, when it works – and local selection is a very good approximation. But it has to be sufficiently local.

    I have seen two candidates in a very local forum who were later elected. One whom I liked, did a terrific job in office. After listening to the other, I did not trust the guy, and he turned out corrupt.

    http://en.wikipedia.org/wiki/Mike_Carona

    I take this as a pretty strong hint that local selection can work. :)

    One of the main objections to random selection is that you might get an idiot selected. On a local level, folk self-select (someone who does not feel qualified is less likely to run), and are filtered by local reputation (folk who know you will vote for or against, and influence other voters). Seems to me that this is a pretty good approximation.

    Comment by Preston L. Bannister — 14/1/2011 @ 14:33

  28. Oh, make that three elected folk. The third I met at soccer, PTA, Boy Scouts, and around the school when picking up kids. She was an easy choice, and did very well.

    Just realized I am making an unjustified assumption about shared knowledge, in this discussion, that is key to why local selection works.

    When someone is standing in front of you, you are much less likely to be deceived. If you are deceived, your friends might not – and you probably have a good notion who of your friends are most perceptive. That means an unscripted meeting in a local venue is likely to expose the true character of the candidate to the voters present. Body language is hard to fake.

    http://en.wikipedia.org/wiki/Body_language

    I was assuming that all in this discussion knew that this aspect of psychology was extremely relevant.

    Comment by Preston L. Bannister — 14/1/2011 @ 14:51

  29. @Bannister

    I’m not opposed to your model, I was merely pointing out that there are various alternatives. Randomness does not have to be at every level.

    Anyhow. I think it would be great to just pick at random whether the next American president is Republican or Democrat. Might save a lot of money.

    Comment by Daniel Lemire — 14/1/2011 @ 15:07

    Congratulations on getting such a ‘hot post’.

    Comment by paberlo — 14/1/2011 @ 15:51

  31. Perhaps my prior response was a bit too strong. If there were no value in local selection, then using random choices at the local level is entirely reasonable.

    From first-hand experience (and some old observations from Machiavelli) I have long believed in the value of local selection. At the time the United States was founded, the much smaller population meant that local selection applied even to Congressmen. What the Founding Fathers lacked was any notion of how things would change as the population expanded. Put differently, our form of representative democracy works better for smaller populations, and poorly for large populations.

    What I lacked with local selection was a compelling model for how to scale up usage. Your writing about Demarchy supplied – what seems to me – the missing bit needed to apply local selection to large populations.

    The next question: how does random assignment of roles affect who runs for local selection? Not everyone wants to be a nation-level representative. Some might choose to opt-out. Should opt-out be allowed? As an old Greek observed, the guy who does not want the job might be better suited. :)

    Comment by Preston L. Bannister — 14/1/2011 @ 16:42

  32. @Bannister

    Some might choose to opt-out.

    Can you opt out of jury duty?

    Comment by Daniel Lemire — 14/1/2011 @ 16:46

    You cannot easily opt out of jury duty, at least around here. On the other hand, jury duty is more invasive over the short term, so it is not an exact analog.

    Comment by Preston L. Bannister — 14/1/2011 @ 17:11

  34. I wonder how much randomness really helps. For instance, Gil Kalai’s post suggests that BPP=P is a popular belief
    http://gilkalai.wordpress.com/2009/12/06/four-derandomization-problems/

    Comment by Yaroslav Bulatov — 14/1/2011 @ 23:37
