Beyond Condorcet winners: Majority Judgement

There have been rumours on Wikipedia that Michel Balinski passed away this February 4th. If this is indeed the case, then my heartfelt sympathies go out to those around him. I never knew him personally, but he was an amazing scientist whose results I've used more than once, and quite a phenomenal speaker.

Today I want to talk about a single-winner voting system that Balinski introduced together with Rida Laraki. It is called majority judgement, and it is so brilliant that it almost makes me wonder what voting theorists could have been doing, both before then and since.

One big concept in social choice theory is majority rule: if most people think that A is better than B, then A should be the winner. Most multi-candidate voting systems generalize this in various ways, always preserving the property that if candidate A would beat every other candidate B, C, etc. in a pairwise competition, then A should win the election. If candidate A satisfies this criterion, we call A a Condorcet winner. The leading wisdom in social choice theory was that any good voting rule should let the Condorcet winner win (if one exists).
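To make the pairwise-competition idea concrete, here is a minimal sketch in Python of how one could find a Condorcet winner from ranked ballots. The code is my own illustration, not from any paper mentioned here, and it assumes every ballot ranks all candidates:

```python
# A minimal sketch, assuming each ballot is a list of candidates
# ordered from most to least preferred, with every candidate ranked.
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """Return the candidate that beats every other head-to-head, or None."""
    def prefers(ballot, x, y):
        return ballot.index(x) < ballot.index(y)

    beats = {c: set() for c in candidates}
    for x, y in combinations(candidates, 2):
        x_votes = sum(prefers(b, x, y) for b in ballots)
        if 2 * x_votes > len(ballots):      # strict majority prefers x
            beats[x].add(y)
        elif 2 * x_votes < len(ballots):    # strict majority prefers y
            beats[y].add(x)

    for c in candidates:
        if len(beats[c]) == len(candidates) - 1:
            return c
    return None  # no Condorcet winner exists

# Five ballots, three of which prefer red over blue:
ballots = [["red", "blue"]] * 3 + [["blue", "red"]] * 2
print(condorcet_winner(ballots, ["red", "blue"]))  # -> red
```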

According to my informal sample from the Effective Altruism community, EA’s favourite voting system seems to be approval voting, which is one of these voting systems that generalizes majority rule to multiple candidates.

The genius of majority judgement is that it moves past the Condorcet-winner paradigm and considers candidates from a perspective beyond it.

To illustrate, let's assume we're running an election between two candidates, the red candidate and the blue candidate, and every voter gets a ballot offering, for each candidate, the options "Amazing", "Good", "Mediocre" and "Terrible" to express how good a president they think that candidate would be. For simplicity, assume our voting population consists of three groups, the A's, B's and C's, and everyone in a group votes the exact same way. The outcome of the election is in the table below.

                  A         B         C
% of population   40        20        40
Red candidate     Amazing   Mediocre  Terrible
Blue candidate    Good      Terrible  Amazing

The red candidate wins the Condorcet-style election: 60% of the population prefers the red candidate over the blue candidate. But the blue candidate is clearly better: 80% of the population considers the blue candidate to be “Good” or better, while 60% of the population considers the red candidate to be “Mediocre” or worse.

Majority judgement is a voting rule designed to have the blue candidate win the above election. The actual algorithm is a bit involved, but the first step is to compare the median votes: if at least 50% of the voters think the blue candidate is "Good" or better and at least 50% of the voters think the red candidate is "Mediocre" or worse, then the blue candidate wins. If the median opinions are tied, a more complicated tie-breaking rule is applied. ((The exact rule satisfies a number of optimality criteria and is the only rule to do so. For this post I want to skip the details.))
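To make that first step concrete, here is a minimal sketch, again my own Python. It encodes grades as integers with higher meaning better, and the tie-break shown (repeatedly dropping one median-grade ballot from each tied candidate) is a simplified rendering of Balinski and Laraki's exact rule:

```python
# A minimal sketch of majority judgement. Grades are integers,
# higher = better. The tie-break is a simplification of the exact
# Balinski-Laraki rule and assumes candidates aren't perfectly tied.

def median_grade(grades):
    """Lower median of a list of grades."""
    s = sorted(grades)
    return s[(len(s) - 1) // 2]

def majority_judgement(ballots):
    """ballots: dict mapping candidate -> list of grades. Returns winner."""
    pools = {cand: sorted(grades) for cand, grades in ballots.items()}
    while True:
        medians = {cand: median_grade(g) for cand, g in pools.items()}
        best = max(medians.values())
        leaders = [c for c, m in medians.items() if m == best]
        if len(leaders) == 1:
            return leaders[0]
        # Tied medians: drop one median-grade ballot from each tied
        # candidate and compare the medians again.
        pools = {c: pools[c] for c in leaders}
        for c in leaders:
            pools[c].remove(medians[c])

# The election from the table above, scaled to 5 voters (40/20/40 -> 2/1/2):
TERRIBLE, MEDIOCRE, GOOD, AMAZING = 0, 1, 2, 3
ballots = {
    "red":  [AMAZING, AMAZING, MEDIOCRE, TERRIBLE, TERRIBLE],
    "blue": [GOOD, GOOD, TERRIBLE, AMAZING, AMAZING],
}
print(majority_judgement(ballots))  # -> blue (median Good beats Mediocre)
```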

I think the concept is very elegant, and I believe the outcomes really would be better than under a system that elects Condorcet winners. In a talk that Balinski gave, which I was lucky enough to attend, he pointed out another advantage of the majority judgement rule: it allows voters to express what they actually think of the candidates. You wouldn't be asking anyone to "vote for the lesser evil"; everyone can keep their conscience clear. Majority judgement admits a clear way of expressing frustration with both candidates: grade both of them very badly. It also helps that the options are given in words instead of numbers, for numeric scales turn out to entice voters to rate their favourite candidate 10/10 and all others 0/10.

Funding opportunity in academic publishing

I recently listened to a talk by Jean-Sébastien Caux, the founder, implementer and chairman of SciPost. It looks like a charity that might be worth funding.

Academic publishing is a complete disaster. The incentive structure is messed up. Papers must exceed a certain minimum of content × coolness, but simultaneously they shouldn't be too long. This results in researchers sticking two unrelated papers together and calling them one thing, or cutting a single paper into multiple pieces to get published. If your result is too simple it will get rejected, so people make their papers more difficult on purpose. And there is no pipeline for communicating minor improvements and comments on other people's papers to the outside world.

Peer review might not literally be a farce, but it is closer to one than anyone is really comfortable with. Because it all happens behind closed doors, peer reviews seldom contain constructive feedback, and reviewers most likely harm their own careers by putting time and effort into reviewing that could be spent doing research. People submit everything to the best journals first, trying one rung lower each time their paper gets rejected. The review process can take years. Reviews are hidden from the wider public, so the only way to judge a paper's quality, if you're not in the field, is by looking at citation counts and the journal it appeared in.

Publishers make a lot of profit selling the research community's own results back to it. Journals and impact factors are silly inventions that date back to the dark ages before the internet existed, and they serve little to no purpose in modern times.

Enter SciPost. Imagine a love child of the free software movement, open access academic publishing, and modern discussion platform design. Submission is free. A manuscript is public from the moment one of the editors decides it is probably worth reviewing, with all the fancy DOIs and what-not that you could ask for. Both the content and the platform itself are licenced under free licences. Reviews are public, either signed or anonymous, which turns out to greatly improve review quality. Reviews are citable objects, with DOIs of their own. A number of reviews are invited, but anyone can submit one. People can post comments on reviews. Funding comes entirely from sponsors. Average costs per publication are under $400, far less than the article processing fees of most open access journals. And they hold themselves to principles of openness that go well beyond the Fair Open Access Principles.

Right now SciPost publishes papers in physics. They want to expand to other disciplines, but money is the major bottleneck. Over the past 3 years they've received around $250k in total funding, so the marginal gains from additional funds should be pretty good.