A case for donation splitting

TLDR: if welfare compounds then risk-aversion is good.

Within EA circles, the question of splitting donations pops up every once in a while. Should you donate all your money to the single top-rated charity in your single top-rated cause area, or is there reason to split your donations between different causes or interventions?

People other than me have written and talked about this under various headers; I’ll list a small subset:

  • Reasons not to diversify (Giving What We Can)
  • Reasons to diversify: the value of information, explore vs exploit (Amanda Askell @ 80k)
  • Reasons both for and against: risk aversion, diminishing returns, EV maximization (Slate Star Codex)
  • An in-depth blog post with many arguments both for and against (EA forum)
  • Not listed but probably talked about before: splitting your donations gives you extra practice at donating, which might lead to you making better donation decisions in the future.

In this post I want to make an argument in favour of splitting donations based on compounding economic returns and measurement error. Specifically, compounding returns favour consistent growth over slightly higher but more variable growth.

Let’s consider a 100-year time horizon. Suppose that there are 100 charities, C_1,\dots,C_{100}, whose effectiveness is heavy-tailed: donating \$1000 to charity C_i allows it to produce i \times \$1000 in welfare after a year. Charity evaluator BestowCapably measures the effectiveness of every charity C_i in every year j and finds an effectiveness of i + s_{i,j}, where the s_{i,j} are independent N(0, \sigma^2) random variables. Let’s assume BestowCapably’s measurement error \sigma does not go down over time.

The way I think of these quantities is that effectiveness is a heavy-tailed distribution and that measurement error is multiplicative (instead of additive).

We assume all welfare gains are reinvested in charity the next year, so that the gains compound over years. The initial welfare is 1. We consider three different donation strategies: donate everything to the single best rated charity, split the donation between the top three rated charities, or split the donation between the top ten rated charities. We plot the compounded welfare after 100 years versus \sigma below.
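To spell out the compounding (this matches the simulation in the appendix; the symbol g_j^{(k)} is my own shorthand, not defined above): let g_j^{(k)} denote the average true effectiveness of the k top-rated charities in year j. Then the welfare after T = 100 years is

W_T = \prod_{j=1}^{T} g_j^{(k)}, \qquad \log W_T = \sum_{j=1}^{T} \log g_j^{(k)}.

Because \log is concave, a strategy that reduces the year-to-year variability of g_j^{(k)} can achieve a higher expected \log W_T even if it slightly lowers the expected g_j^{(k)}; this is the sense in which compounding rewards consistency over slightly higher but more variable gains.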

In the above graph, we see that, for low measurement error, donation splitting is worse than donating everything to the best-rated charity, but for high measurement error the situation reverses and splitting donations wins out.

Section of doubt

The code I’ve used to simulate the scenario (included below) has a couple of researcher degrees of freedom:

  • It is unclear whether measurement error should scale with charity effectiveness (see the sketch below).
  • I used Gaussian noise without any justification.
  • The range of \sigma to plot was chosen to give a nice result.
  • The range of charity effectiveness values has close to no justification.
  • The same stable result can be gotten by donating everything to AMF and nothing to speculative cause areas.
  • The splitting incentive I illustrated only holds at the margin, not for the average donation.
  • Because \sigma is fixed, the magnitude of the effect of donation splitting in this model depends heavily on the number of charities (fewer charities means a greater effect).

Nonetheless, if you care about multi-year impacts, it might be wise to consider more than just short-term expected value. When results compound, risk aversion translates into expected counterfactual impact.
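As an illustration of the first caveat above, here is a minimal sketch of an alternative measurement model in which the noise scales with a charity’s true effectiveness. The function name measurecharities_relative and the parameter relative_stddev are my own inventions and not part of the simulation; I have not checked how this variant changes the plot.

import random

# Variant of measurecharities (appendix below) where the measurement noise
# is proportional to the charity's true effectiveness, i.e. the relative
# error has standard deviation relative_stddev.
def measurecharities_relative(n, relative_stddev):
    charities = []
    for effectiveness in range(1, n + 1):
        measured = random.gauss(effectiveness, relative_stddev * effectiveness)
        charities.append((effectiveness, measured))
    return charities

Swapping this in for measurecharities in the appendix code (and sweeping relative_stddev over, say, 0.1 to 2.5 instead of stddev over 1 to 250) would show whether the crossover between the strategies survives this modelling choice.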

Appendix: Python code

import random
import matplotlib.pyplot as plt
import math

charitycount = 100
yearstocompound = 100

# The charities are {1,...,n}
# Charity i has effectiveness i
# Effectiveness measurement carries Gaussian noise with standard deviation stddev
# Outputs list of (i, i + noise)
def measurecharities(n, stddev):
    charities = []
    for effectiveness in range(1,n+1):
        charities.append((effectiveness,random.gauss(effectiveness,stddev)))
    return charities

# Given list of tuples (x, y),
# calculates the average of x's for
# the k tuples with highest y value.
def avgtop(measurements, k):
    sortedlist = sorted(measurements, key=lambda tup: tup[1], reverse=True)
    total = 0.0
    for i in range(k):
        total += sortedlist[i][0]
    return total/k

# Split donations among k charities
for k in [1,3,10]:
    x = []
    y = []
    # We plot the effect for different noise magnitudes
    for stddev in range(1,251):
        logwelfare = 0.0
        for i in range(yearstocompound):
            welfaregain = avgtop(measurecharities(charitycount, stddev), k)
            logwelfare += math.log(welfaregain)
        x.append(stddev)
        # Note: the log-welfare is floored at 1 before plotting
        y.append(max(1, logwelfare))
    plt.plot(x, y, label=str(k))
plt.legend()
plt.xlabel('Error in measuring effectiveness')
plt.ylabel('Log(' + str(yearstocompound) + '-year compounded welfare gains)')
plt.title('Donating to top k out of ' + str(charitycount) + ' charities')
plt.show()

Evaluating the AI Safety cause area: a two-year plan

My pretending to have readers makes this blog a self-commitment scheme: if I write that I will do something, the imagined possibility of social shaming will make me more committed to actually doing the thing. That is why I’ll lay out my plans for evaluating whether I believe the AI Safety cause area as perceived by EAs is any good.

I used to think the arguments were pretty convincing, and I liked how it made my skills in math and CS super relevant for a morally important thing. But then I listened to Bostrom’s Superintelligence: Paths, Dangers, Strategies and found the book’s arguments thoroughly lacking. There is an asymmetry of passion in AI risk circles: the believers hype the cause area while the non-believers don’t interact with the ideas at all.

The following step-by-step plan is based on the assumption that the LW-aligned view held by most EAs is misguided, but that there is important work to be done to make sure that what we are currently calling AI is used properly. Every step in the plan is meant to defend part of my beliefs, and I trust myself to notice when my argumentation is shaky. If my view changes, I should feel free to change the plan to look into my new views instead.

After completing a step, I will write a blog post describing the outcome and related thoughts. I will try to complete at least one step every two months so that the plan is done before I re-enter the job market at the end of my PhD. I don’t have to do the steps in the listed order.

  1. Make a plan listing the steps to think through the AIS cause. Completed.
  2. Uneducated guessing: directly oppose the astronomical waste argument. See if you can collect the necessary entropy to argue that, even if astronomical waste would be astronomically bad, supporting the AI Safety cause is still bad in expectation. Read at least what Bostrom, Beckstead and FRI have written on the topic, maybe more.
  3. Reread Superintelligence. Try to list the major points in the arguments and list objections. Does it actually argue for a Yudkowsian view, or is everyone misinterpreting it and does it secretly argue for a more mainstream view?
  4. Read various online sources to get more of an idea of what the mainstream view is among AIS EAs. See if you can list research agendas on AIS different from MIRI’s.
  5. Argue that MIRI is not an effective charity regardless of the status of AIS as a field. I am not sure I truly believe this, but they seem so deeply incongruent with the standard academic practice that I should spend some time thinking about them. I kind of expect that this post will feel like punching down.
  6. I think Paul Christiano is a smart and serious person. He co-authored one of the best papers in my field of the past decade, so he is not a crank. Read some of his writing on AIS to see if it holds up to scrutiny.
  7. Argue that average utilitarianism is superior to total utilitarianism, and astronomical waste cannot exist.
  8. Argue in more familiar terms why a paperclip maximizer wouldn’t act as some people fear it would.
  9. Argue that the capitalist’s alignment problem, insofar as it is meaningful and solvable, will be solved by the market.
  10. Argue that the social democrat’s alignment problem is a meaningful concept.
  11. Educated guessing: redo the entropy gathering from earlier, but now while having more knowledge.
  12. Cast mainstream “AI” related research in TCS in terms of the social democrat’s alignment problem. See what is out there at conferences like STOC/FOCS, COLT and NIPS. Check O’Neil’s Weapons of Math Destruction, Dwork et al’s line of work on algorithmic fairness, the work on learning non-discriminatory predictors, etc.

Is TERF ideology taking over EA?

TERF stands for Trans-Exclusionary Radical Feminism. TERFs are a hate group that views trans rights as a threat to women’s rights.

“Trans women are pervert men that prey on women in women’s bathrooms and trans men are confused lesbians. Medical transition is a patriarchal notion to keep gender roles intact and is antithetical to any body-positivity movement. Bleh bleh bleh.” – TERFs

So, suppose you are a transphobe and want an air of scientific legitimacy for your views. Enter Blanchard’s two-type taxonomy. It proposes that trans women come in two separate varieties: effeminate homosexual men who think they would have an easier time by pretending to be women, and straight men with the paraphilia of being sexually attracted to the idea of themselves having female bodies. The latter are called autogynephilic transsexuals.

Where does this divide come from? Julia Serano summarizes that, after noting that not all trans women seemed to fit the picture that gatekeeping therapists had of trans women (outwardly feminine from early childhood, transitioning early in life, attracted to men),

Blanchard subdivided MtF transsexuals by sexual orientation into four groups—androphilic, gynephilic, bisexual, and asexual. He found that a majority of the gynephilic (87.5%, n = 16), asexual (75%, n = 12) and bisexual (65.7%, n = 35) groups reported having experienced cross-gender arousal in response to wearing women’s clothing on at least one occasion in their lives, while only 15% (n = 100) of the androphilic group responded similarly (Blanchard, 1985). He also found that the gynephilic, bisexual, and asexual groups, on average, reported less recalled childhood feminine gender expression and presented for sex reassignment later in life than the androphilic group (Blanchard, 1988). Based on these results, Blanchard argued that there are two fundamentally different types of MtF transsexuals—androphilic and nonandrophilic (where nonandrophilic includes the gynephilic, bisexual, and asexual groups).

Furthermore, statistically, it seems that trans women from the second group are more likely to have STEM jobs, pass less well, and share some other traits. So I mean, sure, there might really be an axis of correlated traits, though the theory does not manage to pass some basic sanity checks:

  • Can we trust numbers consisting of self-reports to gatekeepers?
  • Axis of correlated traits, sure, but are these really two clusters?
  • How is this narrative possibly the best explanation of the observations?
  • Why do >99% of trans women say this does not describe their experience, instead describing the feeling of gender dysphoria?
  • How does this theory relate to the observations surrounding phantom limbs in transgender people?
  • Why are both groups of trans women at such risk for suicide, in contrast to other paraphilic populations?
  • What about trans men? Non-binary people?
  • Are cis women autogynephiles?
  • Sex and sexuality are disgusting and nobody would let those guide their actions. [Never mind, we will fix that later.]

For more about the theory and why it is unscientific bullshit, see Contrapoints for hilarious jokes, woke feminist theory and personal experience, or Julia Serano for a more scientific treatment. Mind that the prevailing gender identity theory has its own problems, but probably the fewest of any theory that treats gender in essentialist terms.

But this post is not about the theory itself; it is about the fact that people in my favourite community are taking it seriously. Among those are the major EA feminist blog Thing of Things, aspiring Less Wrong Gender Czar and self-identified autogynephile The Scintillating But Ultimately Untrue Thought, and even the otherwise amazing Putanumonit. Putanumonit seems to forget his usual sceptical view of things presented as evidence, Scintillating claims without further exposition that Blanchard’s theory has more explanatory power and that introspection never gives scientifically valid information, and Thing of Things expresses disagreement with Blanchard’s theory but does treat it as a valid theory and gives it a platform.

Belief in Blanchard’s theory is leaking into the EA community, against the mainstream scientific view. I find this scary and I don’t know what to do about it. But at the very least, I think we should make it clear that Blanchard’s model is an unnecessarily stigmatizing fringe theory on shaky evidential ground. It is more like hate group ideology than good science.