News and updates from the world of effective altruism
Data on the EA community
Effective altruism is growing — but how quickly? And what problems are people actually working on?
New posts from Benjamin Todd of 80,000 Hours make this information much easier to find.
One of Todd's findings is that funds are growing faster than the number of people working to deploy them. This implies that people with certain skills
— grantmaking, research, entrepreneurship, and more — could have especially strong opportunities for impact.
(If this sounds like you, consider talking to someone at 80,000 Hours.)
A new look at progress on child mortality
Much of EA’s focus goes toward issues that don’t make headlines — often because they affect groups of people who tend to be overlooked by media in the developed world.
Max Roser sees child mortality as one of these “everyday tragedies”: too ordinary to be news, but a horrific, life-altering experience for more than 10,000 families each day.
Thankfully, there’s been remarkable progress in this area:
- In 1950, one in nine European children died before their fifth birthday. Today, the rate is one in two hundred.
- Even looking at the entire world, the average child mortality rate today is roughly one-third of Europe’s rate in 1950.
[Chart: the fraction of children who die before the age of five in different regions of the world. Source: Our World in Data]
Our World in Data lets you see the numbers for yourself; we can’t recommend it highly enough for visual information on this and other global issues. It’s important to remember that despite the world’s horrors, we are capable of making life much, much better.
Fighting air pollution to help nearly two billion people
Open Philanthropy recently wrote up their first grants for a new focus area: air quality in South Asia.
Pollution in this region has dire effects; the Institute for Health Metrics and Evaluation estimates that it accounts for nearly 3% of the world’s total
health burden (prematurely lost years of healthy life).
Open Phil’s initial grant is focused on air quality monitoring, which may help experts and governments create better anti-pollution policy.
Meanwhile, the research organization J-PAL is trying another route — working with the University of Chicago to launch an emissions trading scheme, which they hope will incentivize corporations in the Indian state of Punjab to use cleaner production methods.
When should we expect to see transformative AI?
Holden Karnofsky, who leads Open Philanthropy’s work aimed at improving the long-term future, thinks that the creation of sufficiently powerful AI could lead to “transformative” changes on the scale of the Agricultural or Industrial Revolutions.
Over the last month, he’s been writing about a specific hypothetical AI system — one which could fully automate the processes humans use to make scientific and technological progress. His catchy acronym: PASTA, or “Process for Automating Scientific and Technological Advancement”.
So far, he’s addressed the following questions:
The posts aren't very technical, and should be accessible even if you don't know much about AI.
You can see all of Karnofsky's blogging on Cold Takes, and leave comments on the EA Forum.
The disaster everyone saw coming
When a hedge fund lost $8 billion in ten days, and Credit Suisse lost $5.5 billion as a result, finance writer Matt Levine dove into the Swiss company's official reports and wrote up his take on the situation (Bloomberg subscribers only).
He explains that the fund managed to deflect Credit Suisse’s attention in very human ways: responding slowly to emails and cancelling meetings at the last minute as collapse loomed. The bankers knew something was wrong, but didn’t act in time:
“Everyone saw all the problems here, evaluated them reasonably, came up with sensible solutions, and then didn’t do them.”
Why are we talking about this here?
Because of Kelsey Piper’s commentary on Levine’s post, which draws parallels between this situation and the kinds of catastrophic events that get more attention in EA (like the creation of an engineered pandemic).
Piper, on what we can learn from a financial disaster in plain sight:
“I could imagine [a risky project being released] despite smart people all agreeing that they had a lot of reservations [...] Appreciation of the high stakes does motivate smart people to do something, but might not motivate them to do anything confrontational enough to actually solve the problem.”
For more stories, try these email newsletters and podcasts.
In other news
Links we share every time — they're just that good!
Opportunities to work on some of the world's most pressing problems
The 80,000 Hours Job Board features more than 700 positions. We can’t fit them all in the newsletter, so check out the others on their website!
You can see more positions in the EA Job Postings group on Facebook.
If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.
Applications due soon
Business Development and Communications Officer, Suvita (Remote) (apply by 13 September)
Communications and Marketing Specialist, Ploughshares Fund (SF Bay Area or Washington, DC) (apply by 31 August)
Executive Assistant to Nick Bostrom, Future of Humanity Institute (Oxford) (apply by 6 September)
Research Engineer, Bayesian Deep Learning, Oxford University, Department of Computer Science (Oxford) (apply by 1 September)
Research Fellow, Law and Governance of Net Zero, Oxford University, Department of Geography and the Environment (Oxford) (apply by 3 September)
Senior Advisor, Monitoring, Evaluation, and Research, Schistosomiasis Control Initiative (London) (apply by 7 September)
Metaculus (Remote)
Centre for Effective Altruism (Remote)
Head of Technology, Suvita (Flexible, India) (apply by 19 September)
Junior Operations Analyst, Longview Philanthropy (London) (apply by 3 October)
Effective Altruism and Consulting Network (Remote) (apply by 19 September)
Policy Analyst, Energy Project, Bipartisan Policy Center (Washington, DC)
Centre for Effective Altruism (Remote) (apply by 30 September)
Astera (SF Bay Area)
Senior Program Officer, Global Biological Policy and Programs, The Nuclear Threat Initiative (Washington, DC)
Senior Research Associate: Academic Programme Manager, Cambridge University, Centre for the Study of Existential Risk (Cambridge, UK) (apply by 20 September)
GiveWell (SF Bay Area or remote)
The Good Food Institute (Various locations)
Open Philanthropy (SF Bay Area or remote)
Ought (SF Bay Area or remote)
Books, events, community projects, and more!
Save the date for EA Global: London 2021
Applications for EA Global: London 2021 will open on 1 September.
The goal of the conference is to increase attendees’ knowledge, skills, and networks to help them do as much good as they can. Content will be aimed at existing EA community members who already have a solid understanding of effective altruism, and would like to network, gain skills, master more complex problems, or move into new roles.
Note that plans may change depending on developments with COVID-19; the organizers recommend that you consider refundable options when booking travel and lodging.
Tuition grants for incoming international undergrads
Open Philanthropy wants to provide support for highly promising and altruistically minded students who hope to start an undergraduate degree at one of the top universities in the USA or UK, and who do not qualify as domestic students at these institutions for the purposes of admission and financial aid.
Visit their website to apply, and to see more details on the program (including the list of eligible universities).
Grants available for outreach projects
Open Philanthropy is looking to fund projects that grow the community of people who aim to improve the long-term future.
They’re especially interested in two categories of outreach:
- Programs that engage with promising young people (e.g. summer or winter camps, scholarship or fellowship programs).
- Projects that aim to share high-quality, nuanced content related to improving the long-term future (related to broad areas such as EA, rationality, longtermism, or x-risk reduction, or with a more specific focus) with large numbers of people.
But they’re also interested in hearing about ideas that don't fit neatly in either bucket.
Check out this post for more info, including examples of projects they think could be impactful. It contains two application forms: one to apply to start a project and one to express interest in helping with a project.
They’re also open to applications for 3-month “planning grants” for people to focus on developing their ideas. This might serve as a way to explore your idea without fully committing.
You can see updates from a wide range of organizations on the EA Forum.
Ideas that have shaped the way we think about doing good
We typically focus on the short-term effects of our actions, because it’s hard to tell what will happen in the longer term.
But these long-term effects still exist; most of our “impact” (good or bad) happens long after we’ve acted.
To Hilary Greaves and Will MacAskill, this implies that “impact on the far future is the most important feature of our actions today”. It’s an unusual claim, but one that guides the way many people in the EA community approach their work.
You can explore this counterintuitive idea in Greaves and MacAskill’s “The Case For Strong Longtermism”. The paper first came out in 2019, but a recent update brings in new discussion about the most important arguments and objections.
For a briefer introduction to this material, see MacAskill’s “What we owe the future” — but if you have the time, we recommend the original paper, which goes into more depth and presents the case in its strongest form.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
(Actions we'd love to hear about include donating to charity, applying to a job, or joining a community group.)
Finally, if you have feedback for us, positive or negative, let us know!
– The Effective Altruism Newsletter Team