Your March 2019 EA Newsletter    
Hi <<First Name>>,

It’s been a month of major growth for effective altruism, and there’s a lot to take in. But as always, we’re here to help you learn about some of the world’s biggest problems — and the ways that you can help.

We hope you enjoy this month’s edition!

 The EA Newsletter Team
Articles and Community Posts

Scientists are developing new malaria vaccines. To test their work, they need infected mosquitoes… and human volunteers. John Beshir, a member of the EA community, talked to Vox about his decision to help out by getting malaria on purpose.

The progress of fast-growing “middle-income” countries often leads to a poverty paradox: They no longer receive much aid, but haven’t fully developed their domestic social welfare programs.

When you’re trying to do as much good as you can, it’s tempting to judge every decision by its cost-effectiveness. Julia Wise reminds us that we don’t always have to think this way: It’s fine to have more than one goal.

How can we figure out which problems are most important to work on? The Global Priorities Institute just released their newest research agenda, which summarizes the field of cause prioritization.

EA Forum Highlights
Timeless Classic

“The mathematical challenge of finding the greatest good can expand the heart. Empathy opens the mind to suffering, and math keeps it open.”

Derek Thompson, a staff writer at The Atlantic, used to have an “approximately average” interest in charity — until 2014, when he became the roommate of EA philosopher Will MacAskill.

Less than a year later, Thompson published “The Greatest Good”, in which he explores effective altruism with the rigor and clarity of a professional journalist. He also ties the movement’s core ideas into a moving story about his mother’s death, and his wish to honor her by helping others.

Highlight: EA Global Videos

CEA has released videos for more than 20 talks given at EA Global: London 2018, and is working to get them all transcribed.

For a video that isn’t focused on a specific cause area, try Joey Savoie’s talk on how to found a high-impact charity (transcript here). Other transcripts have also been published.

As always, 80,000 Hours’ High-Impact Job Board features a wide range of positions. (They added more than 100 this month alone.)

If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

To learn about new jobs as they arise (or post one yourself), check out the EA Job Postings group on Facebook.
Special note: The Center for Security and Emerging Technology just launched with a major grant from the Open Philanthropy Project. 

It isn’t often that a brand-new organization launches with so many open jobs, so we decided to highlight CSET for this edition. They’re hiring for technical, research, and communications positions.

80,000 Hours
80,000 Hours added 104 positions to their job board; went on Rationally Speaking to explain how their views have changed; released a podcast debating the merits of various possible reforms to capitalism and democracy; and did a lengthy interview about journalism careers with Kelsey Piper, a staff writer for Vox’s Future Perfect vertical.

Animal Charity Evaluators
Animal Charity Evaluators recently published their biannual Recommended Charity Fund update, as well as a post from Open Cages president Dobrosława Gogłoza, who discusses her organization's approach to salary transparency. ACE's Animal Advocacy Research Fund is accepting applications for its next round of funding through 31 March 2019.

Center for Human-Compatible AI
CHAI PI Joe Halpern was elected to the National Academy of Engineering. CHAI researcher Rohin Shah and former CHAI intern Dmitrii Krasheninnikov published Learning Preferences by Looking at the World on the Berkeley AI Research blog; Rohin wrote an expanded version on the AI Alignment Forum and also completed the Value Learning Sequence. Daniel Filan published a post on Test Cases for Impact Measures.

Centre for the Future of Intelligence
CFI published Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research (press release). This major report was commissioned by the Nuffield Foundation and the Ada Lovelace Institute. CFI also had three papers accepted at the 2nd AAAI/ACM AI Ethics and Society Conference (AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI; “Scary Robots”: Examining Public Responses to AI; and The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions).

Centre for the Study of Existential Risk
CSER researchers published Exploring AI futures, submitted advice to the UN High-Level Panel on Digital Cooperation and EU AI High-Level Expert Group, and wrote two BBC articles with more than 1.5 million total views: Are we on the road to civilisation collapse? and What are the biggest threats to humanity? CSER also hosted the Ground Zero Earth art exhibition.

Lord Martin Rees was interviewed by the LA Review of Books, Spear’s Magazine, Science Studio Radio, the Sunday Times, and the Guardian, and gave keynote talks at the European Parliament and the Long Now Foundation.

Charity Entrepreneurship
Charity Entrepreneurship’s video from EA Global: London 2018, on their incubation program for starting new charities, is now available. CE also published an intervention report on switching meat consumption from chicken to beef and an article on targeting new and growing areas in factory farming.

Forethought Foundation
The Forethought Foundation for Global Priorities Research has announced its first cohort of Global Priorities Fellows. The fellows will each receive a £5,000 stipend and attend a meetup in Oxford this summer.

Foundational Research Institute
Caspar Oesterheld's paper on "Approval-directed agency and the decision theory of Newcomb-like problems" was published in the Synthese special issue on "Decision Theory and the Future of AI". Kaj Sotala published a new post in his Multiagent Models of Mind series.

Future of Life Institute
FLI released three podcasts this month: two episodes in their AI Alignment series and a special two-part episode featuring Max Tegmark and Matthew Meselson. They also produced a video on the breakdown of the INF Treaty. At the end of March, FLI will co-host the Augmented Intelligence Summit (you can apply here).

GiveWell
GiveWell wrote about their plans to expand their research team and broaden their scope in order to determine whether there are giving opportunities in global health and development that are more cost-effective than those they have identified to date.

Global Catastrophic Risk Institute
GCRI Executive Director Seth Baum published a new paper (Reflections on the risk analysis of nuclear war) with the UCLA Garrick Institute for the Risk Sciences, examining how risk analysis can inform nuclear war policy. For example, do we understand the associated risks well enough to know whether we should support or oppose nuclear disarmament?

Machine Intelligence Research Institute
Ramana Kumar (DeepMind) and Scott Garrabrant (MIRI) published “Thoughts on Human Models”, a blog post arguing against the common assumption that AI safety research should focus on modeling humans and their psychology or values. Instead, the authors suggest that it might be better to design and train the first generally intelligent systems in such a way that they lack any models of humans at all.

Open Philanthropy Project
The Open Philanthropy Project announced grants to launch the Center for Security and Emerging Technology at Georgetown, to support Megan Palmer’s biosecurity work at the Center for International Security and Cooperation, and to fund EicOsis’s clinical trials of a non-opioid pain therapy. CSET's launch was covered by the Washington Post.

Wild Animal Initiative
Wild Animal Initiative released its two-year strategic plan, which lays out plans to continue academic field-building and conduct supporting research in the welfare biology space. WAI also released an update on its progress in assessing the relative painfulness of different insecticides.
Go forth and do the most good!

We hope you found this edition useful.

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

And if you have feedback for us, positive or negative, please let us know.

Aaron, Justis, Max, Michał, Pascal, and Sören
– The Effective Altruism Newsletter Team

The Effective Altruism Newsletter is a joint project between the Centre for Effective Altruism, the Effective Altruism Hub, and Rethink Charity.
Click here to access the full EA Newsletter archive
A community project of the Centre for Effective Altruism, a registered charity in England and Wales (Charity Number 1149828) – Centre for Effective Altruism, Littlegate House, St Ebbes Street, Oxford
OX1 1PT, United Kingdom
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.