Hello!

Our favorite links this month are below. There are also, once again, many open positions ⬇️ with pressing application deadlines.

— The EA Newsletter Team
 

Articles
 

 

Is deworming an effective intervention? 

 
Intestinal worms affect over 1.5 billion people, causing health issues and appearing to reduce lifetime income. Thankfully, deworming pills are cheap, and deworming has been one of GiveWell’s priority programs for the last 15 years. Not everyone agrees that deworming should be a top priority, though. A recent analysis disagrees with GiveWell, studies haven’t replicated the effect of intestinal worms on life outcomes, and GiveWell’s own report on deworming is uncertain about its effects. What’s going on?

A recent article in Vox’s Future Perfect argues that “the logic for deworming currently goes: whether combating intestinal parasites has a large effect on long-term health and therefore on life outcomes isn’t yet clear. But since it’s so cheap and might have those huge benefits, it’s a good use of money in the crowded field of ways to improve the lives of those in extreme poverty even if it’s reasonably likely to not work as well as projected.” 

The Vox article also emphasizes that we “can’t dodge acting under uncertainty: [...] to do as much good as possible, sometimes you have to act on mixed and limited evidence.”


The surprising alliance between AI safety and AI capabilities work 

 
AI capabilities researchers — people making powerful artificial intelligence happen sooner — and AI safety researchers — who believe AI is one of the greatest risks to the future of humanity and work on making it safer — could be considered natural enemies, analogous to oil companies and environmental activists. But the reality is different; the two groups mostly act as if they’re part of one large community. A recent blog post explores why this is the case and whether the current approach (alliance and cross-pollination) is a mistake. 

The post doesn’t take a strong stance on the question, but it lays out several reasons why it could be risky for AI safety proponents to make enemies of AI capabilities researchers:
  • A number of the best AI companies have safety teams (that aren’t just for show) 
  • We wouldn’t want to hinder the most conscientious companies differentially, giving more careless companies an edge in AI progress
  • Cooperation can make it much easier to pass certain kinds of safety regulation
There’s recently been more writing on AI safety, including a poll of ML researchers that produced an aggregate forecast of a 50% chance of human-level AI in the next 37 years, a post on the dangers of AI, and an article exploring why AI safety researchers and AI ethicists seem divided.


How does moral progress happen? The decline of footbinding as a case study.

 
Can a civilization realize that a practice is morally wrong, and end the practice? Or will changes like this only happen when they’re profitable to someone in power?

In a recent post, Rose Hadshar uses the decline of footbinding — an extremely painful custom of tightly compressing young girls’ feet to stop them from growing, which often broke bones — in 20th-century China as a case study. She notes that there was a series of moral campaigns against the practice, and even a decree forbidding it, but it’s not clear that the decree was enforced or that people in rural areas were exposed to the anti-binding arguments. Meanwhile, foreign imports and the growth of factories reduced the value of handwork, meaning girls who had undergone footbinding would be less productive in the future; it thus became economically advantageous to leave girls’ feet unbound.

The post concludes that while the campaigns may have been very successful, the practice would probably have ended anyway within years or decades for economic reasons. This suggests that moral campaigns might be most important — though likely also hardest to accomplish — when they push for something that isn’t already incentivized by new developments in the economy.
 


In other news

For more stories, try these email newsletters and podcasts.
 

Resources

Links we share every time — they're just that good!

Jobs 

Boards and resources:
  • The 80,000 Hours Job Board features more than 700 positions. We can’t fit them all in the newsletter, so you can check them out there.
  • You can see more positions in the EA Job Postings group on Facebook, and on the "Who's hiring?" thread on the Effective Altruism Forum. 
  • The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more — including part-time and entry-level job opportunities.
  • If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

Applications due soon


80,000 Hours
  • Marketer (London, apply by 23 August)
Family Empowerment Media

Open Philanthropy

Operations Team at the Centre for Effective Altruism

Rethink Priorities

Other Positions


Berkeley Existential Risk Initiative is hiring for a Deputy Director (NYC preferred, remote possible)

Charity Entrepreneurship is hiring a Research Analyst (London or remote)

Founders Pledge

GiveWell

Momentum is hiring a Designer (San Francisco)

Announcements 
 

 

Open Philanthropy Technology Policy Fellowship


Applications are open for the Open Philanthropy Technology Policy Fellowship, which supports people interested in shaping the safer development of new technologies by providing mentorship and training and by helping them find work in government-affiliated offices or think tanks in the US.

Apply by 15 September.
 

 

Career advising for mid-career professionals 


Rethink Priorities invites mid-career professionals who are sympathetic to ideas from effective altruism but haven’t seriously tried doing work motivated by those ideas to request free 1-1 career advice via the EA Pathfinder site.
 

 

New course: Introduction to ML Safety


The Center for AI Safety has announced a new course, Introduction to ML Safety, which is freely available online and is aimed at anyone with a background in ML and even a minor interest in AI safety.

This complements the existing AGI Safety Fundamentals course run by Effective Altruism Cambridge, which is a broader and more conceptual course (with two tracks: alignment and governance).
 

 

Organizational Updates

You can see updates from a wide range of organizations on the Effective Altruism Forum.
 

Timeless Classic
 
A revamped classic from Our World in Data examines statistics about child mortality over time to argue the following:

The world is awful. The world is much better. The world can be much better. All three statements are true at the same time. [...]

If we want more people to dedicate their energy and money to making the world a better place then we should make it much more widely known that it is possible to make the world a better place. 
 
We hope you found this edition useful!

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

Finally, if you have feedback for us, positive or negative, let us know!

– The Effective Altruism Newsletter Team
Click here to access the full EA Newsletter archive
A community project of the Centre for Effective Altruism, a registered charity in England and Wales (Charity Number 1149828) – Centre for Effective Altruism, Trajan House, Mill Street, Oxford OX2 0DJ, United Kingdom
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.