Hello!

Our favorite links this month include:

— Lizka (for the EA Newsletter Team)

Articles

Why unconventional approaches to climate change can be more effective

 
On the 80,000 Hours podcast, Johannes Ackva discusses promising climate strategies and how effective approaches can diverge from popular ones. Two highlights: 
  • Interventions can look a lot more — or less — effective when you evaluate them on a global scale. For example, some groups in Switzerland are advocating for thorough insulation of all homes to make them more energy-efficient. This would reduce emissions in Switzerland, but it would have a small impact globally because most emission growth is in countries where insulation isn’t the problem. Conversely, Germany’s investments in solar power in the 2000s might have seemed ineffective — solar panels were very expensive and Germany isn’t particularly sunny — but in part because of this push and the resulting innovation, solar power today is much cheaper and is deployable on a global scale.
  • We should accelerate the development of new clean energy technologies. Today's most promising options may fall short, and a diverse portfolio of backup approaches, like carbon capture and storage or modular nuclear reactors, would provide crucial insurance if they do.
The episode covers many other considerations. (Ackva's team is hiring.)

In a different piece on climate change, "We need the right kind of climate optimism," Hannah Ritchie explains that we need to avoid both paralysis from doomy pessimism and complacency from naive optimism.

What should we do about risks from AI?

 
Concerns about the risk of catastrophe from artificial intelligence have become much more mainstream. The Financial Times, for instance, recently published “We must slow down the race to God-like AI” (paywalled) in response to a letter signed by thousands of people (including Elon Musk and Apple co-founder Steve Wozniak) calling for a 6-month pause in training powerful AI systems. But the costs and benefits of different proposals are hard to weigh, and many people are struggling to understand what to do.

Planned Obsolescence, a new blog by Kelsey Piper (Vox) and Ajeya Cotra (Open Philanthropy), aims to help readers prepare for the challenges posed by AI.

One post on slowing AI progress raises some key questions: 
  • Is it better to ask for evaluations — ongoing audits on whether systems are dangerous — instead of a pause? 
  • Is a 6-month pause too short? Is it even the right thing to ask for? A more continuous and iterative approach might be better. (See more.)
  • Will a moratorium like this backfire by worsening competitive dynamics? 
Given the uncertainty, what can all of us do? Stay informed, advocate for alignment, support people working on safety approaches, and use our skills and resources to work on the problem if we can (explore resources for upskilling).
 

The past, present, and future of child mortality

 
An expanded article from Our World in Data explains that until very recently in human history, almost half of all children died before the end of puberty. Today, global child mortality is around 4%. This still means that thousands of children die every day — far too many, but so much better than it used to be. 

Understanding this history — as well as the direct causes of child mortality — can help us continue to make progress.

In other news

For more stories, try these email newsletters and podcasts.
 

Resources

Links we share every time — they're just that good!

Jobs

Boards and resources:


Specific job listings

 
  • Fish Welfare Initiative
  • Founders Pledge
  • GiveWell
  • Anthropic
  • Centre for the Governance of AI (GovAI)

Announcements & opportunities

See also the EA Opportunity Board and AI Safety Training.
  • Research programs on AI safety and policy
    • The SERI ML Alignment Theory Scholars Program (SERI MATS) is accepting applications for its Summer 2023 Cohort. Participants get training and mentorship for work in AI alignment via seminars and independent research with a mentor in Berkeley, CA. It also helps participants connect with alignment researchers and institutions. The program is ideal for those who have an understanding of the AI alignment research landscape and previous experience with technical research. There is generally a stipend. Apply by 7 May.
    • The policy think tank RAND is looking for Technology and Security Policy Fellows interested in policy research on the governance of artificial intelligence. Candidates at all experience levels are welcome to apply. The fellowship will typically run for a year, though sometimes up to three. It can be full-time or part-time, remote or based at any of RAND’s locations.
  • Broad educational & scholarship programs
    • The Atlas Fellowship for pre-college students (aged 15–19) offers a $10,000 scholarship and a free, in-person summer program in the San Francisco Bay Area that focuses on topics like forecasting, global poverty, and the future of artificial intelligence. Apply by 30 April.
    • A new round of EA Virtual Programs is happening from May 8–July 2, including the 8-week Intro Program, In-Depth Program, and The Precipice Reading Group. Apply by 23 April.
  • Conferences and workshops
    • You can apply to EA Global: London (19–21 May) — people who have been accepted to EA Global: Bay Area can register directly. Apply by 5 May.
    • The Global Challenges Project is running two intensive 3-day workshops that people can express interest in — one in Oxford on 26–29 May, and one in Berkeley on 30 June – 3 July. The workshops are for students who want to think seriously about existential risk, and they’re free to attend, with travel support available.
Organizational Updates

You can see updates from a wide range of organizations on the EA Forum.
 

Timeless classic on the timing of work aimed at reducing existential risk
 
Is it better to work on risks close to when they would occur, or to get started as soon as possible?

In an analysis from 2014 (and a recent Twitter thread), Toby Ord explores the timing of different kinds of work on reducing risks, and notes some relevant factors: 
  • Nearsightedness: the further away a risk is, the more uncertain we are about it, meaning that our efforts could be misguided. 
  • Course setting: it is harder to redirect a big effort later on, and it can make sense to spend a lot of resources early to lay the groundwork that usefully directs later work. 
  • Self-improvement: skill-building or other lasting improvements to your capacities that require only a small amount of upkeep are useful to work on early. 
  • Growth (movement-building): early efforts can significantly increase the resources that are available to work on the problem when it is looming.
We hope you found this edition useful!

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

Finally, if you have feedback for us, positive or negative, let us know!

– The Effective Altruism Newsletter Team
Click here to access the full EA Newsletter archive
This newsletter is run by the Centre for Effective Altruism, a project of Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organization in the USA, EIN 47-1988398), two separate legal entities which work together.
 
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.