Our favorite links this month include: — Lizka (for the EA Newsletter Team)



An unexpected win for animal welfare

In an unexpected ruling against the pork industry, the US Supreme Court upheld an important animal welfare law. California’s “Proposition 12” bans the sale of some pork products that come from farms where sows are kept in extremely small “gestation crates.” The law was passed via a 2018 ballot measure that was approved by over 62% of Californian voters. 

Proposition 12 doesn’t solve pig welfare problems in California — even setting aside the fact that it doesn't cover many pork products. But it’s an important step in letting voters decide that a widespread practice is unacceptable in their state. The outcome was far from guaranteed; the Supreme Court upheld the law by a narrow majority.

If you’re interested in working on farmed animal welfare, you might want to look at the variety of open positions at Legal Impact for Chickens, Mercy for Animals, and The Humane League — and roles like a Regulatory Attorney position at GFI. And you can always donate to effective animal charities.

How releasing billions of modified mosquitos might help fight dengue fever

Mosquitos spread several deadly diseases, including malaria, dengue fever, and Zika. To fight the problem, we can roll out vaccines or treat those affected — or we can try to guard against the mosquitos themselves by doing things like distributing (insecticide-treated) mosquito nets and stopping mosquitos from reproducing.

The World Mosquito Program (WMP) is coming at the mosquito problem from another angle; it plans to build a mosquito farm in Brazil to start releasing modified mosquitos that can’t spread certain viruses. The mosquitos will carry the bacterium Wolbachia, which should prevent the insects from transmitting viruses like dengue, Zika, and yellow fever. The farmed mosquitos will then spread Wolbachia into the wild mosquito population. WMP has run trials; it reports that one project led to a 77% reduction in confirmed dengue cases in the affected area.

I don’t know if this program is especially cost-effective. Though dengue fever is probably more neglected than diseases like malaria, it also affects fewer people. But it’s still inspiring to see the range of ways that diseases can be fought. (GiveWell, which analyzes the cost-effectiveness of different projects and interventions in global health, is hiring for a number of roles.)

How should we regulate artificial intelligence?

A year ago, the idea of out-of-control AI might have sounded a bit like science fiction, and people worried about catastrophic risks from AI thought that getting public interest in regulation would be difficult. But things have changed. Awareness and interest in regulation are growing, and governments are responding. In the US, Sam Altman (CEO of OpenAI) testified before the Senate today and pushed for safety-oriented regulation, and earlier the White House met with Altman and other AI CEOs to talk about potential dangers. And in the EU, a proposed AI Act would classify and regulate AI systems based on their risk levels.

Understanding which regulations would be most effective is harder. Luke Muehlhauser, a senior program officer at Open Philanthropy, recently suggested 12 tentative ideas for US AI policy. These include tracking and licensing big clusters of cutting-edge chips, requiring that frontier AI models follow stringent information security protections, and subjecting powerful models to testing and evaluation by independent auditors. It’s helpful to understand what strategies can look like, but more research and work are required before the ideas can be implemented.

If you’re interested in working on AI governance, you might consider exploring opportunities here or applying to work at ARC Evals, GovAI, or Rethink Priorities. See also Richard Ngo’s AGI safety career advice, and 80,000 Hours on why information security could be a highly impactful career path.

In other news

For more stories, try these email newsletters and podcasts. There are also newsletters and podcasts about AI safety.


Links we share every time — they're just that good!


Boards and resources:
  • The 80,000 Hours Job Board features almost 700 positions. We can’t fit them all in the newsletter, so you can check them out there.
  • The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more — including part-time and entry-level job opportunities.
  • You can see more positions in the EA Job Postings group on Facebook.
  • If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

Assorted jobs

  • Centre for the Governance of AI (GovAI)
  • Founders Pledge
  • Giving What We Can
  • Happier Lives Institute
  • Open Philanthropy
  • Rethink Priorities

Contest on AI considerations

The Open Philanthropy AI Worldviews Contest plans to distribute $225,000 in prize money across six winning entries. They’re still looking for novel considerations that might influence Open Philanthropy’s views on AI timelines and AI risk. Submit entries by 31 May.


Virtual fellowships and courses

  • Virtual programs on EA topics: A new round of EA Virtual Programs will run from 5 June - 30 July. These free 8-week courses cover topics in effective altruism and require around a 3-hour commitment per week. They include the Introductory Program, the In-Depth Program, and the Precipice Reading Group. Apply by 21 May.
  • Fellowship for pre-university students: Pre-university students are invited to apply to Non-trivial’s online fellowship, happening from 10 July - 2 September. Fellows get expert guidance over 8 weeks to start an impactful research or entrepreneurial project, and there’s $30,000 in funding available for particularly promising projects. Apply by 11 June.
  • ML safety summer course: Applications are open for the Introduction to ML Safety summer course, an 8-week virtual course running from 12 June - 14 August. The course is designed for people with ML backgrounds who want to get into empirical research careers focused on AI safety. Participants are expected to commit 5-10 hours per week to the program and will receive a $500 stipend upon completion. Apply by 22 May.



  • Applications are open for two EA conferences:
  • The Global Priorities Institute is hosting the 2023 Memorial Lectures, a series of free events open to the public (in-person in Oxford or remote). Registration is required. The Atkinson Memorial Lecture is on 9 June and the Parfit Memorial Lecture will follow on 13 June.
  • Peter Singer’s Animal Liberation Now has been updated and will be available starting 23 May.

Organizational Updates

You can see updates from a wide range of organizations on the EA Forum.

Timeless classic on uncertainty and resilience

80,000 Hours recently shared a post urging readers to deliberately investigate their career options but avoid delaying career decisions too much in an attempt to avoid uncertainty: “aim for a stable best guess, not confidence.” The post draws heavily on earlier writing by Gregory Lewis — most importantly: “Terminate deliberation based on resilience, not certainty.”

We hope you found this edition useful! If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

Finally, we'd love more feedback — positive or negative.

– The Effective Altruism Newsletter Team
Click here to access the full EA Newsletter archive
This newsletter is run by the Centre for Effective Altruism, a project of Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organization in the USA, EIN 47-1988398), two separate legal entities which work together.
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.