Hello!
Our favourite links this month include:
Also, the EA Forum is hosting a 'career conversations week' next week, with writing on career paths, career changes, and an Ask A Career Expert Anything event running throughout the week. Learn more here. We're also sharing multiple global health and development research roles at Rethink Priorities, High Impact Medicine's free 6-week career planning course, and much more.
— Toby, for the EA Newsletter Team
Click on the image to go and vote on the poll!
Articles
What's Gavi, and why is it in trouble?
Since 2000, Gavi has protected 1.1 billion children with vaccinations and averted nearly 19 million future deaths. Its massive role in the vaccine market means it has been able to bargain the price of a child’s full immunisation schedule down to 1/50th of the US price. Gavi is funded by wealthy governments, the World Bank, the Gates Foundation, and, increasingly, the governments receiving the vaccines.
Now the United States is withholding funding. Previously Gavi’s third‑largest donor, the U.S. supplied roughly $300 million a year — until Health Secretary Robert F. Kennedy Jr. declared on 25 June that no further money would be forthcoming, citing unproven safety fears.
In a piece for Vox, Kelsey Piper warns that this could endanger a million children if donors don’t close the gap quickly.
If you’d like to help shore up the shortfall, Gavi accepts individual donations.
Who won the AI moratorium fight?
Early versions of President Trump’s budget bill, which passed last week, included a clause that would have blocked states from writing any AI laws for a decade, ruling out significant state-level action against extreme risks from AI. The clause had support from big tech, including the venture capital firm a16z and Meta, but multiple lobbying groups and Republican senators were staunchly opposed.
The fight ended at 4 am on 1 July when Tennessee Republican Marsha Blackburn — after briefly backing a watered‑down five‑year version — joined an amendment to delete the moratorium. The Senate passed it 99‑1.
Was that an AI‑safety triumph? Not quite. The coalition that killed the moratorium wasn’t united in its concerns: key senators wanted the moratorium gone so that states could pass child-safety laws, while others were concerned about creators’ likenesses being used without permission. Regulation against extreme risks from AI barely featured. As Transformer’s Shakeel Hashim notes, “If the moratorium was simply on SB‑1047‑style regulation focused on tackling extreme risks, I bet it would have passed.”
Can we be confident about cause prioritisation?
A key part of effective altruism career advice is that the cause you choose to work on is the largest determinant of your overall impact. An extremely effective advocate for pet welfare might still have much less positive impact than a middling advocate for the welfare of factory-farmed animals, who suffer in much, much greater numbers.
But as Marcus A. Davis, CEO of Rethink Priorities, argues in a recent Forum post (Substack summary here), we don’t know enough to be confident about our cause prioritisation. Questions like “Is death in itself bad, or just suffering?”, “How should we prioritise between averting human and animal suffering?” and “Should we discount the value of future events at all?” are philosophical, and philosophical methods are (to put it lightly) not as robust as those used in other domains. Different answers to these questions lead to different cause prioritisations.
Davis emphasises that these issues don’t affect the reality that, within a particular cause, some interventions are much more effective than others. For example, we know that if you want to save lives, the Against Malaria Foundation is better than almost all global health charities. But we shouldn’t be confident that we know which cause is the correct focus.
If Davis is right, this makes cause prioritisation and career choice much harder to do rationally. But is he? I've set up a debate-poll on the EA Forum where you can discuss this right now.
In other news
For more stories, try these email newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs
Boards and resources:
Selection of jobs
AI Security Institute (UK Gov)
Animal Equality
Cooperative AI Foundation
- Programme Manager (Remote (UK-based or UK-contract-eligible), GBP £47K–£52K, apply by August 3rd)
Evidence Action
GiveWell
High Impact Professionals
Impact Ops
- Systems Engineer (Remote (preferably EU time zones), GBP £45K–£55K, apply by July 20th)
- Systems Admin (Remote (preferably EU time zones), GBP £35K–£40K, apply by July 20th)
Open Philanthropy
- Program Associate/Senior Program Associate, Abundance and Growth (Generalist Track) (Remote (US Central Time) / Washington, D.C., USD $126K–$172K, apply by July 27th)
- Senior Program Associate/Program Officer, Abundance and Growth (Specialist Track) (Remote (US Central Time) / Washington, D.C., USD $172K–$210K, apply by July 27th)
- Salesforce Solutions Architect and Senior Administrator (Remote (US Eastern time overlap required), USD $181.7K, apply by July 16th)
- Operations Coordinator/Associate in San Francisco or Washington D.C. (San Francisco, CA / Washington, D.C., USD $102.3K–$126.7K)
Rethink Priorities
Announcements
Events and Conferences
- Tickets go on sale tomorrow for the Effective Altruism Summit Vancouver. Find out more here.
- Anima International is organising the Norwegian Animal Advocacy Conference (Dyrevernkonferansen), which will be held in Norwegian and English on 25–26 October in Oslo.
Fellowships and Courses
- Hi-Med’s free 6-week virtual Career Planning Course helps medical students and doctors clarify high-impact paths, set goals, and make career decisions. Open to applicants globally, this unpaid fellowship runs August–September 2025; applications close July 19.
- Apply to mentor in the Fall 2025 round of SPAR, a remote, part-time programme for AI safety and governance research running September 15 – December 20. Open to experienced researchers, this unpaid role has application deadlines of July 20 for mentors and August 20 for mentees.
Prizes and Funding
- The 2025 Berggruen Prize Essay Competition invites original essays on the philosophy of consciousness, with $50,000 awards for English-language and Chinese-language entries. Open to all disciplines, the competition is run online and closes July 31.
- Longview Philanthropy just launched a $2–10M RFP for hardware-enabled mechanisms for AI verification. They’re looking for proposals to red-team new and existing hardware security mechanisms, designs and prototypes of hardware-enabled mechanisms to verify chips’ locations, and more. Submit an expression of interest as soon as possible.
Organizational Updates
You can see updates from a wide range of organisations on the EA Forum.
Timeless Classic
Since we were discussing cause prioritisation in this newsletter, here’s a classic blog post from Holden Karnofsky, written before GiveWell Labs spun off to become Open Philanthropy. In it, he makes the case that strategic cause selection is neglected in philanthropy, and there might just be a lot of impact to be had by doing it… In my view, the last decade showed that he was right.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have any feedback for us, positive or negative, let us know!
– Toby, for the Effective Altruism Newsletter Team