Hello!
Our favorite links this month include:
We’re also sharing links to upcoming conferences (such as EAGx Toronto, with an application deadline of 31 July), new jobs (such as Communications Specialist at Rethink Priorities, with a deadline of 11 August), and many more great links and opportunities. Apologies for the late newsletter; I (Toby) was on holiday.
— Toby, for the EA Newsletter Team
Articles
Back to The Precipice
The Precipice, Toby Ord’s book on existential risks, was published five years ago. How have the risks changed since then? In a recent talk (with transcript), Ord explains several updates to his thinking, including:
- On AI: Ord argues that the current paradigm for AI — generative-AI models trained on human writing — might give us reason to expect AI to stick around at a roughly human level of competence for some time. This would give us longer to prepare for a world with advanced AI. However, the dynamics of profit-seeking competition emerging between Meta, Google, and Microsoft could lead to neglect of safety considerations.
- On biorisk: Ord was dismayed by how quickly partisan politics replaced global collaboration during the Covid pandemic. This bodes ill for future, worse pandemics. On the other hand, there have been advances in metagenomic sequencing, a technology that can help us detect new diseases (including genetically engineered ones), and in air quality improvement, which is more important for public health than previously thought.
- More awareness: The world shows signs of taking existential risks more seriously. Since Ord published The Precipice, the UN has ranked existential risks as a priority in its Our Common Agenda report, and the Elders, an organization of influential world leaders, is now focusing on existential risks.
Books are static, but the world isn’t. If we want to reduce existential risks, we need to be receptive to changing circumstances and evidence. If you’re interested in staying up to date with the existential risk landscape, check out these podcasts and newsletters.
How to catch a lab leak
In 1979, there was an anthrax outbreak in the USSR. Reports in Russian journals blamed it on tainted meat. It took decades to prove otherwise: that hundreds of illnesses and deaths were caused by a leak of aerosolized anthrax from a secret bioweapons laboratory.
In an episode of the Statecraft podcast, Matthew Meselson (who was instrumental in the US banning biological weapons — but that’s another story) lays out how he, his wife Jeanne Guillemin (an anthropologist), and other collaborators, proved that there was a lab leak.
Meselson first suspected a lab leak when he noticed that the locations of the reported anthrax illnesses sat on a suspiciously straight line on the map: a pattern more indicative of an airborne release than of the distribution of tainted meat. He spent years leveraging personal and professional networks to get into the USSR and investigate more thoroughly. When he got the opportunity, Jeanne Guillemin conducted extensive interviews with the outbreak’s victims.
This isn’t a one-off story. Lab leaks are tragically common. If you’re interested in reducing the risks from future pandemics, including man-made ones, check out 80,000 Hours’ biorisk career review.
Shutting down
In a public post, Mathias Kirk Bonde, co-founder of the Center for Effective Aid Policy (CEAP), explains the difficult decision to shut down the charity.
Almost $200 billion was spent on foreign aid in 2021. A very small portion of it was spent on effective global health programs. Charity Entrepreneurship, which incubated CEAP, tentatively estimated that a campaign to influence aid for the better “could expect to have an influence equivalent to directing $300m of additional funds to the world's poorest”, for a cost of only $1.5m.
But CEAP ran into problems:
- Most aid is already allocated according to political and diplomatic priorities, and many charities compete over the small portion that remains. CEAP didn’t have a sufficient edge to stand out.
- They weren’t able to find politicians who were excited about more effective aid.
- The founders didn’t feel they had strong personal fit.
Aid is still an important issue, and a charity aiming to do the same thing could yet succeed. But it’s important to celebrate charities that shut down when they realize they aren’t achieving their goals.
Effective altruism has a history of charities which took their self-analysis seriously and shut down, freeing up money and talent to move to more effective interventions (see Alvea and the Maternal Health Initiative). It’s often a personally difficult decision to make, but it shows real commitment to the goal of doing good.
In other news
For more stories, try these email newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs
Boards and resources:
Selection of jobs
80,000 Hours
Animal Charity Evaluators
Bluedot
Epoch AI
Faunalytics
Founders Pledge
- Many roles, including: Research Assistant - Global Catastrophic Risks (Remote, Part-time, $20 – $30 per hour), Growth Lead - EMEA (Hybrid in London, UK, £60,000 – £70,000), and Research Communicator (Remote in UK, Germany, or US, $60,000 – $80,000 / £40,000 – £60,000 / €40,000 – €60,000)
GiveWell
- Several roles, including: Senior Accountant (Remote in US, $114,800 – $126,600), Manager, Talent Acquisition (Remote in US, $143,200 – $157,900), and Head of Fundraising Operations and Analytics (Remote, $166,200 – $183,300)
The Good Food Institute
Our World in Data
Rethink Priorities
The School for Moral Ambition
- CEO (New York City or Washington, DC, $200,000 – $250,000, apply by 17 August)
Announcements
Fellowships, internships, and courses
- The Center for Reducing Suffering is offering a free six-week online fellowship for people interested in reducing s-risks (risks of astronomical suffering). The fellowship will begin on September 2. Apply by 31 July.
- Bluedot Impact will be holding a biosecurity fundamentals course from September to December. Learn about the technical and policy efforts to prevent, detect, and respond to catastrophic pandemics. Apply by 15 September.
- Spend 3 months this winter working on an AI governance research project with GovAI. Apply by 11 August.
- The Horizon Fellowship funds fellows to work on policy challenges relating to AI, biotechnology, and other emerging technologies at host institutions in Washington, DC. Apply by 30 August for their 2025 cohort.
Conferences and events
- Apply to EAGx Toronto (16-18 August) by 31 July. There will also be EAGx events in Berkeley (7-8 September, apply by 20 August), Berlin (13-15 September, apply by 28 August), Bengaluru (19-20 October), and Sydney (22-24 November).
- The final EAG of the year will be held in Boston (1-3 November, apply by 20 October).
Funding and prizes
- Founders Pledge is now accepting requests for funding. They are particularly interested in work related to global health and development, global catastrophic risk, climate, and animal welfare, and generally make grants of between $50K and $300K (though larger grants are also possible).
- The Future of Life Institute is offering up to $4m to support projects which work to mitigate the dangers of AI-driven power concentration. Find out more and apply for funding here. The applications will be reviewed after 30 July, and again after 15 September.
Organizational Updates
See this month's updates from a wide range of organizations on the EA Forum.
Timeless Classic: How we fixed the ozone layer
In 1985, British Antarctic Survey scientists published a paper in Nature showing that a hole had developed in the ozone layer above Antarctica, prompting public alarm and pressure. In “How we fixed the ozone layer,” scientist Hannah Ritchie explains how industry and intergovernmental actors collaborated remarkably quickly to avoid further damage. Today, the products responsible for ozone depletion have been almost entirely phased out.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have any feedback for us, positive or negative, let us know!
– The Effective Altruism Newsletter Team