Hello!
Our favourite links this month include:
Applications are open until March 23rd for the EA Student Summit in London, a conference for students interested in effective altruism. We’re also linking to jobs such as Policy and Operations Strategist at the Centre for Long-Term Resilience and Head of Operations at Kairos (a new AI safety field-building organisation). And, as always, many more great articles and opportunities.
— Toby, for the EA Newsletter Team
Articles
Yes, Shrimp Matter
“And on that farm he had a shrimp, E-I-E-I-O” is not a particularly common verse in Old MacDonald. But on the stats alone, it should be: 51% of the animals being farmed right now are shrimp.
In Yes, Shrimp Matter, Andrés Jiménez Zorrilla, CEO of the Shrimp Welfare Project, makes the case for the importance of shrimp welfare (and why he left a career in private equity to work on it). In short:
Farmed shrimp are extremely numerous: 440 billion are slaughtered annually.
Shrimp may be conscious. Zorrilla writes that Rethink Priorities’ Welfare Range Estimates suggest that “shrimp’s capacity for improved welfare is substantial when compared to octopuses, and even pigs”.
Their welfare could be greatly improved. Shrimp farming involves a range of practices that appear brutal if shrimp are conscious, notably: eyestalk ablation (removing eye stalks from female shrimp without anaesthetic in the mistaken belief it will lead to more breeding), and ice slurry stunning (which may paralyse rather than stun, so shrimp die from asphyxiation and crushing).
We can help. Practical solutions exist. Eight electrical stunning machines improve the slaughter conditions for 2.5 billion shrimp annually, and the Shrimp Welfare Project is buying more. Simple pond maintenance reduces toxic buildup. Major retailers like Tesco, Sainsbury’s and Waitrose are implementing welfare policies to reduce eyestalk ablation in their supply chains. Continued advocacy and funding can help billions of shrimp.
You can donate to the Shrimp Welfare Project here.
Can we handle a century in a decade?
Imagine all the scientific and technological developments which occurred between 1925 and 2025 — the atom bomb, the structure of DNA, CRISPR, the internet — compressed into one decade. Then imagine that governance, culture and civil society didn’t advance at the same rate. This is the scenario that Fin Moorhouse and Will MacAskill of Forethought, a new AI macrostrategy research group, predict we may face very soon.
Their new paper makes three key points:
AI progress is blisteringly fast: AI cognitive labour is “growing more than 500x faster than total human cognitive labour”. AI cognitive labour (the amount of research it can do) is still well below the amount coming from human labour. But if it continues growing at its current rate, it’ll soon surpass humans. Moorhouse and MacAskill think that this “intelligence explosion” could lead to a century’s worth of technological development in a decade. The next decade.
We face grand challenges: A world where AI, even AI that humans control, rapidly speeds up the rate of technological development would face “grand challenges” as a result. These could include defending against highly destructive technologies such as armies of drones, dealing with the political ramifications of radically concentrated wealth, or protecting our collective knowledge against disruption caused by super-persuasive AI systems.
We can (and should) prepare for these challenges now: AI safety is often viewed as an all-or-nothing problem — if we stay in control of advanced AI systems, all will be well. But this might not be the case. If we develop powerful technologies without commensurate plans to govern their use, we could cause catastrophe. Today, we can shape laws around autonomous weapons, making it harder to build drone armies in the future; establish norms around space governance, discouraging laissez-faire colonisation of space; and prepare AI companies and the public for the possible sentience of digital minds, guarding against possible harms to them.
To learn more about these challenges, and the work now being done on them, read the paper or listen to Will MacAskill’s interview on the 80,000 Hours podcast, where he discusses these issues at length.
Truths about Foreign Aid
Foreign aid is under threat. Last month, we discussed the effects of the US foreign aid freeze on the highly effective PEPFAR program. Since then, UK Prime Minister Keir Starmer has announced that UK foreign aid will be cut to just 0.3% of Gross National Income (GNI) by 2027.
With classically astute timing, Our World in Data published a great piece outlining some key facts about foreign aid. Notably:
- More than 95% of foreign aid comes from governments rather than private donors (here, “private donors” means organisations rather than individuals like you and me).
- The UN’s foreign aid target for developed countries is 0.7% of GNI. Only five developed countries hit that target in 2023; foreign aid would almost double if every developed country did.
- Surveys suggest that people massively overestimate the portion of the budget that governments spend on foreign aid. A 2015 survey of Americans found an average estimate of 31%, whereas the real figure was under 1%. Yet, when asked how much should go to foreign aid, the average answer was 10%.
If you’d like to help counteract the US foreign aid pause, donate to the rapid response fund, a collaboration between Founders Pledge and The Life You Can Save, which aims to fill critical funding gaps faced by effective global health, extreme poverty and humanitarian programmes.
In other news
- Anima International pivoted from promoting vegetarian options to corporate cage-free campaigns when they realised the vegetarian campaigns’ impact was far lower than they expected. In this piece, they explain how the transition came about.
- Open Philanthropy highlights some of their progress in 2024, including projects that have featured in this newsletter — the Lead Exposure Action Fund, which has directed millions towards lead exposure reduction, and the Mirror Biology Dialogues Fund, which led to a widely publicised report on the risks associated with developing mirror biology.
- Basic science research can seem silly in isolation: making frogs levitate doesn’t sound particularly practical. But it’s among the best investments a government can make. Deena Mousa and Lauren Gilbert explain why.
- Blogger Ozy Brennan writes about one aspect of effective altruism — a commitment to moral circle expansion.
- Author and YouTuber John Green is interviewed by Vox Future Perfect about the effect of the foreign aid cuts on tuberculosis.
- On AI:
For more stories, try these email newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs
Boards and resources:
Selection of jobs
Centre for Long-Term Resilience
TamperSec
Institute for Law and AI
Evidence Action
EAGxBerlin 2025
- Various Roles (Remote / In-person in Berlin, €25–€35/hour, apply by April 27th)
GiveWell
The Humane League
Kairos
Open Philanthropy
Rethink Priorities
The Good Food Institute
- Chief of Staff (Hybrid in Mumbai, Delhi, or Bangalore, ₹34–38 lakh, apply by March 21st).
Announcements
Fellowships and Internships
- Applications for the Charity Entrepreneurship Incubation Program (CEIP) August 2025 cohorts close on March 24th. Read about the recommended charity ideas for this cohort here, and find out more about the program here.
- The Pivotal Research Fellowship is a 9-week program that provides mentorship, a £5,000 stipend, and more for promising researchers in AI safety, AI governance, and biosecurity. Apply for the June 30th to August 29th program by April 9th.
Conferences and Events
Funding
- Meta Charity Funders: the 2025 Spring Round is now open — apply for funding by March 30th or join as a donor. Priority areas include impact advising, effective giving, and movement-building in LMICs. Grants will be awarded from May 26th.
Organizational Updates
You can see updates from a wide range of organizations on the EA Forum.
Timeless Classic
The David Chalmers episode of the 80,000 Hours podcast is a classic, almost 5-hour-long deep dive into the philosophy of mind, whether philosophy makes progress, careers in academia and much more.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have any feedback for us, positive or negative, let us know!
– Toby, for the Effective Altruism Newsletter Team