This month, we're featuring articles on AI sentience and risk, the history of economic growth, and cautionary tales from the nuclear arms race. – The EA Newsletter Team

🔗 Articles

AI sentience and risks from AI

A Google software engineer recently publicized a transcript of an eerie conversation he had with LaMDA, a chatbot AI. The AI’s sophisticated responses convinced him that it was sentient. 

Dylan Matthews took this as an opportunity to discuss broader questions about how we should think about artificial intelligence and sentience. Matthews notes: 

AI safety folks don’t worry that AI will become sentient. They worry it will become so powerful that it could destroy the world.

The article describes how an AI might become extremely dangerous, and discusses the relationship between power and sentience. Matthews also talks about some of the problems that experts face when they write about AI for the public.

Holden Karnofsky published a more technical post on how “AI Could Defeat All of Us Combined.”

Why did the world start getting rich 200 years ago? 

Vox recently ran a discussion with the two authors of How the World Became Rich, Mark Koyama and Jared Rubin. They address: 
  • What were the living conditions of most people before the 20th century? 
  • What caused the sudden surge in economic growth 200 years ago?
  • Why did some countries get rich while others didn’t? What role did colonization play in that? 
Most people used to live in what we would now describe as extreme poverty, and this was reflected in average lifespans, huge rates of child mortality, and widespread malnutrition. The authors argue that sustained technological innovation allowed humanity to improve these living conditions by overcoming what’s known as the “Malthusian trap” – the theory that population growth will always outpace agricultural production and lead to cycles of conflict, poverty, and famine.

The article also notes some key uncertainties, and areas where different historians and economists disagree. One of these is: will economic growth continue?

A review of How the World Became Rich goes into more detail.

Cautionary tales from the history of nuclear weapons

The development of nuclear weapons opened a new age for humanity; we gained the ability to destroy ourselves. In a new post, Haydn Belfield asks: “Why did nuclear scientists like Szilárd, who kept the chain reaction secret and opposed nuclear weapons for decades after the war, advocate for and participate in [the nuclear sprint]?” 

The answer: Ellsberg, Szilárd, Einstein, and many other brilliant scientists believed they were in a competitive race against the Nazis. This belief was wrong; the Nazis were nowhere near developing nuclear weapons.

Some of the key scientists involved with the project have described it as the greatest mistake of their lives. Ellsberg, who participated in a different arms race, has shared a lot of behind-the-scenes information on the “institutional insanity” of nuclear weapons.

In his post, Belfield argues that: 
  • The Manhattan Project — the key project in the nuclear sprint — sped up the advent of nuclear weapons by “perhaps a decade.” Moreover, the project might have heightened the chances of a full hostile exchange during the Cold War. 
  • Other arms races (like the growing stockpiles of long-range missiles and the Soviet bioweapons program, which was developed partly because the USSR falsely believed that the US had better technology and was working on bioweapons) followed similar patterns.
  • We might encounter similar situations in the near future, for instance with artificial general intelligence (AGI). If you find yourself in a secretive context, operating with a great sense of urgency to develop some new technology faster than some other side — make sure you’re really in a race, or you’ll risk destroying everything.

In other news

For more stories, try these email newsletters and podcasts.


Links we share every time — they're just that good!


The 80,000 Hours Job Board features more than 600 positions. We can’t fit them all in the newsletter, so check out the others on their website!

You can see more positions in the EA Job Postings group on Facebook.

If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

Applications due soon

Rethink Priorities is hiring for an Executive Director for the Insect Welfare Project (Remote, apply by 3 July)

Longview Philanthropy and GiveWell are also hiring.
Cooperative AI Foundation is hiring for a Chief Operating Officer (Oxford / London / Remote, apply by 23 June)

Other positions

🔗 Announcements

CARE Conference on Animal Rights in Europe

This year’s Conference on Animal Rights in Europe (CARE) will take place on August 26-28 in Warsaw, Poland. Attendees will be able to take part in person or online.

“Cause exploration” prizes from Open Philanthropy

Open Philanthropy, one of the largest funders in the effective altruism space, is interested in causes, interventions, and arguments that it hasn’t yet considered, and is offering a prize for writing that helps it discover these.

There is $120,000 in total prize money available: $25,000 for first place, three prizes of $15,000, twenty honorable mentions at $500, and $200 for the first 200 good-faith submissions that don’t otherwise receive prizes. 

The deadline for submissions is August 4, and non-experts are encouraged to submit. Check out the prize’s website to learn more.

Also, applications for the latest round of the Open Philanthropy Undergraduate Scholarship close on August 15.


Contest for the best criticism of work in effective altruism

There’s also a new contest for criticism of theory or work in effective altruism — with a total of $100,000 in prizes. 

The deadline is September 1. You definitely don’t need to consider yourself part of the effective altruism community to enter.

The announcement post lists judging criteria — like importance, transparency, and action-relevance — and explains how to submit work.


Contest for developing regulatory methods that account for existential risks

The Legal Priorities Project is announcing a writing competition on “Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks.” Learn more and apply here by July 31.


Organizational Updates

You can see updates from a wide range of organizations on the EA Forum.

Timeless Classic

Ideas that have shaped the way we think about doing good

Willingness to reprioritize — to switch to a different cause when it turns out to be more important — is crucial to doing good effectively. But changing minds and causes like this is extremely difficult. 

Claire Zabel’s 2015 post shares some techniques for making cause reprioritization easier. The post recommends, among other things, that we make small donations to different cause areas, try to check our understanding of key arguments for cause areas different from ours, and de-stigmatize discussions of emotional attachment to particular causes.

More recently, Julia Wise also published a post on a similar topic: Messy personal stuff that affected my cause prioritization.

We hope you found this edition useful!

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

Finally, if you have feedback for us, positive or negative, let us know!

– The Effective Altruism Newsletter Team
Click here to access the full EA Newsletter archive
A community project of the Centre for Effective Altruism, a registered charity in England and Wales (Charity Number 1149828) – Centre for Effective Altruism, Trajan House, Mill Street, Oxford OX2 0DJ, United Kingdom
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.