Hello!
Our favorite links this month include:
- A new article on why a simple and effective cure for dehydration took so long to develop ⬇️
- Open Philanthropy’s AI Worldviews Contest
- High-impact jobs ⬇️, opportunities for building skills and connections ⬇️, and thoughts on recent developments in AI ⬇️.
— Lizka (for the EA Newsletter Team)
Articles ⚓
GPT-4 and the road to out-of-control AIs
On Tuesday, OpenAI unveiled the AI model GPT-4, an even-more-capable successor to the system that powered the popular chatbot ChatGPT. That same day, Google made one of its most powerful AI models accessible to developers, Anthropic opened access to the AI chatbot Claude, and more — the news continued throughout the week.
Two days before those announcements, Ezra Klein published a column in The New York Times, “This Changes Everything” (paywalled), in which he wrote: “[developing AI] is an act of summoning. The coders casting these spells have no idea what will stumble through the portal… They are calling anyway.” If this “summoning” continues unchecked, humans might find themselves at the mercy of deeply alien and uncontrollable AI systems. (Out-of-control AI might sound like science fiction, but experts are increasingly afraid of this possibility, and in a 2022 survey of machine learning researchers, nearly half said there is at least a 1 in 10 chance that the effects of AI would be “extremely bad (e.g. human extinction).”)
GPT-4 itself probably won't be disastrous, but it is a step towards AI systems that might be. In a critical response to OpenAI’s “Planning for AGI and beyond” statement, Scott Alexander argues that seemingly innocent AI releases actually fuel a misguided AI race and shorten the precious time society has to prepare.
Is there anything we can do? People working on AI safety are trying a variety of approaches. Some, like those on the Evals team at the Alignment Research Center (ARC), are building tools that can help evaluate how dangerous new models are. (In fact, ARC Evals helped test GPT-4.) Some want to slow things down. And there’s a lot more to do.
The road to simplicity for a cure that saved millions of lives ⚓
A simple salt, water, and sugar solution (“oral rehydration solution,” or ORS) is incredibly effective at helping patients suffering from diarrhea-induced dehydration. Since its adoption in the late 1970s, it has saved the lives of over 70 million people — mostly children.
Why did it take so long to discover something so simple? A recent article by Matt Reynolds focuses on this question. One theme highlighted in the piece is that doctors who were painfully close to finding a cure for cholera (which causes fatal dehydration through diarrhea) often missed promising paths because of a lack of understanding of what was happening on a biological level.
But developing any effective treatment that doctors could theoretically administer was only the first part of the problem; a crucial further hurdle was finding a simpler, more practical one. By the mid-20th century, intravenous saline was often used to treat cholera. This “high-tech” treatment was effective and popular in richer areas, but inaccessible in others. The development of oral rehydration solution was a major breakthrough precisely because of its simplicity.
How can forecasting help policymakers?
When policymakers need insight into how a decision might affect the world, what sources should they consult?
A recent article argues that generalist forecasters — people who make and track predictions across a wide range of topics — slightly outperform domain experts and statistical models at predicting future events. Moreover, combining different approaches might be more promising, as it can help policymakers avoid biases and weaknesses of any particular group.
Another strength of forecasting is that forecasters’ track records tend to be clear and easy to evaluate, so policymakers can identify top forecasters to consult. Different approaches to aggregating forecasts from many people can be evaluated in the same way; one recent analysis, for example, compared the performance of the forecasting platforms Metaculus and Manifold Markets.
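This isn’t from the article, but as a rough illustration of what aggregating forecasts can look like in practice, here is a minimal sketch of two common pooling rules (simple mean and median) applied to made-up probabilities from five hypothetical forecasters:

```python
# Minimal sketch of pooling probability forecasts (illustrative numbers only).
from statistics import mean, median

# Hypothetical probabilities from five forecasters for the same yes/no question.
forecasts = [0.12, 0.20, 0.15, 0.35, 0.18]

pooled_mean = mean(forecasts)      # simple average; sensitive to outliers
pooled_median = median(forecasts)  # more robust to extreme individual views

print(f"Mean pool:   {pooled_mean:.2f}")
print(f"Median pool: {pooled_median:.2f}")
```

Real aggregation methods, including those assessed in the analyses mentioned above, are more sophisticated (for example, weighting forecasters by track record), but the basic idea is the same.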
In other news
For more stories, try these email newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs ⚓
Boards and resources:
- The 80,000 Hours Job Board features more than 700 positions. We can’t fit them all in the newsletter, so you can check them out there.
- The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more — including part-time and entry-level job opportunities.
- You can see more positions in the EA Job Postings group on Facebook.
- If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.
⏳ Applications due soon
Rethink Priorities
One for the World
📍 Other positions
80,000 Hours
Alignment Research Center - Evaluations Project
Anthropic
Founders Pledge
GiveDirectly
GiveWell
- Assorted research positions, including new Research Associate and Senior Malaria Researcher positions (Remote / Oakland, CA)
- Operations Specialist, Recruiting (Remote / Oakland, CA, $90,600 - $98,000)
- Content Editor (Remote / Oakland, CA, $90,600 - $98,000)
Giving What We Can
IDinsight
Open Philanthropy
Announcements ⚓
Announcing the Open Philanthropy AI Worldviews Contest
Open Philanthropy is running a competition for essays that significantly inform their understanding of how AI will affect the world. The goal of the contest is to surface novel considerations that could influence their views on AI timelines and AI risk.
They plan on distributing $225,000 in prizes across six winning entries. You can find more details here. The deadline for submissions is 31 May.
Applications are open for three EA conferences in Europe
EAGxNordics invites people from around the world to come to Stockholm from 21–23 April and connect with others who are interested in applying EA principles. Apply by 28 March.
Two other EA conferences have open applications. You can apply to EA Global: London (19–21 May) — people who have been accepted to EA Global: Bay Area can register directly. And people from Belarus, Czechia, Turkey, Ukraine, and other countries who make helping others a core part of their lives are also welcome to apply to EAGxWarsaw (9–11 June).
Opportunities to skill up and connect
- Fellowships
- Applications for the Center on Long-Term Risk’s Summer Research Fellowship are open. Fellows will explore strategies for reducing suffering in the long-term future (s-risk) and work on related technical AI safety ideas. Apply by 2 April.
- The ERA Cambridge Fellowship provides aspiring researchers with an in-person, paid, 8-week summer fellowship in Cambridge (UK) to conduct research and connect with people working on mitigating extreme threats to humanity. Apply by 5 April.
- Applications are open for a summer internship on the Groups Team at the Centre for Effective Altruism. Apply by 22 March.
- Workshops and courses
- A free pilot Forecasting course will run 24 March–15 April (5–10 hours per week). Juan Cambeiro, a Good Judgment Superforecaster, will lead the course, which is aimed at beginner forecasters who want to improve their decision-making skills. Enroll by 22 March.
- The Global Priorities Institute is accepting applications to attend the 12th Oxford Workshop on Global Priorities Research (GPR). The workshop will run 19–20 June, featuring presentations from GPR researchers and bringing together about 150 attendees interested in the field (mainly in philosophy and economics). More information and the link to apply can be found here — deadline 19 March.
- A new facilitated book club on information security will help software engineers take their first steps toward switching into impact-driven infosec work. Discussions will start on 1 April.
Organizational Updates
You can see updates from a wide range of organizations on the EA Forum.
Timeless classic on small probabilities
Suppose someone approaches you, claiming to be a time-traveler from the future who needs $10 to save humanity from a terrible disaster. The probability that they are telling the truth is very low — maybe 1 in 10^30 — but the stakes they describe are so high that an expected value calculation might still say you should pay the stranger $10. This is the classic illustration of Pascal’s mugging, a thought experiment that tests the limits of expected value theory and suggests we should be wary of expected-value calculations when probabilities are tiny.
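To make the arithmetic concrete, here is a toy version of that calculation. The scenario only bites if the promised payoff is astronomically large; the 10^40 figure below is purely illustrative and not part of any canonical formulation:

```latex
% Toy Pascal's mugging arithmetic with illustrative numbers:
% p = probability the stranger is honest, V = the payoff they promise.
\[
  \mathrm{EV}(\mathrm{pay}) = p \times V - \$10
  = 10^{-30} \times 10^{40}\ \mathrm{lives} - \$10
  = 10^{10}\ \mathrm{lives} - \$10
\]
% As long as the promised payoff V grows faster than the probability p shrinks,
% a naive expected-value calculation keeps telling you to hand over the $10.
```

The question Pascal’s mugging raises is whether we should trust that kind of calculation at all once the probabilities get this small.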
In Most* small probabilities aren't pascalian, Gregory Lewis argues that the heuristic gets applied in many places it doesn’t belong. For example, someone might say that putting resources into preventing a 1 in 1,000 chance of some catastrophe is unintuitive enough that we shouldn’t take expected value there seriously — the chances are too small. But in fact, we regularly safeguard ourselves against risks much lower than that; fatal plane crashes, for instance, seem to occur at a rate of about 1 in 1,000,000, and we rarely argue that we should stop making planes safer. Moreover, some of the risks we face might not be so unlikely. ML experts seem to think that there’s a 1 in 20 chance that AI systems will destroy civilization, and forecasters predict that the chance of a new pandemic causing 20 million deaths in the next decade is around 1 in 5, far from pascalian territory.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have feedback for us, positive or negative, let us know!
– The Effective Altruism Newsletter Team