Hello!
Our favourite links this month include:
You’ll also find exciting jobs and opportunities, including Operation Warp Speed 2.0 Director at 1Day Sooner, the Superintelligence Imagined creative media contest, the EAGx Utrecht conference, and more.
— Toby, for the EA Newsletter Team
Articles
Should we expect AGI within 4 years?
Leopold Aschenbrenner, formerly of OpenAI, argues that it is “strikingly plausible” that AI systems will be able to replace any remote worker by 2027, and that we may see superintelligence a year after that. In his recent blog series, Aschenbrenner asserts that:
- Larger models are better models. We went from GPT-1 (which could occasionally string a sentence together) to GPT-4 (which can pass many university exams and debug your code) in 4 years. Over that time, the underlying architecture of the models remained mostly the same. What changed was the size of the models and the amount of resources needed to train them. If this trend continues, we get a graph like the one below this feature ⬇️.
- “Unhobbling” AI systems makes them more capable. Current AI models would be significantly less useful without RLHF (reinforcement learning from human feedback), a method which helps models identify the responses users value most. But RLHF happens after training. It doesn’t exactly make the model smarter; it just makes the model’s capabilities more accessible to the user. Aschenbrenner expects new methods to unlock more of the latent power of AI systems over the next few years.
His predictions have, predictably, been met with criticism. Kelsey Piper argues that we can’t predict qualitative advancements in AI from past trends in capability, and Anton Leicht critiques Aschenbrenner’s confidence in the profitability of AI systems.
For a deeper understanding of the speed of AI progress, start with three reports from Epoch AI, covering the growth in cash and compute costs for leading AI models, and whether we are running out of training data.
Push or pull: how to incentivise the creation of lifesaving technologies
Some of the most promising new vaccines are the least likely to be funded. Examples include a TB vaccine that would work for adults (1.5 million of whom die of TB each year) and a universal COVID vaccine.
Part of the problem, argues economist Rachel Glennerster on the 80,000 Hours podcast, is that most vaccine development relies on push funding. Researchers write grant proposals and receive money from wealthy governments and philanthropists. The organisations giving the grants have to decide which research is promising enough to fund, a difficult task which can lead to under-funding the most ambitious yet speculative projects.
Pull funding, in contrast, uses mechanisms like patents and prizes to reward the creation of new technologies. Advance market commitments (AMCs) are a proven model of pull funding: a grantmaker agrees to pay a certain amount for a technology before it is deployed or even invented, creating a market before the product exists. For example, a $1.5 billion AMC, which offered to pay manufacturers if they produced a low-cost pneumococcal vaccine, accelerated the vaccine’s availability by five years, saving an estimated 700,000 lives. AMCs could be used to fund even more ambitious new technologies, including vaccines like those above, which could save millions of lives.
Learn more about AMCs and other market-shaping measures here.
GiveDirectly collaborates with… MrBeast?!
Although better known for his big-budget challenge videos, YouTube’s ubiquitous creator MrBeast also runs a fund and YouTube channel called Beast Philanthropy. Last week, the channel showcased the direct-cash charity GiveDirectly’s work in Uganda.
In the video, Beast Philanthropy distributes $200,000 to people living in extreme poverty and makes the case for the effectiveness of direct cash aid. The video features recipients of GiveDirectly’s work, who were involved in the review process and approved of how their communities were represented.
If you want to boost this collaboration, while directly helping people living in poverty, you can donate to GiveDirectly’s Beast Philanthropy collaboration via this link.
In other news
- Two posts by Jacob Trefethen outline US policy ideas to accelerate life-saving technologies. Trefethen’s suggestions include raising more funding for Gavi and getting the FDA to sign confidentiality agreements with more countries (1,2).
- Vox explores how milk got into American schools, and why it is so hard to get it out.
- If you were interested in Leopold Aschenbrenner’s blog series, featured above, you might like this unofficial summary and Aschenbrenner’s appearance on the Dwarkesh Podcast.
- Africa needs malaria vaccines as soon as possible; we need 320 million doses to cover the 80 million children who are currently at risk. This year, we are set to distribute only around 30 million.
- Nobel prize-winning psychologist Daniel Kahneman passed away in March. Four days before he died, he recorded an interview for Peter Singer's Life Well Lived Podcast.
- Rethink Priorities published two reports on wild animal welfare. One explains the goals of the wild animal welfare movement and the other provides an overview of the organizations involved.
- Our World in Data published two new pages on neglected (and not so neglected) tropical diseases: trachoma and guinea worm disease.
- GiveWell’s rebooted blog gives interesting insight into their fundraising and research work.
- Hannah Ritchie explores the environmental and animal welfare trade-offs of eating meat, noting, “It’s tempting to assume that what’s good for the planet is also good for the animal, but unfortunately, this is not the case.”
AI
- AI Ruined My Year, a new video from AI Safety YouTuber Robert Miles, provides an overview of both the progress of AI and AI safety in the mainstream over the last year.
- Former OpenAI employees have started speaking publicly about the reasons they quit or were fired, and notable insiders, including current OpenAI employees, signed an open letter calling for a right to warn the public about risks from AI.
- The Institute for Progress kicked off a series of articles outlining how America could build infrastructure for AI.
- How can we begin to understand how neural networks learn? A new video from Rational Animations explains what we know so far.
For more stories, try these newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs
Boards and resources:
Selection of jobs
1Day Sooner
BlueDot Impact
- Product Manager (London / Global Remote, £60,000 – £90,000, 30 June)
- Software Engineer (London / Global Remote, £60,000 – £90,000, 30 June)
- AI Governance Teaching Fellow (Global Remote, £4,900 – £9,600 for each round of the course, 7 July)
Center for AI Safety
- Project Manager (San Francisco, CA, $110,000 – $150,000)
- Research Engineer (San Francisco, CA, $120,000 – $160,000)
- Federal Policy Lead (Washington, DC / San Francisco, CA, $180,000 – $200,000)
Centre for the Governance of AI
- Research Fellow (Oxford, UK / Remote, £60,000 – £80,000, 7 July)
- Research Scholar (Oxford, UK / Remote, £60,000 – £75,000, 7 July)
Constellation
Founders Pledge
GiveWell
- Head of Fundraising Operations and Analytics (Remote, $166,200 – $183,300), Research Analysts (Remote, $83,400 – $105,800), Head of Operations (Remote, $264,345 – $294,700), and more.
METR
- ML Research Engineer/Scientist (Berkeley, CA / Hybrid Remote, $158,000 – $276,000)
- Senior ML Research Engineer/Scientist (Berkeley, CA / Hybrid Remote, $276,000 – $420,000)
- Eval Production Lead (Berkeley, CA, $209,000 – $555,000)
Non-Trivial
Tarbell
- Fellowship Manager (London / Remote, $75,000 – $100,000, 7 July)
- Operations Manager (London / Remote, $75,000 – $100,000, 7 July)
- Special Projects Manager (London / Remote, $75,000 – $100,000, 7 July)
Announcements
Fellowships, internships, and courses
Conferences and events
- EAGx Utrecht (5-7 July) closes applications on 23 June. Apply to EAGx Toronto (16-18 August) by 31 July. There will also be EAGx events in Berkeley (7-8 September) and Berlin (13-15 September).
- The final EAG of the year will be held in Boston (1-3 November).
- The EA Nigeria Summit (6-7 September, Abuja) is a two-night event aimed at networking and knowledge sharing. International applications are welcome, but emphasis will be put on Nigerian and African applicants. Apply by 5 August.
- The Human-aligned AI Summer School (17-20 July, Prague) will hold four days of discussions, talks, and workshops covering the latest trends in AI alignment research. Applicants are expected to understand current ML approaches; they may be PhD students, other students, or researchers working in ML/AI outside of academia. Apply here.
Funding and prizes
- The Superintelligence Imagined contest is offering five prizes of $10,000 each for the best media projects (in any medium) which help audiences answer the question “What is superintelligence, and what risks might it bring to humanity?” The Future of Life Institute is running the competition, which is free to enter, and will accept submissions until 31 August.
Organizational Updates
You can see updates from a wide range of organizations on the EA Forum.
Timeless Classic
This episode of the (much-recommended) Rationally Speaking podcast tells the story of MIT scientist Kevin Esvelt’s work on gene drives, and his argument for the danger of dual-use biological technologies.
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have any feedback for us, positive or negative, let us know!
– The Effective Altruism Newsletter Team