Hello!

Our favourite links this month are below. You’ll also find exciting jobs and opportunities, including the Operation Warp Speed 2.0 Director role at 1Day Sooner, the Superintelligence Imagined creative media contest, the EAGx Utrecht conference, and more.

— Toby, for the EA Newsletter Team

Articles

Should we expect AGI within 4 years?

Leopold Aschenbrenner, formerly of OpenAI, argues that it is “strikingly plausible” that AI systems will be able to replace any remote worker by 2027, and that we may see superintelligence a year later. In his recent blog series, Aschenbrenner asserts that:
  • Larger models are better models. We went from GPT-1 (which could occasionally string a sentence together) to GPT-4 (which can pass many university exams and debug your code) in four years. Over that time, the underlying architecture of the models remained mostly the same. What changed was the size of the models and the amount of resources needed to train them. If this trend continues, we get a graph like the one below this feature ⬇️. 
  • “Unhobbling” AI systems makes them more capable. Current AI models would be significantly less useful without RLHF (reinforcement learning from human feedback), a method which helps models identify the responses users value most. But RLHF happens after training: it doesn’t exactly make the model smarter, it just makes its capabilities more accessible to the user. Aschenbrenner expects new methods to unlock more of the latent power of AI systems over the next few years. 
His predictions have, predictably, been met with criticism. Kelsey Piper argues that we can’t predict qualitative advancements in AI from past trends in capability, and Anton Leicht critiques Aschenbrenner’s confidence in the profitability of AI systems. 

For a deeper understanding of the speed of AI progress, start with three reports from Epoch AI covering the growth in cash and compute costs for leading AI models, and whether we are running out of training data.

Push or pull: how to incentivise the creation of lifesaving technologies

Some of the most promising new vaccines are the least likely to be funded. Examples include a TB vaccine that would work for adults (1.5 million of whom die of TB each year) and a universal COVID vaccine. 

Part of the problem, argues economist Rachel Glennerster on the 80,000 Hours podcast, is that most vaccine development relies on push funding. Researchers write grant proposals and receive money from wealthy governments and philanthropists. The organisations giving the grants have to decide which research is promising enough to fund, a difficult task which can lead to under-funding the most ambitious yet speculative projects. 

Pull funding, in contrast, uses mechanisms like patents and prizes to reward the creation of new technologies. Advance market commitments (AMCs) are a proven model of pull funding. These involve a grantmaker agreeing to pay a certain amount for a technology before it is deployed or even invented, creating a market before the product exists. For example, a $1.5 billion AMC, which offered to pay manufacturers if they produced a low-cost pneumococcal vaccine, accelerated the vaccine’s availability by five years, saving an estimated 700,000 lives. AMCs could be used to fund even more ambitious technologies, including new vaccines, potentially saving millions of lives. 

Learn more about AMCs and other market-shaping measures here.

GiveDirectly collaborates with… MrBeast?!


Although better known for his big-budget challenge videos, YouTube’s ubiquitous creator MrBeast also runs a fund and YouTube channel called Beast Philanthropy. Last week, the channel showcased the direct-cash charity GiveDirectly’s work in Uganda.

In the video, Beast Philanthropy distributes $200,000 to people living in extreme poverty and makes the case for the effectiveness of direct cash aid. The video features recipients of GiveDirectly’s cash transfers, who were involved in the review process and approved of how their communities were represented.

If you want to boost this collaboration while directly helping people living in poverty, you can donate to GiveDirectly’s Beast Philanthropy collaboration via this link.

In other news

For more stories, try these newsletters and podcasts.
 
Resources

Links we share every time — they're just that good!
Jobs

Boards and resources:

Selection of jobs

  • 1Day Sooner
  • BlueDot Impact
  • Center for AI Safety
  • Centre for the Governance of AI
  • Constellation
  • Founders Pledge
  • GiveWell
  • METR
  • Non-Trivial
  • Tarbell

Announcements
 

Fellowships, internships, and courses


Conferences and events

  • EAGx Utrecht (5-7 July) closes applications on 23 June. Apply to EAGx Toronto (16-18 August) by 31 July. There will also be EAGx events in Berkeley (7-8 September) and Berlin (13-15 September). 
  • The final EAG of the year will be held in Boston (1-3 November).
  • The EA Nigeria Summit (6-7 September, Abuja) is a two-night event aimed at networking and knowledge sharing. International applications are welcome, but emphasis will be placed on Nigerian and African applicants. Apply by 5 August.
  • The Human-aligned AI Summer School (17-20 July, Prague) will hold four days of discussions, talks, and workshops covering the latest trends in AI alignment research. Applicants are expected to understand current ML approaches, and can be PhD students, other students, or researchers working in ML/AI outside of academia. Apply here.

Funding and prizes

  • The Superintelligence Imagined contest is offering five prizes of $10,000 each for the best media projects (in any medium) which help audiences answer the question “What is superintelligence, and what risks might it bring to humanity?” The Future of Life Institute is running the competition, which is free to enter, and will accept submissions until 31 August.
Organizational Updates

You can see updates from a wide range of organizations on the EA Forum.
Timeless Classic

This episode of the (much recommended) Rationally Speaking podcast tells the story of MIT scientist Kevin Esvelt’s work on gene drives, and his argument about the danger of dual-use biological technologies. 
We hope you found this edition useful!

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

Finally, if you have any feedback for us, positive or negative, let us know!

– The Effective Altruism Newsletter Team
Click here to access the full EA Newsletter archive
This newsletter is run by the Centre for Effective Altruism, a project of Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organization in the USA, EIN 47-1988398), two separate legal entities which work together.
 
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.