Hello!
Our favorite links this month include a look at how GiveWell navigates hard questions in charity evaluation, a report on antimicrobial resistance, and a widely signed statement on AI extinction risk.
There are also many open opportunities ⬇️, including an upcoming EA conference in New York City, a course on AI governance, mentorship for people of underrepresented genders, and career advising for people at different career stages.
— Lizka (for the EA Newsletter Team)
Articles
Philosophical and practical questions in charity evaluation: lessons from GiveWell
GiveWell is a charity evaluator that searches for the programs "that save or improve lives the most per dollar." To date, GiveWell has directed more than $1 billion to its recommended charities and supported a wide range of projects, from vaccines to fortifying staple foods with iron.
A recent episode of the 80,000 Hours Podcast with Elie Hassenfeld, the CEO and co-founder of GiveWell, focuses on difficult questions that come up in this work, such as:
- How can you compare interventions that have different types of benefits? Asking people how they would trade off between different benefits can be a good start (GiveWell funded a related survey in Ghana and Kenya), but the questions can also be more philosophical. Some suggest that GiveWell should use self-assessments of "subjective well-being" as a key metric in its evaluations. The podcast discusses some of the upsides and downsides of this approach; there's a toy sketch of the underlying comparison problem after this list. (There are older discussions on the topic.)
- Should GiveWell fund more interventions that speed up economic growth in poor countries? Many of history's improvements in quality of life were driven primarily by economic growth, so some argue that interventions aiming to boost growth could be far more impactful than direct interventions like preventing malaria.
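To make that comparison problem concrete, here's a minimal sketch in Python of a generic "moral weights" approach: each type of outcome is converted into common units of value, so programs with different benefit types can be ranked. The weights and program numbers below are hypothetical illustrations, not GiveWell's actual figures or model.

```python
# Toy illustration (hypothetical numbers, NOT GiveWell's actual model):
# compare programs with different benefit types by converting each outcome
# into common "units of value" using moral weights.

# Hypothetical weights, relative to doubling one person's consumption
# for one year (defined as 1 unit of value).
MORAL_WEIGHTS = {
    "death_averted": 116.0,    # hypothetical value of averting one death
    "income_doubling": 1.0,    # baseline unit
}

def units_per_dollar(outcomes: dict, cost: float) -> float:
    """Total units of value a program generates per dollar spent."""
    total_value = sum(MORAL_WEIGHTS[name] * count for name, count in outcomes.items())
    return total_value / cost

# A health program that mostly averts deaths vs. a program that raises incomes.
health_program = units_per_dollar({"death_averted": 100}, cost=500_000)
income_program = units_per_dollar({"income_doubling": 15_000}, cost=500_000)

print(f"Health program: {health_program:.4f} units of value per dollar")
print(f"Income program: {income_program:.4f} units of value per dollar")

# Beneficiary surveys (like the one in Ghana and Kenya) or subjective
# well-being measures would change the weights, and can change the ranking.
```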
GiveWell is hiring people to work on research, outreach, operations, and content editing.
Is antimicrobial resistance a global health priority?
Antibiotics and other life-saving medicines are becoming less effective due to antimicrobial resistance (AMR), which occurs when bacteria, viruses, fungi, and parasites adapt to the methods commonly used to combat them. A new report suggests that AMR is responsible for millions of deaths each year (particularly in sub-Saharan Africa) and has serious economic costs — by one estimate, $55 billion every year in the US alone. The problem is neglected and getting worse, as antibiotics continue to be overused in healthcare and food production and more drug-resistant bacteria evolve.
There are promising approaches for working on AMR. These include creating incentives to accelerate the development of new antimicrobial medicines, contributing to quantitative research on the causes of AMR, running fellowships for policymakers, improving diagnostics (and their accessibility) to prevent misuse of antimicrobials, and raising the profile of AMR. You can read the full report or explore donation and charity-founding opportunities.
Hundreds of AI scientists and public figures sign a public statement on AI extinction risk
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The above statement was signed by hundreds of AI scientists and notable public figures. Debates about how exactly existential risks from AI compare to other AI-related issues (and other disagreements) have sometimes obscured less controversial points that are nevertheless important, so it's exciting to see such a diverse group of people come together in a coalition.
Governance proposals for preventing the deployment of dangerous AI models
A recent survey of expert opinion found a lot of agreement on best practices in AI safety and governance. The survey, run by the Centre for the Governance of AI and answered by 51 of the 93 experts contacted, asked participants how much they agreed with 50 statements about what AGI labs should do; on average, participants agreed with all of them.
Proposals to evaluate models for risky capabilities, so that dangerous models are caught before deployment, received the most approval. This strategy is also the subject of a new paper co-authored by researchers from DeepMind, OpenAI, Anthropic, and others: Model evaluation for extreme risks.
If you’re interested in working in this area, consider applying to a course on AI governance (deadline 25 June) or exploring other AI safety-related opportunities.
In other news
For more stories, try these email newsletters and podcasts.
Resources
Links we share every time — they're just that good!
Jobs
Boards and resources:
- The 80,000 Hours Job Board features almost 700 positions. We can't fit them all in the newsletter, so browse the full board for more.
- The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more — including part-time and entry-level job opportunities.
- You can see more positions in the EA Job Postings group on Facebook.
- If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.
Highlights
Alignment Research Center
Centre for Effective Altruism
- Content Specialist (Remote / Oxford / Boston / other, £54.6k–£67.3k / $96.2k–$124k, apply by 19 July)
Fish Welfare Initiative
Founders Pledge
GiveWell
Open Philanthropy
Wild Animal Initiative
Announcements and opportunities
- Upcoming conferences: New York and online
- EAGxNYC will run 18–20 August. You're welcome to apply whether you're new to EA or already professionally engaged, particularly if you have a connection to the New York area (or the East Coast of the US). Apply as early as possible, and no later than 31 July.
- Save the date: EAGxVirtual 2023 will take place (online) on 17–19 November.
- AI governance course (free, online)
- Learn more about the governance of transformative AI in a free, remote course. The course will run for 14 weeks, from mid-July to September, and is open to everyone. People working in policy and people with a technical background interested in helping with AI governance are especially encouraged to apply. Explore the curriculum and apply by 25 June.
- Mentorship for people of underrepresented genders
- Magnify Mentoring provides one-on-one mentorship and more to women, non-binary, and trans people of all genders who are enthusiastic about pursuing high-impact career paths. You can apply to the program by 25 June.
- A new AI forecasting mentorship program by Epoch and FRI will provide guidance to women, non-binary people, and trans people of all genders who want to contribute to the field. The program will help mentees produce research on AI forecasting. Mentees should expect to commit 10 hours/week from 3 July to 1 September (the program is free, with an optional $1,000 stipend). Apply by 19 June.
- Career advising and support
- Probably Good is accepting applications for their new 1-on-1 career advising service. They’ll help you brainstorm career paths, evaluate options, plan next steps, and connect with relevant people and opportunities. Learn more and apply.
- Successif offers workshops, mentoring, peer support groups, and more for mid-career and senior professionals who’d like to transition into high-impact work. Apply here.
- A new funding opportunity
- If you have a project aimed at helping humanity flourish in the long-term future, you can apply for a grant via Lightspeed Grants. The application is minimal, and grant requests of any size ($5k–$5M) are welcome. Urgent requests receive a response within 14 days of application. Apply here by 6 July. (You can also explore other funding opportunities.)
Organizational Updates
You can see updates from a wide range of organizations on the EA Forum.
Timeless classic on scaling up renewable energy technologies
The price of solar power declined by 89% in just one decade. A write-up from Our World in Data explores trends in the prices of different energy sources and argues that falling prices aren't just a side benefit: making renewable energy cheap is key to a low-carbon future.
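As a rough back-of-the-envelope check (our arithmetic, not a figure from the write-up), an 89% decline over ten years corresponds to prices falling by roughly a fifth every year, compounded:

```python
# If prices fell 89% over ten years, what constant annual decline rate
# compounds to that? Solve (1 - r)**10 = 0.11 for r.
annual_decline = 1 - 0.11 ** (1 / 10)
print(f"~{annual_decline:.1%} per year")  # prints "~19.8% per year"
```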
(The head of research at Our World in Data is running an AMA on the EA Forum this week. Until 23 June, you can ask him questions about everything from his work on COVID-19 data to how the team prioritizes what to work on.)
We hope you found this edition useful!
If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.
Finally, if you have feedback for us, positive or negative, let us know!
– The Effective Altruism Newsletter Team