We’ve posted an update about new features on the Forum, including profile changes, new ways to discover content, and the ability to upload EA Global application info into Forum profiles.
Posts we recommend:
- My Most Likely Reason to Die Young is AI X-Risk (AISafetyIsNotLongtermist, 5 min)
- The Track Record of Futurists Seems ... Fine (Holden Karnofsky, 17 min)
- AI safety university groups: a promising opportunity (and two other posts about AI safety university groups) (mic, 21 min)
- Emphasizing emotional altruism in effective altruism (Michel Justen, 15 min)
- (Even) More Early-Career EAs Should Try AI Safety Technical Research (levin, 18 min)
- The Future Might Not Be So Great. (Jacy, 51 min)
- Why AGI Timeline Research/Discourse Might Be Overrated (Miles Brundage, 14 min)
- How I Recommend University Groups Approach the Funding Situation (sabriac, 16 min)
Announcements and updates:
- An update on GiveWell's funding projections (Elie Hassenfeld)
- Announcing: EA Engineers (Jessica Wen, Sean Lawrence)
- Announcing the Harvard AI Safety Team (Alexander Davies)
- Future Fund June 2022 Update (Nick Beckstead, Leopold, Avital Balwit, William MacAskill, Ketan Rama)
- Introducing the Fund for Alignment Research (We're Hiring!) (Adam Gleave, Scott Emmons, Ethan Perez, Claudia Shi)
Questions seeking answers:
- What are some current, already present challenges from AI? (nonzerosum)
- What is the top concept that all EAs should understand? (Nathan Young)
Classic Forum post:
We don't have room for every good post. See the All Posts page for more!
You can see all of our past classic reposts here.
Read past issues