Trevor Levin (@trevposts)'s Twitter Profile
Trevor Levin

@trevposts

Trying to help the world navigate the potential craziness of the 21st century, currently via AI Governance and Policy at @open_phil; dad rock enjoyer; he/him

ID: 1532808182

Joined: 20-06-2013 04:27:39

1.1K Tweets

2.2K Followers

1.1K Following

Rob Wiblin (@robertwiblin)'s Twitter Profile Photo

I interview Anthropic co-founder Nicholas Joseph about the policy Anthropic uses to ensure their AI models never go rogue or cause a catastrophe, and whether it's good enough. Nick sees 3 big virtues to their 'responsible scaling policy' approach: 1. It allows us to set aside

Ajeya Cotra (@ajeya_cotra)'s Twitter Profile Photo

Unfortunately I disagree with Arvind Narayanan and Sayash Kapoor: I think companies who say they want to build AGI *are* still building AGI. They're making products to make $, but the $ will be plowed into trying to make gods. aisnakeoil.com/p/ai-companies…

Lawfare (@lawfare)'s Twitter Profile Photo

"Meaningfully regulating AI over industry objections was always going to be a tall order, but by training their sights on each other, AI doomers and ethicists are helping clear the field for tech lobbyists," argue Zachary Arnold & Helen Toner (CSET) lawfaremedia.org/article/ai-reg…

Andrew Curran (@andrewcurran_)'s Twitter Profile Photo

This morning the US Government announced that OpenAI and Anthropic have signed a formal collaboration on AI safety research, testing and evaluation. Under the deal the USAISI will have access to major new models from both OpenAI and Anthropic prior to release.

Helen Toner (@hlntnr)'s Twitter Profile Photo

Ajeya with (as always) smart thoughts on a dynamic that's central to fights about AI risks, namely: everyone would love to act only on the basis of rock-solid empirical evidence... but some serious thinkers worry we're not going to get that in time. How do we handle that??

Lessig 🇺🇦 (@lessig)'s Twitter Profile Photo

Governor Gavin Newsom – safety and innovation can coexist in California. Along with dozens of academics and experts, I’ve signed a letter urging you to pass this reasonable regulation.

Flo Crivello (@altimor)'s Twitter Profile Photo

Just realized I never said it here, but obviously I'm in full support of SB-1047. I'll repeat that I say this as someone who 1/ strongly leans libertarian 2/ hates the state and regulation (I left my home country for this specific reason) 3/ dislikes paperwork as much as the

Cas (Stephen Casper) (@stephenlcasper)'s Twitter Profile Photo

🧵🧵🧵 I hope that California Governor Gavin Newsom will #signSB1047. I joined in with dozens of other academic AI researchers in support of it. Here is a thread. safesecureai.org/academics TL;DR -- I think SB 1047 improves transparency & incentivizes a race to the top on safety norms.