Luke Muehlhauser (@lukeprog)'s Twitter Profile
Luke Muehlhauser

@lukeprog

Open Philanthropy Senior Program Officer, AI Governance and Policy

ID: 24847747

Website: http://lukeprog.com · Joined: 17-03-2009 06:00:52

3.3K Tweets

8.8K Followers

300 Following

Nora Ammann (@ammannnora)'s Twitter Profile Photo

SFF has launched a new funding round focused on demonstrating the feasibility and advancing the technical maturity of 'Flexible Hardware Enabled Governors' (flexHEGs). Applications close on Sep 15th. 🏁 More info: survivalandflourishing.fund/sff-2024-flexh… 🧵

Ajeya Cotra (@ajeya_cotra)'s Twitter Profile Photo

Excited to share a new blog post on Planned Obsolescence (first one in a while!) by my colleague Luca Righetti 🔸! Luca says dangerous capability tests need to get *way* harder: planned-obsolescence.org/dangerous-capa…

Ian Hogarth (@soundboy)'s Twitter Profile Photo

"I’m increasingly frustrated by the tendency to pit 'doomers' against AI 'optimists'...most optimists are not, fundamentally, AI optimists - they are superintelligence skeptics" slowboring.com/p/what-the-ai-…

Alexander Berger (@albrgr)'s Twitter Profile Photo

YIMBYs have been taking victory laps this week on the back of some great speeches at the DNC. Open Philanthropy has been the movement's biggest funder for most of the past decade, and I’m super proud of the progress. But we’re still very early in this fight: 1/ x.com/JerusalemDemsa…

Ryan Briggs (@ryancbriggs)'s Twitter Profile Photo

Worth observing that these people would be just as dead if it was p-hacking or other questionable-but-not-outright-fraudulent research practices that created the incorrect result

Jan Brauner (@janmbrauner)'s Twitter Profile Photo

Great opportunity for AI and governance experts. AI Office role, deadline September 6. You need to be an EU citizen, but the role is in SF. …ternational-partnerships.ec.europa.eu/jobs/policy-of…

Allan Dafoe (@allandafoe)'s Twitter Profile Photo

We are hiring! Google DeepMind's Frontier Safety and Governance team is dedicated to mitigating frontier AI risks; we work closely with technical safety, policy, responsibility, security, and GDM leadership. Please encourage great people to apply! 1/ boards.greenhouse.io/deepmind/jobs/…

Lennart Heim (@ohlennart)'s Twitter Profile Photo

Commerce has released the interim rule for reporting on AI models (>1e26 ops). Quarterly reports on training activities, cybersecurity measures, model ownership, and red-team testing results. A great policy: minimal burden, gives gov't needed visibility—no more, no less. 1/

Carnegie Endowment (@carnegieendow)'s Twitter Profile Photo

🛜 "If-then" commitments are the new frontier in AI safety. This emerging framework aims to mitigate AI risks without needlessly stifling tech advances. Increased interest in the framework will accelerate its progress & maturity, writes Holden Karnofsky. carnegieendowment.org/research/2024/…

Yoshua Bengio (@yoshua_bengio)'s Twitter Profile Photo

The global nature of AI risks makes it necessary to recognize AI safety as an international public good, and to work towards coordinated governance of these risks. Statement made in Venice: idais.ai/idais-venice/ Associated New York Times article: nytimes.com/2024/09/16/bus…