Christoph Winter (@christophkw) 's Twitter Profile
Christoph Winter

@christophkw

Director @law_ai_. Law Prof @ITAM_mx. Research Associate @Harvard.

ID: 4919320696

http://www.christophwinter.net · Joined 16-02-2016 13:20:17

289 Tweets

510 Followers

285 Following

Miles Brundage (@miles_brundage) 's Twitter Profile Photo

My brilliant ex-colleague Cullen O'Keefe sadly left OpenAI recently but our loss is The Discourse's gain as they will now be publishing more often. If you have a legal background + aren't interested in industry/gov't/academia, you should 100% consider working with them. (1/2)

Max Roser (@maxcroser) 's Twitter Profile Photo

OurWorldInData.org is now exactly ten years old! Huge thanks to everyone who made these beautiful years possible. Most of all, the amazing team that joined me during the last decade to do this work! (I had started working on it some years before 2014, but at that time, my

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

In one of the most important legal developments of the year, the Supreme Court is poised to eliminate the doctrine of "Chevron deference" before the end of June. Our latest blog post explains what that means for efforts to regulate artificial intelligence: law-ai.org/chevron-defere…

Global Priorities Institute (@gpioxford) 's Twitter Profile Photo

"Crying wolf: Warning about societal risks can be reputationally risky", a working paper by Lucius Caviola, Matt Coleman, PhD, Christoph Winter, and Joshua Lewis, has been added to our Working Paper Series: globalprioritiesinstitute.org/crying-wolf-wa…

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

The US Supreme Court has eliminated Chevron deference, an important legal doctrine that required courts to defer to agencies' interpretations of certain laws. We previously discussed Chevron and what its repeal might mean for AI governance on the LawAI blog:

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

LawAI is hiring! bit.ly/law-ai-open-po… The Institute for Law & AI (LawAI) is growing substantially! We’re hiring at all levels of seniority, across our teams, including roles in operations, programs, research, and consulting. If you’re looking for a supportive,

Michael Aird (@michael__aird) 's Twitter Profile Photo

I've known and been impressed by LawAI's work for years, and they're now doing a ton of hiring for a variety of roles - please consider applying or sharing!

Lawfare (@lawfare) 's Twitter Profile Photo

Cullen O'Keefe puts forward a framework, "Chips for Peace," that outlines 3 commitments the U.S. should push for to address AI's challenges, including frontier AI safety regulation, benefit-sharing, and nonproliferation for high-risk AI systems. lawfaremedia.org/article/chips-…

Luke Muehlhauser (@lukeprog) 's Twitter Profile Photo

(1/4) I’m super excited to finally be able to share that Open Philanthropy’s AI governance and policy team is launching an RFP for work to mitigate potential catastrophic risks from advanced AI systems!

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

In a new Commentary originally published by Lawfare, LawAI Director of Research Cullen O’Keefe proposes a “Chips for Peace” framework: coordination between the US and its allies on safety regulation, benefit-sharing, and nonproliferation for advanced AI systems. Read it here:

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

In the absence of federal AI legislation, efforts to regulate frontier AI models will have to rely on existing legal authorities. Our latest working paper discusses whether and how existing authorities can contribute to tracking and licensing regimes for frontier AI:

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

LawAI Director of Research Cullen O’Keefe joined the Lawfare podcast to discuss the “Chips for Peace” framework published earlier this month: coordination between the US and its allies on safety regulation, benefit-sharing, and nonproliferation for advanced AI systems.

Institute for Law & AI (@law_ai_) 's Twitter Profile Photo

Can liability solve AI governance? In our latest blog post, Summer Research Fellow Gabriel Weil explains that, while liability is a powerful tool, it does have some limits.

Trevor Levin (@trevposts) 's Twitter Profile Photo

When I posted a poll that found big margins in favor of SB 1047, much discourse ensued about whether the poll was too biased to share. Anyway, here's the question used by the poll that some opponents are now sharing before the Assembly votes on whether to send it to Gavin Newsom:

Jan Brauner (@janmbrauner) 's Twitter Profile Photo

Great opportunity for AI and governance experts: an AI Office role with an application deadline of September 6. You need to be an EU citizen, but the role is in SF. …ternational-partnerships.ec.europa.eu/jobs/policy-of…

Max Roser (@maxcroser) 's Twitter Profile Photo

Many of us can save a child’s life, if we rely on the best data. I think this is one of the most important facts about our world, and the topic of my new Our World in Data article: ourworldindata.org/many-us-can-sa…