Vincent Conitzer (@conitzer)'s Twitter Profile
Vincent Conitzer

@conitzer

AI professor. Director, @FOCAL_lab @CarnegieMellon. Head of Technical AI Engagement, @UniofOxford @EthicsInAI. Author, "Moral AI - And How We Get There."

ID: 47837509

Link: http://www.cs.cmu.edu/~conitzer/ · Joined: 17-06-2009 03:17:27

1.1K Tweets

4.4K Followers

1.1K Following

Vincent Conitzer (@conitzer)'s Twitter Profile Photo

In this upcoming AI, Ethics, and Society Conference (AIES)'24 paper, we show that one has to at least be very careful when using active learning to learn people's moral preferences. arxiv.org/abs/2407.18889

Vincent Conitzer (@conitzer)'s Twitter Profile Photo

Thrilled with our three amazing additions to the AI100 Standing Committee David Autor, Ryan Calo, and Yejin Choi -- very much looking forward to the work we'll do together! ai100.stanford.edu/people Eric Horvitz Peter Stone

IJCAIconf (@ijcaiconf) 's Twitter Profile Photo

Congratulations to the winner of the 🏆 IJCAI-24 Computers and Thought Award, Nisarg Shah, University of Toronto! 🗣️ Don't miss his talk @ #IJCAI2024: Democratic Foundations of Fair AI via Social Choice #keynote 📆 7 Aug, 9 AM ➡️ ijcai.org/awards

Vincent Conitzer (@conitzer)'s Twitter Profile Photo

In preference elicitation (or active learning), we usually never ask the same question twice, because we think we already know the answer. In this upcoming AI, Ethics, and Society Conference (AIES) paper, we study how stable people's responses about moral preferences actually are. arxiv.org/abs/2408.02862

Vincent Conitzer (@conitzer)'s Twitter Profile Photo

To me the first error seems very human-like, but the inability to then say "oh wait, let's take a step back" in response to the later questions does not. (Try for yourself -- what do you think?)

Vincent Conitzer (@conitzer)'s Twitter Profile Photo

Our Moral AI book is once again Amazon's #1 Science & Maths Ethics best seller! (It also was around the time of its release in February.) Also #7 in AI & Semantics, and #8 in Ethics & Morality. amazon.com/Moral-AI-There…

John Tasioulas (@jtasioulas) 's Twitter Profile Photo

Coming soon: The Institute for Ethics in AI, in association with Balliol College, will be advertising a 3-year postdoctoral fellowship in Ethics in AI for candidates with a background in theoretical philosophy (phil of mind, epistemology, phil of logic and language, metaphysics). Stay tuned.
