Max Lamparth
@mlamparth
Postdoc at @Stanford, @StanfordCISAC, Stanford Center for AI Safety, and the SERI program | Focusing on interpretability, robustness, and ethical AI/LLMs.
ID: 1588663024969125888
http://www.maxlamparth.com 04-11-2022 22:43:21
373 Tweets
613 Followers
488 Following
🚨 Our paper was accepted at the AI, Ethics, and Society Conference (AIES) happening in October! #AIES We compare human national security experts vs. LLM simulations in wargames. The results? Surprising differences in decision-making that could impact real-world conflicts. Paper: arxiv.org/pdf/2403.03407
🚨 Our paper was accepted at the Conference on Language Modeling! As we face a mental health crisis and a lack of access to professional care, many turn to AI as a solution. But what does ethical automated care look like, and are models safe enough for patients? Paper: arxiv.org/abs/2406.11852
Should AI be aligned with human preferences, rewards, or utility functions? Excited to finally share a preprint that Micah Carroll Matija Franklin Hal Ashton & I have worked on for almost 2 years, arguing that AI alignment has to move beyond the preference-reward-utility nexus!
Great and important work by xuan (ɕɥɛn / sh-yen) et al.!
A couple of weeks left to get in your abstracts. We'll also have invited papers from Henry Farrell and Hahrie Han, Arvind Narayanan, Alondra Nelson, Deirdre K. Mulligan, Daniel Susskind, and others—leading thinkers on politics, economics, and technologies of democracy. V keen for PhDs/ECRs to apply.
I used to be shy to publicly state that I had both of my kids during my PhD, but now I'm like: "Damn straight, I had two babies AND STILL managed to become a doctor in 5 years!" 👩🎓Academic Mom, PhD