Niloofar Mireshghallah
@niloofar_mire
Postdoc @uwcse-@uwnlp, Ph.D. from @ucsd_cse /Privacy, ML, NLP, @winlpworkshop chair, NAACL 2025 D&I chair, ex @MSFTResearch
ID: 1467438402
http://homes.cs.washington.edu/~niloofar/ 29-05-2013 14:47:52
2.2K Tweets
4.4K Followers
1.1K Following
Great #usesec24 talk from Kaiming Cheng on "When the User Is Inside the User Interface: An Empirical Study of UI Security Properties in Augmented Reality" w/ Arka Bhattacharya, Michelle Lin, Jae (Jaewook) Lee, Aroosh Kumar, Jeffery F. Tian, and Franzi Roesner: ar-sec.cs.washington.edu/ar_ui/
📢 Don't miss these two fascinating #PLAMADISO talks TOMORROW (online)! 🚀🚀🚀 🌐1st talk: Shayne Longpre (MIT) at 2:00pm 🌐2nd talk: S. Puntoni (The Wharton School) at 3:30pm Register NOW & spread the word, thx!🙏 👉More information & free registration here: plamadiso.weizenbaum-institut.de/events/
I am seeking student researchers to hire for Fall, focusing on the Reliability and Steerability of Large Language Models. Ideal candidates are final-year PhD students with relevant publications. If interested, please email your CV to [email protected]. Thank you!
I'm very excited to be starting my dream job as faculty at UBC Computer Science (CAIDA_UBC) in 2025 and postdoc-ing with Christopher Potts at Stanford (HAI / Stanford NLP Group) this year! I am recruiting students this cycle who are curious to explore the mysteries and limitations of LMs / GenAI ...
You can find the materials here: cs.utexas.edu/~gdurrett/cour… developed as part of the U.S. National Science Foundation Institute for Foundations of Machine Learning at UT Austin.
Memorized training data is inside models. That's just how it is. James Grimmelmann & I explain how this means models are copies (in the copyright-law sense) of data they've memorized. This doesn't mean models are infringing, but sound copyright policy needs to contend with this reality.
In the “arms race” between social media bots and those trying to stop them, the best way to detect #LLM-powered bots may be with #LLMs themselves, according to research by the University of Washington's tsvetshop (#UWAllen, UW NLP) + Shangbin Feng. washington.edu/news/2024/08/2… #AI #NLProc #ACL2024NLP #UWserves