Sarah Schwettmann (@cogconfluence)'s Twitter Profile
Sarah Schwettmann

@cogconfluence

Research Scientist @MIT_CSAIL · PhD @MITBrainAndCog · @BKCHarvard affiliate · teaching @MITMuseum Studio

ID: 4020498861

Website: http://cogconfluence.com · Joined: 23-10-2015 00:33:08

1.1K Tweets

2.2K Followers

945 Following

Tamar Rott Shaham (@tamarrottshaham):

Accepted to #ICML2024!! 🚀 Meet MAIA, a Multimodal Automated Interpretability Agent that helps users understand AI systems. Given a user query (e.g., "label a model’s feature"), MAIA designs experiments iteratively by forming and updating hypotheses based on experimental results.
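
The loop described here is easy to state in code. Below is a minimal sketch of the hypothesize-experiment-update cycle in Python; every name is hypothetical (this is not the MAIA API, and the agent's tools and stopping rule are assumptions):

    # Hypothetical sketch of an interpretability-agent loop; not MAIA's actual code.
    from dataclasses import dataclass, field

    @dataclass
    class AgentState:
        query: str                                     # e.g. "label this model's feature"
        hypotheses: list = field(default_factory=list)
        evidence: list = field(default_factory=list)

    def run_agent(state, design_experiment, run_experiment, update_hypotheses,
                  is_done, max_steps=10):
        # Core loop: propose an experiment, observe the result, revise hypotheses.
        for _ in range(max_steps):
            experiment = design_experiment(state)      # e.g. synthesize probe inputs
            result = run_experiment(experiment)        # e.g. record unit activations
            state.evidence.append((experiment, result))
            state.hypotheses = update_hypotheses(state)
            if is_done(state):
                break
        return state.hypotheses

In the paper itself, the experiment-design step is carried out by a multimodal model equipped with interpretability tools; the sketch above only fixes the control flow.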

David Bau (@davidbau):

I am delighted to officially announce the National Deep Inference Fabric project, #NDIF. ndif.us NDIF is a U.S. National Science Foundation-supported computational infrastructure project to help YOU advance the science of large-scale AI.

Carroll Wainwright (@clwainwright):

This is why I signed the letter at righttowarn.ai. AI companies must create protected avenues for raising concerns that balance their legitimate interest in maintaining confidential information with the broader public benefit. 7/8

Sarah Schwettmann (@cogconfluence):

Very excited about this new work from Yossi Gandelsman: a compelling demonstration of the usefulness of interpretability for model auditing. Automatic decomposition of polysemantic neurons (no need for expensive SAE training!) into associated text directions enables generation of
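
As a rough illustration of the text-direction idea mentioned here (a schematic only; the function names and the precomputed embeddings are assumptions, not the paper's code): once a neuron's contribution is mapped into a joint image-text embedding space, candidate captions can be ranked by cosine similarity.

    # Hypothetical sketch: rank candidate captions for a neuron by similarity
    # in a shared embedding space. Schematic of the idea, not the paper's code.
    import numpy as np

    def label_neuron(neuron_direction, text_embeddings, captions, top_k=5):
        # neuron_direction: (d,) vector for the neuron, already mapped into the
        # joint image-text embedding space (assumed precomputed upstream).
        # text_embeddings: (n, d) unit-normalized embeddings of candidate captions.
        v = neuron_direction / np.linalg.norm(neuron_direction)
        scores = text_embeddings @ v               # cosine similarity per caption
        top = np.argsort(-scores)[:top_k]
        return [(captions[i], float(scores[i])) for i in top]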

Sarah Schwettmann (@cogconfluence):

Looking forward to participating in this panel next Tuesday, organized by the wonderful Kartik Chandra (also on Mastodon and Bsky) & co.! 🧠💻 More on CogGraph, a new workshop at the interface between cognitive science and computer graphics: coggraph.github.io 🔗 Zoom registration link:

Sarah Schwettmann (@cogconfluence):

Thrilled to announce the AI + Open Education Initiative at MIT Open Learning! 🔗: aiopeneducation.pubpub.org To kick things off, we’re announcing a call for rapid-response papers examining how AI can help sculpt the open access education ecosystem. Papers can describe projects

Sarah Schwettmann (@cogconfluence):

MAIA code is out! This codebase is a good starting point for working with Interpretability Agents. We're excited about the role AI agents can play in scaling interpretability and will be at ICML next week to chat more! ✈️

Jacob Andreas (@jacobandreas):

Hi ICML! We're presenting papers this week:
- understanding how LMs learn new langs in-context: arxiv.org/pdf/2401.12973
- agents for automated interpretability research: …imodal-interpretability.csail.mit.edu/maia/
- calibrating LMs by marginalizing over missing context: arxiv.org/abs/2311.08718

Sarah Schwettmann (@cogconfluence):

One week left to submit to our European Conference on Computer Vision (#ECCV2024) workshop on evaluating vision foundation models! 🔎🤖👁✨

Deadline: July 31, 23:59 GMT

Topics include: interpretability, visual reasoning & grounding, hallucination, visual abstraction, in-context learning, & others here:

MIT CSAIL (@mit_csail):

As AI models become more powerful, auditing them for safety & biases is crucial — but also challenging & labor-intensive. Can we automate and scale this process?

MIT CSAIL researchers introduce "MAIA," which iteratively designs experiments to explain AI systems' behavior:

Neil Chowdhury (@chowdhuryneil):

Our Preparedness team evaluates frontier models’ abilities as software engineering agents, a prerequisite skill that could one day enable models to operate autonomously and self-improve. SWE-bench has become the community standard for evaluating models on software engineering,
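
For context, a SWE-bench-style evaluation reduces to: give the agent a real repository and issue, apply its proposed patch, and count the task as resolved only if the repo's tests pass. A schematic harness follows; the interfaces (`propose_patch`, `task.test_command`, `task.repo_dir`) are assumptions for illustration, not the benchmark's actual code.

    # Hypothetical SWE-bench-style harness; interfaces are assumptions.
    import subprocess

    def evaluate(agent, tasks):
        resolved = 0
        for task in tasks:                         # each task: repo checkout + issue text
            patch = agent.propose_patch(task)      # hypothetical agent interface
            subprocess.run(["git", "-C", task.repo_dir, "apply", "-"],
                           input=patch.encode(), check=True)
            tests = subprocess.run(task.test_command, cwd=task.repo_dir)
            resolved += int(tests.returncode == 0) # passing tests = issue resolved
        return resolved / len(tasks)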