Kim Stachenfeld
@neuro_kim
Research scientist at DeepMind. Likes hippocampi and relational learning and other things brains have and do, too. she/her.
ID: 778041970895781892
http://www.neurokim.com 20-09-2016 01:23:56
637 Tweets
6.6K Followers
801 Following
Article in Nature about mass editorial resignations at journals: nature.com/articles/d4158… If you are a disgruntled cognitive science editor, reach out to me and Ted Gibson (Language Lab, MIT). We run Open Mind (direct.mit.edu/opmi). Free to publish, free to read.
Giving a talk on this work with Kim Stachenfeld at #ICLR2024, Tuesday 10 AM for oral session 1A! We relate deep RL representation learning to multi-region computations in the brain. Our poster will be right after the oral session. Come say hi 😀
🔥We are hiring🔥! The ClopathLab is looking for Postdocs, come and do cool science😎 with us! Just ping me informally if you are interested!
New preprint, w/ Kim Stachenfeld! We characterize the compositional generalization behavior of kernel models. Our theory derives a new compositional generalization class, highlights key failure modes, and is empirically valid for deep neural networks. (1/24) arxiv.org/abs/2405.16391
With Harvard University, we built a ‘virtual rodent’ powered by AI to help us better understand how the brain controls movement. 🧠 With deep RL, it learned to operate a biomechanically accurate rat model - allowing us to compare real & virtual neural activity. → dpmd.ai/3RobU7e
Excited to be here in Boston for #CCN2024 CogCompNeuro!
Congrats to colleagues Maria Eckstein, @summerfieldlab.bsky.social, Nathaniel Daw, and Kevin Miller on this thought-provoking work, questioning whether humans actually use reinforcement learning to learn from reinforcement 🙃
Had tons of fun at CogCompNeuro 2024! One reflection: there was quite a lot of discussion of metrics for comparing neural systems. In our recent paper on representation biases, we highlight some phenomena that might make for an interesting test case for future battles! 1/5