Nasir Ahmad (@nasiryahm)'s Twitter Profile
Nasir Ahmad

@nasiryahm

ID: 734059066125881344

Link: http://nasiryahm.github.io · Joined: 21-05-2016 16:31:34

181 Tweets

315 Followers

372 Following

Tim Kietzmann (@timkietzmann)

Face perception researchers, if you could design a new large-scale image dataset (say >20k images), what features/variables would you be looking for, and what parameters are you missing from existing resources? Drop your suggestions below, please.

Pablo Lanillos (🤖🧠) (@planillos)

🦾🔄🧠 BayesianBrainers, ActiveInferencers and PredictiveCoders: check out this cool preprint, you may like it 👇 The predictive brain in action: Involuntary actions reduce body prediction errors biorxiv.org/content/10.110… A collaboration between Artificial Cognitive Systems (Radboud AI) and TU München.

Nasir Ahmad (@nasiryahm)

Really neat idea and the performance looks incredible (even for very sparse nets)! Love the easily digestible blog post explanation btw

Laura B Naumann (@lbnaumann)

👇 Registration for this year's free and virtual #BernsteinConference is open! 🙌 If you're a junior researcher (master/grad student, early postdoc) – don't forget to sign up for the #PhD Symposium! 🧠🎓🥳 bit.ly/bc20_phd

srdjan ostojic (@ostojic_srdjan)

During my physics undergrad, I never heard of the Singular Value Decomposition (SVD). Why? Almost all matrices in physics are symmetric, and in that case the SVD reduces to the eigenvalue decomposition (up to the signs of the eigenvalues). 1/n
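
The reduction is easy to check numerically. A minimal NumPy sketch (an illustration added here, not part of the thread): for a symmetric matrix, the singular values are the absolute values of the eigenvalues, and the singular vectors are eigenvectors up to sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric matrix.
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2

# Eigendecomposition: A = Q diag(w) Q^T, with Q orthogonal.
w, Q = np.linalg.eigh(A)

# SVD: A = U diag(s) V^T.
U, s, Vt = np.linalg.svd(A)

# For symmetric A, singular values = |eigenvalues|; the sign of each
# negative eigenvalue is absorbed into the corresponding singular vector.
assert np.allclose(np.sort(s), np.sort(np.abs(w)))
print("singular values:", np.round(np.sort(s), 3))
print("|eigenvalues|: ", np.round(np.sort(np.abs(w)), 3))
```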

Friedemann Zenke (@hisspikeness)

I'm accepting applications for PhD students in computational neuroscience who have a keen interest in understanding information processing in spiking neural networks. A particular focus is on cell-type diversity, sparsity, and synaptic plasticity: zenkelab.org/jobs/

Tim Kietzmann (@timkietzmann)

Are you using DNNs in your work? Then our new paper may be of interest to you: "Individual differences among deep neural network models", now out in Nature Communications: rdcu.be/caG90 A quick run-through:
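
Comparisons of this kind are commonly made with representational similarity analysis (RSA). Below is a minimal sketch of such a second-order comparison, with random placeholder activations standing in for the same layer of two networks trained from different seeds (an illustration, not the paper's released code):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Representational dissimilarity matrix (condensed form):
    1 - Pearson correlation between the activation patterns of
    each pair of stimuli."""
    return pdist(activations, metric="correlation")

# Placeholder activations: rows = stimuli, columns = units. In practice
# these would come from the same layer of two networks that share an
# architecture but were trained from different random seeds.
rng = np.random.default_rng(1)
acts_net_a = rng.standard_normal((100, 512))
acts_net_b = rng.standard_normal((100, 512))

# Second-order similarity: how alike are the two networks'
# representational geometries?
rho, _ = spearmanr(rdm(acts_net_a), rdm(acts_net_b))
print(f"RDM similarity (Spearman rho): {rho:.3f}")
```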

Nasir Ahmad (@nasiryahm)

Check it out! The predictive coding architecture (error and prediction units) simply emerges if you constrain energy (unit output and weight magnitudes) in an RNN model. Super fun work; it was awesome to be part of the team 🤸
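
To make the constraint concrete, here is a minimal sketch of an RNN objective that penalises unit outputs and weight magnitudes, as described above. The architecture, penalty form, and coefficients (l_act, l_w) are illustrative assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class EnergyConstrainedRNN(nn.Module):
    """Vanilla RNN whose training loss will penalise 'energy':
    the magnitudes of unit outputs and of the weights."""

    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)            # (batch, time, hidden)
        return self.readout(h), h

def loss_fn(model, y_hat, y, h, l_act=1e-3, l_w=1e-4):
    task = nn.functional.mse_loss(y_hat, y)
    activity = h.pow(2).mean()                                  # unit-output energy
    weights = sum(p.pow(2).sum() for p in model.parameters())   # weight energy
    return task + l_act * activity + l_w * weights

# Usage on random data.
model = EnergyConstrainedRNN(10, 64, 3)
x = torch.randn(8, 50, 10)            # (batch, time, features)
y = torch.randn(8, 50, 3)
y_hat, h = model(x)
print(loss_fn(model, y_hat, y, h))
```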

Tim Kietzmann (@timkietzmann)

New paper & resource alert: our ecoset project is finally out and you are all invited to use it. A thread. 1/n doi.org/10.1073/pnas.2…

Christian Pehle (@christianpehle)

Happy that our work on Backpropagation in Spiking Neural Networks is published! We show that it is possible to compute gradients in networks of LIF neurons exactly and in an event-driven fashion. Great collaboration with Timo Wunderlich! (1/n) rdcu.be/cmMAq
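
For readers new to the neuron model: below is a minimal forward simulation of a single leaky integrate-and-fire (LIF) neuron, the model the paper differentiates through. Parameters are illustrative, and the exact, event-driven gradient computation that is the paper's contribution is not sketched here:

```python
import numpy as np

def simulate_lif(i_in, dt=1e-3, tau=2e-2, v_th=1.0, v_reset=0.0):
    """Forward-Euler simulation of a single LIF neuron:
    tau * dV/dt = -V + I(t), with a spike and reset whenever
    V crosses the threshold v_th."""
    v, spike_times = 0.0, []
    for t, i_t in enumerate(i_in):
        v += dt / tau * (-v + i_t)    # Euler step of the membrane ODE
        if v >= v_th:                 # threshold crossing = spike event
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold input produces a regular spike train.
print(simulate_lif(np.full(1000, 1.5))[:5])
```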

Artificial Cognitive Systems (@artcogsys)

We (Nasir Ahmad, Ellen Schrader & Marcel van Gerven) are proud to share our recent paper: Constrained Parameter Inference as a Principle for Learning! arxiv.org/abs/2203.13203