Rogerio Feris (@rogerioferis) 's Twitter Profile
Rogerio Feris

@rogerioferis

Principal scientist and manager at the MIT-IBM Watson AI Lab

ID: 1226554235770298369

Website: http://rogerioferis.org
Joined: 09-02-2020 17:11:45

45 Tweets

1.1K Followers

352 Following

Rameswar Panda (@rpanda89) 's Twitter Profile Photo

Very happy to announce that our work "Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation" received the Best Paper - Honorable Mention award at #WACV2023! MIT-IBM Watson AI Lab

Sivan Doveh (@sivandoveh) 's Twitter Profile Photo

Our paper "Teaching Structured VL Concepts to VL Models" just got accepted to #CVPR2023 🤩🤩🤩 arxiv.org/abs/2211.11733

James Smith (@jamessealesmith) 's Twitter Profile Photo

Happy to share that we had two papers accepted to #CVPR2023! Both are on continual adaptation of pre-trained models (ViT for image classification and BLIP for NLVR).

More details (and code) will be coming soon! 

arxiv.org/abs/2211.13218
arxiv.org/abs/2211.09790
John Nay (@johnjnay) 's Twitter Profile Photo

Multi-Task Prompt Tuning Enables Transfer Learning

-Learn single prompt from multiple task-specific prompts
-Learn multiplicative low rank updates to adapt it to tasks

Parameter-efficient & state-of-the-art performance across diverse NLP tasks

Paper: arxiv.org/abs/2303.02861
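The decomposition the tweet describes — one shared prompt modulated per task by a multiplicative low-rank update — can be sketched in a few lines of NumPy. Shapes and names here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
prompt_len, dim, rank = 10, 64, 1  # illustrative sizes

# A single shared prompt, distilled from multiple task-specific prompts
# (learned in practice; random here just to show the shapes).
shared_prompt = rng.standard_normal((prompt_len, dim))

def task_prompt(u, v, shared=shared_prompt):
    """Adapt the shared prompt with a multiplicative low-rank update:
    P_task = P_shared * (u @ v.T), an elementwise (Hadamard) product."""
    return shared * (u @ v.T)

# Per-task low-rank factors: only (prompt_len + dim) * rank parameters per task.
u = rng.standard_normal((prompt_len, rank))
v = rng.standard_normal((dim, rank))

p = task_prompt(u, v)
```

In the paper's setting both the shared prompt and the per-task factors are learned; storing only `u` and `v` per task is what makes the transfer parameter-efficient.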
Rogerio Feris (@rogerioferis) 's Twitter Profile Photo

We are looking for a summer intern (MSc/PhD) to work on large language models for sports & entertainment, with the goal of improving the experience of millions of fans as part of major tournaments (US Open/Wimbledon) IBM Sports & Entertainment MIT-IBM Watson AI Lab Apply at: krb-sjobs.brassring.com/TGnewUI/Search…

MIT-IBM Watson AI Lab (@mitibmlab) 's Twitter Profile Photo

New technique from the MIT-IBM Watson AI Lab and its collaborators learns to "grow" a larger machine-learning model from a smaller, pre-trained model, reducing the monetary and environmental cost of developing AI applications and with similar or improved performance. news.mit.edu/2023/new-techn…

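The article describes the Lab's own growth technique; as a rough, generic illustration of the underlying idea — function-preserving width growth, in the spirit of Net2Net, not the method in the article — a linear layer can be widened by duplicating existing units:

```python
import numpy as np

def widen_linear(W, b, new_out, rng):
    """Grow a linear layer from (out, in) to (new_out, in) by duplicating
    randomly chosen output units (a Net2Net-style sketch). To keep the overall
    network function unchanged, the next layer's incoming weights for each
    duplicated unit must be rescaled across the copies (not shown here)."""
    out_dim, _ = W.shape
    idx = rng.integers(0, out_dim, size=new_out - out_dim)  # units to copy
    W_new = np.vstack([W, W[idx]])
    b_new = np.concatenate([b, b[idx]])
    return W_new, b_new, idx

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
W2, b2, idx = widen_linear(W, b, 6, rng)
```

The appeal of growing from a pre-trained model, as the article notes, is that the larger network starts from learned features instead of random initialization, cutting training cost.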
Dario Gil (@dariogila) 's Twitter Profile Photo

We can all agree we’re at a unique and evolutionary moment in AI, with enterprises increasingly turning to this technology’s transformative power to unlock new levels of innovation and productivity. At #Think2023, IBM unveiled watsonx. Learn more: newsroom.ibm.com/2023-05-09-IBM…

Dmitry Krotov (@dimakrotov) 's Twitter Profile Photo

Recent advances in Hopfield networks of associative memory may be the guiding theoretical principle for designing novel large scale neural architectures. I explain my enthusiasm about these ideas in the article ⬇️⬇️⬇️. Please let me know what you think. nature.com/articles/s4225…
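The link to large-scale architectures comes from the update rule of modern continuous Hopfield networks, which retrieves a stored pattern via a softmax over similarities — essentially the same computation as attention. A toy retrieval sketch (pattern sizes and the inverse temperature `beta` are illustrative):

```python
import numpy as np

def hopfield_retrieve(X, query, beta=8.0, steps=3):
    """Modern (continuous) Hopfield update: xi <- X @ softmax(beta * X.T @ xi).
    Columns of X are stored patterns; a corrupted query converges to the
    nearest stored pattern. Note how closely this mirrors softmax attention."""
    xi = query.copy()
    for _ in range(steps):
        scores = beta * (X.T @ xi)
        p = np.exp(scores - scores.max())   # stable softmax
        xi = X @ (p / p.sum())
    return xi

rng = np.random.default_rng(0)
X = np.eye(6)                               # 6 orthonormal stored patterns
noisy = X[:, 2] + 0.05 * rng.standard_normal(6)
recovered = hopfield_retrieve(X, noisy)
```

With a large `beta` the softmax is sharp and a single update already snaps the query onto the closest memory.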

Dmitry Krotov (@dimakrotov) 's Twitter Profile Photo

What could be the computational function of astrocytes in the brain? We hypothesize that they may be the biological cells that could implement the Transformer's attention operation commonly used in AI. Much improved compared to an earlier preprint: pnas.org/doi/10.1073/pn…
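The operation hypothesized here is standard scaled dot-product attention, softmax(QKᵀ/√d)V; a minimal NumPy sketch for reference:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q @ K.T / sqrt(d)) @ V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
```

Each output row is a convex combination of the rows of `V`, weighted by query-key similarity — the operation the paper maps onto neuron-astrocyte signaling.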

Yann LeCun (@ylecun) 's Twitter Profile Photo

IBM & Meta are launching the AI Alliance to advance *open* & reliable AI. The list of over 50 founding members from industry, government, and academia includes AMD, Anyscale, CERN, Hugging Face, the Linux Foundation, NASA.... ai.meta.com/blog/ai-allian…

Rogerio Feris (@rogerioferis) 's Twitter Profile Photo

We have a cool challenge on understanding document images in our 2nd #CVPR2024 workshop on “What is Next in Multimodal Foundation Models?”, (sites.google.com/view/2nd-mmfm-…). This is a great opportunity to showcase your work in front of a large audience (pic below from our 1st workshop)

Leonid Karlinsky (@leokarlin) 's Twitter Profile Photo

Thanks for the highlight AK! We offer a simple and nearly data-free way to move large quantities of custom PEFT models within or across LLM families, or even across PEFT configurations. Useful for LLM cloud hosting when old base models need to be deprecated & upgraded

Wei Lin (@weilincv) 's Twitter Profile Photo

Welcome to join our workshop to figure out what is next in multimodal foundation models! Tuesday 08:30 Pacific Time, Summit 437-439 at the Seattle Convention Center 🤖

Nasim Borazjanizadeh (@nasimborazjani) 's Twitter Profile Photo

🚨 OpenAI's new o1 model scores only 38.2% in correctness on our new benchmark of combinatorial problems, SearchBench (arxiv.org/abs/2406.12172), while 57.1% is possible with GPT-4 and A* MSMT prompting! 🚨
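The A* referenced here is the classic best-first search with an admissible heuristic; a minimal grid-pathfinding sketch (this is generic A*, not SearchBench's evaluation harness or the MSMT prompting setup):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Manhattan distance is admissible here, so the returned cost is optimal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]            # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                             # cost of an optimal path
        if g > best_g.get(node, float("inf")):
            continue                             # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
cost = astar(grid, (0, 0), (2, 0))               # must detour around the wall
```

The benchmark's point is precisely that an LLM alone struggles with such systematic search, while pairing GPT-4 with an explicit A* procedure closes much of the gap.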