Baharan Mirzasoleiman (@baharanm)'s Twitter Profile
Baharan Mirzasoleiman

@baharanm

Assistant professor @UCLAComSci. Better ML via better data, Machine learning, Optimization

ID: 1018575261896339456

Link: http://web.cs.ucla.edu/~baharan/ · Joined: 15-07-2018 19:17:21

61 Tweets

1.1K Followers

263 Following

Baharan Mirzasoleiman (@baharanm)

My PhD student Yihao Xue has received one of the 50 OpenAI Superalignment Fast Grants (out of 2700 applications)! Big congrats Yihao and looking forward to seeing more amazing work from you! 🎉🎉🌱🌱

Baharan Mirzasoleiman (@baharanm)

Is CLIP data hungry? We rigorously showed that one can discard a good portion of CLIP’s massive (pre-)training data without harming its performance! Check out this awesome #AISTATS2024 paper with Siddharth Joshi (Friday, poster #140 @ session 2) 🎉🎉🌱🌱 Paper: arxiv.org/pdf/2403.12267
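
For a sense of what pruning pretraining data can look like, here is a minimal sketch of a common filtering baseline, not the paper's subset-selection method: score each image-caption pair by cross-modal similarity under a pretrained CLIP model and keep only the top fraction. The `encode_image`/`encode_text` interface is an assumption about the environment.

```python
# Hedged sketch, not the paper's subset-selection method: score each
# image-caption pair by cross-modal similarity and keep the top fraction.
import torch
import torch.nn.functional as F

@torch.no_grad()
def pair_scores(model, images, texts):
    """Cosine similarity between each paired image and caption embedding."""
    img = F.normalize(model.encode_image(images), dim=-1)  # assumed interface
    txt = F.normalize(model.encode_text(texts), dim=-1)
    return (img * txt).sum(dim=-1)  # one score per (image, caption) pair

def keep_top_fraction(scores, frac=0.5):
    """Indices of the top `frac` of pairs; the remainder can be discarded."""
    k = max(1, int(frac * scores.numel()))
    return scores.topk(k).indices
```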

Baharan Mirzasoleiman (@baharanm)

Simplicity Bias (SB) makes deep models learn spurious correlations. But SB can also be used to eliminate them! Check out this nice #AISTATS2024 paper with Yu Yang where this is rigorously proved and used to achieve SOTA worst-group accuracy: arxiv.org/pdf/2305.18761 (Sat. P#67) 🌱
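
A minimal sketch of the general recipe, not the paper's exact algorithm: because of simplicity bias, a briefly trained model latches onto the spurious feature first, so its mistakes flag bias-conflicting examples that can be upsampled when training the final model. The helper below is hypothetical and assumes a standard classification dataset.

```python
# Hedged sketch (not the paper's exact algorithm): exploit simplicity bias by
# training a model for only a few epochs, then upweighting the examples it
# misclassifies -- these likely conflict with the spurious correlation.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def bias_conflicting_weights(early_model, dataset, upweight=10.0):
    """Sampling weights: `upweight` for examples the early model gets wrong."""
    early_model.eval()
    weights = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=256):
            preds = early_model(x).argmax(dim=1)
            weights += [upweight if p != t else 1.0 for p, t in zip(preds, y)]
    return torch.tensor(weights)

# Usage: train `early_model` for a few epochs first, then retrain with
#   sampler = WeightedRandomSampler(bias_conflicting_weights(early_model, ds),
#                                   num_samples=len(ds))
#   loader = DataLoader(ds, batch_size=128, sampler=sampler)
```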

Baharan Mirzasoleiman (@baharanm)

Why is CLIP more robust to distribution shift than supervised learning? This #ICLR2024 paper provides the first rigorous proof! TL;DR: details specified in the captions allow learning more generalizable features from images. Check it out: Tue, PS#1, P#113 arxiv.org/pdf/2319.04971

Baharan Mirzasoleiman (@baharanm)

Dataset distillation methods can only distill the early training dynamics. We showed in this #ICLR2024 paper that generating multiple synthetic subsets to capture different training stages improves performance! Tue PS#2 Halle B#9 arxiv.org/pdf/2310.06982 Yu Yang Xuxi Chen
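
A rough sketch of the multi-stage idea using plain gradient matching (the paper's actual objective and training details may differ): learn one synthetic subset per stage, advance a reference model between stages on real data, and later train a fresh model on the subsets in order. Shapes and hyperparameters are illustrative.

```python
# Hedged sketch of multi-stage distillation via gradient matching, not the
# paper's exact method. `real_batches` is a callable returning an (x, y) batch.
import torch
import torch.nn.functional as F

def distill_multistage(real_batches, model_fn, n_classes=10, ipc=10,
                       n_stages=3, steps=100, lr_syn=0.1, lr_model=0.01):
    model = model_fn()  # reference model; must accept the image shapes below
    opt_model = torch.optim.SGD(model.parameters(), lr=lr_model)
    stages = []
    for _ in range(n_stages):
        # illustrative CIFAR-like shapes: `ipc` synthetic images per class
        syn_x = torch.randn(ipc * n_classes, 3, 32, 32, requires_grad=True)
        syn_y = torch.arange(n_classes).repeat_interleave(ipc)
        opt_syn = torch.optim.SGD([syn_x], lr=lr_syn)
        for _ in range(steps):
            x, y = real_batches()
            g_real = torch.autograd.grad(F.cross_entropy(model(x), y),
                                         model.parameters())
            g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                        model.parameters(), create_graph=True)
            match = sum(((a - b.detach()) ** 2).sum()
                        for a, b in zip(g_syn, g_real))
            opt_syn.zero_grad(); match.backward(); opt_syn.step()
        stages.append((syn_x.detach(), syn_y))
        for _ in range(steps):  # move the reference model to the next stage
            x, y = real_batches()
            opt_model.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt_model.step()
    return stages  # train a fresh model on stages[0], then stages[1], ...
```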

Baharan Mirzasoleiman (@baharanm)

Why does the projection head benefit representation learning? It allows learning a broader range of features, as we rigorously proved in this #ICLR2024 paper! Yihao Xue also proposed an alternative to the projection head! 🙌🎉🌱 Thu, PS#5, Hall B #119 arxiv.org/pdf/2403.11391
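
For context, a minimal sketch of the standard setup the paper analyzes (its proposed alternative to the projection head is not reproduced here): the contrastive loss only sees the projected output z, while downstream tasks reuse the pre-projection representation h, which retains a broader set of features.

```python
# Hedged sketch of the SimCLR-style projection-head setup: the contrastive
# loss is applied to z = g(h); downstream tasks use the representation h.
import torch.nn as nn

class EncoderWithHead(nn.Module):
    def __init__(self, backbone, feat_dim=512, proj_dim=128):
        super().__init__()
        self.backbone = backbone                    # e.g. a ResNet trunk
        self.head = nn.Sequential(                  # projection head g(.)
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        h = self.backbone(x)   # representation kept for downstream tasks
        z = self.head(h)       # projection fed to the contrastive loss only
        return h, z
```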

Baharan Mirzasoleiman (@baharanm)

Double descent confirms the benefit of larger models. But when there is label noise in the data, a larger model size can hurt performance! We call this phenomenon "Final Ascent". Check out this interesting #UAI2024 spotlight by Yihao Xue: arxiv.org/pdf/2208.08003 🙌🌱
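
An illustrative way to probe the phenomenon, not the paper's experimental setup: inject label noise, sweep model width, and evaluate on clean labels; with enough noise, test accuracy can stop improving and eventually degrade as width grows. All data, shapes, and hyperparameters below are made up for the demo.

```python
# Illustrative experiment sketch (not the paper's setup): train MLPs of
# increasing width on noisy labels and test on clean ones.
import torch
import torch.nn.functional as F

def run(width, noise=0.3, n=2000, d=20, epochs=200, seed=0):
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, d, generator=g)
    y = (X[:, 0] > 0).long()                   # simple ground-truth rule
    flip = torch.rand(n, generator=g) < noise  # noisy training labels
    y_tr = torch.where(flip, 1 - y, y)
    model = torch.nn.Sequential(torch.nn.Linear(d, width), torch.nn.ReLU(),
                                torch.nn.Linear(width, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(X[:1000]), y_tr[:1000]).backward()
        opt.step()
    with torch.no_grad():                      # clean labels at test time
        acc = (model(X[1000:]).argmax(1) == y[1000:]).float().mean()
    return acc.item()

for w in [4, 16, 64, 256, 1024]:
    print(w, run(w))
```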

Baharan Mirzasoleiman (@baharanm)

Graph Contrastive Learning (GCL) has shown great promise in learning node representations. But, under heterophily, existing GCL methods fail. Check out this nice #UAI2024 paper by WENHAN YANG that addresses this problem using graph filters! 🙌🌱 arxiv.org/pdf/2303.06344
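
A minimal sketch of the graph-filter intuition, not the paper's exact method: a low-pass filter (neighbor averaging) suits homophilous graphs, while a high-pass filter keeps the differences from neighbors that carry the signal under heterophily; contrastive views can be built from both. Dense matrices are used for clarity.

```python
# Hedged sketch: low-pass vs. high-pass graph filters on node features X
# given a dense adjacency matrix A (illustrative, not the paper's method).
import torch

def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def low_pass(X, A, k=2):
    """Neighbor averaging: smooths features (good under homophily)."""
    S = normalized_adj(A)
    for _ in range(k):
        X = S @ X
    return X

def high_pass(X, A, k=2):
    """Normalized-Laplacian filter: keeps differences from neighbors
    (preserves signal under heterophily)."""
    L = torch.eye(A.size(0)) - normalized_adj(A)
    for _ in range(k):
        X = L @ X
    return X
```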

Baharan Mirzasoleiman (@baharanm)

ML models are sensitive to distribution shift. Can we adapt a model with only a few examples from the target domain? In this #ICML2024 paper, Yihao Xue proposes an effective way, with nice theoretical analysis🌱 🔗arxiv.org/pdf/2305.14521 Thu, July 25, Poster session 5, #800

Baharan Mirzasoleiman (@baharanm)

CLIP is highly sensitive to data poisoning and backdoor attacks. In this #ICML2024 paper, WENHAN YANG proposed an interesting way to pretrain CLIP so that it is robust to such attacks without compromising performance! 🌱🌱 🔗arxiv.org/pdf/2310.05862 Thu, July 25, Poster session 6, #814

sijia.liu (@sijialiu17)

The 3rd AdvML-Frontiers Workshop (AdvMLFrontiers advml-frontier.github.io) is set for #NeurIPS 2024 (NeurIPS Conference)! This year, we're delving into the expansion of the trustworthy AI landscape, especially in large multi-modal systems. Trustworthy ML Initiative (TrustML) LLM Security🚀 We're now
