Francesco Orabona (@bremen79) 's Twitter Profile
Francesco Orabona

@bremen79

Dad and associate professor at @KAUST_News.
Formerly @BU_ece, @sbucompsc, @YahooResearch, @TTIC_Connect.
ML theory & practice and history of science

ID: 115426969

Link: https://parameterfree.com/ · Joined: 18-02-2010 16:34:14

2.2K Tweets

6.6K Followers

405 Following

Peter Richtarik (@peter_richtarik) 's Twitter Profile Photo

Announcing the creation of *KAUST Center of Excellence in Generative AI*; official launch on July 1, 2024. Joint with <a href="/BernardSGhanem/">Bernard Ghanem</a> <a href="/SchmidhuberAI/">Jürgen Schmidhuber</a> <a href="/bremen79/">Francesco Orabona</a> <a href="/TheSandyCoder/">David R. Pugh</a> and a few more KAUST colleagues. KAUST funding ($11m over 5 years) + industrial funding. We are looking
Francesco Orabona (@bremen79) 's Twitter Profile Photo

🚨 New blog post: Dynamic Regret and ADER

First, I discuss the dynamic regret and the suboptimal path-length bound of Online Mirror Descent.
Then, I show how to achieve the optimal dynamic regret bound using ADER by Zhang, Lu, and Zhou (NeurIPS'18).

As usual, feedback is welcome!

parameterfree.com/2024/07/01/dyn…
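The blog post itself is not reproduced here, but the idea behind ADER (Zhang, Lu, and Zhou, NeurIPS'18) can be sketched as follows: run several projected online gradient descent experts with geometrically spaced step sizes, and combine their predictions with a Hedge-style meta-algorithm. Everything below is an illustrative simplification, not code from the post; the function names, the step-size grid, and the Hedge rate are my own choices.

```python
import numpy as np

def project_ball(x, D):
    """Euclidean projection onto the ball of radius D."""
    n = np.linalg.norm(x)
    return x if n <= D else x * (D / n)

def ader(loss, grad, T, dim, D=1.0):
    """Sketch of ADER: Hedge over projected-OGD experts with
    geometrically spaced step sizes; returns the average loss of
    the combined predictions over T rounds."""
    n_experts = int(np.ceil(np.log2(np.sqrt(T)))) + 1
    etas = [(D / np.sqrt(T)) * 2**i for i in range(n_experts)]
    experts = [np.zeros(dim) for _ in etas]
    logw = np.zeros(n_experts)                 # Hedge log-weights
    beta = np.sqrt(8 * np.log(n_experts + 1) / T)  # Hedge learning rate
    total = 0.0
    for t in range(T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = sum(wi * xi for wi, xi in zip(w, experts))  # combined play
        total += loss(x, t)
        # meta-update: Hedge on each expert's own loss
        logw -= beta * np.array([loss(xi, t) for xi in experts])
        # expert updates: projected OGD, each with its own step size
        experts = [project_ball(xi - eta * grad(xi, t), D)
                   for xi, eta in zip(experts, etas)]
    return total / T

# demo: slowly drifting quadratic losses f_t(x) = ||x - c_t||^2,
# with targets c_t moving along the unit circle
def c(t):
    return np.array([np.sin(t / 100.0), np.cos(t / 100.0)])

avg = ader(loss=lambda x, t: float(np.sum((x - c(t)) ** 2)),
           grad=lambda x, t: 2.0 * (x - c(t)),
           T=500, dim=2, D=1.0)
```

The actual paper is more careful: the meta-algorithm can use a linearized surrogate loss, and the step-size grid is tied to the possible path lengths of the comparator sequence; the uniform grid and Hedge rate above are simplified stand-ins.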
Aaron Roth (@aaroth) 's Twitter Profile Photo

Congrats to the best paper award winners at COLT 2024! learningtheory.org/colt2024/award… First up, The Price of Adaptivity in Stochastic Convex Optimization by Yair Carmon and Oliver Hinder:

Francesco Orabona (@bremen79) 's Twitter Profile Photo

My student <a href="/keyic_/">Keyi Chen</a> successfully defended her PhD thesis at Boston University on Generalized Implicit Online Convex Optimization. She did great work on implicit algorithms, like arxiv.org/abs/2306.00201.

I am very happy for her!
Peyman Milanfar (@docmilanfar) 's Twitter Profile Photo

Career "advice":
- Live modestly, save money
- Put your family and health first
- Keep learning, don't rest on your laurels
- No jerks: don't work for 'em, don't hire 'em

Delip Rao e/σ (@deliprao) 's Twitter Profile Photo

This is scientific malpractice and an intimidation tactic. Senior AI researchers who work in the same area should talk to Guohao Li and get details that they are not sharing publicly (for good reason), and investigate. It’s not a good idea to “let this slide” as it becomes

Kimmy Bestie of Bunzy, Co-CEO Execubetch™️ K-brat (@easybakedoven) 's Twitter Profile Photo

Twitter just activated a setting, on by default for everyone, that gives them the right to use your data to train Grok. They never announced it. You can disable it on the web, but the option is hidden; you can't disable it from the mobile app.

Direct link: x.com/settings/grok_…
Francesco Orabona (@bremen79) 's Twitter Profile Photo

Optimization people, what do you call this property? There exists L > 0 such that

f(y) - f(x*) <= nabla f(y)' (y - x*) - 1/(2L) ||nabla f(y)||^2,

where x* = argmin_x f(x). This is clearly satisfied by a convex L-smooth function, but it is weaker.
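For reference, the claim in the tweet that any convex L-smooth function satisfies this property follows from the standard lower bound for such functions; a short sketch, using the tweet's notation:

```latex
% For convex, L-smooth f, the standard lower bound holds for all x, y:
f(x) \ge f(y) + \nabla f(y)^\top (x - y) + \frac{1}{2L}\,\|\nabla f(x) - \nabla f(y)\|^2 .
% Take x = x^* and use \nabla f(x^*) = 0:
f(x^*) \ge f(y) + \nabla f(y)^\top (x^* - y) + \frac{1}{2L}\,\|\nabla f(y)\|^2 ,
% which rearranges to exactly the property in the tweet:
f(y) - f(x^*) \le \nabla f(y)^\top (y - x^*) - \frac{1}{2L}\,\|\nabla f(y)\|^2 .
```

The property is weaker because it is only required at the minimizer x*, and it does not by itself force f to be smooth or convex everywhere.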

Stephan Mandt (@stephanmandt) 's Twitter Profile Photo

As an Action Editor for JMLR and frequent (Senior) AC for ML conferences, I can confidently say there's a noticeable difference in how carefully reviewers are selected. If you have a strong paper and aren't in a rush, consider submitting it to JMLR more often.

Chuang Gan (@gan_chuang) 's Twitter Profile Photo

I expect the number of ICLR submissions to rise significantly this year, as the China Computer Federation (CCF) has now classified ICLR as an A-level conference.

Francesco Orabona (@bremen79) 's Twitter Profile Photo

Every time I teach my Online Learning class, the interactions with the students stimulate new remarks and additions to my notes.

This year seems particularly promising 🙂