Dimitris Papailiopoulos (@dimitrispapail)'s Twitter Profile
Dimitris Papailiopoulos

@dimitrispapail

researcher @MSFTResearch; prof @wisconsin (on leave); thinking about transformers; learning in context; babas of Inez Lily

ID: 573817445

Link: http://papail.io | Joined: 07-05-2012 17:26:48

6.6K Tweets

12.1K Followers

991 Following

Amin Karbasi (@aminkarbasi)'s Twitter Profile Photo

We are hiring research scientists at Robust Intelligence. If you are interested and also attending the ICML Conference, please find me (or pm me) so that we can chat. I will be around from Monday until Saturday.

Dimitris Papailiopoulos (@dimitrispapail)'s Twitter Profile Photo

Congratulations to the one and only Steve Wright! You likely know Steve from his scientific contributions and great books. And if you're extra lucky and have met him, you've also experienced firsthand what an amazing person he is. Cheers to one of the *absolutely* greatest!

Michael Li (@lzy_michael)'s Twitter Profile Photo

This short report proves that a small, one-layer transformer cannot perform the induction head task: arxiv.org/abs/2408.14332
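
For context, the induction head task asks a model to complete the pattern "... A B ... A -> B": when a token repeats, predict the token that followed its earlier occurrence. A minimal sketch of generating such examples (my own illustration, not necessarily the paper's setup):

```python
import random

def induction_sample(vocab_size=26, length=12, seed=None):
    """One induction-head example: a random token sequence whose final
    token repeats an earlier one; the target is the token that followed
    that earlier occurrence (... A B ... A -> B). For simplicity this
    ignores the case where A happens to appear more than once."""
    rng = random.Random(seed)
    seq = [rng.randrange(vocab_size) for _ in range(length)]
    i = rng.randrange(length - 1)  # position of the earlier A
    seq.append(seq[i])             # repeat A as the final token
    return seq, seq[i + 1]         # target is B, the token after A

seq, target = induction_sample(seed=0)
print(seq, "->", target)
```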

Ben Recht (@beenwrekt)'s Twitter Profile Photo

This semester, I’m back to live blogging my course lectures. I’m teaching Convex Optimization in the Age of LLMs. argmin.net/p/convex-optim…

Dimitris Papailiopoulos (@dimitrispapail)'s Twitter Profile Photo

interesting paper with counterintuitive findings: synthetic data from cheaper/weaker models can lead to better finetuning results than sampling from a more expensive/capable model, since you can sample far more at the same price. As long as there's signal, that is :)
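
A hedged back-of-the-envelope version of the compute-matched argument; the prices and token counts below are made-up placeholders, not numbers from the paper:

```python
# At a fixed dollar budget, a cheaper model yields proportionally more
# synthetic samples; finetuning on the larger (noisier) set can win.
budget_usd = 100.0
tokens_per_sample = 1_000
price_per_mtok = {"weak_model": 0.50, "strong_model": 15.00}  # $/1M tokens, hypothetical

for model, price in price_per_mtok.items():
    cost_per_sample = price * tokens_per_sample / 1_000_000
    print(f"{model}: {int(budget_usd / cost_per_sample):,} samples for ${budget_usd:.0f}")
# weak_model: 200,000 samples; strong_model: 6,666 samples -- a 30x gap.
```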

Jordan Ellenberg (@jsellenberg)'s Twitter Profile Photo

The UW-Madison philosophy department is aiming for 3-6 hires this year, in any area, with both junior and senior lines available! jobs.wisc.edu/jobs/professor… jobs.wisc.edu/jobs/professor…

Alex Dimakis (@alexgdimakis)'s Twitter Profile Photo

One of the big problems in AI is that the systems often hallucinate. What does that mean exactly and how do we mitigate this problem, especially for RAG systems?

1. Hallucinations and Factuality

Factuality refers to the quality of being based on generally accepted facts. For
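
The tweet preview cuts off above. As a loose, hypothetical illustration of one RAG mitigation idea (not code from the thread): flag answer sentences that have little support in the retrieved passages. Production systems typically use an entailment model rather than this toy token-overlap heuristic.

```python
import re

def support_score(sentence: str, passages: list[str]) -> float:
    """Fraction of the sentence's words that appear in any retrieved passage."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    if not words:
        return 1.0
    context = set(re.findall(r"[a-z]+", " ".join(passages).lower()))
    return len(words & context) / len(words)

def flag_unsupported(answer: str, passages: list[str], threshold: float = 0.5):
    """Return answer sentences whose lexical support falls below threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [s for s in sentences if support_score(s, passages) < threshold]

passages = ["The Eiffel Tower was completed in 1889 in Paris."]
answer = "The Eiffel Tower was completed in 1889. It is 500 meters tall."
print(flag_unsupported(answer, passages))  # -> ['It is 500 meters tall.']
```
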
Gagan Bansal (@bansalg_)'s Twitter Profile Photo

Excited to share pre-print on AutoGen Studio! 🤖🤖🤖

"We present #AutoGen Studio, a no-code developer tool for rapidly prototyping, debugging, and evaluating multi-agent workflows built upon the AutoGen framework."

Paper: arxiv.org/abs/2408.15247 
Code: github.com/microsoft/auto…
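
For readers new to the underlying framework, a minimal two-agent AutoGen sketch (pip install pyautogen); the model name and API key are placeholders, and this is my illustration rather than an example from the AutoGen Studio paper:

```python
from autogen import AssistantAgent, UserProxyAgent

# Placeholder credentials -- substitute a real config before running.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",     # fully automated round-trip
    code_execution_config=False,  # disable local code execution
)

# The two agents exchange messages until the task is resolved.
user_proxy.initiate_chat(assistant, message="Summarize arxiv.org/abs/2408.15247")
```

AutoGen Studio layers a no-code UI on top of exactly this kind of agent workflow.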