Prakhar Gupta (@prakhariitr)'s Twitter Profile
Prakhar Gupta

@prakhariitr

Research Scientist at Google. PhD from CMU | Ex-AdobeResearch | IIT Roorkee

ID:704625438

Link: https://prakharguptaz.github.io/ | Joined: 19-07-2012 08:11:26

35 Tweets

192 Followers

340 Following

Prakhar Gupta(@prakhariitr) 's Twitter Profile Photo

📢 Introducing USB: A Unified Summarization Benchmark Across Tasks and Domains! 🧵

🌟 Our benchmark consists of 8 diverse tasks, including evidence extraction, factuality, and controllable summarization tasks.

Pranjal Aggarwal(@PranjalAggarw16) 's Twitter Profile Photo

🤔 Can we reduce the cost of reasoning in large language models while maintaining accuracy? Introducing our new paper: 'Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs', a dynamic alternative to Self-Consistency!

🌐: sample-step-by-step.info

🧵

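The core idea of Adaptive-Consistency (sample answers incrementally and stop early once the majority answer looks stable, instead of always drawing a fixed budget as in vanilla Self-Consistency) can be sketched as below. This is a simplified illustration: `sample_answer` is a hypothetical callable standing in for one LLM reasoning sample, and the majority-share threshold is a crude stand-in for the paper's actual stopping criterion.

```python
from collections import Counter

def adaptive_consistency(sample_answer, max_samples=40, threshold=0.95):
    """Draw answers one at a time; stop early when the leading answer
    holds a large enough share of the samples drawn so far."""
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_answer()] += 1          # one more LLM sample
        top_answer, top_count = counts.most_common(1)[0]
        # Simplified stability check (the paper derives a principled
        # stopping criterion; this fixed share threshold is an assumption).
        if n >= 3 and top_count / n >= threshold:
            return top_answer, n
    return counts.most_common(1)[0][0], max_samples
```

When the model's answers agree early, the loop terminates after a handful of samples rather than the full budget, which is where the cost savings come from.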
Prakhar Gupta(@prakhariitr) 's Twitter Profile Photo

📢 Meet Self-Refine: Iterative Refinement with Self-Feedback, our latest research on LLMs!

🤖 Self-Refine enables LLMs to refine their outputs via self-generated feedback, without human intervention. Explore the potential of self-refinement in AI!

selfrefine.info

John Nay(@johnjnay) 's Twitter Profile Photo

LLMs Can Iteratively Self-Refine

- LLM creates draft
- Provides its own feedback
- Iteratively refines

On all 7 eval tasks (review & code rewriting, toxicity removal, responses, acronyms, stories, etc.), outputs are preferred by humans & by automated metrics.

arxiv.org/abs/2303.17651
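The draft → feedback → refine loop summarized above can be sketched as follows. Here `llm` is a hypothetical text-in/text-out callable, and the stop condition is an assumption for illustration; the paper's actual prompts and stopping criteria are task-specific.

```python
def self_refine(llm, task_prompt, max_iters=3):
    """Sketch of the Self-Refine loop: the same model drafts a response,
    critiques its own draft, then revises, with no human in the loop."""
    draft = llm(task_prompt)
    for _ in range(max_iters):
        feedback = llm(f"Give feedback on this response:\n{draft}")
        if "no issues" in feedback.lower():   # assumed stop condition
            break
        draft = llm(f"Revise the response using the feedback.\n"
                    f"Response: {draft}\nFeedback: {feedback}")
    return draft
```

The key design point is that the same frozen model plays all three roles (author, critic, editor), so no extra training or human feedback is required.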

Kundan Krishna(@kundan_official) 's Twitter Profile Photo

In a new preprint, we investigate how summarization models are affected by the presence of noise in the input, and propose a way to improve robustness without prior knowledge of the type of noise that may be present (as is often the case in real-world scenarios)
arxiv.org/abs/2212.09928
