Jeremy R Cole (@jeremy_r_cole)'s Twitter Profile
Jeremy R Cole

@jeremy_r_cole

Google DeepMind (NLP) | PhD from Penn State
Interested in question answering, information retrieval, cognitive/social linguistics, and beer.

ID: 1409589671194025984

Link: https://jrc436.github.io/
Joined: 28-06-2021 19:09:07

195 Tweets

373 Followers

286 Following

Jeremy R Cole (@jeremy_r_cole):

Actually the best way to get support at Google is to start working on the product. Writing an EMNLP paper might be a close second though.

Jeremy R Cole (@jeremy_r_cole):

I do earnestly think that page (and character) limits lead to better-quality writing. I don't like arXiv 40-pagers or Twitter Blue (or whatever it's called) essays.

Jinhyuk Lee (@leejnhk):

Introducing Gecko 🦎, a new text embedding model from Google DeepMind! Distilled from LLMs, Gecko offers powerful embeddings for various NLP tasks. Gecko is now available in the Google Cloud API 👉bit.ly/google-gecko-a…

Paper: bit.ly/google-gecko
Colab: bit.ly/google-gecko-c…
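A note on how embedding models like Gecko are used downstream: documents and queries are mapped to vectors, and relevance is scored by cosine similarity. The sketch below is a toy illustration of that usage pattern only; the `embed` function is a stand-in (a character-frequency vector), not the real Gecko model or the Google Cloud API.

```python
import math
from collections import Counter

def embed(text: str) -> dict:
    """Toy stand-in for a text embedding model: a unit-normalized
    character-frequency vector. Real models (e.g. Gecko) return dense
    learned vectors; only the comparison pattern below carries over."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {c: v / norm for c, v in counts.items()} if norm else {}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse unit vectors."""
    return sum(v * b.get(k, 0.0) for k, v in a.items())

# Rank candidate documents against a query by embedding similarity.
query = embed("text embedding models")
docs = {"related": embed("models for embedding text"),
        "unrelated": embed("zzzz qqqq")}
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

With a real embedding service, only `embed` changes; the ranking loop stays the same.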
Jeremy R Cole (@jeremy_r_cole):

Gecko 🦎 is like Promptagator 🐊 in that it's also a reptile, but different in that it's a single model that changes uh... colors to blend in with its uh... surrounding task. Other important differences include being available through the Google Cloud API and me being on the paper.

Aran Komatsuzaki (@arankomatsuzaki):

Google presents "Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?"

Long-context LM:
- Often rivals SotA retrieval and RAG systems
- But still struggles with areas like compositional reasoning

repo: github.com/google-deepmin…
abs: arxiv.org/abs/2406.13121
Jinhyuk Lee (@leejnhk):

Can long-context language models (LCLMs) subsume retrieval, RAG, SQL, and more?

Introducing LOFT: a benchmark stress-testing LCLMs on million-token tasks like retrieval, RAG, and SQL. Surprisingly, LCLMs rival specialized models trained for these tasks!

arxiv.org/abs/2406.13121
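The LOFT-style evaluation replaces a retrieval pipeline with "corpus-in-context" prompting: the whole (small) corpus goes into the long-context model's prompt and the model answers directly. A minimal sketch of building such a prompt, assuming illustrative formatting (doc-id tags, instruction wording) that is not the paper's exact template:

```python
def corpus_in_context_prompt(corpus: list[str], question: str) -> str:
    """Pack an entire corpus into a single prompt for a long-context LM,
    instead of retrieving a top-k subset with a separate retriever."""
    lines = ["You are given a corpus of documents. "
             "Answer using only the corpus."]
    # Tag each document so the model can cite which one it used.
    for i, doc in enumerate(corpus):
        lines.append(f"[Doc {i}] {doc}")
    lines.append(f"Question: {question}")
    lines.append("Answer (cite the doc id):")
    return "\n".join(lines)

corpus = ["Gecko is a text embedding model.",
          "LOFT is a benchmark for long-context language models."]
prompt = corpus_in_context_prompt(corpus, "What is LOFT?")
```

The trade-off this sketch makes visible: no index or retriever to build, but prompt length grows linearly with the corpus, which is exactly the million-token regime LOFT stress-tests.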
Sebastian Riedel (@riedelcastro@sigmoid.social) (@riedelcastro):

"just put the corpus into the context"! Long context models can already match or beat various bespoke pipelines and infra in accuracy on non-trivial tasks! Hadn't expected this so soon, and honestly was hoping to milk RAG impact for a little longer 🤪

Devendra Singh Sachan (@devendr06654102):

Excited to present a new benchmark, "LOFT", to study long-context language models' ability to do in-context retrieval, RAG, and complex reasoning tasks. We find that long-context models are getting increasingly capable and often rival task-specific experts at 1M-token context lengths.

Kelvin Guu (@kelvin_guu):

Do long-context LMs obsolete retrieval, RAG, SQL and more? Excited to share our answer! arxiv.org/abs/2406.13121 from the team at Google DeepMind that wrote one of the 1st papers on RAG (REALM) and repeat SOTA on retrieval (Promptagator, Gecko). w/ Gemini 1.5 Pro, the answer is 🧵

Hexiang (Frank) Hu (@hexiang_hu):

Ever wondered if long-context language models can also master image, video, and multimodal retrieval? 🌟 Dive into our latest work LOFT! We benchmarked various long-context language models on million-token level retrieval, RAG, and SQL tasks across text, vision, and audio 🚀 #AI

Ming-Wei Chang (@mchang21):

Can long-context models replace retrievers, RAG & SQL? We evaluate them on smaller-scale versions of these tasks and compare them to specialized models in the same settings. We found that *prompting* LLMs performs surprisingly well, generalizing across text, multimodal & other settings!

Kelvin Guu (@kelvin_guu):

Excited to launch and share the new citations feature ("related content") in Gemini! Last fall, we introduced Double-Check (shorturl.at/kFJqF), which checks Gemini's claims against sources on the web. With citations, we're now running automatically on all fact-seeking queries.