Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile
Oriol Vinyals

@oriolvinyalsml

VP of Research & Deep Learning Lead, Google DeepMind. Gemini co-lead.

Past: AlphaStar, AlphaFold, AlphaCode, WaveNet, seq2seq, distillation, TF.

ID: 3918111614

Link: https://scholar.google.com/citations?user=NkzyCvUAAAAJ&hl=en

Joined: 16-10-2015 21:59:52

1.1K Tweets

169,169 Followers

84 Following

rohan anil (@_arohan_)'s Twitter Profile Photo

Andrej Karpathy Lucas Beyer (bl16) The team is working hard to bring audio inputs to the AI Studio interface for Gemini 1.5 Pro. We have an internal version that handles audio and video and can sample the video less frequently to increase the length of content that can be handled. Andrej Karpathy, thanks for the

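A rough back-of-the-envelope sketch of why sampling video less frequently extends the length of content that fits in a context window. The token-per-frame and window-size numbers below are hypothetical illustrations, not official figures:

```python
def video_hours_that_fit(context_tokens: int, tokens_per_frame: int, fps: float) -> float:
    """Hours of video that fit in a context window when frames are
    sampled at `fps` frames/second, each costing `tokens_per_frame` tokens."""
    frames = context_tokens / tokens_per_frame
    seconds = frames / fps
    return seconds / 3600.0

# Hypothetical numbers: a 1M-token window, ~256 tokens per sampled frame.
print(video_hours_that_fit(1_000_000, 256, 1.0))  # sampling at 1 fps: ~1.1 hours
print(video_hours_that_fit(1_000_000, 256, 0.2))  # sampling at 0.2 fps: ~5.4 hours
```

Halving the sampling rate doubles the playable length that fits, at the cost of temporal detail.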
Mckay Wrigley (@mckaywrigley)'s Twitter Profile Photo

The future of fixing bugs? Just record them. I filmed 3 separate bugs in an app and gave the videos to Gemini 1.5 Pro with my entire codebase. It correctly identified & fixed each one. AI is improving insanely fast.

Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

We are rolling out Gemini 1.5 Pro API so that you can keep building amazing stuff on top of the model like we've seen in the past few weeks.

Also, if you just want to play with Gemini 1.5, we removed the waitlist: aistudio.google.com

Last, but not least, we pushed the model
Logan Kilpatrick (@officiallogank)'s Twitter Profile Photo

New Google developer launch today:

- Gemini 1.5 Pro is now available in 180+ countries via the Gemini API in public preview
- Supports audio (speech) understanding capability, and a new File API to make it easy to handle files
- New embedding model!

developers.googleblog.com/2024/04/gemini…

Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

Gemini 1.5 Pro has entered the (LMSys) Arena! Some highlights:

- The only "mid" tier model at the highest level alongside "top" tier models from OpenAI and Anthropic ♊️
- The model excels at multimodal, and long context (not measured here) 🐍
- This model is also state-of-the-art

Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

Live from #GoogleIO, we’re announcing significant updates to our Gemini family of models. More multimodal, real time, faster, better, longer context... We can’t wait to see the amazing things people will do with this tech, as we continue making it more helpful and useful for

Jeff Dean (@🏡) (@jeffdean)'s Twitter Profile Photo

Gemini 1.5 Model Family: Technical Report updates now published

In the report we present the latest models of the Gemini family – Gemini 1.5 Pro and Gemini 1.5 Flash, two highly compute-efficient multimodal models capable of recalling and reasoning over fine-grained information
Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

For Reinforcement Learning to be successful on top of LLMs, it is critical to have a very powerful and accurate reward model. Reward in most language tasks isn't as clearly defined as, say, in chess, where the winning condition is a simple computation.

Generative reward models
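To make the chess comparison concrete — a minimal sketch with hypothetical function names, not any production reward code. In a game the terminal reward is an exact computation, and some language tasks (e.g. math answers) still admit a programmatic check, but open-ended generation does not, which is where learned (generative) reward models come in:

```python
def chess_style_reward(winner: str, player: str) -> float:
    # In games, the winning condition is a simple, exact computation.
    if winner == "draw":
        return 0.5
    return 1.0 if winner == player else 0.0

def exact_match_reward(model_answer: str, reference: str) -> float:
    # Verifiable language tasks (e.g. final math answers) still admit
    # a programmatic reward: normalize and compare.
    return 1.0 if model_answer.strip().lower() == reference.strip().lower() else 0.0

# Open-ended tasks (summaries, dialogue) have no such check; there the
# reward itself must come from a trained model scoring the text.
print(chess_style_reward("white", "white"))   # 1.0
print(exact_match_reward(" 42 ", "42"))       # 1.0
print(exact_match_reward("forty-two", "42"))  # 0.0 -- motivates learned reward models
```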
lmsys.org (@lmsysorg)'s Twitter Profile Photo

Big news – Gemini 1.5 Flash, Pro and Advanced results are out!🔥

- Gemini 1.5 Pro/Advanced at #2, closing in on GPT-4o
- Gemini 1.5 Flash at #9, outperforming Llama-3-70b and nearly reaching GPT-4-0125 (!)

Pro is significantly stronger than its April version. Flash’s cost,
Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

Gemma 2 has arrived! The arena for open models is also heating up. Extra exciting as 27B is ~3x smaller than Llama3 70B and ~10x smaller than NVIDIA's Nemotron 340B 🔥♊️💙

Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

AI is beating me at the things I love, one step at a time (coding, StarCraft, mathematics, ...). AlphaProof solved the most difficult 2024 IMO problem (P6). Answer doesn't fit in this tweet 🥈

Oriol Vinyals (@oriolvinyalsml)'s Twitter Profile Photo

Our experimental version of Gemini 1.5 Pro in AI Studio debuted at #1 on the LMSys leaderboard. 1300 ELO, nice round number 🔥 With the Gemini team, we are focused on both improving our models by iterating and finessing our recipe (deep learning FTW!), and also on massive bets
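For readers unfamiliar with the rating scale, the standard Elo expected-score formula gives a feel for what a rating gap means (this is the textbook formula, not a claim about the leaderboard's exact methodology):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score: the share of points player A
    is expected to take from player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A model rated 1300 against one rated 1250: a 50-point gap
# corresponds to roughly a 57% expected score.
print(round(elo_expected_score(1300, 1250), 3))  # 0.571
```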

sarah guo // conviction (@saranormous)'s Twitter Profile Photo

New 🔥 No Priors drop: Oriol Vinyals, VP Deep Learning, Google DeepMind. Topics:

- Gemini
- infinite context windows
- where compute goes: pre-training/post-training/test-time search
- reward modeling beyond games
- how we should prep for AGI