Andrew Feldman (@andrewdfeldman)'s Twitter Profile
Andrew Feldman

@andrewdfeldman

CEO and Founder @CerebrasSystems. I build teams that solve hard problems, fish small streams, dance tango and love Vizslas

ID: 4443830716

Website: http://www.cerebras.net | Joined: 11-12-2015 02:24:18

420 Tweets

1.1K Followers

196 Following

Cerebras (@cerebrassystems)

Cerebras Co-Founder Deconstructs NVIDIA Blackwell Delays

From intricate interposer designs to alignment issues and thermal expansion complications, Cerebras Co-Founder and Chief System Architect Jean-Philippe Fricker provides a detailed look into the hurdles faced by GPU
LlamaIndex 🦙 (@llama_index)

Need super-fast responses from your LLM? How does 1800 tokens per second for Llama 3.1-8b sound? That's the fastest in the world! Speed is critical to any application, but LLMs can be slow, especially if you have multiple round trips in a complex agentic system. Cerebras
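The point about round trips in agentic systems can be put in rough numbers. A minimal sketch, using the 1800 tokens/s figure quoted above; the round-trip count, per-call output size, and the 60 tokens/s comparison rate are illustrative assumptions, and network and prefill latency are ignored:

```python
# Back-of-envelope latency math for sequential LLM calls in an agent loop.

def generation_seconds(output_tokens: int, tokens_per_second: float) -> float:
    """Time to stream `output_tokens` at a given decode speed."""
    return output_tokens / tokens_per_second

# Hypothetical agentic task: 5 sequential LLM calls, ~400 output tokens each.
round_trips, tokens_per_call = 5, 400
fast = sum(generation_seconds(tokens_per_call, 1800) for _ in range(round_trips))
slow = sum(generation_seconds(tokens_per_call, 60) for _ in range(round_trips))

print(f"at 1800 tok/s: {fast:.2f}s total")  # ~1.11s
print(f"at   60 tok/s: {slow:.2f}s total")  # ~33.33s
```

Because the calls are sequential, decode speed multiplies across every round trip, which is why raw tokens-per-second matters so much for multi-step agents.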

Cerebras (@cerebrassystems)

Cerebras Inference has the industry’s best pricing for high-speed inference:

- 10c per million tokens for Llama 3.1-8B
- 60c per million tokens for Llama 3.1-70B

Try it today: inference.cerebras.ai
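For a sense of what those per-million-token rates mean at volume, here is a small cost sketch. The prices come from the tweet above; the monthly token volume is an illustrative assumption, and real billing may count input and output tokens differently:

```python
# Cost estimate from the quoted prices: $0.10/M tokens (Llama 3.1-8B)
# and $0.60/M tokens (Llama 3.1-70B).

PRICE_PER_MILLION_USD = {"llama3.1-8b": 0.10, "llama3.1-70b": 0.60}

def cost_usd(model: str, tokens: int) -> float:
    """Total cost for `tokens` tokens at the model's per-million rate."""
    return PRICE_PER_MILLION_USD[model] * tokens / 1_000_000

# Hypothetical workload: 50M tokens per month through each model.
print(f"8B:  ${cost_usd('llama3.1-8b', 50_000_000):.2f}")   # $5.00
print(f"70B: ${cost_usd('llama3.1-70b', 50_000_000):.2f}")  # $30.00
```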
Weights & Biases (@weights_biases)

Explore the rapid transition of AI from experimental tools to essential business products on the latest episode of #GradientDissent with Andrew Feldman, CEO of Cerebras & host Lukas Biewald. 𝐋𝐢𝐬𝐭𝐞𝐧 🎧 𝐨𝐫 𝐰𝐚𝐭𝐜𝐡 🎥 𝐧𝐨𝐰: podcasts.apple.com/us/podcast/lau…

Cerebras (@cerebrassystems)

“To do meaningful work in AI, you need a huge amount of compute, and that converts to many transistors, many more than can fit on a single chip,” said Andrew Feldman. “The technology to get to two [chips] is difficult to develop, the technology to get to four is harder, and to
Sharan Babu (@sharanbabu2001)

Used Cerebras and Val Town to build a search engine autocomplete tool.

Demo & Code Link: val.town/v/sharanbabu/l…

Absolutely incredible that I am able to run a 70B model this fast to get near-instant results. That too for a time-sensitive task like query completion in
Soumith Chintala (@soumithchintala)

Hacker Cup – one of the preeminent coding competitions – started an AI track w/ Meta & Microsoft

problems are hardddd – only a handful of engineers reliably solve them – requires deep algorithmic knowledge, reasoning, planning and fast execution – to solve 5 problems in 30

Weights & Biases (@weights_biases)

Join us for the MLOps South Bay Meetup in Mountain View on Sept 19. Learn about multimodal LLMs with Anish Shah and hear from Cerebras' experts. Limited space—register now: mountain-view-meetup.wandb.events

Cerebras (@cerebrassystems)

📢 Developers can now access the world’s fastest AI chip!

"AI computing is still at the dial-up level. Getting an answer from an LLM can be slow. But now Cerebras has launched an AI cloud service that is 10 to 20 times faster than regular cloud providers." – Agam Shah, The New
Nathan Benaich (@nathanbenaich)

pushed an update to the State of AI compute index that tracks how many ai research papers use specific ai startup chips

check that hockey stick from team @cerebrassystems

Andrew Feldman will be chuffed :-)