Teknium (e/λ) (@teknium1)'s Twitter Profile
Teknium (e/λ)

@teknium1

Cofounder @NousResearch, prev @StabilityAI
Github: github.com/teknium1
HuggingFace: huggingface.co/teknium
Support me on Github Sponsors

ID: 1365020011123773442

Link: http://github.com/sponsors/teknium1
Joined: 25-02-2021 19:25:11

31.3K Tweets

33.3K Followers

3.3K Following

Jerry Tworek (@millionint)'s Twitter Profile Photo

Many (including me) who believed in RL were waiting for the moment when it would start scaling in a general domain, as other successful paradigms have. That moment has finally arrived, and it signifies a meaningful increase in our understanding of training neural networks.

main (@main_horse)'s Twitter Profile Photo

When OpenAI says "we evaluated o1 on the maximal test-time compute setting," what is most likely to be the limiting factor that leads to the existence of a "maximum" to begin with? (If "None of the below", then write a reply.)

Lex Fridman (@lexfridman)'s Twitter Profile Photo

I'm doing a podcast with the Cursor team. If you have questions / feature requests to discuss (including super-technical topics) let me know! For those not familiar, Cursor is a code editor based on VSCode that adds a lot of powerful features for AI-assisted coding. I've been

vie 🛸 (@viemccoy)'s Twitter Profile Photo

Good Nous Everyone! Super excited to announce that I am now a Researcher in Residence at Nous Research 💜 I can't say much about what we're cooking up yet, but stay tuned. I'm so proud to be part of an org pushing SOTA, especially one so aligned with my own philosophy.

Teknium (e/λ) (@teknium1)'s Twitter Profile Photo

I hope this is true and does work out; medicine and health are the two things I'm most excited to see automated intelligence applied to.

David Hinkle (@drachs1978)'s Twitter Profile Photo

If this isn't reasoning, what the fuck even is reasoning? I posit that if you don't think this is reasoning it's your reasoning we should doubt.

Alexander Long (@alexanderjlong)'s Twitter Profile Photo

"The safest number of ASIs is 0. The least safe number is 1. Our odds get better the more there are." The typical response here is that we are witnessing commoditization at the foundation model layer and so everything will be fine but think for a second what that actually