Collin Burns(@CollinBurns4) 's Twitter Profileg
Collin Burns

@CollinBurns4

Superalignment @OpenAI. Formerly @berkeley_ai @Columbia. Former Rubik's Cube world record holder.

ID:1236495233996775424

linkhttp://collinpburns.com/ calendar_today08-03-2020 03:33:43

71 Tweets

11,1K Followers

276 Following

Leopold Aschenbrenner(@leopoldasch) 's Twitter Profile Photo

One year since GPT-4 release. Hope you all enjoyed some time to relax; it’ll have been the slowest 12 months of AI progress for quite some time to come.

account_circle
Jacob Steinhardt(@JacobSteinhardt) 's Twitter Profile Photo

Can we build an LLM system to forecast geo-political events at the level of human forecasters?

Introducing our work Approaching Human-Level Forecasting with Language Models!

Arxiv: arxiv.org/abs/2402.18563
Joint work with Danny Halawi, Fred Zhang, and John(Yueh-Han) Chen

Can we build an LLM system to forecast geo-political events at the level of human forecasters? Introducing our work Approaching Human-Level Forecasting with Language Models! Arxiv: arxiv.org/abs/2402.18563 Joint work with @dannyhalawi15, @FredZhang0, and @jcyhc_ai
account_circle
Collin Burns(@CollinBurns4) 's Twitter Profile Photo

The next few years are going to be wilder than almost anyone realizes.

I've been watching this over and over again and it's still hard to believe it's not real.

openai.com/sora

account_circle
OpenAI(@OpenAI) 's Twitter Profile Photo

Introducing Sora, our text-to-video model.

Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.

openai.com/sora

Prompt: “Beautiful, snowy…

account_circle
Collin Burns(@CollinBurns4) 's Twitter Profile Photo

Likewise, this was super fun! Was very cool getting to cube with the guy who taught me F2L (the one part of cubing I was decent at) :)

We’ll miss you <3

account_circle
Andrej Karpathy(@karpathy) 's Twitter Profile Photo

Aanish Nair 🟢🟡 oh that ship has sailed, sorry :D
actually one of my favorite meets at OpenAI was a cubing session with two very fast cubers, one of them a former world's record holder. I can't cube anywhere near my prior level anymore so it was a bit embarassing alongside but really fun.

account_circle
Leopold Aschenbrenner(@leopoldasch) 's Twitter Profile Photo

Reminder: applications for the $10M Superalignment grants close Sunday night!

Grad students, academics, researchers: we’d love to work with you, we think there’s a ton of interesting research to do on generalization, scalable oversight, interpretability, and more.

account_circle
Andrej Karpathy(@karpathy) 's Twitter Profile Photo

I touched on the idea of sleeper agent LLMs at the end of my recent video, as a likely major security challenge for LLMs (perhaps more devious than prompt injection).

The concern I described is that an attacker might be able to craft special kind of text (e.g. with a trigger…

account_circle
Eric Schmidt(@ericschmidt) 's Twitter Profile Photo

openai.com/blog/superalig…
This group from OpenAI are among the smartest people i have ever met. I'm very pleased to be one of their supporters, please review and apply to work with them !!!!!!!!!!!!

account_circle
Aleksander Madry(@aleks_madry) 's Twitter Profile Photo

So happy about this release and grateful to my awesome Preparedness team (especially Tejal Patwardhan), Policy Research, SuperAlignment and all of OpenAI for the hard work it took to get us here. It is still only a start but the work will continue!

account_circle
Jan Leike(@janleike) 's Twitter Profile Photo

I'm very excited that today OpenAI adopts its new preparedness framework!

This framework spells out our strategy for measuring and forecasting risks, and our commitments to stop deployment and development if safety mitigations are ever lagging behind.

openai.com/safety/prepare…

account_circle
OpenAI(@OpenAI) 's Twitter Profile Photo

We are systemizing our safety thinking with our Preparedness Framework, a living document (currently in beta) which details the technical and operational investments we are adopting to guide the safety of our frontier model development.
openai.com/safety/prepare…

account_circle
Leo Gao(@nabla_theta) 's Twitter Profile Photo

new paper! one reason aligning superintelligence is hard is because it will be different from current models, so doing useful empirical research today is hard. we fix one major disanalogy of previous empirical setups. I'm excited for future work making it even more analogous.

new paper! one reason aligning superintelligence is hard is because it will be different from current models, so doing useful empirical research today is hard. we fix one major disanalogy of previous empirical setups. I'm excited for future work making it even more analogous.
account_circle
Greg Brockman(@gdb) 's Twitter Profile Photo

New direction for AI alignment — weak-to-strong generalization.

Promising initial results: we used outputs from a weak model (fine-tuned GPT-2) to communicate a task to a stronger model (GPT-4), resulting in intermediate (GPT-3-level) performance.

account_circle
will depue(@willdepue) 's Twitter Profile Photo

Very impressive work from the Superalignment team just released! Methodology + code is all public & new $10M grant program for new alignment projects.

Very impressive work from the Superalignment team just released! Methodology + code is all public & new $10M grant program for new alignment projects.
account_circle
Mira Murati(@miramurati) 's Twitter Profile Photo

Exploring generalization properties of deep learning to control strong models with weak supervisors, showing early promise.

account_circle