Perry E. Metzger (@perrymetzger) 's Twitter Profile
Perry E. Metzger

@perrymetzger

Mad Scientist, Bon Vivant, and Raconteur.

ID: 127100323

Joined: 28-03-2010 02:04:03

47.47K Tweets

12.12K Followers

946 Following

Adam Thierer (@adamthierer) 's Twitter Profile Photo

California is considering a new #AI bill (SB 1047) that is "one of the most far-reaching and potentially destructive technology measures being considered today," as I argue in this new R Street Institute analysis. rstreet.org/commentary/cal…

Fei-Fei Li (@drfeifei) 's Twitter Profile Photo

#AI is a fertilizer to the garden of possibilities, from scientific discovery to economic growth. We need more people to be educated and invited into the world of AI, not fewer. Ensuring public-sector and entrepreneurs’ access to the best AI models is critical for our society!

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

And yet, somehow, in spite of all of this new technology, our civilization hasn’t been wiped out. You would think that doomers would eventually understand that their constant stream of analogies is terrible, but no.

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

It's really remarkable how, when the supposed “rationalists” want to argue with you, their techniques are mostly things like posting image macros of bald fat people to try to make you seem socially undesirable and rejected. Way to go on rational argument, rationalist anon!

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

Thought this reply was worth independently posting. I want to emphasize that I think it is possible to create AIs that have independent goals and the like, and that doubtless someone will try it at some point; I just don't think it's easy to do, or that it will happen by accident.

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

“You just don’t understand!” “What is it exactly about your argument that I don’t understand?” “That we’re right and you’re wrong!” “Er…”

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

It’s secretly lobbying when you’re interviewing someone but feeding them false or slanted information, and you’re not actually interested in the interview so much as in the opportunity to influence their opinion.

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

I haven’t analyzed this much yet, so don’t take my comments seriously *yet*, but: off the cuff, this appears, to me, to be an attempt to push for regulation against open source AI. Even the implicit claim that model weights are too dangerous to be allowed out in the wild

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

By the way, I always find it suspicious when interesting news is released on a Friday night. This is no exception. You release news on Friday nights to ensure that no one covers it.

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

# CUDA/C++ origins of Deep Learning

Fun fact: many people might have heard about the ImageNet / AlexNet moment of 2012, and the deep learning revolution it started.
en.wikipedia.org/wiki/AlexNet

What's maybe a bit less known is that the code backing this winning submission to the

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

Regulatory capture on behalf of crony capitalists often exploits the concerns of fanatics to get passed. The fanatics provide the cover needed by people who would rather ban competitors to gain a monopoly than compete in a free market.

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

Rapidly deleted post from Yo Shavit at OpenAI. Thank you for confirming that we should be paying close attention to the trusted computing initiative.

Perry E. Metzger (@perrymetzger) 's Twitter Profile Photo

But the article portrays this not as a mechanism to preserve their investment, but as part of AI safety. And you don’t actually require trusted computing hardware to make sure that your weights won’t get stolen.

Charles Foster (@cfgeek) 's Twitter Profile Photo

Contrary to claims that SB 1047 would only impact AI megacorps, “covered models” include any non-derivative model that is as generally capable as circa-2024 frontier models. Algorithmic progress means that, in a matter of years, smaller players and even hobbyists *will* fall into its scope.
