Ethan Mollick (@emollick)'s Twitter Profile
Ethan Mollick

@emollick

Professor @Wharton studying AI, innovation & startups. Democratizing education using tech
Book: https://t.co/CSmipbJ2jV
Substack: https://t.co/UIBhxu4bgq

ID:39125788

Link: https://mgmt.wharton.upenn.edu/profile/emollick/
Joined: 10-05-2009 22:33:52

26.4K Tweets

211.3K Followers

553 Following

Ethan Mollick (@emollick):

This may end up being a big deal:

Usually LLMs just predict the next token in a sequence, one at a time, but if you have them predict the next several tokens at once, you get significantly better performance, faster, at no added cost. The gains are larger for bigger models.
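The idea can be sketched with a toy example: instead of a single output head, the model's shared trunk feeds several heads, each predicting the token one step further ahead, so one forward pass yields several tokens. Everything below (head count, dimensions, vocabulary size, random weights) is made-up illustration, not the architecture of any specific model.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, n_heads = 100, 32, 4  # n_heads = how many future tokens to predict at once

# Stand-in for the transformer trunk's hidden state at the current position
hidden = rng.standard_normal(d_model)

# One output head per future offset; a standard next-token LM has n_heads == 1
heads = rng.standard_normal((n_heads, d_model, vocab)) * 0.02

# Head k predicts the token at position t+1+k; all heads share the same trunk output
logits = np.einsum('d,hdv->hv', hidden, heads)   # shape: (n_heads, vocab)
predicted = logits.argmax(axis=-1)               # n_heads tokens from one forward pass
```

The extra heads reuse the expensive trunk computation, which is why the speedup comes at little added cost.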

Ethan Mollick (@emollick):

If people carry their phones everywhere, and AI applications on phones are excellent, I don't understand why phones don't win the race for 'AI device' - not that there isn't room for something like Meta glasses as a peripheral, but why would a dedicated AI device ever win?

Ethan Mollick (@emollick):

As multiple medical journal articles have shown, GPT-4 was already pretty good at text-based medical tasks, but not visual ones.

The new multimodal Gemini trained for medicine raises the bar and adds the ability to work well with video and very long records. Lots of potential.

Ethan Mollick (@emollick):

This addresses a common misunderstanding about confidence intervals that comes from how CIs are intuitively understood. I see it when I post charts on here.

People assume that if the CIs overlap, there is no significant difference between the two results, but that isn't true.
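A small worked example makes the point: two estimates whose 95% CIs overlap can still differ significantly, because the test of the difference uses the standard error of the difference, which is smaller than the sum of the individual half-widths. The numbers below are illustrative, not from any chart.

```python
import math

# Two estimates with standard errors (illustrative numbers)
mean_a, se_a = 0.0, 1.0
mean_b, se_b = 3.0, 1.0
z_crit = 1.96  # two-sided 95%

# Individual 95% confidence intervals
ci_a = (mean_a - z_crit * se_a, mean_a + z_crit * se_a)  # (-1.96, 1.96)
ci_b = (mean_b - z_crit * se_b, mean_b + z_crit * se_b)  # (1.04, 4.96)
overlap = ci_a[1] > ci_b[0]  # True: the intervals overlap

# Test of the difference: its SE is sqrt(se_a^2 + se_b^2), not se_a + se_b
se_diff = math.sqrt(se_a**2 + se_b**2)  # about 1.41
z = (mean_b - mean_a) / se_diff         # about 2.12 > 1.96, so significant at 5%
```

So overlapping CIs and a significant difference can coexist; eyeballing interval overlap is a conservative (and often wrong) test.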

Ethan Mollick (@emollick):

I am really glad that careful RCTs on low-cost interventions are being done. Giving adults reading glasses had pretty stunning earnings impacts in Bangladesh. A replication is needed, but this is a really nicely done study and suggests big returns to small charitable investments.

Ethan Mollick (@emollick):

The ability of Claude to do interesting ASCII & p5.js drawings makes me think a lot of new capabilities will be unlocked when LLMs get more direct types of multimodal output.

Having an LLM prompt an image generator is just too indirect and adds too much randomness to be useful.

Ethan Mollick (@emollick):

On paper formats:

Social sciences could learn from AI papers - put the abstract and headline graphic/data visualization on the front page (you can always repeat it later)

AI could learn from social science papers - have a significance to non-experts section, show effect size

Ethan Mollick (@emollick):

The most 1990s-era-Wired-Magazine thing about our current AI moment is that a lot of interesting stuff is intrinsically motivated, outside of industry & academia. There are anonymous folks doing weird art/philosophy (j⧉nus) or releasing stuff for hobbyists (@cocktailpeanut)

Ethan Mollick (@emollick):

ChatGPT's new 'memory' feature is neat in theory, and I am sure useful for many use cases, but it can be a real problem if you are trying to do any persona-based roleplaying, either for fun or for idea generation, writing, etc. It has a tendency to remember anything you roleplay as true.

Ethan Mollick (@emollick):

A problem with algorithmic feeds is how much they distort our idea of popular opinion.

We know only people with strong feelings post & that the algorithm hides moderate views. Add to that a system that shows you more of whatever outraged you, and soon you believe it is universal
