Kyle Mahowald (@kmahowald)'s Twitter Profile
Kyle Mahowald

@kmahowald

UT Austin linguist https://t.co/1GaRxR8rOu. cognition, psycholinguistics, data, NLP, crosswords. He/him.

ID: 22515678

Link: http://mahowak.github.io | Joined: 02-03-2009 18:28:50

514 Tweets

1.6K Followers

724 Following

Nathan Schneider (@complingy)

The corpus linguistics brief has already been raised in oral argument (by Justice Coney Barrett). supremecourt.gov/oral_arguments…

Lots of folk-linguistic claims being bandied about, like "your brain autocorrects 'and' to 'or'" in certain contexts.

Tom McCoy (@RTomMcCoy)

🤖🧠NEW PAPER🧠🤖

Language models are so broadly useful that it's easy to forget what they are: next-word prediction systems

Remembering this fact reveals surprising behavioral patterns: 🔥Embers of Autoregression🔥 (counterpart to 'Sparks of AGI')

arxiv.org/abs/2309.13638
1/8

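A minimal sketch of the point above (an illustration, not code from the paper): under the next-word-prediction objective, a causal LM assigns any string a total log probability, and the paper's argument is that this probability shapes task behavior. GPT-2 is used here only as a small stand-in model.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def total_logprob(text):
        # Sum of log P(token_i | tokens_<i): the next-word-prediction score.
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean NLL over predicted tokens
        return -loss.item() * (ids.shape[1] - 1)

    # Same words, very different scores under the model:
    print(total_logprob("The cat sat on the mat."))  # familiar order, higher
    print(total_logprob("Mat the on sat cat the."))  # scrambled, much lower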
Tom McCoy (@RTomMcCoy)

(Replying to Zining Zhu, Shunyu Yao, Dan Friedman, matt hardy, and the Griffiths Computational Cognitive Science Lab) We thought that too - but it turns out that GPT captures the spelling of its byte-pair tokens! (see image for how we tested this).

Here's a thread about a great paper that describes how GPT might learn this info: twitter.com/kmahowald/stat…

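A small sketch of why token spelling is non-trivial, assuming GPT-2's byte-pair tokenizer via Hugging Face (this is not the authors' probe): the model consumes opaque subword ids, not characters.

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")

    ids = tok.encode(" psycholinguistics")
    print(ids)                             # a few integer subword ids
    print([tok.decode([i]) for i in ids])  # the multi-character BPE pieces
    # Nothing in an integer id exposes its letters, so any spelling
    # knowledge the model displays has to be learned during training.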
Alexander Huth (@alex_ander)

Our big language fMRI dataset is now officially published!

📰 The paper: nature.com/articles/s4159… (free pdf: nature.com/articles/s4159…)

🧰 Code to download data & build models: github.com/HuthLab/deep-f…

💾 The dataset: openneuro.org/datasets/ds003…

Greg Durrett (@gregd_nlp)

📣 Today we launched an overhauled NLP course to 600 students in the online MS programs at UT Austin.

98 YouTube videos 🎥 + readings 📖 open to all!
cs.utexas.edu/~gdurrett/cour…
w/5 hours of new 🎥 on LLMs, RLHF, chain-of-thought, etc!

Meme trailer 🎬
youtu.be/DcB6ZPReeuU

🧵

Sihan Chen (@cshnican)

New paper alert 📣. In this paper (published in Cognition, link at tinyurl.com/deictics), we (@rljfutrell, Kyle Mahowald, and I) explain spatial demonstrative systems across languages using the Information Bottleneck, plus some new constraints. Thread. 1/16

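For reference, the Information Bottleneck objective the tweet invokes, in its standard general form (the paper's additional constraints are not shown here):

    % Find an encoder q(z|x) that compresses meanings X into terms Z while
    % staying informative about the referent Y; beta > 0 trades off
    % complexity I(X;Z) against informativeness I(Z;Y).
    \min_{q(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)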
UT Liberal Arts (@LiberalArtsUT)

Send COLA to #SXSW2024 and #SXSWEdu by voting for our 12 proposed panels by Sun, Aug 20!

Find the list of COLA panels and how to vote here: liberalarts.utexas.edu/news/vote-for-…

Christopher Potts (@ChrisGPotts)

From the always wonderful Open Source (@radioopensource), a useful exchange (from 2017) for people thinking of using LLMs trained on 100B words to inform theories of language acquisition (radioopensource.org/noam-chomsky-a…):

Kyle Mahowald (@kmahowald)

This paper is very interesting and introduces to the literature (I assume) the n-gram "ants formicating meaninglessly in the sand".

Peter Jenks (@psejenks)

The text below is from Christopher Potts' wonderful paper on PiPPs. For me at least, it carves out a natural middle path between hardcore generative nativism and black-box functionalism: generative grammars are clearly 'real' but learnable with very few innate learning mechanisms.

1/2

Kyle Mahowald (@kmahowald)

Surprising though I know it may be that a paper on the English Preposing in PPs construction could be not only linguistically interesting but also sharply insightful about LLMs and amusing and moving, Chris's paper lingbuzz.net/lingbuzz/007495 that goes with this model is all of that.

Kyle Mahowald (@kmahowald)

Now that you’ve no doubt solved your Sunday crossword puzzle, looking to read about crosswords and linguistics? In The Atlantic theatlantic.com/science/archiv…, Scott AnderBois, Nicholas Tomlin, and I talk about what linguistics can tell us about crosswords and vice versa. Thread.

The Atlantic (@TheAtlantic)

If you solve a lot of crosswords, then you’re fluent in a grammar that you cannot fully describe. Scott AnderBois, Kyle Mahowald, and Nicholas Tomlin explain the hidden rules that govern clues: theatlantic.com/science/archiv…
