Michael J.J. Tiffany (@kubla)'s Twitter Profile
Michael J.J. Tiffany

@kubla

priv/acc Hacker: @ninjanetworks Cofounder: @SecureWithHUMAN (infosec unicorn) Cofounder: @FulcraDynamics (personal data sovereignty) I have magnificent friends

ID: 13071242

Link: http://fulcradynamics.com · Joined: 04-02-2008 23:23:43

844 Tweets

2.2K Followers

1.1K Following

Michael J.J. Tiffany (@kubla)'s Twitter Profile Photo

AI aggregates our data into a neutral average that knows a little about everyone, but nothing deeply specific about anyone. Amit Pradhan refers to this as a "beige brain". How can we build AI systems that honor individuality without sacrificing privacy?

Michael J.J. Tiffany (@kubla)'s Twitter Profile Photo

Fulcra liberates the data of your life, from apps, wearables, IoT, and more. With everything connected, you can literally see your life better. When it's under your control, you can share this context -- where you are, what you're doing, how you're sleeping -- with your …

Jenia Jitsev 🏳️‍🌈 🇺🇦 (@jjitsev)'s Twitter Profile Photo

Our AIW study arxiv.org/abs/2406.02061 already showed that one should not trust standardized benchmarks to measure model capabilities. We test models on AIW problem variations that are irrelevant to solving the simple problem, probing model robustness and generalization. 3/n

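For context, the AIW ("Alice in Wonderland") problems in that study are trivial family-relation puzzles, and robustness is probed by varying surface details (names, counts) that don't change the underlying logic. Below is a minimal sketch of that kind of variation generation in Python; the template, name pool, and crude scoring are illustrative assumptions, not the paper's actual harness:

```python
import random

# AIW-style template: "Alice has N brothers and M sisters.
# How many sisters does Alice's brother have?"  Correct answer: M + 1.
NAMES = ["Alice", "Bella", "Carla", "Dana"]  # illustrative name pool

def make_variation(rng: random.Random) -> tuple[str, int]:
    """Generate one surface-level variation of the same simple problem."""
    name = rng.choice(NAMES)
    n_brothers = rng.randint(1, 6)
    n_sisters = rng.randint(1, 6)
    prompt = (
        f"{name} has {n_brothers} brothers and {n_sisters} sisters. "
        f"How many sisters does {name}'s brother have?"
    )
    return prompt, n_sisters + 1  # the brother's sisters include `name` herself

def score(model_answer: str, correct: int) -> bool:
    """Crude check: does the last number in the reply match the answer?"""
    digits = [int(tok) for tok in model_answer.split() if tok.isdigit()]
    return bool(digits) and digits[-1] == correct

rng = random.Random(0)
for _ in range(3):
    prompt, answer = make_variation(rng)
    print(prompt, "->", answer)
```

Because every variation shares the same one-step logic, a model that solves one but fails others is revealing brittleness that a fixed benchmark score would hide.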
Michael J.J. Tiffany (@kubla)'s Twitter Profile Photo

AI hype is everywhere, but Harvard and Google's brain mapping project excites me. Their AI-powered neuroscience breakthrough could revolutionize our understanding of the mind and inspire future AI developments. It's a perfect example of tech and science advancing together.

Michael J.J. Tiffany (@kubla)'s Twitter Profile Photo

The creator(s) of this summer's *awesome* GPT-2 fine-tuned for implicit CoT take a look at o1-preview and o1-mini with the same multiplication challenge that they aced in July. And they measure o1's internal token use!
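For flavor, here is a minimal sketch of what such a multiplication challenge harness could look like; the digit width, the exact-match scoring, and the ask_model() placeholder are all assumptions for illustration, and a real evaluation like the one referenced would additionally log the reasoning-token counts reported per response:

```python
import random

def make_problem(rng: random.Random, digits: int = 9) -> tuple[str, int]:
    """Generate one n-digit-by-n-digit multiplication problem."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    return f"What is {a} * {b}? Answer with the number only.", a * b

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a real model call (e.g. an API request)."""
    raise NotImplementedError("wire this to your model of choice")

def exact_match(reply: str, answer: int) -> bool:
    """Strict scoring: the reply must be exactly the product."""
    cleaned = reply.strip().replace(",", "")
    return cleaned == str(answer)

rng = random.Random(42)
prompt, answer = make_problem(rng)
print(prompt)
# correct = exact_match(ask_model(prompt), answer)
```

Exact-match scoring is deliberately unforgiving here: on long multiplication, a model that is "almost right" has still failed the carry chain somewhere, which is precisely what the challenge is probing.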