Wei Dai (@weidai11)'s Twitter Profile
Wei Dai

@weidai11

wrote Crypto++, b-money, UDT. thinking about existential safety and metaphilosophy. blogging at lesswrong.com/users/wei-dai

ID: 3235660472

Link: http://weidai.com · Joined: 04-06-2015 03:38:14

399 Tweets

7.7K Followers

106 Following

Matt (@spacedoutmatt)'s Twitter Profile Photo

Gil⏸️ Shrimp welfare: 🦐 Deworming: 🚫🪱 Chicken Welfare: 🐤 Fish Welfare: 🐟 Bednets: 🛏️🥅 Neoliberalism: 🌐 Rationalist (or rodent welfare):🐀 Biosecurity: ☣️ Nuclear Security: ☢️ Utilitarianism: 😀

Andreas Stuhlmüller (@stuhlmueller)'s Twitter Profile Photo

Wei Dai's point on centralization of the economy given AI is still underrated. Paraphrasing: Companies with human employees benefit and suffer from scale: 1. With human employees, coordination costs within companies grow superlinearly because workers' behavior is only

Wei Dai (@weidai11)'s Twitter Profile Photo

Andreas Stuhlmüller Thanks for the signal boost! It seems wild that with AGI plausibly just years away, there are still important and fairly obvious considerations for the AI transition that are rarely discussed. Another example is the need to solve metaphilosophy or AI philosophical competence.

Wei Dai (@weidai11)'s Twitter Profile Photo

Andreas Stuhlmüller Why aren't professional economists and philosophers debating these ideas? I used to think that AGI was just too far away and that these topics would naturally enter the public consciousness as it got closer, but that doesn't seem to be happening nearly fast enough. What gives?

Wei Dai (@weidai11)'s Twitter Profile Photo

I once checked out an econ textbook from the school library and couldn't stop reading it because the insights gave me such a high. Imagine what our politicians would be like if that were the median voter. (Which is doable with foreseeable tech, e.g., embryo selection!)

Wei Dai (@weidai11)'s Twitter Profile Photo

I've been wondering why I seem to be the only person arguing that AI x-safety requires solving hard philosophical problems that we're not likely to solve in time. Where are the professional philosophers?! Well I just got some news on this from Simon Goldstein: "Many of the

Wei Dai (@weidai11)'s Twitter Profile Photo

Are there *any* Pareto improvements in the real world, after taking the reallocation of power and other forms of status into account? Every proposal is a bid for power. Every argument or statement of fact is a bid for prestige. Every success makes everyone else less successful by