That is, the “vainglorious buffoons” who insist it would be tragic to restrict or regulate AI when they can make so much money off it — er, when it can give humanity so many wonderful things. Like United Health using a massively error-prone AI to cut off healthcare. Or the ongoing debate over using copyrighted texts to train AIs. Or using AIs to write legal documents when they have no hesitation about making up citations. Despite all of which, one lawyer insists that “there’s no point in being a naysayer … or being against something that is invariably going to become the way of the future.”
Once we accept that AI doing all this stuff is inevitable, it’s a lot harder to manage its use, regardless of the problems. Much as smart houses run into problems that never happen to old-style housing (one company or another announces it’s no longer maintaining the software for your smart lightbulbs or smart thermostat, say). Or how the high-tech farming company AppHarvest collapsed. And NFTs. The buffoons, despite their wealth and accomplishments, aren’t as smart as they think they are (e.g., Peter Thiel opining on Tolkien or Sam Bankman-Fried on Shakespeare).
As Cory Doctorow describes it, there are two schools of thought in Silicon Valley about AI: it will gain sentience and become a threat, or it will gain sentience and become a friend. Both schools, as Doctorow points out, focus on the far-distant future and thereby ignore the harms AI does in the present: shifting the debate to existential risk from a future, hypothetical superintelligence “is incredibly convenient for the powerful individuals and companies who stand to profit from AI.” After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic “decision support” systems and other AI-based automation is pretty poor.
Software engineer Molly White says focusing on the distant future can justify almost any decision: if AI can improve the lives of 100 million people 200 years from now, isn’t screwing over 50,000 people in the present justified if it gets you to that future? “It is interesting, isn’t it, that these supposedly deeply considered philosophical movements that emerge from Silicon Valley all happen to align with their adherents becoming disgustingly wealthy. Where are the once-billionaires who discovered, after their adoption of some ‘effective -ism’ they picked up in the tech industry, that their financial situation was indefensible? The tech thought leaders who coalesced and wrote 5,000-word manifestos about community aid and responsible development?”
Crooked Timber says the Silicon Valley mindset is less a mindset than a personality disorder. Vox points out the tremendous irrationality of billionaires supporting Nikki Haley over Trump: the only candidate who can stop Trump is Biden but they can’t bring themselves to support a Democrat. Paul Campos has more. So does the Guardian.
Doctorow (again) says these kinds of fights are old issues: “Tech has always included people who wanted to make a better internet … Tech has also always included people who wanted to enshittify the internet” because this often makes it possible to make more money (e.g., less customer service, fewer content moderators).
The problems to worry about are here now, not in some future when the robot overlords gain sentience.


