In Glass Onion, Daniel Craig’s Benoit Blanc mocks tech billionaire Edward Norton as a “vainglorious buffoon.” As I mentioned back in August, their success, often in the face of skepticism, seems to make tech billionaires both contrarian and overconfident in their beliefs. Thus we have Sam Bankman-Fried, recently found guilty of massive cryptocurrency fraud after badly managing his defense. Or Elon Musk, under whose tender care Twitter is worth less than half what it was. But he’s going to fix it by making Twitter your banking site and your dating site.
Then we have Marc Andreessen, titan of early web browsers turned venture capitalist, who is shocked — shocked — that people think Silicon Valley should abide by such absurdities as tech ethics, sustainability, social-media trust-and-safety teams and social responsibility. That tech companies should think before deploying AI, regardless of problems. How can he see a return on his VC investments that way? Oh, wait, that’s not what he says is his issue; it’s that tech leaders like him “believe in ambition, aggression, persistence, relentlessness — strength. We believe in merit and achievement. We believe in bravery, in courage … We are the apex predator; the lightning works for us.”
As I pointed out in some LGM discussion, this is almost comical. Sure, investing takes a tolerance for risk, but it doesn’t take bravery and courage the way physical risk does. Particularly not when Andreessen is worth almost $2 billion; he could invest $500 million, lose it all, and it still wouldn’t affect his lifestyle. Nor is rushing ahead with tech without considering the consequences what I’d consider brave or strong; ambitious and aggressive, maybe. But in this situation those are not good things. Consider the creators of the app that can deepfake women’s photos to make them naked (but cannot, go figure, do the same to men).
We’ve had multiple accounts of AI sold as delivering some super-efficiency that instead makes things worse or perpetuates its makers’ preconceptions. It’s not a new issue — the developers of polygraphs spewed the same Trust the Machine malarkey, and polygraphs didn’t work either — but that doesn’t make it any less problematic. I doubt using AI to make military decisions will work well.
One new AI will supposedly model mass human behavior, but “What this project does is complexify agent modeling far beyond anything that’s been done and then dumps the uncertainties of AI into it. I suspect, because the computing needs are so enormous, that a great many simplifications have been made. They’re not saying what those simplifications are.” And that’s bad given how easily AI defaults to stereotypes.