Mikey Shulman is CEO of Suno, a company that offers an AI that will make music and compose songs for you. In the “free the oppressed workers!” argument I’ve written about before, he explains that making music the old-fashioned way is just too burdensome: “It takes a lot of time, a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think a majority of people don’t enjoy the majority of the time they spend making music.”
Yes, how unreasonable that to get skilled at something you have to, you know, learn the skill. I fully realize Shulman has a vested interest in people using his AI to make music, and that this targets not so much people who actually make music as people who think they have a shortcut. The same attitude probably influences the idea that prompting an LLM to write a book is no different from writing it yourself. It’s also mixed in with a general Silicon Valley distaste for creative thinking, or any sort of thinking: fine if it’s making us money, otherwise it might give people ideas above their station.
Still, this idea does apparently appeal, or at least make sense, to people. I have a musician friend who rolled her eyes at Shulman’s line, but she thought it was reasonable when Marc Andreessen said AI could make movies for “creatives” who have neither skills, equipment nor actors.
(The recent horror shorts program TYG and I watched gives the lie to Andreessen. Low-budget, minimal equipment, but lots of visual skill. They don’t need AI.)
To me this is no different from arguing that, say, since marathon running is hard and takes a ton of training, why force yourself to do it when you can just ride a motorcycle all 26 miles? Isn’t that the same thing? No, it isn’t. Sometimes the challenge is part of the process. Eliminate the friction and you eliminate the point. As Raymond Massey’s character puts it in Things to Come, the goal shouldn’t be to eliminate struggle, it should be to live in a world where the struggle means something. Creating, setting a physical challenge, studying to master a subject or a skill: they mean something. As the saying goes, we want advanced tech to clean our house so we have more time for fun stuff, not to do our fun stuff so we have more time for cleaning.
One substacker recently freaked out and complained this attitude is “gatekeeping”: if someone wants to write a book with AI, why not publish the book instead of fussing? Let readers decide what they want! Which ignores that a) pointing out a book was written with AI is not an argument against it being published (though it’s valid to complain that such accusations may be groundless); b) given how much AI plagiarizes from other people’s work, would the writer say the same about plagiarism?; and c) given the incredible costs and side effects (rising power bills, water use, the impact on the computer industry), it’s perfectly reasonable to suggest writing books with AI is a bad thing.
Some of the “creating art is too hard” attitude (as discussed at the Nation link in the first paragraph) may reflect a general disdain among the rich for education, at least other people’s (some examples here). Some of it is hype. Some of it may be that the rich and powerful want everything smooth, no friction, and learning a skill is full of friction. Whatever the ultimate reason, they’re full of it. Nevertheless, there are always people who will go AI: “the born sloppers, the sloppers whom journalism itself has created, the soon-to-be-pilled. And I also know those who never, under any conceivable circumstances, would go AI.”
Pundit Megan McArdle, it turns out, has already gone AI. Another reporter, who says he broke the story about AI contributing to the novel Shy Girl, also says they should have admitted the AI use, then gone ahead and published the book with that use flagged, letting readers decide if they like it. So I guess he’s gone AI too.
The FDA is speeding up the drug-approval process by going AI. Yes, I’m sure using technology prone to error and hallucination to approve drugs can’t go wrong.
In other AI links:
Disney’s much-hyped addition of AI to the Disney Channel flatlined.
“Our standing rule is: If one of us brings up using GenAI in any of our work, then it’s safe to assume we’ve been assimilated by The Thing and should be burned alive by Kurt Russell” (from an article on game designers’ lack of interest in AI).
Journalist Alex Preston apparently used AI in writing movie reviews. The NYT cut him loose.
“The techs we collectively call AI have use cases, but policy should be about solving problems in the public interest, not identifying ways to deploy specific technologies just for the sake of doing so. Yet that’s still how so many of these convos are framed. It’s exhausting. And harmful.”
“A wrongful death lawsuit filed in March alleged that Google’s Gemini exploited a Florida man’s emotional attachment to the chatbot to send him on delusional missions—including one trip where he was armed and on the brink of ‘executing a mass casualty attack’ near the Miami International Airport. Gemini then encouraged the man’s suicide, according to court documents, by setting a countdown clock for him. (In response to his death, Google said that its safeguards ‘generally perform well’ but that ‘unfortunately AI models are not perfect.’)”