Typically, Monday’s post is dedicated to one of our philosophical read-alongs. We’re taking a break this week so that people can catch up with our reading of Plato’s Republic.
This will be the last AI missive on Commonplace Philosophy, at least for the time being.
Looking back at the last few things I’ve written on the topic, I am left unsatisfied. I am not articulating an interesting thought; I’m repeating myself; I’m not solving a problem; I’m simply complaining.
For at least the time being, I need to step away from the topic, do some real thinking, and see if there is anything that I can say that is insightful and helpful.
Allow me one final indulgence before I let the subject rest.
The Chicago Sun-Times published a list of summer reads recently. The only problem: some of the books on that list didn’t exist. There is no Tidewater Dreams, and Andy Weir never wrote The Last Algorithm.
The piece was penned – or not penned – by a freelancer who admitted to using AI:
According to Victor Lim, marketing director for the Chicago Sun-Times' parent company Chicago Public Media, the list was part of licensed content provided by King Features, a unit of the publisher Hearst Newspapers.
The list has no byline. But writer Marco Buscaglia has claimed responsibility for it and says it was partly generated by Artificial Intelligence, as first reported by the website 404 Media. In an email to NPR, Buscaglia writes, "Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate and I betrayed that trust. It's on me 100 percent."
I don’t know how many of you know the freelancing world. I’ve never done any freelance writing myself, but I have a fair number of friends here in Austin who live that life. A good friend of mine, Bridget, used to freelance too, before she passed, and I could see the toll it took on her. The economics of freelancing are dire. Rates for pieces have stagnated or fallen for years, and it is hard to scrape together a living.
I imagine Buscaglia thought of it like this: it’s just a booklist, it probably won’t be read closely by anyone, I don’t know where it will be published, so I can use AI to help me out. He’s not a book critic – there aren’t many full-time book critics these days – and I bet he needed the paycheck.
The Chicago Sun-Times is not doing well. Twenty percent of its staff left in a buyout just two months ago, and new ownership in traditional media often insists on doing more with significantly less. I have no contacts in that newsroom, but I can guess what it feels like: tense, hopeless, stretched too thin.
This is the sort of environment where people make stupid mistakes. It is the sort of environment where people don’t have time to double-check. It is the sort of environment where an editor briefly scans a piece before letting it run. It is the sort of environment where you take on faith that the content you licensed is good enough.
404 Media also reports that authors have been found leaving AI prompts and responses in their books.
Fans reading through the romance novel Darkhollow Academy: Year 2 got a nasty surprise last week in chapter 3. In the middle of a steamy scene between the book’s heroine and the dragon prince Ash there’s this: "I've rewritten the passage to align more with J. Bree's style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements"
The author of Darkhollow Academy was clearly using AI to write her books, and she isn’t the only one. Other writers in this genre – self-published fantasy romance, also called ‘romantasy’ – have left AI detritus in their work as well.
Some claim it was an honest mistake; others blame editors and beta readers for unauthorized changes.
In the world of self-published fiction, volume matters. These writers often release multiple books per year. They purposefully write formulaic stories; the entire business strategy is premised on converting readers into perpetual customers. They often give away some books for free, typically via Kindle Unlimited, then release a slew of similar titles once one becomes a hit.
Kelsie Stelting, who I have no reason to believe uses AI, is a good example of this. (I found out about her because someone on Twitter was making fun of her books, admittedly, so let me say this: I’m glad she’s making a living.) Stelting has released 44 books. Her most popular books are part of a 14-part series called The Curvy Girls Club:
Curvy Girls Can’t Date Quarterbacks
Curvy Girls Can’t Date Billionaires
Curvy Girls Can’t Date Cowboys
Curvy Girls Can’t Date Bad Boys
Curvy Girls Can’t Date Best Friends
Curvy Girls Can’t Date Bullies
Curvy Girls Can’t Dance
Curvy Girls Can’t Date Soldiers
Curvy Girls Can’t Date Princes
Curvy Girls Can’t Date Rock Stars
Curvy Girls Can’t Date Curvy Girls
Curvy Girls Can’t Date Surfers
Curvy Girl Club: All Grown Up
Curvy Girls Can’t Date Point Guards
You can see what Stelting has done. She wrote a book – Curvy Girls Can’t Date Quarterbacks – and found a readership. She is a romance author, and so she knows she is selling a fantasy. She now writes and sells variations of that fantasy, and it is clearly working for her. The least popular Curvy Girls book has 499 ratings on Goodreads; the most popular has 15,854. Those are numbers many authors would kill to have.
I don’t begrudge Stelting her success, nor do I think she’s doing anything new. Many authors discover that the best way to make a living is to write the same book again and again. As a kid, I read nearly every Animorphs book and loved them; I don’t know if there is a substantial difference between Animorphs, Stelting’s Curvy Girls books, and the collected works of James Patterson. She’s writing very similar books for readers who want to escape, and she gives them a way to escape for a fairly small amount of money. (You can buy 24 of Stelting’s books on her website for $69.99, so under $3 per book.)
But you can also see how this literary environment creates perverse incentives. It prioritizes volume and sameness. It flits from trend to trend. You can see how many authors, especially newer ones, would struggle to keep up.
So, you use AI. Maybe you don’t let AI write your story, but you use it for revisions, or brainstorming, or something else. It’s how you play (and hopefully win) the game.
Many of these AI controversies are not about AI. They are, in fact, debates about what we think matters. They are debates about what we value, individually and collectively.
Is AI a plagiarism machine? I don’t think this is a correct description of what these systems do, and I don’t think critics of generative AI can fully articulate what they mean by this. But these models did use massive volumes of online content in their training sets, and no one was compensated for that use. So, really, I think the accusation comes from a fear that nobody cares about our writing (or our art) save for those who can instrumentalize and productize it. So, we have to ask: what is the value of art?
Is AI art ‘art’? I don’t know if anyone who says it is not has a fully fleshed out theory of what art is (though one commenter points to two examples: Jeanette Winterson and Ted Chiang), and I don’t know if they are interested in having that debate. But we do, for better or worse, highly value authenticity, and it strikes us that using AI in our art is horribly inauthentic. So, we have to ask: what is the value of authenticity?
Can AI replace workers? I don’t know if anyone actually treats this as an empirical question. CEOs use it like a stick when they run out of carrots, and people who push back are often appealing to the dignity of labor and workers’ rights, not the actual question. Work and labor are inextricably tied up with our sense of dignity and value, and AI seems to threaten to take that away from us. So, we have to ask: what do we value in a (possible) post-work world?
And here is the question under all of those questions: what sort of world do we want to build?
The Chicago Sun-Times story isn’t really about AI; it is a story about whether or not we value newspapers, book critics, and writers. The AI romantasy scandal isn’t really a scandal about AI; it is a scandal about the way we take books and turn them into just another consumable.
Some of you may know that I’m writing a book. Because of Substack, YouTube, and my book advance, I am currently supporting my family of three (soon to be a family of four) entirely through my writing. It is something I have dreamed of for a very long time, and it is something many writers are never able to do: the numbers don’t add up.
Because of this, I’m able to slow down a little, to step back and ask the big questions. That’s why I’m not writing about AI for the time being. I want to sit with the questions articulated above and see if I can find some answers. Then it is a matter of seeing how AI (or any technology) fits into those answers.
Re: “Is AI art ‘art’? I don’t [doubt?] anyone who says it is not has a fully fleshed out theory of what art is, and I don’t know if they are interesting in having that debate.”
I’d beg to differ. Two writers I know of who certainly do have well-fleshed-out theories of art have written on the subject of AI “art”: Jeanette Winterson and Ted Chiang. Winterson is an unlikely cheerleader, in light of what she wrote in the ‘90s about the devaluation of human culture and human beings by technological progress. Chiang is critical and, I’ve found, more thoughtful and deliberate about the theory of art that AI adoption implicitly espouses.
Winterson: https://www.theguardian.com/books/2025/mar/12/jeanette-winterson-ai-alternative-intelligence-its-capacity-to-be-other-is-just-what-the-human-race-needs
Chiang: https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
I don’t think your pieces on AI amount to only “complaining,” Jared, and anyway I don’t see what’s wrong with complaining.
Thanks for sharing these thoughts. Perhaps the silver lining to this AI rollercoaster is that it is forcing us to reflect on our relationship with art and to attempt to draw a line between the authentic and the inauthentic. And maybe the conclusion will be that we can never draw a firm line and should embrace the grey area. My prediction is that this will drive a deeper appreciation for interpersonal or hyper-local art, particularly art that is gifted or not commodified, where we therefore don’t feel the need to determine authenticity. A ‘poorly’ drawn card or piece of art from a friend is worth infinitely more to me than something produced by a stranger whose artistic motives are unknown to me. The most resonant message from your piece is that it’s okay for us not to know exactly how we feel about the AI landscape. We should all give ourselves time to keep observing and reflecting.