Fake Books, Fake Police Reports, and the Human Urge to Cut Corners
Every month or so, I attend a gathering of writers in Austin. We meet in someone’s house, we drink wine or sparkling water, and we talk about the business of writing. It isn’t a writing group, where you might expect critiques of your work; it is an opportunity to talk about the struggles and the occasional triumphs of living as a writer in the modern landscape. Each meeting centers on a talk — sometimes by a visiting writer or publishing professional, but oftentimes by a member of the group. Last month, the talk was an interview with Seth Harp, the author of The Fort Bragg Cartel: Drug Trafficking and Murder in the Special Forces. The theme of Seth’s interview was ‘What I Wish I Had Known While Writing My First Book.’
It was a good and informative discussion. Seth spent two years writing The Fort Bragg Cartel, building on previous reporting he had done for Rolling Stone. He discussed managing research, staying healthy, and keeping the project manageable while trying to write a 100,000-word book. (I’m struggling to keep a 65,000-word book manageable, so I needed to hear this.)
There was one question about AI. Seth was asked how he uses AI in his writing process, and his response was that he didn’t use it at all. As a writer, Seth didn’t want anything to do with AI. But it turns out that he was going to have to think about AI quite a bit, actually, as his release date approached.
As Seth’s publication day approached, he would sometimes check his Amazon listings. He found that along with The Fort Bragg Cartel, numerous other titles were being listed on Amazon that seemed suspiciously similar.
These knockoffs often include keywords in the title or subtitle, e.g., ‘Fort Bragg,’ ‘Cartel,’ ‘Drug Trafficking.’ One of them went so far as to include Seth’s full name in the subtitle: The Fort Bragg Cartel: Inside the Shadows of Modern Warfare — With context from Seth Harp. Others purported to be the ‘real’ story of Fort Bragg, establishing a contrast with Seth’s book, perhaps. Still others claimed to be biographies of Seth.
Seth took some time to speak to me last week, and he confirmed that he had not been contacted by any biographers. This is no surprise, because these books are clearly AI-generated fakes. They are self-published books, written using services like ChatGPT and published under pseudonyms, attempting to piggyback off the success of Seth’s book.
Seth isn’t alone in this, to be clear. Earlier this year, Ezra Klein and Derek Thompson released Abundance, an instant bestseller. Within a few weeks, I found this book on the Amazon listings: The Politics of Abundance: Building a Future of Prosperity and Progress, Inspired by Ezra Klein and Derek Thompson. It is attributed to an author by the name of Ezekiel Thompson, some sort of unholy variation on ‘Ezra Klein’ and ‘Derek Thompson.’
Whoever is behind these books – we won’t call them writers, as they aren’t writing, so let’s agree to call them fraudsters instead – has a clear modus operandi. They identify books that are already hits, or books that they suspect will be hits, and they use AI to quickly produce low-quality knockoffs. They title these books in such a way as to appear in search results, and they create covers that are intended to make it difficult to tell these books apart from the genuine thing.
This is not a business model that uses AI to cater to some real human need. These fraudsters are using AI to confuse, trick, and defraud customers.
Here is how I imagine it works:
1. The Fraudster identifies titles that he thinks will be successes.
2. Using AI, he generates several books per title and self-publishes them. Given that these can run concurrently and require very little oversight (since we don’t care about quality), I suspect he can generate several books per hour.
3. He lists these on Amazon via the Kindle store and print-on-demand, cutting down on overhead costs.
4. A small portion of customers get confused and buy the wrong book. Some of them don’t bother to return the book or dispute the transaction, and so the Fraudster makes a small profit.
It is that simple. The Fraudster doesn’t need to make much money per book. He needs to make a small amount of money per book while publishing a huge volume of books. That’s what AI enables him to do: to produce fake titles at scale. If each title can make $50 – not unreasonable if we assume a $5 profit per undisputed sale – and you produce 10 titles in an hour, then you’re making hundreds or thousands of dollars per hour of work.
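The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. All the figures here are this essay’s assumptions, not measured data — in particular, the ten undisputed sales per title is just the number implied by a $5 profit per sale and $50 per title:

```python
# Back-of-the-envelope economics of AI-generated knockoff books.
# Every number below is an assumption from the essay, not real data.

profit_per_sale = 5     # dollars kept per undisputed sale
sales_per_title = 10    # confused buyers per title who never dispute (implied)
titles_per_hour = 10    # plausible output when AI does the "writing"

revenue_per_title = profit_per_sale * sales_per_title  # $50 per fake title
hourly_rate = revenue_per_title * titles_per_hour      # $500 per hour of "work"

print(f"${revenue_per_title} per title, ${hourly_rate} per hour")
```

Under these assumptions the fraudster clears hundreds of dollars per hour, which is why quality never enters the calculation: the scheme only needs volume.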
Discussions about AI writing tend to focus on how this affects writers. This is understandable because the people having these discussions are writers, and we feel threatened by the rise of AI. AI promises to do what we do faster and cheaper — and even if it can’t do what we do better, history teaches us that speed and low prices often trump any desire consumers have for quality. Simply put, we worry that AI writing is going to make it so that we can’t make a living. What is often omitted from these discussions is the impact that AI has on consumers and citizens.
Seth’s book is going to be a hit — I believe it is already an NYT bestseller, and Viking has ordered a third printing. He’s going to make some money from this book, and I hope he buys a beach house or at least a new car. He worked hard, did the right thing, and deserves his success. These AI books likely won’t have a huge impact on Seth himself — the people being harmed here are consumers, not writers.
In some ways, this story is not about AI per se. Rather, it is about the way that human beings will find ways to make a quick buck, even if it means harming other people. (I’m assuming that cheating someone out of $15 is harm.) And this seems to be the real AI grift: people want to use AI to cut corners without thinking about, or caring about, the negative effects.
And that brings us to another bad use of AI: AI-generated police reports.
In January 2025, an article about AI-generated police reports was published on a Department of Justice website. These are the reports police are supposed to file after they’ve had an encounter or made an arrest; these reports are often used as evidence in criminal proceedings.
The most prominent product that offers to write these AI-generated reports is called Draft One. Its maker claims that its AI can generate a high-quality police report from body camera footage and audio in seconds. There’s a huge problem with Draft One, though. The Electronic Frontier Foundation conducted an investigation and found that when an officer uses Draft One, there’s no record of what was written by the AI and what was written by the officer.
Draft One does not save the draft it generates, nor any subsequent edited versions. Rather, an officer copies the AI draft text and pastes it to the police report, and the AI draft disappears as soon as the window closes.
And this is a problem because we all know that AI hallucinates — and we know that sometimes officers lie on their police reports. If an officer lies in a police report, this is a relevant fact in a criminal proceeding. But if an officer uses Draft One, there is no way to tell whether a false statement in the report is a deliberate lie or an AI hallucination. How does the legal system handle this? I have no idea. We’re in uncharted waters.
Axon, the company that sells Draft One – they are also the company that invented the Taser – offers safety features for their product. These features are designed to reduce hallucinations in final reports. One feature inserts deliberate errors that a human being must find while editing and reviewing; another requires the officer to edit the content. These are supposed to ensure that there is a human in the loop.
Mother Jones found that most police departments that use Draft One are turning those features off. They don’t want the safety features. They want the quickly and easily generated AI reports, because they want to save time.
In practice, police using Axon’s product have reduced or eliminated human oversight, deactivating safeguards meant to prevent AI bias while making it difficult or impossible to audit which reports were generated by AI.
Axon also released a feature that inserts a header into the document, letting the reader know that the report was AI-generated. They said they made this change in the spirit of transparency, while also noting that you could easily turn the feature off. As with the other safety features, most police departments are turning it off.
As in the case of fake AI books, the case of AI-generated police reports isn’t really about AI. I imagine that the thought process behind choosing to use AI to write police reports is simple: Writing reports is complicated and takes time, and this is a good way to eliminate that busy work. But there are real dangers here. Unlike in the case of fake AI books, where perhaps you’ll lose $15, a false or misleading police report could lead to prolonged court proceedings or, in the worst case, a wrongful conviction. The safety features are meant to reduce this risk, but given that these safety features all require more human oversight (and thus human time), departments are opting out of using them.
In our effort to cut corners, we end up harming others.
Some states are trying to ban or regulate these AI-generated reports, according to Mother Jones:
In March, Utah’s state legislature passed a bill mandating that police departments disclose any use of artificial intelligence, the first such bill to enter law in any state. California’s state Assembly is considering a similar bill, and Seattle’s police watchdog has urged a local ordinance regulating departmental AI use.
I expect that we’ll see more of this in the future — but for now, it is largely unregulated, and we don’t know what the effects will be.
Platforms could act more like states in this regard, banning or heavily regulating fake AI books. But right now, it appears that enforcement is left up to authors and publishers. I asked Seth how he has tried to fight back against the AI fakes duping his customers. He told me that there is a form he can submit to Amazon, but that it is time-consuming to fill out and that he would have to refile it for every fake title. As a busy author promoting his debut, he doesn’t have much time for that sort of thing. And while he fills out those forms, more fake books could be uploaded to Amazon.
Given the way AI enables fraudsters to rapidly produce books, it is plausible that they can create fake books faster than Seth can find and have them taken down.








With you 100%.
The lack of liability for all middleman platforms is at the heart of the problem. If Facebook were liable for the slanderous or dangerous posts it bumps via its algorithm, and if Amazon were liable for fraudulent products, it would go a long way towards making our world less shitty.
As for your friend, it wouldn’t seem hard to write an AI agent to auto-submit takedown requests.
John Oliver has been critical of AI on Last Week Tonight. He's commented that it is "stupid in ways we cannot predict." He's also talked about "AI slop" flooding the internet as the new spam, and noted that much of that content comes from countries where small payments are more meaningful.
On the issue of Axon Enterprise, Oliver's previous commentary does not put them in a good light. Issues include the myth of the Taser being non-lethal, false claims about its effectiveness rates, and reports of a cult-like atmosphere where employees felt pressured to be tased and tattooed with company logos. He expressed concern about Axon's growing influence over law enforcement policy.
The country needs a comprehensive federal law regarding transparency. California has an AI transparency act coming into effect in January; Colorado's follows in February. There are many more legislative bills in the works, covering everything from deepfakes and synthetic media to consumer protection, transparency, and workplace surveillance and management. Unfortunately, with a patchwork of laws coming into place, there will be ways for them to be circumvented.
Until there is a coherent federal system of laws in place, platforms and tech companies will treat AI harms as an acceptable cost of doing business. They will do only the minimum necessary to comply with the patchwork of regulations instead of investing in comprehensive solutions. This leaves it to the general public to put pressure on the platforms and on government to act.