I started to write a very positive piece today about Substack: how the platform supports writers and readers (instead of treating them as adversaries), the economics behind it all (from the perspective of someone on both Substack and YouTube), and the like. And then I remembered that the deluge of AI slop is continuing, even increasing, and I knew I had to say something. Perhaps my more positive piece will come out tomorrow.
It all started when I saw this two days ago.
Ben Williamson is a Fellow at the Centre for Research in Digital Education at the University of Edinburgh, where he researches and writes about data in its various manifestations. He recently took an interest in whether academic publishers were licensing writing to companies like Microsoft for use as training data for generative AI. A senior executive at Taylor & Francis has confirmed that this is the case, and that academics cannot opt out.
You have to understand that academic publishing, at least when it comes to journal articles, is a perverse institution. Universities and foundations fund research; academics perform this research and write articles; for-profit publishers publish the research, acquiring the rights to the articles in the process; these publishers then license that research back to universities. The publishers profit, but they do not pay the researchers, and universities foot the bill.
Now they will also license that research to AI companies to train new generative AI models. They will profit, but they will not pay the researchers. Universities may end up with a new bill to foot.
Nature released an editorial on this recently as well. They argue that there need to be clear ‘rules of engagement’ for AI firms’ use of academic research, including an opt-out clause for academics. But there is no reason to think that publishers will insist on this. When academics began pushing for open access to research for those unable to afford a license, the publishers’ response was to make that an option, but only if the researchers paid thousands of dollars in fees upfront. I can imagine that if a researcher wants to opt out of their work being used for AI training, the publishers would follow the same playbook: just pay a few grand upfront. These are profit-seeking institutions that feel no sense of fraternity with researchers and universities.
As AI slop continues to fill the internet – google ‘beethoven portrait’ and several of the top results will be AI-generated – we risk becoming deadened to it all. But the internet does not have to be this way. We don’t have to fill our minds with subpar art and subpar writing just because AI firms have made it more convenient.
And now I worry that academic writing will increasingly be offloaded to AI. Just type in a prompt, upload your data sheets, tell the AI how comfortable you are with p-hacking, and you’ll have a full academic article worthy of a psychology journal. Academic writing, already quite fairly maligned for being lifeless, won’t be human anymore.
The situation is truly saddening. Substack allows works published on its platform to be used to train AI models; the good thing is that writers can still opt out if they want.