19 Comments
Vince Strawbridge:

This is exactly what my Socratic machine was telling me.

Michael Kenney:

As much as I enjoy and have gained from Plato's Republic, I would not necessarily recommend it as the first of Plato's dialogues that one should read. Republic is significantly longer, more sophisticated, and generally more intimidating than most of Plato's other dialogues.

On the other hand, if I were going to read only one of Plato's dialogues, it would be Republic. Indeed, if I were going to read only one work of philosophy ever, Republic would be one of the main contenders.

Jared Henderson:

It may not be the ideal place to start, but in a guided setting like the one I'm offering here, it is not beyond the reach of many readers.

rif a saurous:

While agreeing full-throatedly that we should all say, 'I am a human being, in possession of rational faculties, and I will use them to their fullest extent,' and while admitting up front that I work in the AI industry (adjacent to, but not directly on, today's frontier models), I think takes like this and the "ChatGPT is bullshit" paper will not age well. They are instances of imagining the world as we want it to be rather than as it is.

Are the mistakes listed above genuine? Yes! Should the authors have checked their sources? Yes! But the argument boils down to "Chatbots sometimes do this, therefore they are fallible, therefore we should ignore them." I have bad news for anyone considering accepting this argument, which is that humans also do this sometimes, both accidentally and on purpose.

The "ChatGPT is bullshit" paper makes additional, outdated arguments based on the premise that the models are designed to "produce likely text" rather than true and helpful text. This is becoming less true over time: the "produce likely text" stage is now referred to as "pretraining," and there is a huge and expensive subsequent step in which the model builders attempt, not entirely successfully of course, to make these things honest and helpful.
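To make the distinction concrete, here is a deliberately toy sketch in Python (my own illustration; nothing like any lab's actual training code, and the bigram "model" and preference scores are invented purely for the example). Stage 1 only learns to produce likely text; stage 2 reweights the same model using feedback rather than raw likelihood:

    import math, random

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Stage 1 ("pretraining"): fit a bigram model that only learns
    # which words tend to follow which. Likelihood, not truth.
    counts = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        counts.setdefault(prev, {}).setdefault(nxt, 0)
        counts[prev][nxt] += 1

    def likely_next(word):
        # Sample the next word in proportion to its frequency:
        # "produce likely text" in miniature.
        options = counts.get(word, {})
        r = random.uniform(0, sum(options.values()))
        for nxt, c in options.items():
            r -= c
            if r <= 0:
                return nxt

    # Stage 2 (post-training): reweight the same model with human
    # preference scores instead of raw likelihood. These scores are
    # made up here for illustration only.
    preferences = {("sat", "on"): 1.0}  # raters upvoted this continuation

    def tuned_next(word):
        options = counts.get(word, {})
        scores = {n: c * math.exp(preferences.get((word, n), 0.0))
                  for n, c in options.items()}
        return max(scores, key=scores.get)

    print(likely_next("the"), tuned_next("sat"))

The details are fake, but the shape is the point: the second stage optimizes for something other than mere likelihood, which is why the "designed to produce likely text" premise keeps getting more outdated.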

Practically, we've reached the point where I find the models deeply useful. I ask them for creative help with a recipe. Sure, there's an example on the internet where a weak, outdated model said glue was a pizza topping, but the actual result today, from frontier models, is that I get useful advice that makes my food better. I discuss movie plots with the models and end up understanding the movie better, seeing multiple perspectives. I ask them to help me with hard math questions; sometimes the answer is wrong, but in the attempt I see the technique I should have used and learn the truth.

Philosophically, I think there are deep issues here, and it's better to avoid glib dismissals, especially because the underlying story is changing rapidly. We of course shouldn't think of these things as infallible oracles and give up our agency, but I'm rapidly coming to view them as essential cognitive prostheses that make me smarter and more productive and even more virtuous. Something big is happening.

Jared Henderson:

This isn't quite what I intend. What I have in mind is a very human tendency to want to offload cognitive responsibility. The treatment of AI as a kind of oracle (by which I basically mean a highly reliable truth-spewer) is just an instance of that.

But I don't think this is a glib dismissal of AI. I've written about it a number of times, and my view is that it is actually quite helpful at some things. One day I hope no one has to write a SQL query or an Excel function ever again. But for that basic human task of finding the truth and interpreting the world, we can't just offload.

rif a saurous:

Thanks Jared. I agree completely that offloading cognitive responsibility is a huge issue. I don't have an answer.

Consider navigation. I'm old enough to remember keeping an atlas in my car and carefully asking people over the phone for directions to their homes. Nowadays they share a Google Maps link, and the screen in my car tells me when to turn. (And in ten more years, maybe it will be a self-driving car and I won't even know the address; I'll just say "Take me to Jared's house.") I've offloaded this, and it's genuinely double-edged. I get where I'm going faster, with less stress. But something real is genuinely lost: my sense of direction, of distance, of my physical place in the world is noticeably weaker. And note that what is lost is inherent to offloading cognition; it's not that Google Maps is spewing bullshit.

The AI models appearing now raise similar issues at a much larger scale, because of their much wider applicability. Used well, I think these models can make us smarter and better, but I think it's also much easier to use them poorly than well. And again, this is somewhat orthogonal to whether the systems can perform cognition accurately enough; the problems remain even if we find that the machines are not spewing bullshit. Or, to put it differently: whereas you wrote that the machines' outputs "have little to no attachment to the world" (and gave examples of factual screwups), the problems of offloading cognition are still there even as the machines become more oracle-like. Watching what's happening, my belief is that these machines are going to get to the point where, across a huge variety of topics, they're giving answers at or above the level of the smartest humans. What then?

Raymond Lau:

I agree with the points you raised in your two notes. Two big topics are mixed together in Jared's original post: "offloading cognitive responsibility" and AI. While they sometimes intersect, they must also be discussed as independent issues. As you put it so well: even if AI chatbots are perfected, our natural inclination to find shortcuts or time-saving means still remains. I also agree that it is quite a stretch to go from occasional mistakes to "little to no attachment to the world" or "bullshit machines." It's clear that advanced technology in general, and AI in particular, is here to stay and will continue to expand; so, for me, the question is whether humankind can find a way to clearly understand its consequences and utilize it for human good.

Jared Henderson:

The mistakes are far from occasional. Hallucinations are a regular occurrence!

As for 'bullshit machines,' the term is derived from Harry Frankfurt's conception of bullshit. When you use one of these AI systems, there's no difference in how they present information when they are speaking truly and when they are speaking falsely; if you were evaluating one like a human being you were speaking with, you would quickly judge it to be someone who doesn't care whether or not something is true or false. Everything is presented in that same matter-of-fact tone.

Raymond Lau:

I have never used an AI chatbot and I don't have any desire to do so in the near future; so, Jared, I'll just take on faith your assessment of the current level of accuracy of AI chatbots. (Am I offloading cognitive responsibility and treating you as my "oracle"?) My poor engagement with new technologies is a result of my relative old age (I'm close to 70) rather than of any principled objections. On the contrary, I feel sad that I won't be around to experience the benefits of technological advances that will surely arrive in the next 30 years.

Two philosophical questions continue to concern me:

1. Jared, you have used many words and phrases that imply human agency and intention: "hallucinations," "doesn't care whether or not something is true or false," "matter-of-fact tone," "bullshit," "fabrication," "The AI made it up," etc. Together, they seem to imply that the current mistakes are not merely computational errors but something more sinister, the work of somebody with malicious intent. Why not view the technology of AI chatbots as a flawed tool that needs to be improved before it can be useful? In any case, you may be right, but I don't think you have made your case yet.

2. I think it has become too easy to criticize or blame new technologies, especially in the media, for our unwillingness to think or, in your words, to ask the big questions. My impression is that most recent criticisms are repetitious and remain at the journalistic level; they fail to reach the fundamental philosophical questions. I also believe it is difficult to find any new philosophical critiques of technology or technological thinking beyond those already offered by thinkers like Heidegger, the Frankfurt School, and Foucault. Since you admit that our reluctance to think precedes the age of new media, wouldn't it be more important to ask why? In "What Is Called Thinking?" Heidegger says that "most thought-provoking in our thought-provoking time is that we are still not thinking." For me, this is the more urgent question for us to pursue.

rif a saurous:

Thanks for the great discussion!

Jared, you wrote, "if you were evaluating it like a human being you were speaking with, you would quickly judge it to be someone that doesn't have care for whether or not something is true or false."

This doesn't quite match my current experience. If I were evaluating frontier-model AI responses as if they were coming from a human being, I would say they actually care a lot about telling the truth and being a good conversation partner: they will consider your responses, update their point of view, etc. Instead, my experience is more like talking to someone who is in many ways vastly more knowledgeable than me, but who has astonishing and surprising gaps that a human with that knowledge would never have. And of course, this in turn makes me not evaluate it as a human being. It's something else.

But I don't find the Frankfurt-style "bullshit" arguments compelling. They seem to veer back and forth between two separate approaches:

1. AI responses are bullshit because the content is bad, in a way that could only be produced by someone or something indifferent to truth. I think this is much less factually correct today than several years ago, when this argument first became fashionable; frontier AI creators have put a lot of effort into getting their machines to produce a lot more truth and a lot less falsehood. They're far from perfect, but the models today are light-years beyond where they were three years ago, in a way that's moved me from "I never use this" to "I still have to use my brain, but this makes me much smarter, better, and more productive."

2. AI responses are automatically Frankfurt-style bullshit because AIs are not alive and have no intentions, and therefore their outputs are by definition "produced with indifference to truth." This argument could be narrowly true, but if so, it is interesting primarily in that it points to a need for new language and concepts: Frankfurt-style bullshit from humans is almost never wanted or helpful, whereas AIs can in fact produce text that, even if it is bullshit in a narrow Frankfurtian sense, is incredibly valuable and helpful to humans.

George Beckert:

I would draw a somewhat different interpretation of the Socrates quote here, and I would also question the conclusion that AI belongs in a single category of "truth spewer" along with "Delphic priestesses, chicken bones, tea leaves, Tarot cards, internet encyclopedias, and fast-talking gurus." That grouping obscures the fact that all these things are extremely different within their own cultural contexts, and reducing them to a single category takes an overly rationalist view that is endemic to our modern, technological world.

My reading is that Socrates is not denying the word of the god in any way here. From my understanding, it would be relatively normal for ancient Greeks to question the word of a god, or to try to find meaning in it in a very different way than modern Christians would, for example. There was no centralized religious authority, and there was a vibrant dialogue and many arguments about the various stories and interpretations of the divine. To me, Socrates isn't really resisting the temptation to accept the god's word at face value; he's using a divine framing for his method because that would probably be a very usual way to frame an intellectual debate in his culture. When you say "I could see how a zealous Greek could believe that this was impious," that's a much more modern framing of religion, and I think it would be a relatively foreign idea for the Greeks, who were used to arguing about their gods all the time, unlike modern Christians, who have a centralized religious authority in either the Bible or the church.

The criticism of AI is valid but a little reductive, because AI is a genuinely useful tool for collating a large amount of general information about a subject before doing more specific research and fact-checking. It is important to engage the information we get in a dialogue, the same way Socrates engages his source of truth in a dialogue.

I would go as far as to say that people are relatively uncritical of ALL sources of knowledge, and they should be critical of ALL of them. We must be critical of the institution of academia and the science it produces, we must be critical of the history we learn and our systems of schooling, and we must be critical of AI and the companies that produce it. With AI specifically, the best tools we have for questioning its validity (other than catching it when it is simply incorrect and people have been blatantly lazy with their research) are the arts and humanities. We have historically undervalued pursuits in the humanities and arts in favor of more material and rational ones, especially in America, and so we somewhat lack the tools to engage in the same kind of dialogue with our technology and our science that the ancient Greeks had with their gods.

Mightyspunge:

Hold on, let me ask ChatGPT if you're reliable first.

Helena Pulaku:

Commenting on the first part of your article (about Socrates), I'd say that the real question regarding the oracle's pronouncement shouldn't be *whether* the oracle was right but *in what way* its pronouncement is correct. For Apollo was also named "Loxias" (the "Oblique One"), offering answers that should not necessarily be taken at face value.

As for AI... yeah... nah.

Thomas Rodriguez:

Thanks for this thoughtful and pointed post, Jared. I’m reminded of an anecdote. When I studied abroad in Greece during my undergraduate studies (in philosophy), one of my classmates asked our professor what he thought the modern-day equivalent of the Oracle of Delphi was. Our professor responded with something along the lines of “the stock market”. I thought that was an interesting response. This was in the summer of 2023, before the bulk of the generative AI craze began, but we seem to use quite a number of “oracles” in our daily lives that may lead to epistemically undesirable outcomes.

littlehummer:

I asked Uncensored AI the age of a famous equestrian, and it gave me the age of his son. I chided it for not being able to recognize the difference between the names Nelson Pessoa and Rodrigo Pessoa! (And after that stupid error, I will never trust AI!)

Brock:

For anyone participating in the read-along who has not yet acquired a copy of The Republic, I'll mention that you can almost certainly find one in a used bookstore. It will probably be the Hackett edition (Grube's translation, revised by Reeve), since that's the most common translation used in college classes.

Jared Henderson:

Yes, those Hackett editions are practically everywhere (and many copies at used bookstores are suspiciously lacking in marginalia!)

EncryptedLore:

I found that the Griffith translation is available in PDF format on the Internet Archive. I wanted a printed version, but since I was not able to find one in bookstores, and since I don't want to buy it from Amazon, it's a good alternative.
