Artificial Intelligence according to Byung-Chul Han
Non-things by Byung-Chul Han, Part 2.5
Welcome back to our 2026 book club on the philosophy of technology. Throughout the year, we’re exploring questions like:
What effect has the rise of digital technologies had on the human condition?
What do we lose – and what do we gain – when we live our lives online?
What do conventional narratives about technology (techno-optimism, techno-pessimism, fatalism) miss? What facts do we need to consider? What alternative narrative about technology do we need to construct?
Usually, I only post for the book club on Mondays. But Han’s work this week seemed to need to be split into a few posts. I tried to write about Heidegger, too, but that will need to be another post. I want to make sure we have space to explore all the ideas being raised.
The Heidegger post will come in a few days. Here’s the rest of the January schedule.
January 19: The following parts of Non-Things: Views of Things
January 25: Members-Only Zoom Call, 3 PM Eastern
January 26: The following parts of Non-Things: Stillness, Excursus on the Jukebox
Note that the call is on Sunday, January 25 at 3 PM Eastern. This will be when we have our monthly calls: the final Sunday of the month at that time. Those calls are open to paid subscribers; a recording of the call will be made available for those unable to attend. These calls will include a 30-45 minute presentation on the material and a group discussion.
Much of the discussion about AI over the past several years has focused on its effects. So, for instance, when I have written about AI in the past, I have focused on the effects that it has on its users — leading people into delusions (what’s called ‘AI psychosis,’ which admittedly I’m skeptical of as a term), or allowing our cognitive skills to atrophy as we use it for more of our writing. Others have focused on the origins of AI, primarily the fact that it was trained on large amounts of copyrighted content (mine included!) without compensation for rights owners, with the added bonus that this has led to a decrease in work for writers and artists.
But that’s not the approach that Han takes in Non-things. Han’s chapter on artificial intelligence is largely concerned with whether or not we can say that artificial intelligence is actually intelligence. In other words, he wants to know if AI can think.
A prefatory note: later in the year, we’ll be reading some parts of What is Called Thinking? by Heidegger. I didn’t know, when I chose it, that it would so strongly parallel this chapter of Han; I had turned to Heidegger’s text late last year when I was doing some thinking about thinking. So, we will have an opportunity to revisit some of these themes later in the year. (If you check the schedule, you’ll see that we’re reading What is Called Thinking? in September.)
To some, it is obvious that AI cannot think. AI is merely a text-prediction engine; it does pattern recognition; it never goes beyond its sources. (Han seems to be closer to this — more on that below.) To others, though, it is obvious that AI is thinking. In an article for The New Yorker, we read:
Was ChatGPT mindlessly stringing words together, or did it understand the problem? The answer could teach us something important about understanding itself. “Neuroscientists have to confront this humbling truth,” Doris Tsao, a neuroscience professor at the University of California, Berkeley, told me. “The advances in machine learning have taught us more about the essence of intelligence than anything that neuroscience has discovered in the past hundred years.” Tsao is best known for decoding how macaque monkeys perceive faces. Her team learned to predict which neurons would fire when a monkey saw a specific face; even more strikingly, given a pattern of neurons firing, Tsao’s team could render the face. Their work built on research into how faces are represented inside A.I. models. These days, her favorite question to ask people is “What is the deepest insight you have gained from ChatGPT?” “My own answer,” she said, “is that I think it radically demystifies thinking.”
Now, I want to express some initial skepticism here. Human beings love to invent technologies and then reinterpret themselves or the world in terms of those technologies. We should be very cautious about looking at AI and saying, ‘Oh, that is what human beings, as thinking beings, have been doing all along.’
Han’s point is a more grounded one:
Before capturing the world in concepts, thinking is emotionally gripped, even affected by the world…The first thought image is goosebumps. Artificial intelligence is incapable of thinking, for the very reason it cannot get goosebumps. It lacks the affective-analogue dimension, the capacity to be emotionally affected, which lies beyond the reach of data and information.
Reading this, I was reminded of work on intuition — not the way philosophers like to use their intuitions as evidence for their views, but the way that skilled practitioners use intuitions as a way of guiding their work. Elijah Chudnoff has written a number of papers on intuition in mathematics, e.g., this one (unfortunately paywalled). Years ago, I invited Chudnoff to speak at a conference I organized, and he gave an account of intuitions in mathematical practice. The details are lost to time – we’d have to consult his written work to see the complete account – but one thing has persisted in my memory. Expert mathematicians develop a sense of intuition for problems, a sense of what ‘feels’ right, and use that to guide their attempts at proofs. For Chudnoff, we have reason to think that these intuitions are at least loosely truth-tracking — in less jargon-heavy speak, they are reliable but hardly infallible. If you speak to logicians who work on similarly complex proofs, you’ll hear similar talk. ‘These two ideas feel the same,’ they’ll say, and that guides them in the direction of trying to prove an equivalence. ‘This has to follow from that.’
Now, it would be a mistake for them to conclude that because it feels a certain way, it must be true. That would be, let’s call it, naive intuitionism. Naive intuitionism is indefensible; I can’t think of a single good argument for it. But that doesn’t undermine the point that these intuitions guide thinking, pointing the mind in a fruitful direction. And these intuitions are felt. Han says that thinking ‘moves in a “field of experience” before it turns towards the individual objects and facts in that field.’ He goes on to say that:
The world disclosed in a fundamental attunement is subsequently articulated by thinking in terms of concepts. Being gripped precedes comprehension, the work on the formation of concepts.
Han’s critique of AI, then, seems to rest on the problem of AI ‘feeling.’ It can have no ‘heartfelt thinking’ which ‘measures and feels spaces before it works on concepts.’ Instead, it ‘processes pre-given, unchanging facts. It cannot provide new facts to be processed.’
I am not so sure about that last sentence. One area where AI has made some progress – where even expert practitioners in the field admit it can be interesting – is in mathematics. Some people are complete triumphalists about this, but one person whom I take to be reliable is Terence Tao, one of the most prolific and impressive living mathematicians.1 He is documenting the progress of AI in finding solutions to unsolved problems in mathematics, and he has found some. (Currently, though, he’s finding many claimed solutions that are incorrect.) One’s assessment of this, I think, is going to depend on one’s account of mathematical knowledge. In some sense, a proof shows that the information contained in the conclusion was already included in the premises; there is no ‘new fact’ there. (Philosophers sometimes call this, or at least a related problem, the problem of logical omniscience.) Yet, finding a proof for something does feel like discovering a new fact. Is this a case where AI finds new facts to be processed? Is this a moment where AI goes beyond correlation and pattern recognition? Does it form concepts?
Two points late in the chapter are, however, very interesting to me. I would like to hear what everyone else thought of them.
First, Han writes:
Human thinking is more than computing and problem solving. It brightens and clears the world. It brings forth an altogether other world. The main danger that arises from machine intelligence is that human thinking will adapt to it and itself become mechanical.
This is related to the worries that I have raised in the past, but in a slightly different register. I’ve thought about the way the mind atrophies when left unused. What Han is saying, however, is that we may change what it means to think so that we resemble a machine, rather than finding that machines think like humans.
Second, Han’s final point on becoming an idiot was very interesting to me. An idiot, etymologically, is a private person. Someone who searches for a new idiom is an ‘idiot,’ then, in that older sense — they aren’t using the socially adopted idioms or conventions of thinking. This is where thinking can be especially fruitful. It is why I find the grand philosophical system-builders so interesting — they have to coin new idioms in an attempt to conceptualize the world. But AI, Han says, is too intelligent to become an idiot.
Let me know what you think of this. I’ll collect comments from Monday’s post and this post and share them in next week’s post so that we can all keep track of the growing discussion.
‘Impressive’? Certainly — but who am I to judge that? I’m primarily going on reputation. He won a Fields Medal, after all.



The thought struck me today. Heidegger, Han, Kingsnorth, et al. are generally pretty negative about technology because they view it as something that divests us of who we are. As you pointed out, in this chapter, one of Han's worries is that the confrontation with AI leads to an adaptation of what thinking is in light of the new technology. What it means to be human changes in response to our encounter with the technological other. The technological world unmakes our own and leads us on to a death of our own making. In our attempt to master the world, over time we annihilate ourselves and so become mastered by it. As this process continues, humanity becomes unrecognizable. And that begets mourning. I've always thought they were right about all of that.

But my shifting understanding that alienation is the root of all possibility is making me think this alone may not be the final word. What if all of this is correct, but there is more to the story? What if technology does alienate us? What if it does eliminate the current world? And yet, what if that is not an utter loss but the source of its emancipatory power? Technology constantly invokes crises within us as individuals and within our societies and cultures. It threatens us. But that threat opens up a wound within us that becomes the seed of possibility for a new future. The wound, as wound, may fester. It may kill us. That is a real fear. But it may also bring us together and allow us to meet one another and the world in new ways, to reach into one another through the breach.

New ways of thinking and being can become possible through the encounter with the mechanistic other. If this is the case, technology brings with it the possibility of failure, the risk of annihilation, but it can be a true and absolute good, a creative effort which may heal us through our wounds. None of this eliminates the critiques or worries of Han et al. But it may provide a way to embrace them and learn to recognize and act against the nihilistic risk of the technological while cultivating its wonder and possibility in the world.
"Expert mathematicians develop a sense of intuition for problems, a sense of what ‘feels’ right, and use that to guide their attempts at proofs."
This reminds me of an essay by Henrik Karlsson on the topic of wordless thought:
https://www.henrikkarlsson.xyz/p/wordless-thought
I have always been confused by the descriptions great thinkers give of the thought processes behind their creative output; this essay cleared up some of that confusion. I struggle to see how AI will manage to incorporate this ambiguity into its logic.
"It processes pre-given, unchanging facts. It cannot provide new facts to be processed."
This stood out to me because I am not sure most people "provide new facts". At least, not all the time. This is not meant to denigrate humanity's intelligence as a whole. Even though our language is endlessly recursive, we use only a tiny portion of its potential. Most of the time, we speak in pre-built phrases, sentences, and concepts. I think that the discourse surrounding AI tends to have too sophisticated a standard for 'thinking'.