The thought struck me today. Heidegger, Han, Kingsnorth, et al. are generally pretty negative about technology because they view it as something that divests us of who we are. As you pointed out in this chapter, one of Han's worries is that the confrontation with AI leads to an adaptation of what thinking is in light of the new technology. What it means to be human changes in response to our encounter with the technological other. The technological world unmakes our own and leads us on to a death of our own making. In our attempt to master the world, over time we annihilate ourselves and so become mastered by it. As this process continues, humanity becomes unrecognizable. And that begets mourning. I've always thought they were right about all of that. But my shifting understanding that alienation is the root of all possibility is making me think this alone may not be the final word. What if all of this is correct, but there is more to the story? What if technology does alienate us? What if it does eliminate the current world? And yet, what if that is not an utter loss but the source of its emancipatory power? Technology constantly provokes crises within us as individuals and within our societies and cultures. It threatens us. But that threat opens up a wound within us that becomes the seed of possibility for a new future. The wound, as wound, may fester. It may kill us. That is a real fear. But it may also bring us together and allow us to meet one another and the world in new ways, to reach into one another through the breach. New ways of thinking and being can become possible through the encounter with the mechanistic other. If this is the case, technology brings with it the possibility of failure, the risk of annihilation, but it can also be a true and absolute good, a creative effort that may heal us through our wounds. None of this eliminates the critiques or worries of Han et al., but it may provide a way to embrace them: to learn to recognize and act against the nihilistic risk of the technological while cultivating its wonder and possibility in the world.
Really enjoyed your thoughts here. The concept of "growth through wounds" really stuck with me for its potentially positive outcomes for humanity, though I worry about how deep the wounds will be before we start to heal and learn from our scars. I fear, however, that today's technological shift differs in one key way from the movable-type printing press or from radio and television. AI is irrevocably tied to venture capital and private interests. The printing press, while still a private business endeavor, quickly spread to multiple European cities with hundreds of printers, many of which failed to make a profit. Still, that technology did not remain in the hands of only four or five companies. It is possible that AI will spread and new companies will enter the field. I'd venture to guess, however, that the current AI cartel will remain dominant and limit our options for AI. TV and radio rapidly came under government regulation once they became viable. I'm not here to argue one way or the other for governmental regulation; I use the example to point out that both were seen as public goods that needed set standards and rules. The internet, social media, and AI have been allowed to grow free of significant regulation. That unfettered, free-range approach to AI could create "wounds" so deep that they may never fully heal.
Really great commentary on this section. You remark that “The wound, as wound, may fester. It may kill us. That is a real fear. But it may also bring us together and allow us to meet one another and the world in new ways, to reach into one another through the breach.” The mantra of *move fast and break things* has created an ecosystem where the wound is more likely to kill than to scar over and afford healing “through our wounds.” How do we create a society that allows for technological change that embraces the idea of wounds-as-growth?
"Expert mathematicians develop a sense of intuition for problems, a sense of what ‘feels’ right, and use that to guide their attempts at proofs."
This reminds me of an essay by Henrik Karlsson on the topic of wordless thought:
https://www.henrikkarlsson.xyz/p/wordless-thought
I always felt confused by the descriptions of the thought processes great thinkers give concerning their creative output; this essay cleared up some of that confusion. I struggle to see how AI will manage to incorporate this ambiguity into its logic.
"It processes pre-given, unchanging facts. It cannot provide new facts to be processed."
This stood out to me because I am not sure most people "provide new facts", at least not all the time. This is not meant to denigrate humanity's intelligence as a whole. Even though our languages are endlessly recursive, we use only a tiny portion of their potential; we speak in pre-built phrases, sentences, and concepts most of the time. I think the discourse surrounding AI tends to hold too sophisticated a standard for 'thinking'.
These are the sentences that broke Han wide open for me:
"The fundamental attunement is the gravity that gathers words and concepts around it."
and
"Heartfelt thinking measures and feels *spaces* before it works on concepts."
Han doesn't reason from logic, like a machine.
He reasons from poetry, like a human.
"Human thinking is more than computing and problem solving. It brightens and clears the world. It brings forth an altogether other world. The main danger that arises from machine intelligence is that human thinking will adapt to it and itself become mechanical."
That quote made me think of this one: "Pathos is the beginning of thinking. Artificial intelligence is apathetic, that is, without pathos, without passion. It computes." (Han, 40)
Han's idea of pathos and apathy rang true with me. Outsourcing thought and creativity to AI risks a loss of passion and connection to the arts, literature, and culture, as well as to academic pursuits. Rather than the human mind mimicking the mechanical algorithm of AI, I'd argue the true danger is to our wonder: the wonder that comes from an unexpected turn in a story or a surprise discovery in the midst of research. AI obliterates all of that through its inarguably terrifying efficiency. Claude presents the full information buffet for you, an academic DoorDash for research and thought. It is not the same, not as meaningful, as collecting the ingredients yourself, reading the recipe, preparing the dish, and presenting your own work and research through your own intellectual home cooking. I can order chicken marsala from DoorDash anytime I want. I get it from a local place and it is very good. When I decided to learn how to make chicken marsala, however, I came to understand the dish in a new way. I learned skills that I could apply to other recipes. I gained a sense of accomplishment and satisfaction through my efforts. Was it as good as the local place? On my first go-around, no, it was not. But I tweaked the recipe, was careful not to overcook the chicken, and I improved. Look, I know it's just a chicken dinner, but I found a passion for getting that dish right, or at least improved. I found value in the effort. This is what concerns me about AI's impact. AI isn't just apathetic; it also acts to dilute our passion for discovery and wonder. It's flat and smooth and frictionless - like ordering from DoorDash. I worry that humanity will increasingly adapt to AI's apathy by sacrificing our passion and wonder.
Again, really insightful post and comments!
I found Han's claim that "The affective is essential to thinking" thought-provoking. If taken to its extreme and compared to traditional philosophical views, it sounds quite bold! My layman's understanding of philosophy, historically, is that reasoning is generally seen as something separate—and often superior—to emotions. That emotions may in fact be essential to thinking seems provocative, but somehow closer to being true than the strict reason/passion separation.
Zooming out a bit, I am persuaded by the idea that there is something a priori, undirected (unprompted?) in thinking, before directed thinking can even start. I am not sure if that a priori state-of-mind is "the affective" or something harder to define. Our sense perception and the fact that we are embodied minds in the world might be relevant.
As for the generation of new knowledge: AI protein folding seems like a decent counterexample to Han's point, in addition to the mathematical proofs above. And, of course, how radically new are human creations, really?
"My layman's understanding of philosophy, historically, is that reasoning is generally seen as something separate—and often superior—to emotions."
You certainly find this in a lot of places. For instance, we might think Plato endorses this view in The Republic. Aristotle seemed to have a more nuanced view of the passions, so I'd need to revisit some of his work to see whether it would be fair to say he agreed. The Stoics saw emotions as just value-judgments, essentially subordinating them to reason. But then you have figures like Hume, who famously held that reason is a slave to the passions.
Often when reading I will highlight a passage but not fully be able to explain why I highlighted it. I *feel* that something is there. Unfortunately, I often never get back around to the highlight to find out whether there was anything to it. This, I *feel*, is somehow linked to the principle of faire l’idiot that Han mentions:
“By adopting the principle faire l’idiot, thinking risks the leap into the altogether other, ventures on untrodden paths.”
Say I do go back to one of those highlights, and looking at it again gives me a really absurd thought. The line of inquiry this thought has spawned seems to really make no sense, but screw it, I’ll go down the path anyways. Maybe I’ve accidentally stumbled onto an untrodden path that actually leads somewhere interesting.
At the moment it doesn't seem like AI is capable of having that experience. The problem might not be that AI is too intelligent, like Han says, but that its intelligence isn’t counterbalanced by anything like emotion or intuition. Perhaps emotion and intuition are what give all the collective ideas we have as humans the buffer space to keep expanding, whereas logic and intelligence alone would be an airtight space with no wiggle room to play around and discover anything new.
On AI, thinking, and consciousness, we probably need to turn to philosophers or psychologists with a more "scientific" bent: how are these things really implemented in our biological brains? I have not read much in this area, but it seems like people like Daniel Dennett might have more to offer in understanding what is going on, as opposed to throwing out "the first thought image is goosebumps".
Though I am firmly in the read-more, interact-with-the-non-digital-world-more school of thought (including reading some of the so-called great books), I think commentators of this ilk too readily discount human adaptation. LLMs are changing the world and will continue to do so. We will adapt to their presence as we have adapted to all the other changes in human history. I think a rough analogy can be drawn between our feelings about AI technology and society and how thinkers in the 1950s and 1960s (Heidegger and Arendt, for instance) were very focused on the atomic age and the threat of nuclear war. This is not to say that we should not be concerned, but that there is a discounting of how things will change.
A last note on the topic of LLM hallucinations and bullshitting (in the Frankfurt sense). My limited experience is that an LLM is orders of magnitude less likely to make these errors than the random person sitting next to me at a bar. We have a "compared to what" problem. Comparing the safety of a self-driving car to that of the average human driver would be an unreasonably low bar, just as "perfect" safety is an unreasonably high bar.
"On AI, thinking, and consciousness we probably need to turn to philosophers or psychologists with a more "scientific" turn, how are these things really implemented in our biological brains."
I think this would be good to explore in the future. Sadly, even in a year, we're only going to get to a sliver of what we could read.
I'm a professional AI researcher, so I've read pretty deeply on this, and the short answer is the philosophy (including Dennett) is super interesting and fun and also nobody really knows!
I will make a couple points.
The first is that the use of language becomes very delicate. We don't really know what "thinking" is, and this lets us easily confuse ourselves. For instance, I agree with Han that AI (almost certainly, as of today) is doing nothing we would call "heartfelt thinking", and can never "feel goosebumps", and that that's *important*, but also I absolutely think AI is "thinking", and in many (but not all) ways this thinking is highly analogous to human thinking. So Han is technically (in my view) probably wrong, but is pointing at lots of things that are actually and importantly true and right.
I will also say that I find Jared's claim that "a proof shows that the information contained in the premises was already included in the conclusion; there is no ‘new fact’ there" problematic. Again, language is delicate, but I think it is reasonable to view "facts" as relative to human knowledge and understanding, and I think finding a proof for something *is* like discovering a new fact. It's possible the universe is deterministic. If so, are there no "new" facts, because an outside simulator of our deterministic universe could already have known any such fact? I don't buy that.
I do agree with Han’s point here that over-reliance on AI as a substitute or crutch for thinking will gradually (but surely) lead to an end of thinking as we know it today. What strikes me as well is the following: “Genuine thinking brings forth a new world,” which I read as meaning that the thinking we’ve done as human beings has led to the creation or invention of things, philosophies, art forms, etc. that didn’t exist before, wholly original things that came from a unique individual or collective experience. My interpretation is that we feel our way into meaning and reach a personal sort of understanding that contributes to our thinking. It’s this subjective, random, and imperfect process that defines human thinking and makes it wondrous and capable of true invention. In contrast, the abuse of AI would flatten thinking: everyone would start from the same set of “pre-given, unchanging facts”, use the same models, converge on the same prompts, and generate results that resemble each other. Effectively, the disappearance of meaning, feeling, and emotion from the thinking process. If that happens, music will all sound the same, and proofs and arguments will be structured the same. In many ways, it’s a devolution of thinking.
It’s a depressing notion. Taking a bit of a leap, and somewhat generalizing, I think it implies that we’re edging toward giving up on innovation as we’ve known it and moving more toward being satisfied with incremental or marginal improvements to existing things.
Like a lot of the commenters on the first couple of posts, I’m oscillating between enjoying the book and being really frustrated by it. I agree with his viewpoint more often than not, but the way that he doesn’t show his work almost makes this feel more like a writing prompt generator than a philosophy book.
His comment about how AI is “deaf” got me thinking about whether language could be the starting point for understanding and awareness, rather than emergent from understanding like it is for us. Humans experience the self and the outside world, and then try to communicate that experience through language. Our contact with the world is through our senses, emotions and concepts, and the invention of language allowed us to communicate with other beings that also sense, emote and conceptualize. The words we use are inexact, but close enough that another human who also experiences the world can usually comprehend what we’re trying to say.
It’s obvious that modern LLMs aren’t aware, but sometimes it’s hard to shake the feeling that the seeds could be there. We’re used to taking writing skill as a signal of intelligence. AI’s writing (even if it’s soulless and formatted like a listicle most of the time) is well-organized, coherent, and grammatically correct. If a human wrote like that, we would assume them to be clear-thinking and decently educated. The LLM is arriving at this language through a much different, probability-based route, but the fact that we can have free-flowing conversations with machines built that way is still pretty jarring, even though we’re pretty used to it by now.
If the hallucination problem gets solved and a far-future iteration of an LLM develops the capacity to actually understand, it would be doing so with inexact language as its touchpoint to the world, rather than through vision, scent, sound, or other links to direct happenings with the physical world. It would be starting with the abstract and working back to the physical – pretty much upside down from how we experience the world. Without a clear answer from science about what exactly consciousness is (how it either emerges from the brain or otherwise links with it), I still feel like it’s hard to completely rule out the possibility of an experience grounded in language, but it would be a very alien type of experience.
On the ‘becoming an idiot’ point: anyone who has been frustrated by AI’s hallucinations might find it hard to swallow the idea that it’s (at least currently) too intelligent to become an idiot. It's perfectly willing to write something objectively false, with complete confidence. It’s almost like it’s too confident to be intelligent. Maybe a truly intelligent machine will require some sort of self-doubt mechanism.
Two things:
About the ‘idiot’ comment: I think we’re trading on two senses of the term. I think AI is dumb in the sense of willing to confidently assert bullshit (admittedly I’m giving in to anthropomorphism). But I think Han means it in the sense of having one’s own private idiom.
About the first thing you said: this book has made me more frustrated than any other book by Han, perhaps because I’m reading it so slowly.
I was intrigued by the “idiot” comment as well. Han's comment that “thinking risks the leap into the altogether other, ventures on untrodden paths” (44) made me think that being an *idiot* means that one bristles at the status quo and then veers off the well-traveled path. "The philosopher bids farewell to all that went before” and so questions everything (hello Socrates and Descartes); they leap onto untrodden paths. But AI can’t do this — at least not in its current LLM form — because it is necessarily built upon all that came before. All AI can do (and here I’m thinking of generative AI) is predict. It can’t veer off the trodden path or leap into new futures because it is tethered to prior information. Maybe true intelligence involves the sort of self-doubt and lack of blind hubris that you mention.
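To make the "tethered to prior information" point concrete, here is a deliberately tiny sketch in Python. It is not how an actual LLM is built (real models use neural networks over tokens, not word counts, and the corpus and names here are made up for illustration); it is just a picture of what pure next-word prediction looks like: the generator can only redistribute probability over continuations it has already seen, so everything it emits recombines the trodden path.

```python
import random
from collections import defaultdict, Counter

# Toy "training data" (made up for illustration).
corpus = ("the philosopher bids farewell to all that went before "
          "the philosopher questions everything that went before").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = follows[word]
    if not options:          # word never seen mid-sentence: nothing to predict
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation; it can only recombine what the corpus contained.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Whether scaled-up versions of this predictive setup can ever genuinely veer off the path is exactly the open question in this thread.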
I agree with both you and Jared: this book is fascinating and frustrating in equal proportions! haha
One more thought, on your second to last point: I also worry about our thinking being shaped by AI. We are already becoming lazy and impatient as LLMs become an almost irresistible and superconvenient assistant for pretty much anything. However, this could be a chance to reclaim what's truly ours, whether that is the affective essence Han talks about or our physical embodiment in the world.
Going off on a tangent, one good—but scary—thing about the possibility of superhuman AI is that it forces us to ask: why do we do anything? Or, rather, why should we do anything? If AI can produce the same outcomes as you, or better ones, why do it? Perhaps the answer is that the outcome is not all that matters. AI may replace the outcome-producing process (you or me), but it will never be able to *be* you or me. Only you are you (duh) and only you have access to your subjective experience. There is value in the subjective experience of reasoning, thinking, or creating itself—outcomes aside.
This is not a novel idea, especially for this comment section, but it is something I keep going back to, and it certainly came up after reading Han's take on thinking.
"The main danger that arises from machine intelligence is that human thinking will adapt to it and itself become mechanical."
I don't foresee human thinking becoming mechanical in the sense that the 'way' we think will become mechanical (thinking mechanically, or by statistical prediction), but rather that thinking will become a mechanical process: 1. Have a question. 2. Reach for your phone. 3. Google.
The "intelligence" sought is found, but rarely understood (or lingered with). Answering a question apart from mechanical help often "brightens and clears the world" because it requires understanding through human interaction or the interaction with real 'things'.
Two of Han's points can be explained by a common concept. The first, "Before capturing the world in concepts, thinking is emotionally gripped, even affected by the world," and the second, "It cannot provide new facts to be processed," both relate to one of the things that makes humans human: intention.
There is research suggesting that human intention and decision-making are always tied to emotion, no matter how "objective" they appear. In patients with damage to the part of the brain that processes emotion, it becomes difficult to form intentions and make decisions (cf. Jonah Lehrer).
AI can never form intention. The way people talk about "prompt engineering" shows exactly this: it is about finding the most efficient way to convey the user's intention to the AI, which implies that whatever the AI "does," it always starts from a human's intention.
AI cannot bring any new fact into the world, in the sense that whatever it spits out is merely an algorithm processing past data in response to the user's intention. It is the user's intention that is new to the world.
The first point Han mentioned may already have been borne out by the AI slop we see everywhere now, such as an Instagram video of a woman with an owl as big as she is (the video is AI-generated), under which thousands of commenters asked where she found the owl. The outright fabrications we used to call "fake news," once produced by clickbait factories, are now produced by AI, and people simply believe them and narrate reality based on them. Is that reality now mechanical, as Han puts it? Has it become half-fictitious, since part of it was made up by a short video or a fabricated paragraph? What is the difference between the fake stories made by clickbait factories (humans) and recent AI-generated content? The consequences seem similar, since both are fiction, so what does Han mean by "mechanical" with respect to AI? I still can't discern the significance of this new condition of living.
My thoughts on your two points.
Yes, AI will absolutely change thinking, in some ways possibly for the better, in many ways for the worse. Thoughts conforming to what the machine expects or humans just not remembering how to think seem like genuine risks.
Regarding idiocy, Han's book was written in 2022, which is frankly already ancient history. I think he was definitely right about the AI of 2022, arguably right about the AI of 2026, and all bets are off about the AI of 2036.
> What Han is saying, however, is that we may change what it means to think so that we resemble a machine, rather than finding that machines think like humans.
This has happened before, and we've named it the 'Flynn Effect'. We defined intelligence as being able to do well on an IQ test, and then we shaped the way people think of 'thinking' to fit that IQ test, thus seeing scores rise. I remember reading a book (sadly, I don't remember which one) that discussed researchers trying to give an IQ test to people in rural Central Asia (I believe it was) and the people just *not caring*, because it wasn't how they thought or perceived the world. Take the descendants of those very same people, and they've now been molded into this view of intelligence. It's honestly just another form of colonisation. Same with trying to make humans into AI-like machines.
It's also worth, I think, taking a read of Lakoff and Johnson and the others in their ilk. They argue that cognition is inherently *embodied*, built up via metaphors grounded in things in the physical world. I tend to lean toward this view myself, which means that AI can't be thinking, because it doesn't have these concepts; it doesn't understand them.
At the risk of oversimplifying, it seems Han is arguing that AI cannot engage in genuine thinking because it cannot directly perceive “totality.” By genuine thinking, he appears to mean the kind that brings genuinely new knowledge into the world.
If so, I both agree and disagree. It seems there are two types of thinking here. One is the kind that begins with “emotional affect” or “gripping” and requires the ability to perceive “totality”. I agree that this kind of thinking is akin to intuition. This kind of thinking can bring forth new insights or breakthroughs (though more on my confusion here below).
However, Han seems quick to dismiss a second kind of thinking, one that looks more like logic, analysis, or pattern recognition. This is the kind of thinking that begins with already known facts. Humans do this type of thinking too, and now machines arguably do it “better.” By better, I mean they can process far more inputs, generate more combinations, and produce many more outputs. As one example, AI systems are now generating new mathematical proofs.
This leaves me with some lingering questions about what Han means by thinking and knowledge.
First, what is Han’s definition of knowledge? He seems to equate it with concepts, conclusions, and totality, but I struggled to follow this move. In particular, I don’t quite see how “totality” (which I associate with Heidegger) maps onto concepts or conclusions (which I think are Hegel’s terms). Perhaps I’m missing something fundamental here.
Second, where does Han believe knowledge comes from? He again gestures toward Heidegger, referring to knowledge as something concealed within a “great hall in which everything that can be known is kept.”
I’m intrigued by the idea that knowledge is a kind of sensing of the true essence of things and would love to read a more detailed argument on this line of reasoning. Perhaps this means I need to finally read Heidegger, though that’s a daunting proposition!