Discussion about this post

Vince Strawbridge

This is exactly what my Socratic machine was telling me.

rif a saurous

While agreeing full-throatedly that we should all say, ‘I am a human being, in possession of rational faculties, and I will use them to their fullest extent,’ and while admitting up front that I work in the AI industry, adjacent to but not directly on today's frontier models, I think takes like this and the "ChatGPT is bullshit" paper will not age well, and are instances of imagining the world is a certain way we want it to be rather than the way it is.

Are the mistakes listed above genuine? Yes! Should the authors have checked their sources? Yes! But the argument boils down to "Chatbots sometimes do this, therefore they are fallible, therefore we should ignore them." I have bad news for anyone considering accepting this argument, which is that humans also do this sometimes, both accidentally and on purpose.

The "ChatGPT is bullshit" makes additional, outdated arguments based on the premise that the models are designed to "produce likely text" rather than true and helpful text. This is becoming less true over time; the "produce likely text" stage is now referred to as "pretraining", and there's a huge and expensive step where the model builders attempt, of course not entirely successfully, to make these things honest and helpful.

Practically, we've reached the point where I find the models deeply useful. I ask them for creative help with a recipe. Sure, there's an example on the internet where a weak, outdated model said glue was a pizza topping, but the actual result today from frontier models is that I get useful advice that makes my food better. I discuss movie plots with the models and feel that I end up understanding the movie better, seeing multiple perspectives. I ask them to help me with hard math questions --- sometimes the answer is wrong, but in the attempt I see the technique I should have used and learn the truth.

Philosophically, I think there are deep issues here, and it's better to avoid glib dismissals, especially because the underlying story is changing rapidly. We of course shouldn't think of these things as infallible oracles and give up our agency, but I'm rapidly coming to view them as essential cognitive prostheses that make me smarter and more productive and even more virtuous. Something big is happening.

