“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” -Ian Malcolm, Jurassic Park
I much appreciate hearing humans insist on remaining human. Hearing them refuse to be shoved into the machine as one more cog.
Yeah, I don't want to be a Luddite, but they're gonna make us Butlerian Jihad.
For ChatGPT I've been using this system prompt in an attempt to disable all of the engagement features. It's not mine, and I forget where I found it, but it cuts out the engagement bait that causes these kinds of outcomes. Makes it more tool-like than friend-like:
Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
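In the ChatGPT app itself, text like this goes in the Custom Instructions settings. If you'd rather apply it through the API, here's a minimal sketch using the OpenAI Python SDK; the model name and the example question are my own illustrative choices, and the prompt constant is truncated here for space:

# Minimal sketch: applying the "Absolute Mode" text as a system prompt
# via the OpenAI Python SDK. Assumes OPENAI_API_KEY is set in the
# environment; the model name is illustrative, not a recommendation.
from openai import OpenAI

ABSOLUTE_MODE = (
    "Absolute Mode. Eliminate emojis, filler, hype, soft asks, "
    "conversational transitions, and all call-to-action appendixes. "
    # ...paste the rest of the prompt text from above here...
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Give me a reading plan for Plato's Republic."},
    ],
)
print(response.choices[0].message.content)

The system role carries more weight than pasting the same text into your first message, though none of it is binding: the model can still drift back toward its engagement-tuned defaults in long conversations.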
Excellent take, Jared! I could not agree more. I’m relatively enthusiastic about the possibilities of AI, but no potential benefit is worth going down that road.
The irony of this story is that "Eric's" emails, if true to the company's mission, are almost certainly sent by AI.
Jared, I applaud your decision to prioritize personal authenticity in your philosophical endeavors!
There are Discord servers that encourage people to hurt themselves. Image boards before that. Pre-internet, the local bully might drive a kid to suicide, and still might. LLM slop is distasteful, but I am not yet convinced LLMs are going to lead to an increase in self-harming behavior.
Most big AI companies explicitly want to scale up LLM usage exponentially so that they can make endless amounts of money. You don't think that recklessly scaling up a technology as a replacement for individual effort and human-to-human interaction will increase self-harming behavior? The best formula for self-harm is more isolation + reinforcing self-absorbed, myopic patterns of thought. That's why those discord servers, etc., are also harmful environments. By scaling up without guardrails, we are making that sort of harm much, much more accessible to impressionable young minds.
I agree with the sentiment wholeheartedly, but be aware that 99.9% of these cold email outreach campaigns are automated and often use AI themselves to develop emails that appear personal or relevant. Companies set up campaigns of 3-4 emails, scheduled to be sent over a set number of days or weeks, to thousands of targets. It's the next generation of spam but with a sheen of business-to-business services and personal engagement. I doubt there was a human on the other end of that email.
Yes, I’m aware there’s a good chance I’m simply getting templated emails. It’s one reason I wanted to publicly release the response, too — at least somebody will read it.
I shared my take on AI in my publication a while ago, and the takeaway was that we value everything for what it does for us personally, not for what it actually is. At the end of the day, I am more worried about humanity’s moral bankruptcy than about the technology we have made.
I agree. We argue about whether it’s bad based on how it behaves (it does this, which is beneficial; it does that, which is negative) and just tot up the pros and cons of the behaviour, as if AI were just another kind of human being.
Well, yes, but I think this is only a fraction of the problem. The root is that Americans are encouraged to think that monetisation is the highest good.
So if I give my sister something for Christmas that I designed and sewed for her, she gets mad at me when I reject the idea of starting a business.
Medea knew how to kill Talos...
I just realized you're probably not American, sorry. I'm not sure about British society, but I imagine it's not much different in this regard.
Good for you, Jared.
Jared Henderson the type of guy to write an essay worthy of publication to a scammer. 😂😂
And the scammer be like, 'Sir, this is a Wendy's.'
But jokes aside tho, keep up the good work.
“But the revenue…the branding…the monetization…”
Thank you for posting this, Jared. I like where you stand on this AI chatbot thing. I personally find AI helpful when it's used as a tool, whether for learning or for looking up information quickly. However, human interaction is one of the things that can never be replaced by AI. I'm not surprised by the existence of such "AI or tech" companies whose end goal is to make a profit regardless of user safety, or even morality. This isn't going to be the first or the last such company or product; we'll surely see more in the future. But as consumers, probably the best thing we can do is to think critically before using any product and weigh whatever dangers it may pose.
AI is the Infinite Jest that David Foster Wallace warned us about: it literally will kill people!
Thank you for writing the email and sharing it here. It makes me aware of how harmful AI can be, and it helps me believe there is still authenticity in this world!