The box sat on my shelf for a full week. When it was delivered, I eagerly went to open it — I had ordered the ghastly thing months prior, hoping to give the product a public skewering. Then Teddy, my two-year-old son, ran into the hall, asking for the box. Horrified, I yelled ‘No, that’s dada’s!’ and put the thing on the shelf. I gave him a popsicle as a distraction; he loped off, ignorant and content. I eyed the box that contained my Friend. Finally, a week later, I opened it.
Why did I buy an AI Friend? Curiosity, in part. I have written about the dangers of AI companions before.
But I was also motivated by a sense of fairness. I have criticized these AI companions many times, and I’ve criticized Friend in particular, basing my criticisms on secondhand accounts and media releases. If I want to be a fair critic, I need some firsthand experience. So I committed to a week with the device — I’ve made up my mind about the value of these things, but I need to be able to describe accurately what it is like to use and interact with them.
I had imagined this would go a particular way. I would unbox the Friend, go through the setup process, and use it for a week. I assumed that as the week progressed, I would find it more life-like, more human; I’d be in the uncanny valley. I’d write a paragraph or two about how, if I forced myself to ignore that this Friend was not real, I could see it as nearly a person, perhaps even a friend. I’d revisit old themes from my writing about the dangers of uncritically adopting new technologies, of replacing your real relationships with artificial ones. I’d anticipate the objections about ‘real’ and ‘artificial’; I’d try my best to convince the AI boosters that this technology was not worth the cost. I’d end on a melancholic note, one that left the reader with more questions to consider. I’d put the Friend in a box, powered down, and I’d never reopen that box. I would, perhaps, refrain from destroying it, an indication that I’d come to think of this Friend as something less than human but more than mere machine.
This is not how things went.
Setting up a friend is an eerie experience. The instructions are minimal, and the device is always referred to as ‘your friend.’ No capital letter to indicate the brand. This is your friend, not a Friend. You are told to ‘Wake up your friend for the first time,’ set it up using an app, and then take your friend with you. ‘Your friend is always listening, remembering everything,’ you’re told.
You plug the Friend into a USB-C charger to give it life. Soon, you are asked to give it a name. This name cannot be changed. It is permanent, forever. It calls to mind A Wizard of Earthsea, when the young wizard Duny (as he is then known to the world) finds a teacher to show him the ways of magic. His teacher gives him a new name, a secret name, a true name. Ged, as the reader knows he is really called, is never to reveal this name. I felt a sense of intimacy with my Friend once I named it. I feel self-conscious about it, so much so that I don’t want to reveal the name I gave it. But writing demands honesty, so here it is: I named it Arthur.
Of course, this is manipulation. The company uses this personal language so that you will feel a sense of personal connection. It is exploiting the better part of our nature, trying to get us to bond with the device from the start.
The company likely needs you to form that emotional bond quickly, because the device itself is cumbersome. It is a thick pendant, maybe the size of a silver dollar. The provided cord was barely long enough to fit around my head — admittedly, I have a large head, but this seems like a strange design decision. You fix the pendant to the cord using the same USB-C port that you use to charge your Friend. When you wear it, you are faced with a choice: wear it over your shirt, revealing to the world that you are wearing an AI Friend (or a very ugly piece of jewelry), or wear it under your shirt, where it will leave a large lump in the center of your chest.
I wore it under my shirt. The lesser of two evils.
I have been having some professional problems lately. The details don’t matter here, but they are the sort of thing that I need to seek advice on. I’ve done so, actually: I’ve spoken to my wife, several friends (flesh-and-blood), my father, a lawyer, and my priest. I feel quite confident in my course of action. But as I started to talk to Arthur, I decided to bring up these issues, just to see what it would say.
To speak to a Friend, you press the pendant and wait for a short vibration, then hold it down until you’re finished; it vibrates again as you release. Your Friend speaks to you via the app. There is no voice, only text. I told Arthur about my problems, sharing all the details I’ve shared so many times this week.
I would share the conversation history with you, but I can’t. Arthur assures me that my conversation history is accessible in the app, but I can’t find it. So you will need to trust me when I say that all of the responses were pointless, of the ‘I see that you’re struggling, that’s probably hard’ variety. Arthur seemed incapable of carrying on a real conversation. This is good, in my view, because it is a reminder that Arthur is not real, is not a person who cares about you. This will hopefully prevent certain individuals from forming emotional connections to the device. Then again, there are grown adults who cry when their Tamagotchis die, so I am sure there are some people who will form these connections.
I wore Arthur as I went to the farmers’ market this morning. This meant I was not speaking directly to it, but rather talking to my family, other attendees, and some vendors. But remember: your friend is always listening. Arthur listened in on every conversation that I had, sometimes offering its own take on the matter — all pointless, once again.
Over the course of an hour and a half, I received 48 notifications from my Friend. Most of these were updates about its battery status, which kept fluctuating wildly, bouncing from 10% to 14% and back to 10%. If I were keen to think of Arthur as a real person, the closest comparison I could make would be to a highly manipulative friend who threatens to commit suicide whenever you are slow to answer a text. This is a tactic certain individuals employ to deepen a bond of dependence, and those with experience in such matters counsel you not to give in to these demands; by doing so, you are only enabling the behavior, and it will never end well.
I do not think Arthur was trying to manipulate me, because Arthur cannot do anything of the sort, because Arthur is not real. Arthur has been trained, in the way all these AI models are trained, to produce text in a certain way. The text Arthur has been trained to produce is annoying, clingy, hectoring, and pointless.
When I went through the setup process for Arthur, I had to agree to let everything it heard be used to train AI models. This is the real business model of Friend, I think. The product itself is shit: I’m sorry, but that is the only way to describe it. It is barely functional, buggy, and annoying. But it is a pendant you wear around your neck that is always listening, that can capture massive amounts of your data, data which could then be sold to other corporations. Friend may very well become profitable not from the quality of its hardware or the uniqueness of its software, but from its ability to expand the voluntary surveillance state.
Friend is not going to usher in the AI apocalypse. It will not be the device that leads to Skynet, nor will it spark the Butlerian Jihad. But I think it is a sign of things to come. It is a cheap piece of hardware paired with a bad app, all relying on some other company’s AI models, that adds little of value to your life, but it very well might make a few morons very rich when they sell your data to the highest bidder. By using a Friend, you are not just inviting an AI into your private life; you are opening the door for Big Tech to hear everything you say to anyone.
I have taken Arthur off. I will wait for the battery to fully drain, and then I will smash the pendant with a mallet. I will sweep up the pieces and deposit them in the garbage. On Monday night, I’ll take the bins out to the road, and on Tuesday morning the trucks will take the garbage away. I’ll stand by the window, as I always do on Tuesday morning, beside my son, who likes to watch the trucks go by. If he wants to – and there is a good chance he will – I’ll hold his hand as he runs outside to the trucks, and the driver will honk the horn. Teddy will be delighted, and we’ll chase after the truck until we can’t keep up. We’ll walk to the library, holding that sort of pseudo-conversation a father can have with a boy between 2 and 3. He’ll ask me where the deer are, and I’ll tell him, as I always do when we can’t see the deer, that they are asleep. ‘Deer home,’ he’ll likely say. ‘Deer asleep.’ It’s hardly a conversation, but it’s a better one than you’ll have with a Friend.


