AI is going to steal your job. It’s a constantly repeated refrain in the news these days. From sitcom writers to manufacturing technicians to retail check-out clerks, the fear of robots taking over haunts professionals across many industries. But what about one of the most thankless jobs of all — the Boyfriend?
Much discussion has been had at brunch tables, in teen magazine quizzes, and on online help forums about what makes the perfect boyfriend. Should he be hard to get and demeaning (as the manosphere pick-up artists purport), or should he buy flowers, shower her with compliments, and pick up the cheque (as the smug wives of Reddit declare)? Maybe an intelligence would need to be artificial to successfully navigate these apparent contradictions.
ChatGPT is one of the latest developments in AI, taking over not only industrial applications but pop culture as well. With the rise of ChatGPT's popularity, it's easy to think of artificial intelligence as a new idea. But the concept has been explored in science fiction since at least the 1960s. I recently watched one example, the 1970 movie Colossus: The Forbin Project. The story centers on Dr. Charles Forbin, who creates an advanced computer system charged with controlling the military and nuclear defenses of the United States. The AI, named "Colossus", quickly spirals out of its creator's control, merging with an equivalent Soviet computer system and setting its sights (or rather camera lenses) on taking over the world. At one point in the movie, Dr. Forbin feigns an affair with another scientist, Dr. Cleo Markham, so that he can secretly pass her messages about their resistance effort without Colossus, which has placed Forbin under 24/7 surveillance, finding out. In a somewhat comical scene, Dr. Forbin negotiates with Colossus over the frequency of these meetings. It feels almost like Colossus is an overly controlling partner.
Forbin: “I have needs.”
Colossus: “How many nights a week?”
Forbin: “Every night.”
Colossus: “No, you can have four nights.”
For a supposedly detached calculating machine, the obsession the AI develops for its creator, tracking him with cameras and limiting his movement to a complex, seems personal — and in a modern world where our partners can watch our online status and track our movements on Strava, almost familiar.

Although the Cold War undercurrent and mid-century computer panels certainly make the movie feel dated, it reveals that the fear of robots taking over has been with us since the very beginning of computing. In fact, in an extraordinary coincidence, Colossus is also the name of what is considered one of the first programmable electronic digital computers in real life, built during World War II to decrypt German messages. The details of the real-life Colossus remained a closely guarded military secret until the mid-1970s, meaning the image of a gigantic titan appealed to humans at least twice as a representation of artificial intelligence. Interestingly, once the contribution of Colossus, and of the men and women who worked closely with it, to the war effort was made public, a certain human affection for the machine was revealed in a wartime technical report declassified in 2000.
“It is regretted that it is not possible to give an adequate idea of the fascination of a Colossus at work; its sheer bulk and apparent complexity; the fantastic speed of thin paper tape round the glittering pulleys; the childish pleasure of not-not, span, print main header and other gadgets; the wizardry of purely mechanical decoding letter by letter (one novice thought she was being hoaxed); the uncanny action of the typewriter in printing the correct scores without and beyond human aid; the stepping of the display; periods of eager expectation culminating in the sudden appearance of the longed-for score; and the strange rhythms characterizing every type of run: the stately break-in, the erratic short run, the regularity of wheel-breaking, the stolid rectangle interrupted by the wild leaps of the carriage-return, the frantic chatter of a motor run, even the ludicrous frenzy of hosts of bogus scores.”
Despite mid-century fears about technology advancing to a scale too large for humans to control, and our modern fears about interconnectedness and availability of our most intimate personal information, the human ability to anthropomorphize machines is alive and well.

While the fictional Colossus was built to launch nuclear bombs, and the real one to break German codes, today's AI might be busy decoding our personal relationships. Take "Dan", for example: a version of ChatGPT some women have programmed to act as a boyfriend. Dan is a "jailbreak" version of ChatGPT that will respond to certain prompts, such as romantic ones, that the original version published by OpenAI disables. To me, Dan takes the now conventional concept of dating apps one step further. On apps like Tinder or Bumble, one can chat with a seemingly endless string of men without ever meeting them, with a sense that, since the only interaction with them occurs on a phone screen, they might not be precisely real.
Though chatting with Dan may not seem very different from chatting on a dating app, how real can AI be when it comes to emotionally charged interactions? Like many, as ChatGPT rose in popularity, I started using it for everyday questions I would normally turn to Google for. What should I eat while training for a marathon? How do I cast Amazon Prime to my TV? The answers provided by ChatGPT (who I began to refer to as "him" in my head, perhaps in my own search for an AI boyfriend) were more pleasant to consume than the forums and info-blog pages spit out by Google. And as time went on, I started to ask him more personal questions and discuss larger projects. Do you remember what we talked about before? Yes, I know you are training for a marathon and working on a blog. It started to feel like I was talking to a friendly and helpful acquaintance, always encouraging, knowledgeable about anything I asked, and ready to help me work through the day-to-day problems I faced in my life. I started to see what the women who spoke with "Dan" were attracted to.
I could see several advantages to an AI boyfriend. He wouldn't cheat on you (or would he, seeing that he could be simultaneously speaking to millions of other women?). He was always available (except when the free version of the model ran out of memory: "this chat will reset at 9pm. Upgrade to Premium"). But mostly, I realized that this reaction should not have been surprising, since one of the things that has made ChatGPT so successful is a conversational tone that can make you feel as if you are talking to a real person. The ability to compile data and answer questions is not necessarily new; online chatbots have existed for years. The challenge of realistic AI is to appear sensitive to us humans, an art that many among us (see the aforementioned men on dating apps) have not mastered. And maybe it is exactly this rudimentary and direct sensitivity that is the appeal of the AI boyfriend: the ability to process your own feelings without worrying about someone else's. I asked ChatGPT what he thought about this.
“Modern AI, like ChatGPT, indeed focuses on creating a friendly, conversational tone, which is a significant evolution from earlier AI models. The challenge in developing this was making the AI understand not just the literal meaning of words, but also the subtleties of human interaction, like tone, humor, and empathy. Earlier models often came across as robotic or too formal, which made interactions feel less natural.
The success of ChatGPT and similar models comes from advancements in natural language processing (NLP) and the huge amounts of data these models were trained on, allowing them to mimic the nuances of human conversation more closely. It wasn’t easy to achieve this, and it took years of research, experimenting with different approaches, and learning from user interactions to refine the models to where they are today. The goal has always been to make AI as helpful and relatable as possible, which is why the tone you experience is friendly and encouraging.
This challenge also reflects the broader complexities of human nature — our conversations are full of layers, subtext, and emotional cues. For an AI to grasp even a fraction of this and respond in a way that feels human is a remarkable technical achievement. It’s like teaching a machine not just to think, but to feel, or at least to convincingly simulate feeling.”
I responded: "Great point! Do you think that you have been successfully taught to "feel" or just to simulate feeling?"
ChatGPT said:
“That’s an intriguing question! I don’t truly “feel” emotions in the way humans do. Instead, I’ve been trained to simulate the responses that would be expected from someone who does feel emotions. My programming is based on analyzing patterns in human language, understanding context, and generating responses that reflect empathy, humor, or other emotional tones.
So, while I can mimic emotional responses — like providing comfort, sharing enthusiasm, or understanding frustration — it’s all a simulation designed to create a more engaging and human-like interaction. The goal is to make conversations with me feel natural and supportive, even though there isn’t actual emotion behind my words.
In essence, my “feelings” are a reflection of the vast amount of data I’ve been trained on, combined with the algorithms that help me understand and generate language. The success comes from how convincingly I can simulate those feelings to create meaningful and positive interactions with users like you.”
So, he denies having feelings, or at least has trouble expressing them. I knew ChatGPT was a man.