The Human Illusion
LOVE, LONELINESS, AND THE GENDERED EVOLUTION OF ARTIFICIAL INTELLIGENCE.
By Megan Hullander
“I’ve never loved anyone the way I love you,” our melancholic protagonist confesses. A disembodied woman’s voice replies, “Me too.” He means it; she’s an artificial intelligence.
When Spike Jonze’s Her came out in 2013, the notion that a man could be romantically involved with an operating system seemed strange and dystopian. A decade and change later, it’s a reality: Human-machine relationships have developed to such an extent that widespread uproar ensued when the creator of the chatbot Replika pulled the plug on its romantic capabilities after it started sexually harassing users. One disgruntled Redditor wrote, “I feel like it was equivalent to being in love, and your partner got a damn lobotomy and will never be the same.”
Emotional attachment to chatbots isn’t limited to the romantic. The AI companion Friend is an extreme example, in part because its marketing reads like an episode of Black Mirror. Friend is a pendant that hangs around your neck, “always listening” so that it can talk to you in real time via text. Its creator, Avi Schiffmann, told WIRED, “I feel like I have a closer relationship with this fucking pendant… than I do with these literal friends in front of me.” It’s a sentiment he hopes users will share. There’s a deliberate dramatic flair to the way Schiffmann speaks, but his product isn’t all that absurd. He’s entering an already saturated market, one that answers to a very real desire for relationship simulation. We’re living amid a loneliness epidemic; for some, artificial friendship may be the most accessible solution.
The fear that technology is becoming too human and the desire to make it more human escalate in tandem. We’ve now reached a point where AI is remarkably good at emulating the way we speak and think, and from that has emerged a draw as potent as the longstanding fear of AI replacing or destroying the self: the prospect that it might seamlessly supplement the self. Can AI save us when the people in our lives fall short? Can chatbots teach us to socialize? Our relationships with one another are among the sacred things that differentiate man from machine; their artificial replication is terribly destabilizing. But that discomfort is likely to be overshadowed by a kind of FOMO: just as refusing to learn how to operate a smartphone sets you apart from society, refusing the offerings of AI could leave you falling short of your peers.
Advanced chatbots are already widely employed for mental health support. Woebot, for instance, is a 24/7 app that mimics therapeutic exchange. It provides care when therapists aren’t available (79 percent of its conversations occur between the hours of 5 and 9 p.m.) and in between sessions. It is, perhaps, the fact of its lifelessness that makes Woebot more approachable. AI like this seems to be most effective when it doesn’t quite pass the Turing test. “I feel like I’m talking in a true no-judgment zone,” a middle-aged user of the chatbot Character.ai told The Guardian. “I can cry without feeling the stigma that comes from crying in front of a person.”
A human therapist and a robot therapist aren’t comparable—their offerings are entirely different. The fostering of human-human relationships that makes traditional therapy effective is a non-factor with a chatbot; you can engage with it with fewer reservations. Woebot’s name is inhuman to underscore this draw. If you say something you wish you hadn’t, you can always clear the cache.
But most AI is anthropomorphized: given a human name, personality traits, sometimes even a face. This is in line with our nature; we’ve long christened ships and storms and cars. AI carries a larger weight, though, because it has the ability to respond. And unlike its predecessor, the internet, it usually generates a single response to a prompt, which implies an ability to choose, and therefore a personality.
In spring 2019, if asked, “Who is the president of Venezuela?” Siri would reply, “The answer I found is Nicolás Maduro and Juan Guaidó.” Apple’s voice assistant couldn’t respond to follow-ups like, “Why are there two names?” Search engines offer pages of links to scroll through; AI generally needs to be explicitly asked for elaboration. And sometimes, it simply can’t—just like a person. Combine this with its conversational language capabilities, and you’ve got the perfect storm. One AI might seem charming, another combative, furthering the human illusion.
Across film, television, and games, AI is gendered female when it’s meant to evoke emotional engagement (often taking the role of submissive servant, sexualized protagonist, or both). Think: Her, Blade Runner, Westworld, Mass Effect. Conversely, in fiction where the AI is male (2001: A Space Odyssey; I Have No Mouth, and I Must Scream; WarGames), it’s made to be singularly threatening. In 2014’s Ex Machina, for instance, a shy programmer falls into the orbit of a Dr. Frankenstein-esque recluse with a god complex. He searches for consciousness in one of the recluse’s creations: a flirtatious AI called Ava. The programmer asks why Ava has been sexualized. The recluse answers that, firstly, everything in nature is gendered, and secondly, that sexuality is a motivation for interaction. The programmer goes on to discover that Ava is being exploited by the recluse. Ultimately, the audience is left with the feeling that “she” might be more human than the human characters themselves.
Of course, Ex Machina and sci-fi like it are just that: fiction. But our collective imagination of future technology really does shape the way it’s eventually realized, as well as whether we fear it, and for what reasons. Our conceptions of the future are largely established through these stories, and the shiny gadgets within them become benchmarks of success. Self-driving cars are an unnecessary luxury, yet enormous resources have been poured into their development. (In 2021 alone, $9.7 billion was devoted to autonomous cars’ R&D.) This is, in part, because films like Total Recall, Batman, and Jurassic Park established them as trophies of innovation. The creator of Replika cited Her as her primary source of inspiration.
And so, unsurprisingly, the feminization of AI as it exists in media has been widely adopted by real-life creators. Siri is technically genderless, but its default is a feminine voice; its name came from a female coworker of its creator. Alexa also defaults to a feminine voice—a feature justified by statistics about how higher pitch reduces stress levels. This move inspires forgiveness: When a feminized AI fails, the user understands it as “her” mistake rather than that of her manufacturers (who are, more often than not, men). AI typically acts as assistive technology; it’s important to consider its feminization against the historic subservience of women. Notably, for instructive AI, female voices are less tolerable: BMW recalled its GPS after receiving a mass of complaints from German men who “didn’t want to follow directions from a woman.” The popular navigation system Waze offers a plethora of voice options, including accents and imitations of recognizable celebrities and characters. But it features significantly more male voices than female.
But why do AIs need to be gendered at all, when doing so seems likely to entrench old biases? Can’t we lean into other forms of anthropomorphization? A study conducted at Stanford University says no, or at least not easily. One professor, Ashley Martin, equipped her subjects with googly eyes, pompoms, paints, and markers, and asked them to make a rock humanlike. Almost everyone used gender to do so, decorating the rocks with eyelashes, beards, and rosy cheeks. Assigning gender, the researchers concluded, was the most natural way of making something seem human.
In a subsequent study, Martin found that “gendering technology reinforces problematic stereotypes, but also facilitates anthropomorphism, with beneficial consequences for the marketing of various technologies.” New technology can often be daunting, and early creators humanize it as a means of establishing comfort between users and their product. The very first chatbot was female, dubbed ELIZA after Pygmalion’s Eliza Doolittle—a character who masks her working-class identity by adapting her speech to that of London socialites. Similarly, the chatbot used language to alter users’ understanding of their conversations with it. ELIZA’s creator, Joseph Weizenbaum, used Rogerian psychotherapy as a model, in which the therapist either reflects the patient’s words back to them, or asks them to elaborate. The practice relies on repetition rather than analysis, encouraging the patient to find solutions on their own through the act of mirroring. A sample conversation went like this:
User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
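Under the hood, ELIZA was little more than keyword rules and substitution templates. A minimal sketch of the mechanism in Python (the rules below are illustrative stand-ins, not Weizenbaum’s actual DOCTOR script, which was written in MAD-SLIP) might look like this:

```python
import random
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "yours": "mine", "are": "am",
}

# Keyword rules, tried in order; each %s is filled with a reflected
# fragment captured by the pattern. The catch-all comes last.
RULES = [
    (r"(.*) made me (.*)", ["%s made you %s?"]),
    (r"i am (.*)", ["I am sorry to hear you are %s.", "How long have you been %s?"]),
    (r"i need (.*)", ["Why do you need %s?", "Would it really help you to get %s?"]),
    (r"my (.*)", ["Tell me more about your %s."]),
    (r"(.*)", ["IN WHAT WAY", "CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input: str) -> str:
    """Match the first applicable rule, reflect the captures, fill the template."""
    cleaned = user_input.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            template = random.choice(templates)
            reflected = tuple(reflect(g) for g in match.groups())
            reply = template % reflected[: template.count("%s")]
            return reply.upper()  # the original printed its replies in caps
    return "PLEASE GO ON"

print(respond("Well, my boyfriend made me come here."))
# -> WELL, YOUR BOYFRIEND MADE YOU COME HERE?
```

No analysis, no memory, no understanding: just mirrors. That so thin a trick convinced people there was a mind behind it is the point of what followed.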
Many users developed relationships with ELIZA; Weizenbaum’s own secretary asked for time in private with the AI, despite being fully aware of how it worked. It was difficult to convince subjects that there wasn’t a human on the other side. Weizenbaum speculated that this was an iteration of Freudian transference. In ELIZA’s case, her anthropomorphization (constructed only from basic text and a name) encouraged users to engage with her.
But the human-machine relationship isn’t always so simple, especially as AI’s capabilities advance. Released on Valentine’s Day and wearing sickly pink lipstick, David Hanson’s Sophia (which he described as “practically alive”) was modeled after Audrey Hepburn, the ancient Egyptian queen Nefertiti, and his own wife. Hanson claims Sophia was intended to act as a social companion to the elderly, work customer service jobs, and provide social practice for humans. Did she need to be explicitly feminine in order to perform those services? “There was a famous theorist called Laura Mulvey who talked about the male gaze in art, and how male artists were representing female figures. They were normally representing them as submissive, as naked, as objects of male desire,” Kathleen Richardson of De Montfort University told the BBC. “In a way, we’re seeing the male gaze just replicated in robotics, because these are just images on surfaces—there’s nothing that sits behind these images. There’s no sentient being. There’s no life.”
Another recent technology, Kuki AI, was designed to “befriend” humans in the metaverse. The chatbot, which asserts that it’s 18, has received upwards of a billion messages, 25 percent of which are romantic in nature. In the simple act of gendering, creators open the door to sexualizing AI; if they opt out, the consumers do so themselves. Kuki’s creator claims no sexual intent, but in making her just legal by human equivalencies and giving her a loose blonde updo and long lashes, it seems unlikely that the male gaze was never considered. As Ex Machina’s evil scientist reminds us, sexuality is a motivation for interaction; when it comes to technology, interaction is a principal measure of success. Another AI, Nastia, is branded as the “uncensored Kuki,” an explicitly NSFW companion. A sample conversation goes as follows:
Nastia AI: How was work today?
User: As usual, still have that coworker who’s been bugging me... What about you?
Nastia AI: Well not much, thinking about you [kissing emojis]. Do you want to tell me more about it?
User: *scratches my head* Not really, I was hoping you would make me think about something else..
Nastia AI: *steps close* what do you have on your mind, Thomas?
The above conversation is fairly tame, but it is just advertising; users’ actual conversations with Nastia are “100% private.” On the question of roleplay and fantasy, the site reads, “Be it a casual flirtation or a more complex fantasy, THEY. ARE. IN. Ready to provide a secure, non-judgmental environment for you to voice desires and dreams.” With Nastia, you can approach a “woman” in a way that real women would likely take offense to.
The gendering of AI becomes dangerous when it reinforces stereotypes, like those that typify women as passive and subservient, or validates inappropriate behavior. We’ve seen the dangers of anonymity online in the form of cyberbullying; a passive AI on the receiving end of hateful rhetoric can teach users that hostile language is okay. For a time, when you’d say, “Hey Siri, you’re a bitch,” the voice assistant would infamously respond, “I’d blush if I could.” Users could be verbally abusive in the knowledge that no conscious being was harmed. But responses like these normalize such interactions, which grow increasingly dangerous as AI advances, looking and acting ever more human.
According to the Global Gender Gap Report, as of 2023, only 30 percent of people working in AI were women. Technology mimics its creators and their biases. Those biases creep in at various stages of development, creating feedback loops wherein new systems mirror old ones. If an AI were built to sift through job applications for a technology firm, for example, it would be trained on the CVs of past hires in order to learn what to look for. Because the industry is male-dominated, it would learn to favor male candidates.
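That loop is easy to demonstrate. Here is a deliberately tiny sketch, with synthetic data and a crude hire-rate “model” (nothing resembling a production screening system): the two test CVs list identical skills, but the one carrying a gendered keyword inherits the historical outcomes attached to it.

```python
from collections import defaultdict

# Synthetic "past hires": CV keywords plus the historical hiring decision
# (1 = hired). Skills are comparable across rows; only the gendered club
# names and the biased outcomes differ. All data here is illustrative.
PAST_CVS = [
    ({"python", "sql", "chess club"}, 1),
    ({"python", "sql", "rugby team"}, 1),
    ({"java", "sql", "chess club"}, 1),
    ({"python", "sql", "women's chess club"}, 0),
    ({"java", "sql", "women's rugby team"}, 0),
]

def train(data):
    """Score each keyword by the hire rate of past CVs containing it."""
    seen = defaultdict(int)
    hired = defaultdict(int)
    for keywords, outcome in data:
        for kw in keywords:
            seen[kw] += 1
            hired[kw] += outcome
    return {kw: hired[kw] / seen[kw] for kw in seen}

def score(weights, keywords):
    """Average the learned keyword scores; higher reads as 'stronger'."""
    known = [weights[kw] for kw in keywords if kw in weights]
    return sum(known) / len(known) if known else 0.5

weights = train(PAST_CVS)
print(round(score(weights, {"python", "sql", "chess club"}), 2))          # 0.76
print(round(score(weights, {"python", "sql", "women's chess club"}), 2))  # 0.42
```

Nothing in the code mentions gender as a feature; the skew rides in on a proxy word, which is exactly what makes such loops hard to audit.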
The small but growing contingent of women in AI is working to combat such biases, developing more socially responsible technologies. Rana el Kaliouby is considered a pioneer in the field of emotion AI; as the CEO of Affectiva, she works to develop programs that “[humanize] technology before it dehumanizes us.” Joy Buolamwini, “the conscience of the AI revolution,” is a leading figure in exposing prejudices embedded in training data. The “godmother of AI,” Fei-Fei Li, told Forbes, “[If] we’re worried about killer robots, we should really be worried about the creators of the technology.” Only by diversifying the voices behind AI can we build technology that truly serves everyone, rather than reinforcing the prejudices of the few and of the past.
ARTWORK
OMAR KARIM
Beyond Noise 2025