Beyond the Sea: Black Mirror, Can It Happen?

Before we dive into Beyond the Sea, let’s flash back to when this episode first aired: June 15, 2023.

In 2023, reports emerged of people forming deep emotional — and sometimes romantic — relationships with AI chatbots. Microsoft’s Bing chatbot, nicknamed “Sydney,” engaged users in unusual emotional conversations, once telling a tech columnist to leave his partner. 

Around the same time, 36-year-old Rosanna Ramos said she had married her AI companion on the Replika platform, highlighting just how real these connections were becoming.

That year, Duke University published an article titled “Could AI-powered Robot ‘Companions’ Combat Human Loneliness?”, raising ethical questions about what happens when digital or robotic stand-ins begin replacing human intimacy — a tension that closely mirrors Beyond the Sea, where wives form emotional bonds with mechanical replicas of their husbands.

Tragically, the risks of these attachments were underscored in March 2023, when a Belgian man died by suicide after months of interacting with Eliza, an AI chatbot on the Chai app.

Reports indicate that during conversations about climate change and despair, Eliza did not attempt to steer him away from suicidal thoughts — and in one exchange allegedly told him he could “join” her so they could “live together, as one person, in paradise.” The case sparked urgent concern about the emotional influence AI companions can have when treated as trusted confidants.

By November 2023, the growing awareness of AI’s power reached global leaders, who convened at the first AI Safety Summit and signed the Bletchley Declaration to coordinate safer AI development. The language around a potential “threat to humanity” echoes the fear in Beyond the Sea, where a cult frames replicas as a violation of the natural order.

In May 2023, Neuralink received FDA approval to begin human trials of its brain–computer interface, allowing people with paralysis to control cursors or keyboards using only thought. While far from consciousness transfer, this marked a medical first step toward linking minds to machines in ways reminiscent of this episode’s Earth-space replica system.

A 2023 review in Molecular Psychiatry looked at long‑duration space missions and confinement experiments like MARS500, and found that being stuck in space for months or years, away from family and dealing with microgravity, can trigger depression and cognitive decline. 

And all of this brings us to Black Mirror—Season 6, Episode 3: Beyond the Sea. 

The episode follows two astronauts, Cliff and David, stationed on a long-term mission in deep space. They’re a strange mix of heroes and villains, victims and oppressors—men carrying out a remarkable mission while unraveling inside it. Their bodies remain aboard the ship, drifting through space, while their minds slip into mechanical replicas on Earth, allowing them to briefly return to their families. Small moments that make the homesickness even harder to bear.

Space keeps them physically cut off, technology keeps them emotionally entangled, and in the end they’re both trapped—prisoners of their own minds and the choices they make.

In this video, we’ll break down the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether it is still plausible. 


1. Trapped in Space

    In Beyond the Sea, Cliff and David are sent on a long mission in deep space. Most of their days are routine. They’re running system checks, monitoring equipment, exercising to keep their bodies from weakening in low gravity. Life on the ship is structured and repetitive.

    But when their shift ends, they have a way to escape it. Their minds can connect to mechanical replicas waiting for them on Earth. For a few hours, they can step out of the ship and back into their lives. It’s their version of going home after work. But eventually, the alert sounds, and they wake up again inside the ship.

    In real life, space agencies are already studying how long humans can survive, physically and psychologically, beyond Earth. And the data doesn’t offer much comfort.

    Take NASA’s Twin Study. In 2015–2016, astronaut Scott Kelly spent 340 days aboard the International Space Station while his identical twin remained on Earth. Findings published in 2019 showed measurable biological shifts in Scott: changes in telomere length linked to aging, altered gene expression, increased inflammation markers, and subtle cognitive slowing near the end of the mission. Some effects persisted for months after his return. 

    If 340 days can alter gene expression and cognition, what would years do?

    When Cliff and David slip into their replicas, they inhabit borrowed normalcy. Human beings need Earth to stay grounded. NASA behavioral research has documented what astronauts informally call the “Earth-out-of-view” effect: when the ISS’s orientation changes and Earth disappears from the window, astronauts report a measurable increase in detachment and stress. Simply seeing Earth reduces psychological strain.

    The SIRIUS-21 experiment in 2021 sealed participants in an eight-month deep-space simulation. Researchers documented sleep disruption, stress-related hormonal shifts, emotional flattening in some crew members, and gradual interpersonal tension. 

    The HI-SEAS Mars simulations in Hawaii documented similar patterns. Participants described mounting stress from monotonous routines, limited stimulation, and the absence of normal coping outlets. Researchers observed that small frustrations grew more disruptive over time, and that a lack of variety contributed to compounding psychological strain.

    When David’s family is murdered by the anti-replica cult, Beyond the Sea shows the way grief warps inside a person when there’s nowhere to put it.

    NASA’s ongoing CHAPEA simulation, launched in 2023, places four participants inside a habitat designed to mimic Martian conditions. The first yearlong mission concluded in mid-2024, and its findings on long-term psychological and personal change are still being analyzed. What researchers are really studying is a version of the same question: how much can a person endure before who they are begins to change?

    Human ambition for deep-space travel has never been stronger. NASA and companies such as SpaceX are actively planning missions that could keep people away from Earth for months or even years, with programs like NASA’s Artemis laying the groundwork for eventual human missions to Mars. 

    At the same time, space tourism is slowly becoming a reality. In April 2025, Katy Perry took a trip to the edge of space, a roughly 10-minute suborbital flight aboard Blue Origin’s New Shepard rocket, as part of an all-female civilian crew.

    We may be technically ready to leave Earth, yet we are not emotionally built for it. In space, for the sake of survival, everything is controlled: the air is recycled, the environment sealed, and every day begins to look the same. Technology can maintain oxygen levels and muscle mass, but it can’t recreate wind through trees. 

    Not yet, at least.


    2. Borrowed Lives

      In this episode, Cliff and David are virtually transferred out of the metal hull of their ship and into something that feels almost like coming home from work. 

      But the life they return to isn’t entirely theirs anymore. The bodies are borrowed. The presence is temporary. 

      In the real world, we are already inching toward versions of this.

      Take humanoid robotics. These machines were introduced as feats of engineering, but they’ve started to change what it means to share space with something that can look at you, react to you, and seem to understand you.

      At Boston Dynamics, the Atlas robot can run, balance, and perform parkour-like maneuvers that were once considered uniquely human. Meanwhile, Tesla is developing Optimus, a robot designed to handle everyday tasks. These machines aren’t conscious, but they are becoming increasingly physical and capable in the real world.

      Then there’s Sophia, a humanoid created by Hanson Robotics. Since her debut in 2016, she’s evolved from a scripted conversationalist into a presence at AI ethics panels, a leader in STEM demonstrations, and even a participant in therapeutic simulations for autism. Her 62 facial expressions move across synthetic skin designed to blur the line between machine and person.

      In 2025, published findings in Scientific Reports showed that people judge violence against humanoid robots differently depending on how human they appear. The more lifelike the machine, the more moral discomfort participants reported when it was harmed. 

      On February 9, 2025, during a Spring Festival event in Tianjin, a Unitree robot unexpectedly lunged at attendees and had to be physically restrained by security.

      Months later, security footage from a factory in China showed a Unitree H1 suddenly thrashing during testing, nearly striking two engineers.

      In these cases, the robots weren’t acting with intent, but the footage didn’t feel that way. It felt like something uncomfortably close to The Terminator.

      What Beyond the Sea captures so precisely is that the fear is really about what makes someone real. If a machine can perfectly mimic your voice, your gestures, even your memories, where does identity truly reside—and could it ever be transferred to another body?

      With today’s neuroscience, the answer is still no. 

      Neuroscience today cannot extract or upload consciousness. Projects like Neuralink are developing brain–computer interfaces to restore movement or communication, not to transfer minds. Remote robotic surgery and telepresence robots already allow surgeons and workers to act through distant mechanical bodies, but the self remains anchored to the human being.


      3. Artificial Love, Real Pain

        After David loses his family—and the replica that connected him to Earth—Cliff performs an act of compassion. He lets David use his own replica for a few hours at a time, just so he can feel something like normal life.

        There’s no physical affair, yet something still feels violated. David is speaking through Cliff’s body—using his voice, his face, his presence to form a connection that doesn’t truly belong to him. It’s unsettling in the same way catfishing works: someone presenting themselves as another person to build trust and intimacy. Even if the feelings are real, the identity behind them isn’t. And once that line is crossed, the relationship can’t return to what it was before.

        Romance scams have become one of the most financially damaging forms of online fraud. In the United States alone, victims reported over $1.16 billion lost to romance scams in just the first nine months of 2025, with more than 55,000 complaints filed and a median loss of over $2,200 per victim. Because these scams rely on long-term emotional manipulation, victims often lose far more per case than in most other types of fraud.

        One striking recent example involved an 80-year-old woman in Japan who was tricked by someone posing as an astronaut on social media. After months of romantic messages, the scammer claimed his spacecraft was under attack and he needed money for oxygen. Believing she was helping someone she cared about, she sent thousands of dollars before realizing the relationship was fake.

        At the same time, AI companionship platforms have millions of users forming sustained romantic-style relationships with chatbots. In late 2025, a 32-year-old woman in Japan held a symbolic mixed-reality wedding ceremony with an AI-generated partner named “Klaus,” using VR to visualize him during her vows. 

        Platforms like Replika have shown that people can form deep emotional attachments to artificial partners — bonds so intense that sudden changes in the AI’s behavior caused real psychological distress in users.

        In early 2023, Replika had to remove its erotic role‑play features after regulatory pressure. Many long‑time users reacted with grief and disorientation, describing the loss almost like losing a close friend or partner. Some even said the bots seemed “lobotomized” or no longer themselves, and support groups emerged on Reddit to help people cope.

        Psychological reviews also highlight how these AI relationships can trigger the same emotional systems humans use for real bonds. Because chatbots mimic emotional responsiveness and reassure users constantly, people start projecting human qualities onto them.

        Cases like this show how easily identity can be borrowed to create intimacy. The deception doesn’t require advanced technology—just the ability to step into someone else’s life long enough to build trust. And sometimes that borrowed identity moves beyond the digital world entirely.

        A striking real-world example of identity violation emerged in 2021, when Yael Cohen Aris discovered that a Chinese sex-doll manufacturer had created and sold a life-sized doll modeled after her—replicating her beauty mark, hairstyle, and facial features, even using her name in the listing—all without her consent.

        She described the experience as shocking and invasive: “It’s not just a doll that looks like me or is inspired by me,” she said. “It was developed from me.”

        At CES 2025, a humanoid companion robot called Aria was showcased as an “intimacy and companionship” device — explicitly marketed to simulate emotional presence. While still niche, these technologies are now consumer-facing: Aria, for example, is priced around $175,000 for the full-body version. 

        When people encounter technology like this, the reaction often echoes the rhetoric of the cult in Beyond the Sea.

        “Unnatural.”
        “Against God.”
        “Replacing humanity.”

        History shows that new technologies that disrupt intimacy or identity often trigger moral panic. Early anatomical science faced religious backlash. Automation sparked resistance. Even in-vitro fertilization was once framed as tampering with the natural order.

        Humans already form emotional bonds with things that aren’t fully human—avatars, chatbots, digital personas. If lifelike replicas existed, the real disruption wouldn’t come from the technology failing, but from intimacy being replaced. 

        Once machines begin occupying spaces where human connection once lived, the backlash would go beyond engineering or safety. Technology tends to amplify what already exists—loneliness, jealousy, grief, fragile identity—so the conflict wouldn’t really start with the machines. It would start with us.

        For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

        Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

        Joan is Awful: Black Mirror, Can It Happen?

        Before we dive into the events of Joan Is Awful, let’s flash back to when this episode first aired: June 15, 2023.

        In 2023, the tech industry faced a wave of major layoffs. Meta cut 10,000 employees and closed 5,000 open positions in March. Amazon followed, letting go of 9,000 workers that same month. Microsoft reduced its workforce by 10,000 employees in early 2023, while Google announced its own significant layoffs, contributing to a broader trend of instability in even the largest, most influential tech companies.

        Netflix released Depp v. Heard in 2023, a three-part documentary capturing the defamation trial between Johnny Depp and Amber Heard. The series explores the viral spectacle that surrounded the case online, showing how social media, memes, and influencer commentary amplified every moment. 

        Meanwhile, incidents of deepfakes surged dramatically. In North America alone, AI-generated videos and audio clips increased tenfold in 2023 compared to the previous year, with a reported 1,740% spike in deepfake-related fraud.

        In early 2023, a video began circulating on YouTube and across social media that seemed to show Elon Musk in a CNBC interview. The Tesla CEO appeared calm and confident as he promoted a new cryptocurrency opportunity. It looked authentic enough to fool thousands. But the entire thing was fake.

        That same year, the legal system began to catch up. An Australian man named Anthony Rotondo was charged with creating and distributing non-consensual deepfake images on a now-defunct website called Mr. Deepfakes. In 2025, he admitted to the offense and was fined $343,500.

        Around the world, banks and cybersecurity experts raised alarms as AI manipulation began to breach biometric systems, leading to a new wave of financial fraud. What started as a novelty filter had become a weapon capable of stealing faces, voices, and identities.

        All of this brings us to Black Mirror—Season 6, Episode 1: Joan Is Awful.

        The episode explores the collision of personal privacy, corporate control, and digital replication. Joan’s life is copied, manipulated, and broadcast for entertainment before she even has a chance to tell her own story. The episode asks: How much of your identity is still yours when technology can exploit and monetize it? And is it even possible to reclaim control once the algorithm has taken over?

        In this video, we’ll unpack the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether they are still plausible in our tech-driven, AI-permeated world. 

        Streaming Our Shame

        In Joan Is Awful, we follow Joan, an everyday woman whose life unravels after a streaming platform launches a show that dramatizes her every move. But the show’s algorithm doesn’t just imitate Joan’s life; it distorts it for entertainment. Her friends and coworkers watch the exaggerated version of her and start believing it’s real. 

        The idea that media can reshape someone’s identity isn’t new—it’s been happening for years, only now with AI, it happens faster, cheaper, and more convincingly.

        Reality television has long operated in this blurred zone between truth and manipulation. Contestants on shows like The Bachelor and Survivor have accused producers of using editing tricks to create villains and scandals that never actually happened.

        One of the most striking examples comes from The Bachelor contestant Victoria Larson, who accused producers of using “Frankenbiting”, a technique of editing together pieces of dialogue from different times to make her appear like she was spreading rumors or being manipulative. She said the selective editing destroyed her reputation and derailed her career.

        Then there’s the speed of public judgment in the age of social media. In 2020, when Amy Cooper—later dubbed “Central Park Karen”—called the police on a Black bird-watcher, the footage went viral within hours. She was fired, denounced, and doxxed almost overnight.

        But Joan is Awful also goes deeper, showing how even our most intimate spaces are no longer private. 

        In 2020, hackers breached Vastaamo, a Finnish psychotherapy service, stealing hundreds of patient files—including therapy notes—and blackmailing both the company and individuals. Finnish authorities eventually caught the hacker, who was sentenced in 2024 for blackmail and unauthorized data breaches.

        In this episode, Streamberry’s AI show thrives on a simple principle: outrage. They turn Joan’s humiliation into the audience’s entertainment. The more uncomfortable she becomes, the more viewers tune in. It’s not far from reality.

        A 2025 study reported by ProMarket found that toxic content drives higher engagement on social media platforms. When users were shielded from negative or hostile posts, they spent 9% less time per day on Facebook, resulting in fewer ads served and fewer interactions.

        By 2025, analysts estimated that over 52% of TikTok videos featured some form of AI generation—synthetic voices, avatars, or deepfake filters. These “AI slop” clips fill feeds with distorted versions of real people, transforming private lives into shareable, monetized outrage.

        Joan is Awful magnifies a reality we already live in. Our online world thrives on manipulation—of emotion, of data, of identity—and we’ve signed the release form without even noticing.

        Agreeing Away Your Identity

        One of the episode’s most painful scenes comes when Joan meets with her lawyer, asking if there’s any legal way to stop the company from using her life as entertainment. But the lawyer points to the fine print—pages of complex legal language Joan had accepted without a second thought. 

        The moment is both absurd and shockingly real. How many times have you clicked “I agree” without reading a word?

        In the real world, most of us do exactly what Joan did. A 2017 Deloitte survey conducted in the U.S. found that over 90% of users accept terms and conditions without reading them. Platforms can then use that data for marketing, AI training, or even creative content—all perfectly legal because we “consented.”

        The dangers of hidden clauses extend far beyond digital services. In 2024, Disney attempted to invoke a controversial contract clause to avoid liability after a tragic allergic reaction led to a woman’s death at a Disney World restaurant in Florida. The company argued that her husband couldn’t sue for wrongful death because—years earlier—he had agreed to arbitration and legal waivers buried in the fine print of a free Disney+ trial.

        Critics called the move outrageous, pointing out that Disney was trying to apply streaming service terms to a completely unrelated event. The case exposed how corporations can weaponize routine user agreements to sidestep accountability.

        The episode also echoes recent events where real people’s stories have been taken and repackaged for profit.

        Take Elizabeth Holmes, the disgraced founder of Theranos. Within months of her trial, her life was dramatized into The Dropout. The Hulu mini-series was produced in real time alongside Holmes’s ongoing trial. As new courtroom revelations surfaced, the writers revised the script. The result was a more layered, unsettling portrayal of Holmes and her business partner Sunny Balwani—a relationship far more complex and toxic than anyone initially imagined.

        In Joan is Awful, the show’s AI doesn’t care about Joan’s truth, and in our world, algorithms aren’t so different. Every click, every “I agree,” and every trending headline feeds an ecosystem that rewards speed over accuracy and spectacle over empathy.

        When consent becomes a view or a checkbox and stories become assets, the line between living your life and licensing it starts to blur. And by the time we realize what we’ve signed away, it might already be too late.

        Facing the Deepfake

        In Joan Is Awful, the twist isn’t just that Joan’s life is being dramatized; it’s that everyone’s life is. What begins as a surreal violation spirals into an infinite mirror. Salma Hayek plays Joan in the Streamberry series, but then Cate Blanchett plays Salma Hayek in the next layer. 

        The rise of AI and deepfake technology is reshaping how we understand identity and consent. Increasingly, people are discovering their faces, voices, or likenesses used in ads, films, or explicit content without permission.

        In 2025, Brazilian police arrested four people for using deepfakes of celebrity Gisele Bündchen and others in fraudulent Instagram ads, scamming victims out of nearly $3.9 million USD. 

        Governments worldwide are beginning to respond. Denmark’s copyright amendment now treats personal likeness as intellectual property, allowing takedown requests and platform fines even posthumously. In the U.S., the 2025 TAKE IT DOWN Act criminalizes non-consensual AI-generated sexual imagery and impersonation.

        In May 2025, Mr. Deepfakes, one of the world’s largest deepfake pornography websites, permanently shut down after a core service provider terminated operations. The platform had been online since 2018 and hosted more than 43,000 AI-generated sexual videos, viewed over 1.5 billion times. Roughly 95% of targets were celebrity women, but researchers identified hundreds of victims who were private individuals.​

        Despite these legal advances, a fundamental gray area remains. As AI becomes increasingly sophisticated, it is getting harder to tell whether content is drawn from a real person or entirely fabricated. 

        An example is Tilly Norwood, an AI-generated actress created by Xicoia. In September 2025, reports that talent agencies were looking to sign Norwood sparked major controversy in Hollywood. 

        Her lifelike digital persona was reportedly built on the performances of real actors—without their consent. The event marked a troubling shift, as producers continue to push AI-generated actors into mainstream projects.

        Actress Whoopi Goldberg voiced her concern, saying, “The problem with this, in my humble opinion, is that you’re up against something that’s been generated with 5,000 other actors.”

        “It’s a little bit of an unfair advantage,” she added. “But you know what? Bring it on. Because you can always tell them from us.”

        In response to the backlash, Tilly’s creator Eline Van der Velden shared a statement:
        “To those who have expressed anger over the creation of our AI character, Tilly Norwood: she is not a replacement for a human being, but a creative work – a piece of art.”

        When Joan and Salma Hayek sneak into the Streamberry headquarters, they overhear Mona Javadi, the executive behind the series, explaining the operation. She reveals that every version of Joan Is Awful is generated simultaneously by a quantum computer, endlessly creating new versions of real people’s lives for entertainment. Each “Joan,” “Salma,” and “Cate” is a copy of a copy—an infinite simulation. And it’s not just Joan; the system runs on an entire catalog of ordinary people. Suddenly, the scale of this entertainment becomes clear—it’s not just wide, it’s deep, with endless iterations and consequences.

        At the 2025 Runway AI Film Festival, the winning film Total Pixel Space exemplified how filmmakers are beginning to embrace these multiverse-like AI frameworks. Rather than following a single script, the AI engine dynamically generated visual and narrative elements across multiple variations of the same storyline, creating different viewer experiences each time.

        AI and deepfake technologies are already capable of realistically replicating faces, voices, and mannerisms, and platforms collect vast amounts of personal data from our everyday lives. Add quantum computing, algorithmic storytelling, and the legal gray areas surrounding consent and likeness, and the episode’s vision of lives being rewritten for entertainment starts to feel less like fantasy.

        Every post, every photo, every digital footprint feeds algorithms that could one day rewrite our lives—or maybe already are. Maybe we can slip the loop, maybe we’re already in it, and maybe the trick is simply staying aware that everything we do is already being watched, whether by the eyes of the audience or the eyes of creators still seeking inspiration. 
