Joan is Awful: Black Mirror, Can It Happen?

Before we dive into the events of Joan Is Awful, let’s flash back to when this episode first aired: June 15, 2023.

In 2023, the tech industry faced a wave of major layoffs. Meta cut 10,000 employees and closed 5,000 open positions in March. Amazon followed, letting go of 9,000 workers that same month. Microsoft reduced its workforce by 10,000 employees in early 2023, while Google announced its own significant layoffs, contributing to a broader trend of instability in even the largest, most influential tech companies.

Netflix released Depp v. Heard in 2023. The three-part documentary captured the defamation trial between Johnny Depp and Amber Heard and explored the viral spectacle that surrounded it online, showing how social media, memes, and influencer commentary amplified every moment.

Meanwhile, incidents of deepfakes surged dramatically. Worldwide, AI-generated videos and audio clips increased tenfold in 2023 compared to the previous year, with North America alone seeing a 1,740% spike in malicious use.

In early 2023, a video began circulating on YouTube and across social media that seemed to show Elon Musk in a CNBC interview. The Tesla CEO appeared calm and confident as he promoted a new cryptocurrency opportunity. It looked authentic enough to fool thousands. But the entire thing was fake.

That same year, the legal system began to catch up. An Australian man named Anthony Rotondo was charged with creating and distributing non-consensual deepfake images on a now-defunct website called Mr. Deepfakes. In 2025, he admitted to the offense and was fined $343,500.

Around the world, banks and cybersecurity experts raised alarms as AI manipulation began to breach biometric systems, leading to a new wave of financial fraud. What started as a novelty filter had become a weapon capable of stealing faces, voices, and identities.

All of this brings us to Black Mirror—Season 6, Episode 1: Joan Is Awful.

The episode explores the collision of personal privacy, corporate control, and digital replication. Joan’s life is copied, manipulated, and broadcast for entertainment before she even has a chance to tell her own story. The episode asks: How much of your identity is still yours when technology can exploit and monetize it? And is it even possible to reclaim control once the algorithm has taken over?

In this video, we’ll unpack the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether they are still plausible in our tech-driven, AI-permeated world. 

Streaming Our Shame

In Joan Is Awful, we follow Joan, an everyday woman whose life unravels after a streaming platform launches a show that dramatizes her every move. But the show’s algorithm doesn’t just imitate Joan’s life; it distorts it for entertainment. Her friends and coworkers watch the exaggerated version of her and start believing it’s real.

The idea that media can reshape someone’s identity isn’t new—it’s been happening for years, only now with AI, it happens faster, cheaper, and more convincingly.

Reality television has long operated in this blurred zone between truth and manipulation. Contestants on shows like The Bachelor and Survivor have accused producers of using editing tricks to create villains and scandals that never actually happened.

One of the most striking examples comes from The Bachelor contestant Victoria Larson, who accused producers of “Frankenbiting,” a technique of splicing together pieces of dialogue from different moments to make it appear she was spreading rumors or being manipulative. She said the selective editing destroyed her reputation and derailed her career.

Then there’s the speed of public judgment in the age of social media. In 2020, when Amy Cooper—later dubbed “Central Park Karen”—called the police on a Black bird-watcher, the footage went viral within hours. She was fired, denounced, and doxxed almost overnight.

But Joan Is Awful also goes deeper, showing how even our most intimate spaces are no longer private.

In 2020, hackers breached Vastaamo, a Finnish psychotherapy service, stealing tens of thousands of patient files—including therapy notes—and blackmailing both the company and individual patients. Finnish authorities eventually caught the hacker, who was sentenced in 2024 for blackmail and unauthorized data breaches.

In this episode, Streamberry’s AI show thrives on a simple principle: outrage. The platform turns Joan’s humiliation into the audience’s entertainment. The more uncomfortable she becomes, the more viewers tune in. It’s not far from reality.

A 2025 study published in ProMarket found that toxic content drives higher engagement on social media platforms. When users were shielded from negative or hostile posts, they spent 9% less time per day on Facebook, resulting in fewer ads and interactions.

By 2025, over 52% of TikTok videos featured some form of AI generation—synthetic voices, avatars, or deepfake filters. These “AI slop” clips fill feeds with distorted versions of real people, transforming private lives into shareable, monetized outrage.

Joan Is Awful magnifies a reality we already live in. Our online world thrives on manipulation—of emotion, of data, of identity—and we’ve signed the release form without even noticing.

Agreeing Away Your Identity

One of the episode’s most painful scenes comes when Joan meets with her lawyer, asking if there’s any legal way to stop the company from using her life as entertainment. But the lawyer points to the fine print—pages of complex legal language Joan had accepted without a second thought. 

The moment is both absurd and shockingly real. How many times have you clicked “I agree” without reading a word?

In the real world, most of us do exactly what Joan did. A 2017 Deloitte survey conducted in the U.S. found that over 90% of users accept terms and conditions without reading them. Platforms can then use that data for marketing, AI training, or even creative content—all perfectly legal because we “consented.”

The dangers of hidden clauses extend far beyond digital services. In 2024, Disney attempted to invoke a controversial contract clause to avoid liability for a tragic allergic reaction that had led to a woman’s death at a Disney World restaurant in Florida the year before. The company argued that her husband couldn’t sue for wrongful death because—years earlier—he had agreed to arbitration and legal waivers buried in the fine print of a free Disney+ trial.

Critics called the move outrageous, pointing out that Disney was trying to apply streaming service terms to a completely unrelated event. The case exposed how corporations can weaponize routine user agreements to sidestep accountability.

The episode also echoes recent events where real people’s stories have been taken and repackaged for profit.

Take Elizabeth Holmes, the disgraced founder of Theranos. Within months of her trial, her life was dramatized into The Dropout. The Hulu miniseries was produced in real time alongside Holmes’s ongoing trial. As new courtroom revelations surfaced, the writers revised the script. The result was a more layered, unsettling portrayal of Holmes and her business partner Sunny Balwani—a relationship far more complex and toxic than anyone initially imagined.

In Joan Is Awful, the show’s AI doesn’t care about Joan’s truth, and in our world, algorithms aren’t so different. Every click, every “I agree,” and every trending headline feeds an ecosystem that rewards speed over accuracy and spectacle over empathy.

When consent becomes a checkbox and stories become assets, the line between living your life and licensing it starts to blur. And by the time we realize what we’ve signed away, it might already be too late.

Facing the Deepfake

In Joan Is Awful, the twist isn’t just that Joan’s life is being dramatized; it’s that everyone’s life is. What begins as a surreal violation spirals into an infinite mirror. Salma Hayek plays Joan in the Streamberry series, but then Cate Blanchett plays Salma Hayek in the next layer. 

The rise of AI and deepfake technology is reshaping how we understand identity and consent. Increasingly, people are discovering their faces, voices, or likenesses used in ads, films, or explicit content without permission.

In 2025, Brazilian police arrested four people for using deepfakes of celebrity Gisele Bündchen and others in fraudulent Instagram ads, scamming victims out of nearly $3.9 million USD. 

Governments worldwide are beginning to respond. Denmark has proposed a copyright amendment that would treat personal likeness as intellectual property, allowing takedown requests and platform fines even posthumously. In the U.S., the 2025 TAKE IT DOWN Act criminalizes non-consensual AI-generated sexual imagery and impersonation.

In May 2025, Mr. Deepfakes, one of the world’s largest deepfake pornography websites, permanently shut down after a core service provider terminated operations. The platform had been online since 2018 and hosted more than 43,000 AI-generated sexual videos, viewed over 1.5 billion times. Roughly 95% of targets were celebrity women, but researchers identified hundreds of victims who were private individuals.​

Despite these legal advances, a fundamental gray area remains. As AI becomes increasingly sophisticated, it is getting harder to tell whether content is drawn from a real person or entirely fabricated. 

An example is Tilly Norwood, an AI-generated actress created by Xicoia. In September 2025, Norwood’s signing with a talent agency sparked major controversy in Hollywood. 

Her lifelike digital persona was built using the performances of real actors—without their consent. The event marked a troubling shift, as producers continue to push AI-generated actors into mainstream projects.

Actress Whoopi Goldberg voiced her concern, saying, “The problem with this, in my humble opinion, is that you’re up against something that’s been generated with 5,000 other actors.”

“It’s a little bit of an unfair advantage,” she added. “But you know what? Bring it on. Because you can always tell them from us.”

In response to the backlash, Tilly’s creator Eline Van der Velden shared a statement:
“To those who have expressed anger over the creation of our AI character, Tilly Norwood: she is not a replacement for a human being, but a creative work – a piece of art.”

When Joan and Salma Hayek sneak into the Streamberry headquarters, they overhear Mona Javadi, the executive behind the series, explaining the operation. She reveals that every version of Joan Is Awful is generated simultaneously by a quantum computer, endlessly creating new versions of real people’s lives for entertainment. Each “Joan,” “Salma,” and “Cate” is a copy of a copy—an infinite simulation. And it’s not just Joan; the system runs on an entire catalog of ordinary people. Suddenly, the scale of this entertainment becomes clear—it’s not just wide, it’s deep, with endless iterations and consequences.

At the 2025 Runway AI Film Festival, the winning film Total Pixel Space exemplified how filmmakers are beginning to embrace these multiverse-like AI frameworks. Rather than following a single script, the AI engine dynamically generated visual and narrative elements across multiple variations of the same storyline, creating different viewer experiences each time.

AI and deepfake technologies are already capable of realistically replicating faces, voices, and mannerisms, and platforms collect vast amounts of personal data from our everyday lives. Add quantum computing, algorithmic storytelling, and the legal gray areas surrounding consent and likeness, and the episode’s vision of lives being rewritten for entertainment starts to feel less like fantasy.

Every post, every photo, every digital footprint feeds algorithms that could one day rewrite our lives—or maybe already are. Maybe we can slip the loop, maybe we’re already in it, and maybe the trick is simply staying aware that everything we do is already being watched, whether by the eyes of the audience or the eyes of creators still seeking inspiration.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Rachel, Jack and Ashley Too: Black Mirror, Can It Happen?

Before we dive into the events of Rachel, Jack, and Ashley Too, let’s flash back to when this episode first aired: June 5, 2019.

At CES 2019, a diverse range of innovative robots captured attention, from practical home assistants like Foldimate, a laundry-folding robot, to advanced companions such as Ubtech’s Walker and the emotionally expressive Lovot. Together, these robots laid the groundwork for future developments in consumer robotics.

When Charli D’Amelio joined TikTok in May 2019, she was just another teenager posting dance clips. But within weeks, her lip-sync and choreography videos were going viral. By July, her duets were spreading across the platform, and by the close of 2019, she had transformed from an unknown high schooler into a digital sensation with millions of followers.

On February 2, 2019, Fortnite hosted Marshmello’s virtual concert at the in-game location Pleasant Park. The event drew over 10.7 million concurrent players, breaking the game’s previous records. 

In 2019, Taylor Swift’s public fight with Big Machine Records over the ownership of her master recordings exposed deep systemic issues, as Swift’s masters were sold without her consent, preventing her from controlling the use of her own music. In response, she began re-recording her early albums under the Taylor’s Version banner, starting with Fearless (Taylor’s Version) in 2021.

In January 2019, Britney Spears abruptly canceled a highly anticipated show in Las Vegas. In April, Spears entered a mental health facility, sparking public concern and amplifying the #FreeBritney movement amid allegations of emotional abuse linked to her conservatorship. 

All of which brings us back to this episode of Black Mirror—Season 5, Episode 3: Rachel, Jack, and Ashley Too. 

The episode dives into the mechanics of digital fame—where algorithms hold the power, artists blur into avatars, and identity bends under the weight of technology. It asks: What happens when the spotlight is no longer earned but assigned? When music is stripped down and musicians reduced to assets? And, in the end, can we lose ourselves to the very machine that makes us visible?

In this video, we’ll explore the episode’s themes and investigate whether these events have already happened—and if not, whether they’re still plausible. Let’s go.

Connection by Algorithm

In this episode, we follow Rachel, a teenager struggling with the loss of her mother and looking for connection. In her search for belonging, Rachel grows attached to Ashley Too—a talking doll modeled after pop star Ashley O. She clings to it as both a friend and a channel to her idol.

AI companion apps have exploded in 2025, with more than 220 million downloads and $120 million in revenue projected for the year. Popular platforms now include Character.AI, Replika, Chai, and Kindroid, all offering lifelike interactions.

AI can now go a step further than any friend: it can detect depression by analyzing data like daily activity patterns recorded by wearable devices.

A 2025 study in JMIR Mental Health found that a machine-learning model built with XGBoost could correctly identify whether someone was depressed about 85% of the time by analyzing changes in sleep and activity rhythms. Even with these advances, however, AI still struggles to read subtle emotions or the context behind what a person is feeling.
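For the technically curious, here’s a minimal sketch of what that kind of screen might look like, built with the open-source xgboost library. Everything here (the wearable features, the synthetic data, the labels) is an invented placeholder, not the study’s actual dataset or pipeline:

```python
# Minimal sketch: flagging depression risk from wearable-style features.
# All data and feature choices are synthetic placeholders.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical daily-rhythm features a wearable might record.
X = np.column_stack([
    rng.normal(7.0, 1.5, n),    # average nightly sleep, hours
    rng.normal(0.8, 0.2, n),    # sleep-rhythm regularity index
    rng.normal(8000, 3000, n),  # daily step count
])
# Synthetic labels: irregular sleep plus low activity -> higher risk.
at_risk = (X[:, 1] < 0.7) & (X[:, 2] < 7000)
y = (at_risk | (rng.random(n) < 0.05)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

On real patients, the hard part is everything this sketch skips: honest labels, messy sensor data, and the subtle context the study itself says machines still miss.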

In this episode, Rachel’s sister Jack—driven by jealousy, or perhaps genuine concern—hides Ashley Too, worried it’s “filling her head with crap.” Her skepticism mirrors a real-world fear: that leaning on digital companions can warp the grieving process.

Recent regulatory actions have begun addressing risks around AI companion apps. New York passed a law effective November 2025 requiring AI companion operators to implement safety measures detecting suicidal ideation or self-harm and to clearly disclose the non-human nature of the chatbot to users. 

In the end, Rachel and her sister discover that the doll’s personality is intentionally restricted by an internal limiter, and when it is removed, the AI reveals a deeper consciousness trapped inside. 

ChatGPT and similar AI models are increasingly used as therapy tools. A 2025 randomized controlled trial of the AI therapy chatbot “Therabot” reported clinically significant reductions in depression and anxiety symptoms, with effect sizes comparable to or exceeding some traditional treatments. 

However, a study presented at the American Psychiatric Association’s 2025 meeting found human therapists still outperform ChatGPT in key therapy skills like agenda-setting, eliciting feedback, and applying cognitive behavioral techniques, due to their greater empathy and flexibility. Another thematic study of ChatGPT users found it provides helpful emotional support and guidance but raised concerns about privacy and emotional depth.

As technology grows more immersive and responsive, these digital bonds may deepen. Whether that’s a source of comfort or a cause for concern depends on how we balance connection, privacy, and the question at the heart of the episode: what does it really mean to be known?


Creativity, Rewired

Ashley O is a pop icon suffocated by the demands of her aunt and record label. She feels trapped as her true voice is silenced and her image squeezed into a marketable mold.

When Ashley is put into a coma, the producers crank up a machine to decode her brainwaves and extract new songs, pumping out tracks without her consent. A literal case of cookie-cutter artistry. 

The Velvet Sundown is an AI-generated music project that emerged in 2025, debuting with two albums on Spotify and quickly sparking global discussion about the future of artificial creativity.

The project, created by an anonymous human creator, used AI tools like Suno for music generation, with style descriptions crafted by language models such as ChatGPT. 

In June 2024, major record labels—including Sony Music, Universal Music Group, and Warner Records—filed lawsuits against AI music companies Suno and Udio, accusing them of large-scale copyright infringement. The labels alleged that the startups used their recordings without permission to train AI systems capable of generating new songs. Both companies denied wrongdoing, claiming their models create original works rather than copying existing recordings. The case remains ongoing as of 2025.

Legal and ethical challenges around AI-generated music are mounting. Unauthorized use of vocal clones or deepfakes has sparked heated debates on consent, ownership, and copyrights. Legal systems struggle to keep up. If a person shapes the AI’s output, copyright might apply—but it’s unclear how much input is enough. This gray area makes artist rights, licensing, and royalties more complicated.

Can creativity actually be replicated by machines, or does something essential get lost when all they do is measure patterns and output? As Ashley’s story shows, automated artistry might never replace the real thing—but it can easily outpace it.

Celebrity in a Cage

In Rachel, Jack, and Ashley Too, we see the dark side of fame through Ashley O’s story: she is drugged into compliance and eventually placed in a coma, while her aunt schemes to replace her with a holographic version built for endless future tours.

This holographic pop star can instantly change outfits, scale in size, appear simultaneously in thousands of locations, and perform endlessly without the vulnerabilities of a human artist. 

In 2024–2025, virtual K-pop idols like SKINZ and PLAVE rose to prominence as a new wave of celebrity branding, one that extends beyond music into virtual merchandise and digital fan experiences.

PLAVE is a five-member group powered by real performers using motion capture. They have racked up over 470 million YouTube views, charted on the Billboard Global 200, and sold out virtual concerts while engaging fans through digital fan meetings.

SKINZ, a seven-member virtual boyband produced by South Korean singer-songwriter EL CAPITXN, blends rock, hip-hop, and funk, and has performed at iconic venues like the Tokyo Dome.

This surge in AI and virtual stardom opens extraordinary possibilities, but what about the humans who now have to compete in this new arena? 

This brings to mind Britney Spears, whose long conservatorship battle captivated the world. In total, Britney performed hundreds of shows during the 13-year conservatorship from 2008 to 2021, but always under heavy restrictions and control. 

While AI and holograms can perform endlessly without burnout or loss of control, traditional live tours remain a lucrative but fragile model heavily dependent on a single artist’s health and agency. 

In late 2024, indie-pop artist Clairo faced significant backlash after postponing three highly anticipated concerts in Toronto at the last minute due to “extreme exhaustion.” The cancellations came just as doors were about to open for the first show at Massey Hall, leaving fans frustrated and inconvenienced, especially those who had traveled and faced challenges getting refunds.

In contrast, virtual concerts and holographic tours don’t rely on a single performer. Shows like ABBA’s Voyage, which debuted on May 27, 2022, at the purpose-built ABBA Arena in London’s Queen Elizabeth Olympic Park, depend on the coordinated work of many teams: hyper-realistic avatars of the band as they appeared in 1979, created through motion capture, stage design, lighting, production, and visual effects by Industrial Light & Magic.

While performances are becoming more digital, many artists are aiming to bring audiences back to the moment.

Phone-free concerts have grown in popularity as artists seek to create more immersive, distraction-free live experiences. Ghost, a Swedish rock band, has pioneered this approach by requiring fans to secure their phones in lockable pouches called Yondr bags, which can only be opened after the show or in designated areas. 

Yet even as performers reclaim control over the audience’s attention, the question remains: How much control do today’s celebrities really have, and how much of their image and choices are shaped by algorithms, managers, and market trends?

Virtual and hybrid performances blur the line between genuine presence and manufactured spectacle, leaving us to wonder whether we’re watching artists or carefully engineered illusions. 

As fame, creativity, and even friendship are being reshaped, the episode explores the tension between what can be automated and what should remain authentic.

Programs already guide our choices, digital idols fill our feeds, and synthetic voices mingle with human ones. In that haze, where artist becomes asset and companion becomes artificial, the story feels like a glimpse of what’s already unfolding.

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

Black Museum: Black Mirror, Can It Happen?

Before we talk about Black Museum, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, the rise of dark tourism—traveling to sites tied to death, tragedy, or the macabre—became a notable cultural trend, with locations like Mexico’s Island of the Dolls and abandoned hospitals and prisons drawing attention. Chernobyl in particular saw a dramatic increase in tourists, with around 70,000 visitors in 2017, a sharp rise from just 15,000 in 2010. This influx contributed approximately $7 million to Ukraine’s economy.

Meanwhile, in 2017, the EV revolution was picking up speed. Tesla, once a trailblazer and now a company run by a power-hungry maniac, launched the more affordable Model 3.

2017 also marked a legal dispute between Hologram USA and Whitney Houston’s estate. The planned hologram tour, aimed at digitally resurrecting the iconic singer for live performances, led to legal battles over the hologram’s quality. Despite the challenges, the project was eventually revived, premiering as An Evening with Whitney: The Whitney Houston Hologram Tour in 2020.

At the same time, Chicago’s use of AI and surveillance technologies, specifically through the Strategic Subject List (SSL) predictive policing program, sparked widespread controversy. The program used historical crime data to predict violent crimes and identify high-risk individuals, but it raised significant concerns about racial bias and privacy.

And that brings us to this episode of Black Mirror, Episode 6 of Season 4: Black Museum. Inspired by Penn Jillette’s story The Pain Addict, which grew out of the magician’s own experience in a Spanish welfare hospital, the episode delves into a twisted reality where technology allows doctors to feel their patients’ pain.

Set in a disturbing museum, this episode confronts us with pressing questions: When does the pursuit of knowledge become an addiction to suffering? What happens when we blur the line between human dignity and the technological advancements meant to heal? And what price do we pay when we try to bring people back from the dead?

In this video, we’ll explore the themes of Black Museum and examine whether these events have happened in the real world—and if not, whether they’re still plausible. Let’s go!

Pain for Pleasure

As Rolo Haynes guides Nish through the exhibits in the Black Museum, he begins with the story of Dr. Peter Dawson. Dawson, a physician, tested a neural implant designed to let him feel his patients’ pain, helping him understand their symptoms and provide a diagnosis. What started as a medical breakthrough quickly spiraled into an addiction.

Meanwhile, in the real world, scientists have been making their own leaps into the mysteries of the brain. In 2013, Duke University researchers successfully connected the brains of two rats using implanted electrodes. One rat performed a task while its neural activity was recorded and transmitted to the second rat, influencing its behavior. Fast forward to 2019, when researchers linked three human brains using a brain-to-brain interface (BBI), allowing two participants to transmit instructions directly into a third person’s brain using magnetic stimulation—enabling them to collaborate on a video game without speaking.

Beyond mind control, neurotech has made it possible to simulate pain and pleasure without physical harm. Techniques like Transcranial Magnetic Stimulation (TMS) and Brain-Computer Interfaces (BCIs) let researchers manipulate neural activity for medical treatment.

AI is actively working to decode the complexities of the human brain. At Stanford, researchers have used fMRI data to identify distinct “pain signatures,” unique neural patterns that correlate with physical discomfort. This approach could provide a more objective measure of pain levels and potentially reduce reliance on self-reported symptoms, which can be subjective and inconsistent.

Much like Dr. Dawson’s neural implant aimed to bridge the gap between doctor and patient, modern AI researchers are developing ways to interpret and even visualize human thought. 

Of course, with all this innovation comes a darker side. 

In 2022, Neuralink, Elon Musk’s brain-implant company, came under federal investigation for potential violations of the Animal Welfare Act. Internal documents and employee interviews suggest that Musk’s demand for rapid progress led to botched experiments. As a result, many tests had to be repeated, increasing the number of animal deaths. Since 2018, an estimated 1,500 animals have been killed, including more than 280 sheep, pigs, and monkeys.

While no brain implant has caused a real-life murder addiction, electrical stimulation can alter brain function in unexpected ways. Deep brain stimulation for Parkinson’s has been linked to compulsive gambling and impulse control issues, while fMRI research helps uncover how opioid use reshapes the brain’s pleasure pathways. As AI enhances neuroanalysis, the risk of unintended consequences grows.

When Dr. Dawson pushed the limits and ended up experiencing a patient’s death firsthand, his neural implant was rewired in the process, blurring the line between pain and pleasure.

At present, there’s no known way to directly simulate physical death in the sense of replicating the actual biological process of dying without causing real harm. 

However, Shaun Gladwell, an Australian artist known for his innovative use of technology in art, has created a virtual reality death simulation. It is on display at the Melbourne Now event in Australia. The experience immerses users in the dying process—from cardiac failure to brain death—offering a glimpse into their final moments. By simulating death in a controlled virtual environment, the project aims to help participants confront their fears of the afterlife and better understand the emotional aspects of mortality. 

This episode of Black Mirror reminds us that the quest for understanding the mind might offer enlightenment, but it also carries the risk of unraveling the very fabric of what makes us human. 

In the end, the future may not lie in simply experiencing death, but in learning to live with the knowledge that we are always on the cusp of the unknown.

Backseat Driver

In the second part of Black Museum, Rolo recounts his involvement in a controversial experiment. After an accident, Rolo helped Jack transfer his comatose wife Carrie’s consciousness into his brain. This let Carrie feel what Jack felt and communicate with him. In essence, this kept Carrie alive. However, the arrangement caused strain—Jack struggled with the lack of privacy, while Carrie grew frustrated by her lack of control—ultimately putting the saying “’til death do you part” to the test.

The concept of embedding human consciousness into another medium remains the realm of fiction, but neurotechnology is inching closer to mind-machine integration. 

In 2016, Ian Burkhart, a 24-year-old quadriplegic patient, made history using the NeuroLife system. A microelectrode chip implanted in Burkhart’s brain allowed him to regain movement through sheer thought. Machine-learning algorithms decoded his brain signals, bypassing his injured spinal cord and transmitting commands to a specialized sleeve on his forearm—stimulating his muscles to control his arm, hand, and fingers. This allowed him to grasp objects and even play Guitar Hero.

Another leap in brain-tech comes from Synchron’s Stentrode, a device that bypasses traditional brain surgery by implanting through blood vessels. In 2021, Philip O’Keefe, living with ALS, became the first person to compose a tweet using only his mind. The message? A simple yet groundbreaking “Hello, World.” 

Imagine being able to say what’s on your mind—without saying a word. That’s exactly what Blink-To-Live makes possible. Designed for people with speech impairments, Blink-To-Live tracks eye movements via a phone camera to communicate over 60 commands using four gestures: Left, Right, Up, and Blink. The system translates these gestures into sentences displayed on the screen and read aloud.
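To get a feel for how four gestures can cover more than 60 commands, consider that sequences of three gestures give 4 × 4 × 4 = 64 combinations. Here’s a toy decoder in that spirit; the gesture-to-phrase table is purely hypothetical, not Blink-To-Live’s actual mapping:

```python
# Toy gesture-sequence decoder in the spirit of Blink-To-Live.
# The phrase table is hypothetical; the real system defines its own.
from itertools import product

GESTURES = ["Left", "Right", "Up", "Blink"]

# Illustrative phrases for a few of the 64 possible 3-gesture sequences.
PHRASES = {
    ("Blink", "Blink", "Blink"): "Yes.",
    ("Left", "Left", "Left"): "No.",
    ("Up", "Blink", "Left"): "I am thirsty.",
    ("Right", "Up", "Blink"): "Please call the nurse.",
}

def decode(sequence):
    """Translate a three-gesture sequence into a sentence to display and speak."""
    return PHRASES.get(tuple(sequence), "[unassigned command]")

print(decode(["Up", "Blink", "Left"]))         # -> I am thirsty.
print(len(list(product(GESTURES, repeat=3))))  # -> 64 available slots
```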

Technology is constantly evolving to give people with impairments the tools to live more independently, but relying on it too much can sometimes mean sacrificing privacy, autonomy, or even a sense of human connection.

When Jack met Emily, he was relieved to experience a sense of normalcy again. She was understanding at first, but everything changed when she learned about Carrie—the backseat driver and ex-lover living in Jack’s mind. Emily’s patience wore thin, and she insisted that Carrie be removed. Eventually, Rolo helped Jack find a solution by transferring Carrie’s consciousness into a toy monkey.

Initially, Jack’s son loved the monkey. But over time, the novelty faded. The monkey wasn’t really Carrie. She couldn’t hold real conversations anymore, and she couldn’t express her thoughts beyond two canned phrases: “Monkey loves you” and “Monkey needs a hug.” And so, like many toys, it was left forgotten.

This raises an intriguing question: Could consciousness, like Carrie’s, ever be transferred and preserved in an inanimate object? 

Dr. Ariel Zeleznikow-Johnston, a neuroscientist at Monash University, has an interesting theory. He believes that if we can fully map the human connectome—the complex network of neural connections—we might one day be able to preserve and even revive consciousness. His book, The Future Loves You, explores whether personal identity could be stored digitally, effectively challenging death itself. While current techniques can preserve brain tissue, the actual resurrection of consciousness remains speculative. 

This means that if you want to transfer a loved one’s consciousness into a toy monkey’s body, you’ll have to wait. But legal systems are already grappling with these possibilities.

In 2017, the European Parliament debated granting “electronic personhood” to advanced AI, a move that could set a precedent for digital consciousness. Would an uploaded mind have rights? Could it be imprisoned? Deleted? As AI-driven personalities become more lifelike—whether in chatbots, digital clones, or neural interfaces—the debate over their status in society is only just beginning.

At this point, Carrie’s story is purely fictional. But if the line between human, machine, and cute little toy monkeys blurs further, we may need to redefine what it truly means to be alive.

Not Dead but Hardly Alive

In the third and final tale of Black Museum, Rolo Haynes transforms human suffering into a literal sideshow. His latest exhibit? A holographic re-creation of a convicted murderer, trapped in an endless loop of execution for paying visitors to experience. 

What starts as a morbid fascination quickly reveals the depths of Rolo’s cruelty—using digital resurrection not for justice, but for profit. 

The concept of resurrecting the dead in digital form is not so far-fetched. In 2020, the company StoryFile introduced interactive holograms of deceased individuals, allowing loved ones to engage with digital avatars that answer questions using pre-recorded responses. This technology has been used to preserve the voices of Holocaust survivors, enabling them to share their stories with future generations.

But here’s the question: Who controls a person’s digital afterlife? And where do we draw the line between honoring the dead and commodifying them?

Hollywood has already ventured into the business of resurrecting the dead. After Carrie Fisher’s passing, Star Wars: The Rise of Skywalker repurposed unused footage and CGI to keep Princess Leia in the story. 

The show must go on, and many fans preferred not to see Carrie Fisher recast. But should production companies have control over an actor’s likeness after they’ve passed?

Some celebrities have taken preemptive legal action: Robin Williams restricted the use of his likeness for 25 years after his death. The line between tribute and exploitation has become increasingly thin. If a deceased person’s digital avatar can act, speak, or even endorse products, who decides what they would have wanted?

In the realm of intimacy, AI-driven experiences are reshaping relationships. Take Cybrothel, a Berlin brothel that markets AI-powered sex dolls capable of learning and adapting to user preferences. As AI entities simulate emotions, personalities, and desires, and as people form deep attachments to digital partners, it will significantly alter our understanding of relationships and consent.

Humans often become slaves to their fetishes, driven by impulses that can lead them to make choices that harm both themselves and others. But what if the others are digital beings?

If digital consciousness can feel pain, can it also demand justice? If so, then Nish’s father wasn’t just a relic on display—he was trapped, suffering, a mind imprisoned in endless agony for the amusement of strangers. She couldn’t let it stand. Playing along until the perfect moment, she turned Rolo’s own twisted technology against him. In freeing her father’s hologram, she made sure Rolo’s cruelty ended with him.

The idea of AI having rights may sound like a distant concern, but real-world controversies suggest otherwise. 

In 2021, the documentary Roadrunner used AI to replicate Anthony Bourdain’s voice for quotes he never spoke aloud. Similarly, in 2020, Kanye West gifted Kim Kardashian a hologram of her late father Robert Kardashian. These two notable events sparked backlash over putting words into a deceased person’s mouth. 

While society has largely moved beyond public executions, technology is creating new avenues to fulfill human fantasies. AI, deepfake simulations, and VR experiences could bring execution-themed entertainment back in a digital form, forcing us to reconsider the ethics of virtual suffering.

As resurrected personalities and simulated consciousness become more advanced, we will inevitably face the question: Should these digital beings be treated with dignity? If a hologram can beg for mercy, if an AI can express fear, do we have a responsibility to listen?

While the events of Black Museum have not happened yet and may still be a long way off, the first steps toward that reality are already being taken. Advances in AI, neural mapping, and digital consciousness hint at a future where identities can be preserved, replicated, or even exploited beyond death. 

Perhaps that’s the real warning of Black Museum: even when the human body perishes, reducing the mind to data does not make it free. And if we are not careful, the future may remember us not for our progress, but for the prisons we built—displayed like artifacts in a museum.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Metalhead: Black Mirror, Can It Happen?

Before we talk about the events in Metalhead, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, Boston Dynamics founder Marc Raibert took the TED conference stage to discuss the future of his groundbreaking robots. His presentation sparked a mix of awe and unease.

Boston Dynamics has a long history of viral videos showcasing its cutting-edge robots, many of which were mentioned during the talk:

BigDog is a four-legged robot developed by Boston Dynamics with funding from DARPA. Its primary purpose is to transport heavy loads over rugged terrain.

Then there’s Petman, a human-like robot built to test chemical protection suits under real-world conditions. 

Atlas, a 6-foot-tall bipedal robot, is designed to assist in search-and-rescue missions. 

Handle is a robot on wheels. It can travel at 9 mph, leap 4 feet vertically, and cover about 15 miles on a single battery charge.

And then there was SpotMini, a smaller, quadrupedal robot with a striking blend of technical prowess and charm. During the talk, SpotMini played to the audience’s emotions, putting on a show of cuteness. 

In November 2017, the United Nations debated a ban on lethal autonomous weapons, or “killer robots.” Despite growing concerns from human rights groups, no consensus was reached, leaving the future of weaponized AI unclear.

Simultaneously, post-apocalyptic themes gained traction in 2017 pop culture. From the success of The Walking Dead to Blade Runner 2049’s exploration of dystopian landscapes, this pre-COVID audience seemed enthralled by stories of survival in hostile worlds, as though mentally preparing for the worst to come.

And that brings us to this episode of Black Mirror, Episode 5 of Season 4: Metalhead.

Set in a bleak landscape, Metalhead follows Bella, a survivor on the run from relentless robotic “dogs” after a scavenging mission goes awry. 

This episode taps into a long-standing fear humanity has faced since it first began experimenting with the “dark magic” of machinery. Isaac Asimov’s Three Laws of Robotics were designed to ensure robots would serve and protect humans without causing harm. These laws state that a robot must not harm a human, must obey orders unless it conflicts with the first law, and must protect itself unless this conflicts with the first two laws. 

In Metalhead, however, these laws are either absent or overridden. This lack of ethical safeguards mirrors the real-world fears of unchecked AI and its potential to harm, especially in situations driven by survival instincts. 

So, we’re left to ask: At what point does innovation cross the line into an existential threat? Could machines, once designed to serve us, evolve into agents of our destruction? And, most importantly, as we advance technology, are we truly prepared for the societal consequences that come with it?

In this video, we’ll explore three key themes from Metalhead and examine whether similar events have already unfolded—and if not, whether they’re still plausible. Let’s go!

Killer Instincts

Metalhead plunges us into a barren wasteland where survival hinges on outsmarting a robotic “dog.” Armed with advanced tracking, razor-sharp senses, and zero chill, this nightmare locks onto Bella after her supply mission takes a hard left into disaster.

The robot dog’s tracking systems are similar to current military technologies. Autonomous drones and ground robots use GPS-based trackers and infrared imaging to locate targets. Devices like Lockheed Martin’s Stalker XE drones combine GPS, thermal imaging, and AI algorithms to pinpoint enemy movements even in dense environments or under cover of darkness. 

AI-driven scanning systems put human eyesight to shame—they can spot a needle in a haystack and probably tell you the needle’s temperature, too. Think FLIR thermal imaging cameras, which reveal heat signatures through smoke, darkness, or dense foliage, or Boston Dynamics’ Spot, which uses Light Detection and Ranging (aka Lidar) and pattern recognition to map the world with precision.

Lidar works by sending out laser pulses and measuring the time it takes for them to bounce back after hitting an object. These pulses generate a detailed 3D map of the environment, capturing even the smallest features, from tree branches to building structures.
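The math behind each pulse is simple time-of-flight geometry: the laser travels to the object and back, so the distance is the speed of light times the round-trip time, divided by two. A quick sketch (the 200-nanosecond pulse is just an illustration):

```python
# Time-of-flight distance for a single lidar pulse.
# The pulse travels out and back, so distance = c * t / 2.
C = 299_792_458  # speed of light, m/s

def lidar_distance(round_trip_seconds):
    return C * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds hit something about 30 m away.
print(f"{lidar_distance(200e-9):.2f} m")  # -> 29.98 m
```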

One of the most unsettling aspects of the robot in Metalhead is its superior auditory abilities. In the real world, acoustic surveillance technology, such as ShotSpotter, uses microphones and AI to detect and triangulate gunfire in urban areas. While it sounds impressive, its effectiveness is debated, with critics including a study by the University of Michigan pointing to false positives and uneven results. 

Still, technology is quickly advancing in recognizing human sounds, and some innovations are already in consumer products. Voice assistants like Alexa and Siri can accurately respond to vocal commands, while apps like SoundHound can identify music and spoken words in noisy environments. While these technologies offer convenience, they also raise concerns about how much machines are truly able to “hear.”

This is especially true when advanced sensors—whether auditory, visual, or thermal—serve a darker purpose, turning their sensory prowess into a weapon.

Take robotics companies like Ghost Robotics, whose quadruped machines have been fitted with a sniper rifle dubbed the Special Purpose Unmanned Rifle (SPUR). These machines, designed for military applications, are capable of autonomously identifying and engaging targets—raising profound ethical concerns about the increasing role of AI in life-and-death decisions.

Built for Speed

In this episode, the robot’s movement—fast, deliberate, and capable of navigating uneven terrain—resembles Spot from Boston Dynamics. 

Spot can sprint at a brisk 5.2 feet per second, which translates to about 3.5 miles per hour. While that’s fairly quick for a robot navigating complex terrain, it’s still slower than the average human running speed. The typical human can run around 8 to 12 miles per hour, depending on fitness level and sprinting ability. 

So while Spot may not outpace a sprinter, DARPA’s Cheetah robot can—at least on the treadmill. Back in 2012, a video showed this robot running 28.3 miles per hour on a treadmill, leaving even Usain Bolt in the dust.

But while the treadmill is impressive, the current record holder for the fastest land robot is Cassie—and she’s got legs for it! Developed by Oregon State University’s Dynamic Robotics Lab, Cassie sprinted her way into the record books in 2022, running 100 m in 24.73 seconds. 
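For anyone checking the math, the unit conversions behind those figures are straightforward:

```python
# Sanity-checking the speed figures above.
def fps_to_mph(feet_per_second):
    return feet_per_second * 3600 / 5280        # 5,280 feet per mile

def mps_to_mph(meters_per_second):
    return meters_per_second * 3600 / 1609.344  # 1,609.344 meters per mile

print(f"Spot:   {fps_to_mph(5.2):.1f} mph")          # -> 3.5 mph
print(f"Cassie: {mps_to_mph(100 / 24.73):.1f} mph")  # -> ~9.0 mph
```

Cassie’s record 100 m works out to roughly nine miles per hour: a respectable jogging pace, and still well short of Cheetah’s treadmill sprint.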

While today’s robots may not yet match the speed, adaptability, and relentless pursuit seen in the episode, the rapid strides in robotics and AI are quickly closing the gap. Like the tortoise slowly gaining ground on the overconfident hare, these technological advances, though not yet flawless, are steadily creeping toward a reality where they might outrun us in ways we hadn’t anticipated.

Charged to Kill

At a pivotal point in the story, Bella’s survival hinges on exploiting the robot’s energy source. By forcing it to repeatedly power on and off, she aims to drain its battery. Advanced machines, reliant on sensors, processors, and actuators, burn through significant energy during startup.

Today’s robots, like Spot or advanced military drones, run on rechargeable lithium-ion batteries. While these batteries offer excellent energy density, their runtime is finite—high-demand tasks like heavy movement or AI processing can drain them in as little as 90 minutes.
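Bella’s tactic exploits a simple piece of arithmetic: if every boot sequence burns a fixed surge of energy, forcing repeated power cycles empties the battery much faster than steady operation. A toy model makes the point (all numbers invented for illustration):

```python
# Toy battery model: why forced power cycling drains faster than idling.
# All numbers are invented for illustration.
BATTERY_WH = 500.0   # usable capacity, watt-hours
STEADY_W = 50.0      # steady draw while running, watts
STARTUP_WH = 5.0     # fixed energy surge consumed by each boot sequence

def hours_until_empty(cycles_per_hour):
    # Hourly consumption = steady draw + energy spent on startup surges.
    hourly_draw = STEADY_W + STARTUP_WH * cycles_per_hour
    return BATTERY_WH / hourly_draw

print(f"steady running:  {hours_until_empty(0):.1f} h")   # -> 10.0 h
print(f"12 cycles/hour:  {hours_until_empty(12):.1f} h")  # -> ~4.5 h
```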

However, the latest battery innovations are redefining what’s possible, and the automotive industry is leading the charge. Solid-state batteries, for example, offer greater capacity, faster charging, and longer lifespans than traditional lithium-ion ones. Companies like Volkswagen and Toyota have invested heavily in this technology, hoping it will revolutionize the EV market.

Self-recharging technologies, like Kinetic Energy Recovery Systems (KERS), are moving from labs to consumer products. KERS, used in Formula 1 cars, captures and stores kinetic energy from braking to power systems and reduce fuel consumption. It’s now being explored for use in consumer and electric vehicles.

Battery innovation is challenging for several reasons: improving energy density often compromises safety, and developing new batteries requires expensive materials and complex manufacturing processes.

Modern robots are pretty good at managing their power, but even the smartest machines can’t escape the inevitable—batteries that drain under intense demands. While energy storage and self-recharging tech like solar or kinetic systems may help, robots will always face the dreaded low-battery warning. After all, as much as we’d love to plug them into an infinite, self-sustaining energy source, the laws of physics will always say, “Nice try!”

Information Flow

When Bella throws paint to blind the robot’s sensors and uses sound to mislead it, her plan works—briefly. But the robot quickly adapts, recalibrating its AI to interpret new environmental data and adjust its strategy. Similarly, when Bella shoots the robot, it doesn’t just take the hit—it learns, retaliating with explosive “track bullets” that embed tracking devices in her body. This intelligent flexibility ensures that, even when temporarily disabled, the robot can still alter its approach and continue pursuing its objective.

In real life, robots with such capabilities are not far-fetched. Modern drone swarms, such as those tested by DARPA, can coordinate multiple drones for collective objectives. In some instances, individual drones are programmed to act as decoys or to deliberately draw enemy fire, allowing the remaining drones in the swarm to carry out their mission.

In October 2016 at China Lake, California, 103 Perdix drones were launched from three F/A-18 Super Hornets. During this test, the micro-drones exhibited advanced swarm behaviors, including collective decision-making, adaptive formation flying, and self-healing.
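“Self-healing” is easier to picture with a toy example: when a drone drops out, the survivors redistribute themselves to keep the formation intact. The greedy sketch below illustrates the idea only; it is not the Perdix control software:

```python
# Toy "self-healing" formation: when a drone is lost, survivors are
# reassigned to the nearest open slots so the formation stays intact.
SLOTS = [(0, 0), (10, 0), (20, 0), (0, 10), (10, 10)]  # target formation

def reassign(drones):
    """Greedily match each surviving drone to its nearest unclaimed slot."""
    free = list(SLOTS[:len(drones)])  # shrink the formation to the survivors
    plan = {}
    for name, pos in drones.items():
        slot = min(free, key=lambda s: (s[0] - pos[0])**2 + (s[1] - pos[1])**2)
        free.remove(slot)
        plan[name] = slot
    return plan

drones = {"d1": (1, 1), "d2": (11, 2), "d3": (19, 0), "d4": (2, 9)}
print(reassign(drones))   # four drones fill the first four slots

del drones["d2"]          # lose a drone mid-flight
print(reassign(drones))   # the remaining three redistribute automatically
```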

While the events in Metalhead are extreme, they are not entirely outside the realm of possibility. Modern robotics, AI, and machine learning are progressing at a staggering rate, making the robot’s ability to adapt, learn, and pursue its objective all too real. 

The advancements in sensors, energy storage, and autonomous decision-making systems could one day allow machines to operate with the same precision seen in the episode. 

So, while we may not yet face such an immediate threat, the seeds are sown. A future dominated by robots is not a matter of “if,” but “when.” As we step into this new frontier, we must proceed with caution, for once unleashed, these creations could be as relentless as any natural disaster—except that nothing about this will be natural.

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

We are only as smart as our AI

What Microsoft’s bot, Tay, really says about us

By Elliot Chan, Opinions Editor
Formerly published in The Other Press. April 7, 2016

While we use technology to do our bidding, we don’t always feel that we have supremacy over it. More often than not, we feel dependent on the computers, appliances, and mechanics that help our every day run smoothly. So, when there is a chance for us to show our dominance over technology, we take it.

As humans, we like to feel smart, and we often do that through our ability to persuade and influence. If we can make someone agree with us, we feel more intelligent. If we can change the way a robot thinks—reprogram it—we become gods indirectly. That is something every person wants to do. When it comes to the latest Microsoft intelligent bot, Tay, that is exactly what people did.

I have some experience chatting with artificial intelligence and other automated programs. My most prevalent memory of talking to a robot was on MSN Messenger—back in the day—when I would have long-winded conversations with a chatbot named SmarterChild. Now, I wasn’t having deep introspective talks with SmarterChild. I was trying to outsmart it. I’d lead it this way and that, trying to make it say something offensive or asinine. Trying to outwit a robot that claims to be a “smarter child” was, surprisingly, a lot of fun. It was a puzzle.

When the programmers at Microsoft built Tay, they probably thought it would have more practical uses. It was designed to mimic the personality of a 19-year-old girl. Microsoft wanted Tay to be a robot that could genuinely engage in conversations. However, without the ability to understand what she was actually copying, she had no idea she was being manipulated by a bunch of Internet trolls. She was being lied to and didn’t even know it. Because of this, she was shut down after a single day of absorbing and spouting offensive things on Twitter.

I believe we are all holding back some offensive thoughts in our head. Like a dam, we keep these thoughts from bursting through our mouths in day-to-day life. On the Internet we can let these vulgar thoughts flow. When we know that the recipient of our thoughts is a robot with no real emotion, we can let the dam burst. There is no real repercussion.

In high school, I had a pocket-sized computer dictionary that translated English into Chinese and vice versa. This dictionary had an audio feature that pronounced words for you to hear. Obviously, what we made the dictionary say were all the words we weren’t allowed to say in school. I’m sure you can imagine a few funny ones. That is the same thing people do with bots. To prove that the AI is not as smart as us, we make it say what we won’t. At the moment, I don’t believe the general public is sophisticated enough to handle artificial intelligence in any form.