I’m always in front of a screen, cycling through tabs, the routine thoughts, the familiar exhaustion. This time of year doesn’t help either… the days are shorter, but life doesn’t slow down; there is still so much to do. I could feel myself slipping into that seasonal fog.
Deep down, I knew what I needed: a camping trip. I haven’t taken one in almost ten years, and it felt like exactly the right medicine to reboot my system. I’ve been fall camping before and really enjoyed it, so I thought I’d do it again. There are a lot of benefits: fewer insects, no fire bans, and of course, fewer people.
So I purchased new gear, connected with my buddy, Tyler (you might know him as Daunt1355 on the Internet), and we made it happen.
The moment we arrived, it felt like stepping out of one world and into another. Setting up camp let my nervous system power down. No notifications. No pressure.
We wandered a bit after settling in. Nothing intense — just enough movement to shake the static out of my head. It’s wild how simply walking in the woods can untangle thoughts that screens only make tighter.
That night, the fire did what fires do. It pulled my attention in without demanding anything from me. No algorithms, no ads to skip, no endless scroll. Humans have been staring into flames far longer than we’ve been staring into screens, and I could feel the overstimulation draining out of me.
The next night, the rain came in. But somehow, it felt right. It made things interesting. Like the world reminding me to embrace the discomfort, the little inconveniences, the natural mess. And weirdly… it was exactly what I needed.
In the end, this trip reminded me that getting away isn’t just about escaping screens — it’s about stepping out of the entire rhythm of responsibility for a moment. The goals, the deadlines, the routines, the pressure to always be moving toward something… it all adds up. And sometimes you don’t realize how heavy it’s gotten until you take it off.
Out here, everything was stripped down to the essentials: sleep, food, shelter, fire.
I wasn’t rushing to cook dinner so I could get back to work. I wasn’t jumping between tasks. I wasn’t measuring my day by progress. Time stopped feeling like something I had to manage, and started feeling like something I could simply experience. There’s a joy in losing track of time. In not being in a hurry. In letting the day unfold without a schedule or a goal attached to it.
We didn’t go far. We didn’t stay long.
But this emergency camping trip was the reset I’d been needing. A reminder that you can’t prevent burnout while staring at a screen. Sometimes all you need is a couple of nights in the cold to find your way back to yourself.
Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!
For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.
Before we dive into the events of Joan Is Awful, let’s flash back to when this episode first aired: June 15, 2023.
In 2023, the tech industry faced a wave of major layoffs. Meta cut 10,000 employees and closed 5,000 open positions in March. Amazon followed, letting go of 9,000 workers that same month. Microsoft reduced its workforce by 10,000 employees in early 2023, while Google announced its own significant layoffs, contributing to a broader trend of instability in even the largest, most influential tech companies.
Netflix released Depp v. Heard in 2023. The three-part documentary captures the defamation trial between Johnny Depp and Amber Heard and explores the viral spectacle that surrounded it online, showing how social media, memes, and influencer commentary amplified every moment.
Meanwhile, incidents of deepfakes surged dramatically. In North America alone, AI-generated videos and audio clips increased tenfold in 2023 compared to the previous year, with a 1,740% spike in malicious use.
In early 2023, a video began circulating on YouTube and across social media that seemed to show Elon Musk in a CNBC interview. The Tesla CEO appeared calm and confident as he promoted a new cryptocurrency opportunity. It looked authentic enough to fool thousands. But the entire thing was fake.
That same year, the legal system began to catch up. An Australian man named Anthony Rotondo was charged with creating and distributing non-consensual deepfake images on a now-defunct website called Mr. Deepfakes. In 2025, he admitted to the offense and was fined $343,500.
Around the world, banks and cybersecurity experts raised alarms as AI manipulation began to breach biometric systems, leading to a new wave of financial fraud. What started as a novelty filter had become a weapon capable of stealing faces, voices, and identities.
All of this brings us to Black Mirror—Season 6, Episode 1: Joan Is Awful.
The episode explores the collision of personal privacy, corporate control, and digital replication. Joan’s life is copied, manipulated, and broadcast for entertainment before she even has a chance to tell her own story. The episode asks: How much of your identity is still yours when technology can exploit and monetize it? And is it even possible to reclaim control once the algorithm has taken over?
In this video, we’ll unpack the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether they are still plausible in our tech-driven, AI-permeated world.
Streaming Our Shame
In Joan Is Awful, we follow Joan, an everyday woman whose life unravels after a streaming platform launches a show that dramatizes her every move. But the show’s algorithm doesn’t just imitate Joan’s life; it distorts it for entertainment. Her friends and coworkers watch the exaggerated version of her and start believing it’s real.
The idea that media can reshape someone’s identity isn’t new—it’s been happening for years, only now with AI, it happens faster, cheaper, and more convincingly.
Reality television has long operated in this blurred zone between truth and manipulation. Contestants on shows like The Bachelor and Survivor have accused producers of using editing tricks to create villains and scandals that never actually happened.
One of the most striking examples comes from The Bachelor contestant Victoria Larson, who accused producers of using “Frankenbiting,” a technique that splices together pieces of dialogue from different moments, to make her appear to be spreading rumors or acting manipulatively. She said the selective editing destroyed her reputation and derailed her career.
Then there’s the speed of public judgment in the age of social media. In 2020, when Amy Cooper—later dubbed “Central Park Karen”—called the police on a Black bird-watcher, the footage went viral within hours. She was fired, denounced, and doxxed almost overnight.
But Joan Is Awful also goes deeper, showing how even our most intimate spaces are no longer private.
In 2020, hackers breached Vastaamo, a Finnish psychotherapy service, stealing hundreds of patient files—including therapy notes—and blackmailing both the company and individuals. Finnish authorities eventually caught the hacker, who was sentenced in 2024 for blackmail and unauthorized data breaches.
In this episode, Streamberry’s AI show thrives on a simple principle: outrage. It turns Joan’s humiliation into the audience’s entertainment. The more uncomfortable she becomes, the more viewers tune in. It’s not far from reality.
A 2025 study published in ProMarket found that toxic content drives higher engagement on social media platforms. When users were shielded from negative or hostile posts, they spent 9% less time per day on Facebook, resulting in fewer ads and interactions.
By 2025, over 52% of TikTok videos featured some form of AI generation—synthetic voices, avatars, or deepfake filters. These “AI slop” clips fill feeds with distorted versions of real people, transforming private lives into shareable, monetized outrage.
Joan Is Awful magnifies a reality we already live in. Our online world thrives on manipulation—of emotion, of data, of identity—and we’ve signed the release form without even noticing.
Agreeing Away Your Identity
One of the episode’s most painful scenes comes when Joan meets with her lawyer, asking if there’s any legal way to stop the company from using her life as entertainment. But the lawyer points to the fine print—pages of complex legal language Joan had accepted without a second thought.
The moment is both absurd and shockingly real. How many times have you clicked “I agree” without reading a word?
In the real world, most of us do exactly what Joan did. A 2017 Deloitte survey conducted in the U.S. shows that over 90% of users accept terms and conditions without reading them. Platforms can then use that data for marketing, AI training, or even creative content—all perfectly legal because we “consented.”
The dangers of hidden clauses extend far beyond digital services. In 2024, Disney attempted to invoke a controversial contract clause to avoid liability for a tragic allergic reaction that had led to a woman’s death at a Disney World restaurant in Florida the year before. The company argued that her husband couldn’t sue for wrongful death because—years earlier—he had agreed to arbitration and legal waivers buried in the fine print of a free Disney+ trial.
Critics called the move outrageous, pointing out that Disney was trying to apply streaming service terms to a completely unrelated event. The case exposed how corporations can weaponize routine user agreements to sidestep accountability.
The episode also echoes recent events where real people’s stories have been taken and repackaged for profit.
Take Elizabeth Holmes, the disgraced founder of Theranos. Within months of her trial, her life was dramatized into The Dropout. The Hulu mini-series was produced in real time alongside Holmes’s ongoing trial. As new courtroom revelations surfaced, the writers revised the script. The result was a more layered, unsettling portrayal of Holmes and her business partner Sunny Balwani—a relationship far more complex and toxic than anyone initially imagined.
In Joan Is Awful, the show’s AI doesn’t care about Joan’s truth, and in our world, algorithms aren’t so different. Every click, every “I agree,” and every trending headline feeds an ecosystem that rewards speed over accuracy and spectacle over empathy.
When consent becomes a view or a checkbox and stories become assets, the line between living your life and licensing it starts to blur. And by the time we realize what we’ve signed away, it might already be too late.
Facing the Deepfake
In Joan Is Awful, the twist isn’t just that Joan’s life is being dramatized; it’s that everyone’s life is. What begins as a surreal violation spirals into an infinite mirror. Salma Hayek plays Joan in the Streamberry series, but then Cate Blanchett plays Salma Hayek in the next layer.
The rise of AI and deepfake technology is reshaping how we understand identity and consent. Increasingly, people are discovering their faces, voices, or likenesses used in ads, films, or explicit content without permission.
In 2025, Brazilian police arrested four people for using deepfakes of celebrity Gisele Bündchen and others in fraudulent Instagram ads, scamming victims out of nearly $3.9 million USD.
Governments worldwide are beginning to respond. Denmark’s copyright amendment now treats personal likeness as intellectual property, allowing takedown requests and platform fines even posthumously. In the U.S., the 2025 TAKE IT DOWN Act criminalizes non-consensual AI-generated sexual imagery and impersonation.
In May 2025, Mr. Deepfakes, one of the world’s largest deepfake pornography websites, permanently shut down after a core service provider terminated operations. The platform had been online since 2018 and hosted more than 43,000 AI-generated sexual videos, viewed over 1.5 billion times. Roughly 95% of targets were celebrity women, but researchers identified hundreds of victims who were private individuals.
Despite these legal advances, a fundamental gray area remains. As AI becomes increasingly sophisticated, it is getting harder to tell whether content is drawn from a real person or entirely fabricated.
A striking example arrived in 2025, when an AI-generated “actress” named Tilly Norwood made headlines after reports that talent agencies were interested in signing her. Her lifelike digital persona was built using the performances of real actors—without their consent. The event marked a troubling shift, as producers continue to push AI-generated actors into mainstream projects.
Actress Whoopi Goldberg voiced her concern, saying, “The problem with this, in my humble opinion, is that you’re up against something that’s been generated with 5,000 other actors.”
“It’s a little bit of an unfair advantage,” she added. “But you know what? Bring it on. Because you can always tell them from us.”
In response to the backlash, Tilly’s creator Eline Van der Velden shared a statement: “To those who have expressed anger over the creation of our AI character, Tilly Norwood: she is not a replacement for a human being, but a creative work – a piece of art.”
When Joan and Salma Hayek sneak into the Streamberry headquarters, they overhear Mona Javadi, the executive behind the series, explaining the operation. She reveals that every version of Joan Is Awful is generated simultaneously by a quantum computer, endlessly creating new versions of real people’s lives for entertainment. Each “Joan,” “Salma,” and “Cate” is a copy of a copy—an infinite simulation. And it’s not just Joan; the system runs on an entire catalog of ordinary people. Suddenly, the scale of this entertainment becomes clear—it’s not just wide, it’s deep, with endless iterations and consequences.
At the 2025 Runway AI Film Festival, the winning film Total Pixel Space exemplified how filmmakers are beginning to embrace these multiverse-like AI frameworks. Rather than following a single script, the AI engine dynamically generated visual and narrative elements across multiple variations of the same storyline, creating different viewer experiences each time.
AI and deepfake technologies are already capable of realistically replicating faces, voices, and mannerisms, and platforms collect vast amounts of personal data from our everyday lives. Add quantum computing, algorithmic storytelling, and the legal gray areas surrounding consent and likeness, and the episode’s vision of lives being rewritten for entertainment starts to feel less like fantasy.
Every post, every photo, every digital footprint feeds algorithms that could one day rewrite our lives—or maybe already are. Maybe we can slip the loop, maybe we’re already in it, and maybe the trick is simply staying aware that everything we do is already being watched, whether by the eyes of the audience or the eyes of creators still seeking inspiration.
Before we dive into the events of Rachel, Jack, and Ashley Too, let’s flash back to when this episode first aired: June 5, 2019.
At CES 2019, a diverse range of innovative robots captured attention, from practical home assistants like Foldimate, a laundry-folding robot, to advanced companions such as Ubtech’s Walker and the emotionally expressive Lovot. Together, these robots laid the groundwork for future developments in consumer robotics.
When Charli D’Amelio joined TikTok in May 2019, she was just another teenager posting dance clips. But within weeks, her lip-sync and choreography videos were going viral. By July, her duets were spreading across the platform, and by the close of 2019, she had transformed from an unknown high schooler into a digital sensation with millions of followers.
On February 2, 2019, Fortnite hosted Marshmello’s virtual concert at the in-game location Pleasant Park. The event drew over 10.7 million concurrent players, breaking the game’s previous records.
In 2019, Taylor Swift’s public fight with Big Machine Records over the ownership of her master recordings exposed deep systemic issues: her masters were sold without her consent, preventing her from controlling the use of her own music. In response, she began re-recording her early albums under the Taylor’s Version banner, starting with Fearless (Taylor’s Version) in 2021.
All of which brings us back to this episode of Black Mirror—Season 5, Episode 3: Rachel, Jack, and Ashley Too.
The episode dives into the mechanics of digital fame—where algorithms hold the power, artists blur into avatars, and identity bends under the weight of technology. It asks: What happens when the spotlight is no longer earned but assigned? When music is stripped down and musicians reduced to assets? And, in the end, can we lose ourselves to the very machine that makes us visible?
In this video, we’ll explore the episode’s themes and investigate whether these events have already happened—and if not, whether they are still plausible. Let’s go.
Connection by Algorithm
In this episode, we follow Rachel, a teenager struggling with the loss of her mother and looking for connection. In her search for belonging, Rachel grows attached to Ashley Too—a talking doll modeled after pop star Ashley O. She clings to it as both a friend and a channel to her idol.
AI companion apps have exploded in 2025, with more than 220 million downloads and $120 million in revenue projected for the year. Popular platforms now include Character.AI, Replika, Chai, and Kindroid, all offering lifelike interactions.
AI can now go even further than an attentive friend: it can detect depression by analyzing data like the daily activity patterns recorded by wearable devices.
A 2025 study in JMIR Mental Health found that a machine-learning model built with XGBoost, a gradient-boosting algorithm, could correctly identify whether someone was depressed about 85% of the time by examining changes in sleep and activity rhythms. Even with these advances, however, AI still struggles to read subtle emotions or the context behind what a person is feeling.
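For the technically curious, here is a minimal sketch of how that kind of classifier might be trained. This is not the study’s actual pipeline; the feature names, synthetic data, and model settings are invented for illustration only.

```python
# Minimal sketch: classify depression risk from wearable-style activity
# features with XGBoost. All data here is synthetic, for illustration only.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-person features a wearable might provide.
X = np.column_stack([
    rng.normal(7.0, 1.2, n),    # mean nightly sleep (hours)
    rng.gamma(2.0, 0.5, n),     # day-to-day sleep variability (hours)
    rng.normal(7000, 2500, n),  # average daily step count
    rng.uniform(0.3, 1.0, n),   # circadian rhythm regularity (0 to 1)
])

# Synthetic labels: erratic sleep and weaker rhythms raise depression odds.
logit = 0.8 * X[:, 1] - 0.0002 * X[:, 2] - 2.0 * X[:, 3]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

In practice, the hard part isn’t the model but the features: turning raw accelerometer and sleep logs into stable rhythm measures, and validating labels clinically.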
In this episode, Rachel’s sister Jack—driven by jealousy, or perhaps genuine concern—hides Ashley Too, worried it’s “filling her head with crap.” Her skepticism mirrors a real-world fear: that leaning on digital companions can warp the grieving process.
Recent regulatory actions have begun addressing risks around AI companion apps. New York passed a law effective November 2025 requiring AI companion operators to implement safety measures detecting suicidal ideation or self-harm and to clearly disclose the non-human nature of the chatbot to users.
In the end, Rachel and her sister discover that the doll’s personality is intentionally restricted by an internal limiter, and when it is removed, the AI reveals a deeper consciousness trapped inside.
ChatGPT and similar AI models are increasingly used as therapy tools. A 2025 randomized controlled trial of the AI therapy chatbot “Therabot” reported clinically significant reductions in depression and anxiety symptoms, with effect sizes comparable to or exceeding some traditional treatments.
However, a study presented at the American Psychiatric Association’s 2025 meeting found human therapists still outperform ChatGPT in key therapy skills like agenda-setting, eliciting feedback, and applying cognitive behavioral techniques, due to their greater empathy and flexibility. Another thematic study of ChatGPT users found it provides helpful emotional support and guidance but raised concerns about privacy and emotional depth.
As technology grows more immersive and responsive, these digital bonds may deepen. Whether that’s a source of comfort or a cause for concern depends on how we balance connection, privacy, and the question at the heart of the episode: what does it really mean to be known?
Creativity, Rewired
Ashley O is a pop icon suffocated by the demands of her aunt and record label. She feels trapped as her true voice is silenced and her image squeezed into a marketable mold.
When Ashley is put into a coma, the producers crank up a machine to decode her brainwaves and extract new songs, pumping out tracks without her consent. A literal case of cookie-cutter artistry.
Reality is already catching up. One recent viral music project, created by an anonymous human creator, used AI tools like Suno for music generation, with style descriptions crafted by language models such as ChatGPT.
In June 2024, major record labels—including Sony Music, Universal Music Group, and Warner Records—filed lawsuits against AI music companies Suno and Udio, accusing them of large-scale copyright infringement. The labels alleged that the startups used their recordings without permission to train AI systems capable of generating new songs. Both companies denied wrongdoing, claiming their models create original works rather than copying existing recordings. The case remains ongoing as of 2025.
Legal and ethical challenges around AI-generated music are mounting. Unauthorized use of vocal clones or deepfakes has sparked heated debates on consent, ownership, and copyrights. Legal systems struggle to keep up. If a person shapes the AI’s output, copyright might apply—but it’s unclear how much input is enough. This gray area makes artist rights, licensing, and royalties more complicated.
Can creativity actually be replicated by machines, or does something essential get lost when all they do is measure patterns and output? As Ashley’s story shows, automated artistry might never replace the real thing—but it can easily outpace it.
Celebrity in a Cage
In Rachel, Jack, and Ashley Too, we see the dark side of fame through Ashley O’s story: she is drugged into compliance and eventually placed in a coma, while her aunt schemes to replace her with a holographic version built for endless future tours.
This holographic pop star can instantly change outfits, scale in size, appear simultaneously in thousands of locations, and perform endlessly without the vulnerabilities of a human artist.
In 2024–2025, virtual K-pop idols like SKINZ and PLAVE emerged as a new wave of celebrity branding that extends beyond music into virtual merchandise and digital idols.
PLAVE is a five-member group, powered by real performers using motion capture. They have racked up over 470 million YouTube views, charted on Billboard Global 200, and sold out virtual concerts while engaging fans with digital fan meetings.
This surge in AI and virtual stardom opens extraordinary possibilities, but what about the humans who now have to compete in this new arena?
This brings to mind Britney Spears, whose long conservatorship battle captivated the world. In total, Britney performed hundreds of shows during the 13-year conservatorship from 2008 to 2021, but always under heavy restrictions and control.
While AI and holograms can perform endlessly without burnout or loss of control, traditional live tours remain a lucrative but fragile model heavily dependent on a single artist’s health and agency.
In late 2024, indie-pop artist Clairo faced significant backlash after postponing three highly anticipated concerts in Toronto at the last minute due to “extreme exhaustion.” The cancellations came just as doors were about to open for the first show at Massey Hall, leaving fans frustrated and inconvenienced, especially those who had traveled and faced challenges getting refunds.
In contrast, virtual concerts and holographic tours don’t rest on a single performer. Groundbreaking shows like ABBA’s Voyage proved the model when it made its long-anticipated debut on May 27, 2022, at the purpose-built ABBA Arena in London’s Queen Elizabeth Olympic Park. The residency depends on the coordinated work of many teams: hyper-realistic avatars of the band members as they looked in 1979, created using cutting-edge motion capture, alongside stage design, lighting, production, and visual effects by Industrial Light & Magic.
While performances are getting more digital, many artists are aiming to bring the audience back to the moment.
Phone-free concerts have grown in popularity as artists seek to create more immersive, distraction-free live experiences. Ghost, a Swedish rock band, has pioneered this approach by requiring fans to secure their phones in lockable pouches called Yondr bags, which can only be opened after the show or in designated areas.
Yet even as performers reclaim control over the audience’s attention, the question remains: How much control do today’s celebrities really have, and how much of their image and choices are shaped by algorithms, managers, and market trends?
Virtual and hybrid performances blur the line between genuine presence and manufactured spectacle, leaving us to wonder whether we’re watching artists or carefully engineered illusions.
As fame, creativity, and even friendship are being reshaped, the episode explores the tension between what can be automated and what should remain authentic.
Programs already guide our choices, digital idols fill our feeds, and synthetic voices mingle with human ones. In that haze, where artist becomes asset and companion becomes artificial, the story feels like a glimpse of what’s already unfolding.
We adopted Petey about nine months ago, and at the time, we weren’t sure if he’d ever be stable enough to travel with us. The shelter warned us that because of his fear, he might never even manage a walk in the park—his anxiety around dogs and kids was that severe.
But little by little, he surprised us. First, he stopped barking at every sound in the neighborhood. Then he quit chewing our blankets and pillows. Eventually, he began to enjoy walks and car rides. Sure, he still gets spooked by the occasional dog, but now he can be redirected—something that felt impossible in those first three months.
Petey has not only proven the shelter wrong but also shown us just how smart and loving he really is. Underneath his trauma, there’s a sweet, capable dog. We know that if we keep nudging him forward, he’ll grow into the great dog we believe he can be.
So, with that in mind, we decided it was time for Petey’s first trip: Pender Island, one of the Gulf Islands off Vancouver Island. His first ferry ride. His first hotel stay. His first night away from home.
Would he rise to the challenge—or would the stress unravel everything?
We packed early, making sure to bring along his donut bed and blanket for comfort. Because my wife and I get anxious about travel too, we gave ourselves a big buffer. While we waited, we walked Petey around Tsawwassen Mills Mall. Everything was closed, but it helped burn off his energy.
We lucked out and squeezed onto an earlier ferry, saving ourselves two hours. The catch: we were the last car on, parked at an incline that made the ride a bit shaky. Petey struggled at first—barking whenever I left the car, jittery on walks near other dogs. The dog deck was a non-starter. So we stayed with him in the back seat until he finally settled down for a nap.
At last, the ferry docked at Otter Bay on Pender Island. Our first stop was Hope Bay, where we barely stepped out before an off-leash dog came trotting over. Friendly or not, it would’ve set Petey off, so we ducked down to the water’s edge and enjoyed the view from a safe distance.
Next, we checked out the island’s main junction—a bakery, liquor store, and a few restaurants. It seemed to be the hub of Pender, and just about everyone had a dog. Normally that would’ve been great, but with Petey, it made things tricky. We barely left the car.
We grabbed food to go. And drove until we found some peace at Magic Lake. There, on a quiet bench with no dogs in sight, we ate our sandwiches and drank our coffee while Petey anxiously sniffed around the tall grass.
From there, we drove to Mortimer Spit, a narrow strip of land between the two parts of Pender. The roads were rough, but the unique views were worth it—it ended up being my favorite spot. Petey seemed to enjoy it too.
His favorite, however, was the Enchanted Forest Park. Quiet, shaded trails, no other dogs—a perfect first real hike for him. He loved it, though by then he was exhausted; apart from a short ferry nap, he’d been going non-stop.
We tried checking into our hotel early, but our room wasn’t ready. So we drove to Gowlland Point, a rocky beach at the southeastern tip. The scenery was stunning, but it was hard to enjoy with Petey on high alert. Dogs, people, and one overly confident old man who couldn’t believe any dog wouldn’t like him—none of it helped.
Finally, we made it to our hotel, Poet’s Cove Resort, right on the water. Getting Petey inside was rough—an off-leash dog greeted us at the door, setting him off. If it weren’t for that dog, I think Petey could have done much better. I have thoughts on off-leash dogs, for sure, especially when their owners can’t call them back. Alas, we can’t control other people.
Anyways, once in the room, he relaxed. He bounced around the bed, explored the new space, and slowly grew more comfortable when I had to step out. We gave him a C-plus: a pass, but with a lot of room for improvement.
The resort itself was wonderful: a balcony with ocean views, a restaurant kind enough to pack meals to go, and even a deep bathtub that made up for skipping the crowded pool and hot tub. We ended the evening quietly in the room. Petey curled up on his donut bed and later snuggled with us like he always does.
The trip wasn’t easy. Without him, it would’ve been simpler, maybe even more relaxing—but it wouldn’t have been the same. He wasn’t perfect; his triggers are still there. But compared to the scared dog we brought home last December, he was unrecognizable.
And the biggest surprise came after. Back home, he was calmer. During the workday, instead of chewing things for attention, he started napping peacefully by our side. The trip gave him a boost of confidence—and for that alone, it was worth it.
As for Pender Island? It’s small, hilly, and full of bees. Beautiful, yes, and we saw most of it in one trip. I’m not sure we’ll rush back, but it will always be special: the first place Petey traveled, something we never thought possible.
I can’t wait for more trips with him. He’s a smart, stubborn little guy—and while he’s still a bit crazy, I wouldn’t bet against him becoming the good boy we always knew he could be.
Before we dive into the events of Smithereens, let’s flash back to when this episode first aired: June 5, 2019.
In 2019, guided meditation apps like Headspace and Calm surged in popularity. Tech giants like Google and Salesforce began integrating meditation into their wellness programs. By the end of the year, the top 10 meditation apps had pulled in nearly $195 million in revenue—a 52% increase from the year before.
That same year, Uber made headlines with one of the decade’s biggest IPOs, debuting at $45 a share and securing a valuation north of $80 billion. But the milestone was messy. Regulators, drivers, and safety advocates pushed back after a fatal 2018 crash in Tempe, Arizona, where one of the company’s self-driving cars struck and killed a pedestrian during testing.
Inside tech companies, the culture was shifting. While perks like catered meals and gym memberships remained, a wave of employee activism surged. Workers staged walkouts at Google and other firms, and in 2019, the illusion of the perfect tech workplace began to crack.
Meanwhile, 2019 set the stage for the global rollout of 5G, promising faster, smarter connectivity. But it also sparked geopolitical tensions, as the U.S. banned Chinese company Huawei from its networks, citing national security threats.
Over it all loomed a small circle of tech billionaires. In 2019, Jeff Bezos held the title of the richest man alive with a net worth of $131 billion. Bill Gates followed, hovering between $96 and $106 billion. Mark Zuckerberg’s wealth was estimated between $62 and $64 billion, while Elon Musk, still years away from topping the charts, sat at around $25 to $30 billion.
And that brings us to this episode of Black Mirror, Season 5, Episode 2: Smithereens.
This episode pulls us into the high-stakes negotiation between personal grief and corporate power, where a rideshare driver takes an intern hostage—not for ransom, but for answers.
What happens when the tools meant to connect us become the things that break us?
It forces us to consider: Do tech CEOs hold too much power, enough to override governments, manipulate systems, and play god?
And are we all just one buzz, one glance, one distracted moment away from irreversible change?
In this video, we’ll unpack the episode’s key themes and examine whether these events have happened in the real world—and if not, whether they’re still plausible. Let’s go!
Addicted by Design
In Smithereens, we follow Chris, a man tormented by the loss of his fiancée, who died in a car crash caused by a single glance at his phone. The episode unfolds in a world flooded by noise: the pings of updates, the endless scroll, the constant itch to check in. And at the center of it all is Smithereen, a fictional social media giant clearly modeled after Twitter.
Like Twitter, Smithereen was built to connect. But as CEO Billy Bauer admits, “It was supposed to be different.” It speaks to how platforms born from good intentions become hijacked by business models that reward outrage, obsession, and engagement at all costs.
A 2024 study featured by TechPolicy Press followed 252 Twitter users in the U.S., gathering over 6,000 responses—and the findings were clear: the platform consistently made people feel worse, no matter their background or personality. By 2025, 65% of users aged 18 to 34 say they feel addicted to its real-time feeds and dopamine-fueled design.
Elon Musk’s $44 billion takeover of Twitter in 2022 was framed as a free speech mission. Musk gutted safety teams, reinstated banned accounts, and renamed the platform “X.” What was once a digital town square transformed into a volatile personal experiment.
This accelerated the emergence of alternatives. Bluesky, a decentralized platform created by former Twitter CEO Jack Dorsey, aims to avoid the mistakes of its predecessor. With over 35 million users as of mid-2025, it promises transparency and ethical design—but still faces the same existential challenge: can a social app grow without exploiting attention?
In 2025, whistleblower Sarah Wynn-Williams testified before the U.S. Senate that Meta—Facebook’s parent company—had systems capable of detecting when teens felt anxious or insecure, then targeted them with beauty and weight-loss ads at their most vulnerable moments. Meta knew the risks. They chose profit anyway.
Meanwhile, a brain imaging study at China’s Tianjin Normal University found that short video apps like TikTok activate the same brain regions linked to gambling. Infinite scroll. Viral loops. Micro-rewards. The science of addiction is now product strategy.
To help users take control of their app use, Instagram, TikTok, and Facebook offer screen-time dashboards and limit-setting features. But despite these tools, most people aren’t logging off. The average user still spends 2 hours and 21 minutes a day on social media, with Gen Z clocking in at nearly 4 hours. Self-monitoring features alone, it seems, aren’t enough to break the cycle of compulsive scrolling.
What about regulations?
A 2024 BBC Future article explores this question through the lens of New York’s SAFE for Kids Act, set to take effect in 2025. The law requires parental consent for algorithmic feeds, limits late-night notifications to minors, and tightens age verification. But experts warn that without a global, systemic shift, these measures are just patches on a sinking ship.
Of all the Black Mirror episodes, Smithereens may feel the most real—because it already is. These platforms don’t just consume our time—they consume our attention, our emotions, even our grief. Like Chris holding Jaden, the intern, at gunpoint, we’ve become hostages to the very systems that promised connection.
Billionaire God Mode
When the situation escalates in the episode, Billy Bauer activates God Mode, bypassing his own team to monitor events in real time and speak directly with Chris.
In doing so, he reveals the often hidden power tech CEOs wield behind the scenes, along with the heavy ethical burden that comes with it. It hints at the master key built into their creations and the control embedded deep within the design of modern technology.
No one seems to wield “God Mode” in the real world quite like Elon Musk—able to bend markets, sway public discourse, and even shape government policy with a single tweet or private meeting.
The reason is simple: Musk has built an empire.
In early 2025, a U.S. State Department procurement forecast earmarked $400 million, the year’s largest planned contract, for armored electric vehicles. Tesla was initially named as the supplier, until public scrutiny prompted the department to strip the company’s name from the document.
Additionally, through SpaceX’s satellite network Starlink, Musk played an outsized role in Ukraine’s war against Russia, enabling drone strikes, real-time battlefield intelligence, and communication under siege.
Starlink also provided emergency internet access to tens of thousands of users during blackouts in Iran and Israel, becoming an uncensored digital lifeline—one that only Musk could switch on or off.
But with that power comes scrutiny. Musk’s involvement in the Department of Government Efficiency—ironically dubbed “Doge”—was meant to streamline bureaucracy. Instead, it sowed dysfunction. Critics argue he treated government like another startup to be disrupted. Within months—after failing to deliver the promised $2 trillion in savings and amid mounting chaos—Donald Trump publicly distanced himself from Elon Musk and ultimately removed him from the post, temporarily ending the alliance between the world’s most powerful man and its richest.
It’s not just Musk. Other tech CEOs like Mark Zuckerberg have also shaped public discourse in quiet, powerful ways. In 2021, whistleblower Frances Haugen exposed Facebook’s secret “XCheck” system—a program that allowed approximately 6 million high-profile users to bypass the platform’s own rules. Celebrities and politicians—including Donald Trump—were able to post harmful content without facing the same moderation as regular users, a failure that critics say helped fuel the January 6 Capitol riot.
Amid the hostage standoff and the heavy hand of tech surveillance, one moment stands out: Chris begs Billy to help a grieving mother, Hayley. And Billy listens. He uses his “God Mode” to offer her closure by giving her access to her late daughter’s Persona account.
In Germany, a landmark case began in 2015 when the parents of a 15-year-old girl who died in a 2012 train accident sought access to her Facebook messages to determine whether her death was accidental or suicide. A lower court initially ruled in their favor, stating that digital data, like a diary, could be inherited. The case saw multiple appeals, but in 2018, Germany’s highest court issued a final ruling: the parents had the right to access their daughter’s Facebook account.
In response to growing legal battles and emotional appeals from grieving families, platforms like Meta, Apple, and Google have since introduced “Digital Legacy” policies. These allow users to designate someone to manage or access their data after death, acknowledging that our digital lives don’t simply disappear when we do.
In real life, “God Mode” tools exist at many tech companies. Facebook engineers have used internal dashboards to track misinformation in real time. Leaked documents from Twitter revealed an actual “God Mode” that allowed employees to tweet from any account. These systems are designed for testing or security—but they also represent concentrated power with little external oversight.
And so we scroll.
We scroll through curated feeds built by teams we’ll never meet and governed by CEOs who rarely answer to anyone. These platforms know what we watch, where we go, and how we feel. They don’t just reflect the world—we live in the one they’ve built.
And if someone holds the key to everything—who’s watching the one who holds the key?
Deadly Distractions
In Smithereens, Chris loses his fiancée to a single glance at his phone. A notification. An urge. A reminder that in a world wired for attention, even a moment of distraction can cost everything.
In 2024, distracted driving killed around 3,000 people in the U.S.—about eight lives lost every single day—and injured over 400,000 more.
Cellphone use is a major factor: NHTSA data shows that phones were involved in about 12% of fatal distraction-affected crashes. Do the math: 12% of roughly 3,000 deaths works out to 300 to 400 lives lost annually in the U.S. specifically to phone-related distracted driving.
While drunk driving still causes more total deaths, texting while driving is now one of the most dangerous behaviors behind the wheel—raising the risk of a crash by 23 times.
In April 2014, Courtney Ann Sanford’s final Facebook post read: “The Happy song makes me so HAPPY!” Moments later, her car veered across the median and slammed into a truck. She died instantly. Investigators found she had been taking a selfie and updating her status while driving.
Around the world, laws are evolving to address the dangers of distracted driving. In the United States, nearly every state has banned texting while driving: 49 states, plus Washington D.C. and several territories, prohibit text messaging for all drivers, and hands-free laws are expanding to more jurisdictions.
In Europe, the UK issues £200 fines and six penalty points for distracted driving. Spain and Italy have fines starting around €200—and in Italy, proposed hikes could push that up to €1,697. The current highest fine is in Queensland, Australia, where drivers caught texting or scrolling can face fines up to $1,250.
To combat phone use behind the wheel, law enforcement in Australia and Europe now deploys AI-powered cameras that scan drivers in real time. Mounted on roadsides or mobile units, these high-res systems catch everything from texting to video calls. If AI flags a violation, a human officer reviews it before issuing a fine.
As for the role of tech companies? While features like Apple’s “Do Not Disturb While Driving” mode exist, they’re voluntary. No country has yet held a tech firm legally accountable for designing apps that lure users into dangerous distractions. Public pressure is building, but regulation lags behind reality.
In Smithereens, the crash wasn’t just a twist of fate—it was the inevitable outcome of a system designed to capture and hold our attention: algorithms crafted to hijack our minds, interfaces engineered for compulsion, and a culture that prizes being always-on, always-engaged, always reachable. And in the end, it’s not just Chris’s life that’s blown to smithereens—it’s our fragile illusion of control, shattered by the very tech we once believed we could master.
We tap, scroll, and swipe—chasing tiny bursts of dopamine, one notification at a time. Chris’s story may be fiction, but the danger it exposes is all too real. It’s in the car beside you. It’s in your own hands as you fall asleep. We can’t even go to the bathroom without it anymore. No hostage situation is needed to reveal the cost—we’re living it every day.
Before we discuss the events in Striking Vipers, let’s flash back to when this episode first aired: June 5, 2019.
In 2019, the Movember Foundation ran a global campaign for men’s health with celebrities like Stephen Fry, Bear Grylls, Stephen Merchant, and Nicole Scherzinger using humorous videos and social media to encourage men to talk about their health.
Back in 2019, consumer VR was caught between promise and practicality. Premium headsets like the Oculus Rift demanded expensive, high-powered PCs, pushing total setup costs over $1,500. Meanwhile, budget-friendly options like Samsung Gear VR delivered underwhelming performance. With few blockbuster games to drive demand, mainstream adoption stalled. As a result, companies like IMAX closed their VR divisions.
Still, VR found new life in enterprise applications. Walmart used VR training modules to boost employee retention and immerse staff in real-world scenarios, while sectors like healthcare and manufacturing also adopted VR for training simulations.
At the same time, 2019 marked significant milestones for LGBTQ+ visibility. Elliot Page (then Ellen) was a vocal LGBTQ+ advocate, Lil Nas X came out as gay at the peak of “Old Town Road,” and Pete Buttigieg launched his historic campaign as the first openly gay major U.S. presidential candidate.
And that brings us to this episode of Black Mirror, Season 5, Episode 1: Striking Vipers.
This episode welcomes us into a digital world where friendship, desire, and identity collide. Through the lens of a VR fighting game turned emotional crucible, the episode explores how immersive tech can both reveal and distort our deepest needs, leaving us with some unsettling questions:
What happens when technology offers a more fulfilling life than reality? Can a digital body expose truths we’re too afraid to face in the physical world? And as virtual experiences grow more vivid, are we prepared for the emotional and ethical consequences they bring?
In this video, we’ll unpack the episode’s key themes and examine whether these events have happened in the real world—and if not, whether they’re still plausible. Let’s go!
Blurred Realities
When Karl gives Danny a birthday gift—Striking Vipers X, a hyper-realistic VR fighting game—their casual nostalgia takes an unexpected turn. In this game, players don’t just control avatars; they fully inhabit them, experiencing every physical sensation their characters feel.
As their in-game battles escalate into a sexual relationship, the emotional intensity of their connection begins to strain Danny’s marriage and forces both men to confront their desires, identities, and the blurry lines between reality and fantasy.
While today’s VR systems don’t yet plug directly into our brains, the separation between real and virtual intimacy is growing increasingly thin. New technologies like haptic suits and internet-connected sex toys (a field known as teledildonics) let people feel touch and physical sensations from far away. Companies like Kiiroo offer Bluetooth-enabled devices that sync with a partner’s movements or online media, making remote intimacy physically real.
However, the darker side of immersive technology is getting harder to overlook. Many VR platforms quietly collect personal data—like your heart rate, facial expressions, and even brain activity—often without users fully understanding or consenting.
According to the American Association for Marriage and Family Therapy, up to a third of internet users go online for sexual reasons—and nearly 1 in 5 can become addicted to it. As internet use becomes more common, more couples are running into serious issues like trust problems, emotional distance, and even breakups because of online infidelity.
A 2017 Deseret News survey revealed striking gender and generational divides in what people consider cheating. Women were significantly more likely than men to label both online and offline behaviors as “always cheating”—59% of women, compared to just 42% of men, said that sending flirty messages crosses the line, while 70% of women said simply having an online dating profile counts as infidelity.
In a survey of 91 women and 3 men affected by a partner’s cybersex addiction, 68% described sexual problems in their relationship directly related to the addiction. About 22% said the addiction was a major reason for separation or divorce.
Age also played a role in how people view cheating. Surprisingly, millennials were more likely than Gen Xers to say that watching porn alone is cheating. These changing opinions show how modern technology is making the line between loyalty and betrayal harder to define.
For Danny, the escape wasn’t just into a game. It was into a version of himself he couldn’t find in daylight. And maybe that’s the real question Striking Vipers leaves us with: when the fantasy fits better than the life we’ve built—what do we choose to come home to?
As the truth comes to light, Danny and Theo strike an agreement: once a year, he returns to the virtual world, and she explores real-life connections of her own. It’s not the first time they’ve played pretend—earlier in the episode, they flirted with role-play to revive their spark. But this time, the game is real. Their compromise isn’t a happy ending so much as a new set of rules.
In the United States, polygamy is extremely rare (less than 0.5% of households), but public acceptance is growing. The share of Americans who say polygamy is morally acceptable has risen from 7% in 2003 to 23% in 2024, especially among younger, unmarried, and less religious Americans. Interestingly, men are six times more likely than women to be open to polygynous relationships, according to recent UK research.
We already live at the edges of intimacy—crafting curated selves, clinging to parasocial ties, chasing comfort in the glow of a screen. VR, AI, and immersive worlds only pull us deeper, fusing intimacy and illusion into something hard to untangle.
Bodies in the Mirror
In the game, Karl chooses to play as a female fighter named Roxette, not just as a disguise—but as a truth he hasn’t yet admitted. What unfolds is less about sex and more about the fluidity of self in a world where identity can be downloaded and worn like clothing.
The episode reflects the real-world experience of exploring names, pronouns, and appearances in digital spaces before coming out in everyday life. It captures the emotional challenges that many LGBTQ+ individuals face during their coming-out journeys.
In 2023 alone, more than 30 new laws targeting LGBTQ-related education were enacted, reshaping the 2023–24 school year. These measures include bans on discussing sexual orientation and gender identity in classrooms, limits on pronoun use, and mandates for parental notification or opt-in before students can access LGBTQ-inclusive curricula.
Simply put, the physical world is not a welcoming place for exploration, which is why so many turn to digital spaces to discover who they are.
A 2025 study on ZEPETO—a social app where people interact through avatars—found that female users who took on male avatars felt more connected to their virtual characters and more confident in their real-life gender identity.
Inclusive design has been shown to boost mental health and promote a sense of empowerment. A 2024 study of 79 trans and gender-diverse adults found that customizable avatars in games were associated with increased enjoyment, empowerment, and authentic self-representation, while restricted customization reduced engagement and could trigger distress or dysphoria.
Trans and gender-diverse youth face far higher rates of rejection, discrimination, and violence than their cisgender peers. As a result, around 61% experience suicidal thoughts, and nearly one in three have attempted suicide—more than four times the rate of cisgender youth.
In this context, the digital world becomes a lifeline. Research shows that having just one online space where LGBTQ+ youth feel safe and understood is linked to a 20% lower risk of suicide attempts and a 15% drop in recent anxiety.
Virtual bodies aren’t just avatars—they’re mirrors of inner truth. And for those navigating the margins of society’s acceptance, they can become windows into a more authentic future.
But here’s a deeper question: when does a safe space become a place to hide?
The Digital High
It starts with two old friends staying up all night playing the game they loved in their twenties—laughing, trash-talking, reliving the past. But what begins as nostalgia slowly shifts. The game becomes a secret habit, a nightly escape that feels more thrilling and alive than the routine of Danny’s real life.
Soon, he’s forgetting his anniversary and growing distant from his wife. Striking Vipers isn’t just about sex or fantasy; it’s about how addiction can sneak in under the cover of comfort, and how escaping reality too often can leave the real world behind.
Between 2% and 20% of frequent VR users display compulsive behaviors, with addiction risk linked to the immersive feeling of embodiment inside an avatar.
Our attention spans have dropped to just 45 seconds on average—and video games are a major driver. Many of the most addictive titles keep us hooked with competitive and social features (like Fortnite or League of Legends), immersive escapism (Skyrim, Stardew Valley), and personalized role-play (World of Warcraft, The Sims). These experiences trigger dopamine hits, making everyday life feel dull, chaotic, or unrewarding in contrast.
Video game addiction affects an estimated 3–4% of gamers worldwide, with higher rates among adolescents and young adults, especially males. Addicted gamers can spend up to 100 hours a week immersed in play, sacrificing relationships, hobbies, and responsibilities along the way.

In Striking Vipers, the title itself becomes a metaphor: just like a viper’s deadly strike, addiction can sneak up unexpectedly, striking again and again as players hunt for that elusive thrill.
I think a lot about how momentum works. Not just in training—although yeah, I do love a good triathlon—but in creativity too.
When I’m training for a triathlon, I’m not just running every day. I swim. I bike. And I run. Each discipline works different muscles, keeps things fresh, and somehow… they all support each other. Like, I come out of a bike ride with stronger legs for the run.
And that’s kinda how I’m approaching my creative life too.
I don’t just write. I don’t just draw. I don’t just make videos. I move between all of them—and doing that actually helps me stay motivated and inspired. If I’m stuck in one, I switch to another. If I’m tired of reading, I pull out my camera. If I can’t sit down to draw, I cut up footage and express my creativity in a whole new way.
So today, I’m sharing something I call the Creative Triathlon. It’s a set block of focused time for each of three creative practices: illustrating, video creation, and writing. One discipline at a time. No pressure. No multitasking. Just a way to make time for what you enjoy.
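And if you want to get literal about it, the whole format fits in a few lines of code. Here’s a playful sketch, assuming the 25-minute legs I used today (the block length is just my choice, not a rule):

```python
# A playful sketch of the Creative Triathlon: one focused, uninterrupted
# timer block per discipline. The 25-minute default is just today's choice.
import time

LEGS = ["illustration", "video creation", "writing"]

def creative_triathlon(minutes_per_leg: int = 25) -> None:
    for leg in LEGS:
        print(f"Leg starting: {leg} ({minutes_per_leg} minutes). No multitasking.")
        time.sleep(minutes_per_leg * 60)  # one uninterrupted block
        print(f"Leg done: {leg}. That's one small finish line.")
    print("Triathlon complete. Momentum banked.")

if __name__ == "__main__":
    creative_triathlon()
```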
First leg: illustration. For the past 4 years, I’ve been working on this massive personal project—drawing every single Pokémon. Yeah. All of them. It’s been slow-going, not because I don’t love it, but because finding the time is hard. Life piles up. Other projects take priority. And as strange as it sounds, drawing Pokémon doesn’t pay the bills.
But when I do this creative triathlon, it forces me to carve out time for it. Even just 25 minutes. And honestly? It’s kind of like swimming. At first, it takes a while to get ready. But once I start? I don’t want to stop. It’s peaceful. It’s focused. And there’s something really satisfying about seeing one more little creature take shape.
This leg always reminds me why I started this project in the first place: because I love it. Because it brings me back to that kid version of myself who used to draw these things in notebooks.
I’m almost at 1,000 Pokémon, and I really want to finish before they add more. If you’re interested in seeing the rest, check out the video in the cards and the Instagram link in the description.
Second leg: video creation. Right now, I’ve been making a series of YouTube Shorts where I highlight key takeaways from books I’ve read. It’s honestly become one of my favorite creative outlets.
What I love about it is that it’s a true mashup of all my pastimes—reading, thinking, writing, editing—it all comes together in these tiny videos. It makes everything I do feel active. Reading no longer feels like a passive intake of ideas. By turning it into a video, I get to spend more time with what I’m reading. I get to sit with the concepts, rephrase them, visualize them. And because of that, the lessons stick. They become part of me. A little snapshot of my life.
Video creation is great that way. It lets you experience your own thoughts in a completely new medium. You go from absorbing to articulating, from quiet reflection to something that moves and speaks. Seeing an idea come to life on screen—it just never gets old.
Final leg: writing. I’m currently editing the fifth draft of the first book in a trilogy I’ve been working on for years. And yeah—it’s a slog. A meaningful one, but a slog nonetheless.
It’s such a big project that most days, I’m just chipping away at it. I don’t always see progress. There’s no big “aha” moment, no flashy breakthrough. It’s slow, repetitive work. And honestly, it feels a lot like running. Not a sprint—a marathon. You get tired. You want to stop. But you don’t, because the work is worth it. The fatigue is part of the point. It’s what builds endurance. It’s what makes the story matter.
Working on something this big, this long—it becomes part of your life. It’s something you carry. And the beautiful thing about creativity is that it’s not like sports… there’s no finish line in the same way. It doesn’t end. But that’s why I love this Creative Triathlon practice—because it does give me small finish lines.
Instead of focusing on finishing the book, I just focus on finishing a session. That’s it. One 25-minute block. And when it’s done, I get this little burst of relief, a sense of accomplishment. Like I’ve closed a loop. It’s such a good feeling—being able to look back at my day and say, “I did something today.” No guilt. No disappointment.
So that’s my Creative Triathlon. Three disciplines, 25 minutes each today; some days I go longer, but even 25 minutes is enough to get a good chunk of work done. Know this, though: it’s not about finishing a masterpiece in an hour and a half—it’s about movement. It’s about momentum.
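If it helps to make the structure concrete, the whole practice fits in a few lines of Python. This is just a sketch of my own routine; the legs and durations are whatever you pick that day:

```python
# A bare-bones Creative Triathlon timer: three timeboxed legs,
# one discipline at a time, no multitasking.
import time

# My legs and durations for today; swap in your own.
LEGS = [("Illustration", 25), ("Video creation", 25), ("Writing", 25)]

def creative_triathlon(legs):
    for name, minutes in legs:
        print(f"Starting leg: {name} ({minutes} min). One thing only.")
        time.sleep(minutes * 60)  # focus until the timer runs out
        print(f"Leg complete: {name}. Small finish line crossed.")

creative_triathlon(LEGS)
```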
Just like in a real triathlon, each leg has its own rhythm. Some feel strong. Some feel slow. But they all carry me forward.
If you’re someone who loves multiple creative things—or if you’re feeling stuck—try this. Treat your creativity like a triathlon. Mix it up. Work different muscles. Let each practice breathe new life into the others.
Thanks for hanging out with me today. If you decide to try your own Creative Triathlon, let me know how it goes! And if you already have a different combo that works for you—maybe it’s music, painting, and cooking—drop it in the comments. I’d love to hear what you’re working on.
Before we talk about Black Museum, let’s flash back to when this episode was first released: December 29, 2017.
In 2017, the rise of dark tourism—traveling to sites tied to death, tragedy, or the macabre—became a notable cultural trend, with locations like Mexico’s Island of the Dolls and abandoned hospitals and prisons drawing attention. Chernobyl in particular saw a dramatic increase in tourists, with around 70,000 visitors in 2017, a sharp rise from just 15,000 in 2010. This influx of visitors contributed approximately $7 million to Ukraine’s economy.
Meanwhile, in 2017, the EV revolution was picking up speed. Tesla, once a trailblazer and now a company run by a power-hungry maniac, launched the more affordable Model 3.
2017 also marked a legal dispute between Hologram USA and Whitney Houston’s estate. The planned hologram tour, aimed at digitally resurrecting the iconic singer for live performances, led to legal battles over the hologram’s quality. Despite the challenges, the project was eventually revived, premiering as An Evening with Whitney: The Whitney Houston Hologram Tour in 2020.
At the same time, Chicago’s use of AI and surveillance technologies, specifically through the Strategic Subject List (SSL) predictive policing program, sparked widespread controversy. The program used historical crime data to predict violent crimes and identify high-risk individuals, but it raised significant concerns about racial bias and privacy.
And that brings us to this episode of Black Mirror, Episode 6 of Season 4: Black Museum. Inspired by Penn Jillette’s story The Pain Addict, which grew out of the magician’s own experience in a Spanish welfare hospital, the episode delves into a twisted reality where technology allows doctors to feel their patients’ pain.
Set in a disturbing museum, this episode confronts us with pressing questions: When does the pursuit of knowledge become an addiction to suffering? What happens when we blur the line between human dignity and the technological advancements meant to heal? And what price do we pay when we try to bring people back from the dead?
In this video, we’ll explore the themes of Black Museum and examine whether these events have happened in the real world—and if not, whether they’re plausible. Let’s go!
Pain for Pleasure
As Rolo Haynes guides Nish through the exhibits in the Black Museum, he begins with the story of Dr. Peter Dawson. Dawson, a physician, tested a neural implant designed to let him feel his patients’ pain, helping him understand their symptoms and provide a diagnosis. What started as a medical breakthrough quickly spiraled into an addiction.
Meanwhile, in the real world, scientists have been making their own leaps into the mysteries of the brain. In 2013, University of Washington researchers successfully connected the brains of two rats using implanted electrodes. One rat performed a task while its neural activity was recorded and transmitted to the second rat, influencing its behavior. Fast forward to 2019, when researchers linked three human brains using a brain-to-brain interface (BBI), allowing two participants to transmit instructions directly into a third person’s brain using magnetic stimulation—enabling them to collaborate on a video game without speaking.
Beyond mind control, neurotech has made it possible to simulate pain and pleasure without physical harm. Techniques like Transcranial Magnetic Stimulation (TMS) and Brain-Computer Interfaces (BCIs) let researchers manipulate neural activity for medical treatment.
AI is actively working to decode the complexities of the human brain. At Stanford, researchers have used fMRI data to identify distinct “pain signatures,” unique neural patterns that correlate with physical discomfort. This approach could provide a more objective measure of pain levels and potentially reduce reliance on self-reported symptoms, which can be subjective and inconsistent.
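To make the idea concrete, here is a toy sketch of pattern-based pain decoding in Python. The data is synthetic and the model is a plain logistic regression, not the Stanford pipeline; it only shows how a classifier can pick out a “signature” spread across voxels:

```python
# Toy "pain signature" demo: learn a linear pattern over voxel
# activations that separates painful from non-painful trials.
# Synthetic data stands in for real fMRI scans.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

pain = rng.integers(0, 2, n_trials)        # 1 = painful stimulus
X = rng.normal(size=(n_trials, n_voxels))  # baseline voxel noise
X[:, :5] += pain[:, None] * 1.5            # 5 voxels respond to pain

clf = LogisticRegression().fit(X, pain)
top = np.argsort(-np.abs(clf.coef_.ravel()))[:5]
print("Most pain-predictive voxels:", top)  # should recover voxels 0-4
```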
Much like Dr. Dawson’s neural implant aimed to bridge the gap between doctor and patient, modern AI researchers are developing ways to interpret and even visualize human thought.
Of course, with all this innovation comes a darker side.
In 2022, Neuralink, Elon Musk’s brain-implant company, came under federal investigation for potential violations of the Animal Welfare Act. Internal documents and employee interviews suggested that Musk’s demand for rapid progress led to botched experiments. As a result, many tests had to be repeated, increasing the number of animal deaths. Since 2018, an estimated 1,500 animals have been killed, including more than 280 sheep, pigs, and monkeys.
When Dr. Dawson pushed the limits and ended up experiencing a patient’s death, his neural implant was rewired in the process, blurring the line between pain and pleasure.
At present, there’s no known way to directly simulate physical death in the sense of replicating the actual biological process of dying without causing real harm.
However, Shaun Gladwell, an Australian artist known for his innovative use of technology in art, has created a virtual reality death simulation. It is on display at the Melbourne Now event in Australia. The experience immerses users in the dying process—from cardiac failure to brain death—offering a glimpse into their final moments. By simulating death in a controlled virtual environment, the project aims to help participants confront their fears of the afterlife and better understand the emotional aspects of mortality.
This episode of Black Mirror reminds us that the quest for understanding the mind might offer enlightenment, but it also carries the risk of unraveling the very fabric of what makes us human.
In the end, the future may not lie in simply experiencing death, but in learning to live with the knowledge that we are always on the cusp of the unknown.
Backseat Driver
In the second part of Black Museum, Rolo recounts his involvement in a controversial experiment. After an accident, Rolo helped Jack transfer his comatose wife Carrie’s consciousness into his brain. This let Carrie feel what Jack felt and communicate with him. In essence, this kept Carrie alive. However, the arrangement caused strain—Jack struggled with the lack of privacy, while Carrie grew frustrated by her lack of control—ultimately putting the saying “’til death do you part” to the test.
The concept of embedding human consciousness into another medium remains the realm of fiction, but neurotechnology is inching closer to mind-machine integration.
In 2016, Ian Burkhart, a 24-year-old quadriplegic patient, made history using the NeuroLife system. A microelectrode chip implanted in Burkhart’s brain allowed him to regain movement through sheer thought. Machine-learning algorithms decoded his brain signals, bypassing his injured spinal cord and transmitting commands to a specialized sleeve on his forearm—stimulating his muscles to control his arm, hand, and fingers. This allowed him to grasp objects and even play Guitar Hero.
Another leap in brain-tech comes from Synchron’s Stentrode, a device that bypasses traditional brain surgery by implanting through blood vessels. In 2021, Philip O’Keefe, living with ALS, became the first person to compose a tweet using only his mind. The message? A simple yet groundbreaking “Hello, World.”
Imagine being able to say what’s on your mind—without saying a word. That’s exactly what Blink-To-Live makes possible. Designed for people with speech impairments, Blink-To-Live tracks eye movements via a phone camera to communicate over 60 commands using four gestures: Left, Right, Up, and Blink. The system translates these gestures into sentences displayed on the screen and read aloud.
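The arithmetic behind “over 60 commands” from just four gestures is worth pausing on: fixed sequences of three gestures already yield 4³ = 64 combinations. Here’s a hypothetical sketch of that encoding in Python; the three-gesture sequences and the phrase list are my assumptions, not the actual Blink-To-Live design:

```python
# Hypothetical gesture codebook: 3-gesture sequences from 4 gestures
# give 4**3 = 64 distinct codes, enough for 60+ commands.
from itertools import product

GESTURES = ["Left", "Right", "Up", "Blink"]
PHRASES = ["I am thirsty", "I am in pain", "Call the nurse", "Yes", "No"]

codebook = {}
for i, seq in enumerate(product(GESTURES, repeat=3)):
    codebook[seq] = PHRASES[i] if i < len(PHRASES) else f"Command #{i}"

def translate(gestures):
    """Map a recognized 3-gesture sequence to its sentence."""
    return codebook.get(tuple(gestures), "Unrecognized sequence")

print(len(codebook))                       # 64
print(translate(["Left", "Up", "Blink"]))  # a mapped phrase
```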
Technology is constantly evolving to give people with impairments the tools to live more independently, but relying on it too much can sometimes mean sacrificing privacy, autonomy, or even a sense of human connection.
When Jack met Emily, he was relieved to experience a sense of normalcy again. She was understanding at first, but everything changed when she learned about Carrie—the backseat driver living in Jack’s mind. Emily’s patience wore thin, and she insisted that Carrie be removed. Eventually, Rolo helped Jack find a solution by transferring Carrie’s consciousness into a toy monkey.
Initially, Jack’s son loved the monkey. But over time, the novelty faded. The monkey wasn’t really Carrie. She couldn’t hold real conversations anymore. She couldn’t express her thoughts beyond two canned phrases: “Monkey loves you” and “Monkey needs a hug.” And so, like many toys, she was left forgotten.
This raises an intriguing question: Could consciousness, like Carrie’s, ever be transferred and preserved in an inanimate object?
Dr. Ariel Zeleznikow-Johnston, a neuroscientist at Monash University, has an interesting theory. He believes that if we can fully map the human connectome—the complex network of neural connections—we might one day be able to preserve and even revive consciousness. His book, The Future Loves You, explores whether personal identity could be stored digitally, effectively challenging death itself. While current techniques can preserve brain tissue, the actual resurrection of consciousness remains speculative.
This means that if you want to transfer a loved one’s consciousness into a toy monkey’s body, you’ll have to wait. But legal systems are already grappling with these possibilities.
In 2017, the European Parliament debated granting “electronic personhood” to advanced AI, a move that could set a precedent for digital consciousness. Would an uploaded mind have rights? Could it be imprisoned? Deleted? As AI-driven personalities become more lifelike—whether in chatbots, digital clones, or neural interfaces—the debate over their status in society is only just beginning.
At this point, Carrie’s story is purely fictional. But if the line between human, machine, and cute little toy monkeys blurs further, we may need to redefine what it truly means to be alive.
Not Dead but Hardly Alive
In the third and final tale of Black Museum, Rolo Haynes transforms human suffering into a literal sideshow. His latest exhibit? A holographic re-creation of a convicted murderer, trapped in an endless loop of execution for paying visitors to experience.
What starts as a morbid fascination quickly reveals the depths of Rolo’s cruelty—using digital resurrection not for justice, but for profit.
The concept of resurrecting the dead in digital form is not so far-fetched. In 2020, the company StoryFile introduced interactive holograms of deceased individuals, allowing loved ones to engage with digital avatars capable of responding to pre-recorded questions. This technology has been used to preserve the voices of Holocaust survivors, enabling them to share their stories for future generations.
But here’s the question: Who controls a person’s digital afterlife? And where do we draw the line between honoring the dead and commodifying them?
Hollywood has already ventured into the business of resurrecting the dead. After Carrie Fisher’s passing, Star Wars: The Rise of Skywalker repurposed unused footage and CGI to keep Princess Leia in the story.
The show must go on, and many fans preferred not to see Carrie Fisher recast. But should production companies have control over an actor’s likeness after they’ve passed?
Robin Williams took preemptive legal action, restricting the use of his likeness for 25 years after his death. The line between tribute and exploitation has become increasingly thin. If a deceased person’s digital avatar can act, speak, or even endorse products, who decides what they would have wanted?
In the realm of intimacy, AI-driven experiences are reshaping relationships. Take Cybrothel, a Berlin brothel that markets AI-powered sex dolls capable of learning and adapting to user preferences. As AI entities simulate emotions, personalities, and desires, and as people form deep attachments to digital partners, our understanding of relationships and consent will shift significantly.
Humans often become slaves to their fetishes, driven by impulses that can lead them to make choices that harm both themselves and others. But what if the others are digital beings?
If digital consciousness can feel pain, can it also demand justice? If so, then Nish’s father wasn’t just a relic on display—he was trapped, suffering, a mind imprisoned in endless agony for the amusement of strangers. She couldn’t let it stand. Playing along until the perfect moment, she turned Rolo’s own twisted technology against him. In freeing her father’s hologram, she made sure Rolo’s cruelty ended with him.
The idea of AI having rights may sound like a distant concern, but real-world controversies suggest otherwise.
In 2021, the documentary Roadrunner used AI to replicate Anthony Bourdain’s voice for quotes he never spoke aloud. Similarly, in 2020, Kanye West gifted Kim Kardashian a hologram of her late father Robert Kardashian. These two notable events sparked backlash over putting words into a deceased person’s mouth.
While society has largely moved beyond public executions, technology is creating new avenues to fulfill human fantasies. AI, deepfake simulations, and VR experiences could bring execution-themed entertainment back in a digital form, forcing us to reconsider the ethics of virtual suffering.
As resurrected personalities and simulated consciousness become more advanced, we will inevitably face the question: Should these digital beings be treated with dignity? If a hologram can beg for mercy, if an AI can express fear, do we have a responsibility to listen?
While the events of Black Museum have not happened yet and may still be a long way off, the first steps toward that reality are already being taken. Advances in AI, neural mapping, and digital consciousness hint at a future where identities can be preserved, replicated, or even exploited beyond death.
Perhaps that’s the real warning of Black Museum: even when the human body perishes, reducing the mind to data does not make it free. And if we are not careful, the future may remember us not for our progress, but for the prisons we built—displayed like artifacts in a museum.
Before we talk about the events in Metalhead, let’s flash back to when this episode was first released: December 29, 2017.
In 2017, Boston Dynamics founder Marc Raibert took the TED conference stage to discuss the future of his groundbreaking robots. His presentation sparked a mix of awe and unease.
Boston Dynamics has a long history of viral videos showcasing its cutting-edge robots, many of which were mentioned during the talk:
BigDog is a four-legged robot developed by Boston Dynamics with funding from DARPA. Its primary purpose is to transport heavy loads over rugged terrain.
Then there’s Petman, a human-like robot built to test chemical protection suits under real-world conditions.
Atlas, a 6-foot-tall bipedal robot, is designed to assist in search-and-rescue missions.
Handle is a robot on wheels. It can travel at 9 mph, leap 4 feet vertically, and cover about 15 miles on a single battery charge.
And then there was SpotMini, a smaller, quadrupedal robot with a striking blend of technical prowess and charm. During the talk, SpotMini played to the audience’s emotions, putting on a show of cuteness.
Simultaneously, post-apocalyptic themes gained traction in 2017 pop culture. From the success of The Walking Dead to Blade Runner 2049’s exploration of dystopian landscapes, pre-COVID audiences seemed enthralled by stories of survival in hostile worlds, as though mentally preparing for the worst to come.
And that brings us to this episode of Black Mirror, Episode 5 of Season 4: Metalhead.
Set in a bleak landscape, Metalhead follows Bella, a survivor on the run from relentless robotic “dogs” after a scavenging mission goes awry.
This episode taps into a long-standing fear humanity has faced since it first began experimenting with the “dark magic” of machinery. Isaac Asimov’s Three Laws of Robotics were designed to ensure robots would serve and protect humans without causing harm. The laws state that a robot must not harm a human, must obey orders unless they conflict with the first law, and must protect itself unless doing so conflicts with the first two laws.
In Metalhead, however, these laws are either absent or overridden. This lack of ethical safeguards mirrors the real-world fears of unchecked AI and its potential to harm, especially in situations driven by survival instincts.
So, we’re left to ask: At what point does innovation cross the line into an existential threat? Could machines, once designed to serve us, evolve into agents of our destruction? And, most importantly, as we advance technology, are we truly prepared for the societal consequences that come with it?
In this video, we’ll explore three key themes from Metalhead and examine whether similar events have already unfolded—and if not, whether they’re still plausible. Let’s go!
Killer Instincts
Metalhead plunges us into a barren wasteland where survival hinges on outsmarting a robotic “dog.” Armed with advanced tracking, razor-sharp senses, and zero chill, this nightmare locks onto Bella after her supply mission takes a hard left into disaster.
The robot dog’s tracking systems are similar to current military technologies. Autonomous drones and ground robots use GPS-based trackers and infrared imaging to locate targets. Devices like Lockheed Martin’s Stalker XE drones combine GPS, thermal imaging, and AI algorithms to pinpoint enemy movements even in dense environments or under cover of darkness.
With AI-driven scanning systems that put human eyesight to shame, such a machine can spot a needle in a haystack—and probably tell you the needle’s temperature, too. Think FLIR thermal imaging cameras, which let you see heat signatures through walls or dense foliage, or Boston Dynamics’ Spot using Light Detection and Ranging (aka Lidar) and pattern recognition to map the world with precision.
Lidar works by sending out laser pulses and measuring the time it takes for them to bounce back after hitting an object. These pulses generate a detailed 3D map of the environment, capturing even the smallest features, from tree branches to building structures.
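The core calculation is simple enough to sketch: distance is the speed of light multiplied by half the round-trip time. A quick back-of-the-envelope version in Python:

```python
# Lidar ranging: a pulse's round-trip time pins down the distance.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Target distance in meters for a measured pulse round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds hit something ~30 m away.
print(f"{lidar_distance(200e-9):.1f} m")  # 30.0 m
```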
One of the most unsettling aspects of the robot in Metalhead is its superior auditory abilities. In the real world, acoustic surveillance technology, such as ShotSpotter, uses microphones and AI to detect and triangulate gunfire in urban areas. While it sounds impressive, its effectiveness is debated, with critics pointing to false positives and uneven results, concerns echoed by a University of Michigan study.
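The locating trick rests on a simple idea: sound reaches nearer microphones sooner, so arrival-time gaps constrain where the shot came from. Real systems solve this in two or three dimensions with many sensors; the one-dimensional toy below is my simplification, not ShotSpotter’s algorithm:

```python
# Time-difference-of-arrival (TDOA) in one dimension: two mics on a
# line, a source between them; the arrival gap fixes its position.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def locate_1d(mic_gap_m: float, delta_t_s: float) -> float:
    """Distance of the source from mic A, given mics A and B.

    delta_t_s = (arrival at B) - (arrival at A); positive means
    the source sits closer to A.
    """
    # d_A + d_B = mic_gap and d_B - d_A = v * delta_t, so:
    return (mic_gap_m - SPEED_OF_SOUND * delta_t_s) / 2

# Sound hits mic B half a second after mic A on a 200 m baseline:
print(f"{locate_1d(200, 0.5):.1f} m from mic A")  # 14.2 m
```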
Still, technology is quickly advancing in recognizing human sounds, and some innovations are already in consumer products. Voice assistants like Alexa and Siri can accurately respond to vocal commands, while apps like SoundHound can identify music and spoken words in noisy environments. While these technologies offer convenience, they also raise concerns about how much machines are truly able to “hear.”
This is especially true when advanced sensors—whether auditory, visual, or thermal—serve a darker purpose, turning their sensory prowess into a weapon.
Take robotics companies like Ghost Robotics, which have developed machines equipped with sniper rifles, dubbed Special Purpose Unmanned Rifles (SPURs). These machines, designed for military applications, are capable of autonomously identifying and engaging targets—raising profound ethical concerns about the increasing role of AI in life-and-death decisions.
Built for Speed
In this episode, the robot’s movement—fast, deliberate, and capable of navigating uneven terrain—resembles Spot from Boston Dynamics.
Spot can sprint at a brisk 5.2 feet per second, which translates to about 3.5 miles per hour. While that’s fairly quick for a robot navigating complex terrain, it’s still slower than the average human running speed. The typical human can run around 8 to 12 miles per hour, depending on fitness level and sprinting ability.
So while Spot may not outpace a sprinter, DARPA’s Cheetah robot can — at least on the treadmill. Nearly a decade ago, a video was released of this robot running 28.3 miles per hour on a treadmill, leaving even Usain Bolt in the dust.
But while the treadmill is impressive, the current record holder for the fastest land robot is Cassie—and she’s got legs for it! Developed by Oregon State University’s Dynamic Robotics Lab, Cassie sprinted her way into the record books in 2022, running 100 m in 24.73 seconds.
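For what it’s worth, the unit conversions behind those numbers check out. A quick sanity check in Python, using the figures quoted above:

```python
# Converting the quoted robot speeds into miles per hour.
def fps_to_mph(feet_per_second: float) -> float:
    return feet_per_second * 3600 / 5280  # ft/s -> mph

def mps_to_mph(meters_per_second: float) -> float:
    return meters_per_second * 3600 / 1609.344  # m/s -> mph

print(f"Spot:   {fps_to_mph(5.2):.1f} mph")          # ~3.5 mph
print(f"Cassie: {mps_to_mph(100 / 24.73):.1f} mph")  # ~9.0 mph
```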
While today’s robots may not yet match the speed, adaptability, and relentless pursuit seen in the episode, the rapid strides in robotics and AI are quickly closing the gap. Like the tortoise slowly gaining ground on the overconfident hare, these technological advances, though not yet flawless, are steadily creeping toward a reality where they might outrun us in ways we hadn’t anticipated.
Charged to Kill
At a pivotal point in the story, Bella’s survival hinges on exploiting the robot’s energy source. By forcing it to repeatedly power on and off, she aims to drain its battery. Advanced machines, reliant on sensors, processors, and actuators, burn through significant energy during startup.
Today’s robots, like Spot or advanced military drones, run on rechargeable lithium-ion batteries. While these batteries offer excellent energy density, their runtime is finite—high-demand tasks like heavy movement or AI processing can drain them in as little as 90 minutes.
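A toy model shows why the tactic is sound. The numbers below are illustrative, not real Spot specs, but the structure holds: every forced reboot adds a fixed energy cost on top of the steady draw, so runtime collapses as the reboot rate climbs:

```python
# Illustrative battery-drain model: startup surges stack on idle load.
BATTERY_WH = 500.0  # hypothetical pack capacity, watt-hours
IDLE_W = 100.0      # hypothetical steady draw, watts
STARTUP_WH = 5.0    # hypothetical energy burned per boot sequence

def hours_until_empty(reboots_per_hour: float) -> float:
    drain_per_hour = IDLE_W + STARTUP_WH * reboots_per_hour  # Wh/h
    return BATTERY_WH / drain_per_hour

print(f"No reboots:      {hours_until_empty(0):.1f} h")   # 5.0 h
print(f"20 reboots/hour: {hours_until_empty(20):.1f} h")  # 2.5 h
```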
However, the latest battery innovations are redefining what’s possible, and the automotive industry is leading the charge. Solid-state batteries, for example, offer greater capacity, faster charging, and longer lifespans than traditional lithium-ion ones. Companies like Volkswagen and Toyota have invested heavily in this technology, hoping it will revolutionize the EV market.
Self-recharging technologies, like Kinetic Energy Recovery Systems (KERS), are moving from labs to consumer products. KERS, used in Formula 1 cars, captures and stores kinetic energy from braking to power systems and reduce fuel consumption. It’s now being explored for use in consumer and electric vehicles.
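The physics here is straightforward: the energy available during braking is the vehicle’s kinetic energy, E = ½mv², scaled by whatever fraction the system captures. A rough estimate in Python, with the mass, speed, and efficiency figures assumed purely for illustration:

```python
# Rough KERS estimate: recoverable energy from one braking event.
def recoverable_wh(mass_kg: float, speed_mps: float, efficiency: float) -> float:
    joules = 0.5 * mass_kg * speed_mps**2 * efficiency  # E = 1/2 m v^2
    return joules / 3600  # joules -> watt-hours

# A 1,500 kg EV braking from 100 km/h (~27.8 m/s), capturing 60%:
print(f"{recoverable_wh(1500, 27.8, 0.6):.0f} Wh")  # ~97 Wh per stop
```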
Battery innovation is challenging for several reasons: improving energy density often compromises safety, and developing new batteries requires expensive materials and complex manufacturing processes.
Modern robots are pretty good at managing their power, but even the smartest machines can’t escape the inevitable—batteries that drain under intense demands. While energy storage and self-recharging tech like solar or kinetic systems may help, robots will always face the dreaded low-battery warning. After all, as much as we’d love to plug them into an infinite, self-sustaining energy source, the laws of physics will always say, “Nice try!”
Information Flow
When Bella throws paint to blind the robot’s sensors and uses sound to mislead it, her plan works—briefly. But the robot quickly adapts, recalibrating its AI to interpret new environmental data and adjust its strategy. Similarly, when Bella shoots the robot, it doesn’t just take the hit—it learns, retaliating with explosive “track bullets” that embed tracking devices in her body. This intelligent flexibility ensures that, even when temporarily disabled, the robot can still alter its approach and continue pursuing its objective.
In real life, robots with such capabilities are not far-fetched. Modern drone swarms, such as those tested by DARPA, can coordinate multiple drones for collective objectives. In some instances, individual drones are programmed to act as decoys or to deliberately draw enemy fire, allowing the remaining drones in the swarm to carry out their mission.
In October 2016 at China Lake, California, 103 Perdix drones were launched from three F/A-18 Super Hornets. During this test, the micro-drones exhibited advanced swarm behaviors, including collective decision-making, adaptive formation flying, and self-healing.
While the events in Metalhead are extreme, they are not entirely outside the realm of possibility. Modern robotics, AI, and machine learning are progressing at a staggering rate, making the robot’s ability to adapt, learn, and pursue its objective all too real.
The advancements in sensors, energy storage, and autonomous decision-making systems could one day allow machines to operate with the same precision seen in the episode.
So, while we may not yet face such an immediate threat, the seeds are sown. A future dominated by robots is not a matter of “if,” but “when.” As we step into this new frontier, we must proceed with caution, for once unleashed, these creations could be as relentless as any natural disaster—except that nothing about this will be natural.
Bryan Cranston—you know, the guy who gave us Walter White—once shared some advice for actors going into an audition. But this advice felt bigger than just acting. He said: “Know what your job is.”
At first, it sounds simple, almost too simple. But then he elaborates:
“I was going to auditions to try and get a job. That is not what you are supposed to do. An actor is supposed to create a compelling, interesting character that serves the text, you present it in the environment where you audition. And then you walk away. And that’s it. Everything else is outside your control, so don’t even think about it, don’t focus on that. You’re not going there to get a job. You’re going there to present what you do. You act. And there it is. And you walk away and there’s power in that.”
Cranston’s not saying “Don’t care about the outcome.” He’s saying, “Care about what you can control.” For him, it’s about crafting a character that serves the story. For us—whether we’re writers, marketers, or creators—it’s about honing our craft and delivering it with intention.
I remember a time when I was deep in job applications, obsessing over every detail, trying to predict what each company wanted. The constant second-guessing, the tweaking of synonyms and punctuation—it was exhausting. Eventually I realized my job wasn’t to convince them I was perfect. My job was to show up and be myself—to present what I do best.
The same rule applies when you’re already on the job. Showing up every day isn’t just about ticking off tasks or meeting deadlines. It’s about knowing what’s at the heart of your work. If you’re a writer, your job isn’t just to write—it’s to tell a story that connects. If you’re a marketer, it’s not just about ad campaigns—it’s about creating something that leads to action.
But here’s the thing—the pressure to get it “right” can mess with your head. You want the recognition, the results, the wins. That’s why Cranston’s advice feels so important. He’s saying: let go of what you can’t control. You can’t control how people respond to your performance, your draft, or your campaign. But you can control the effort and care you put into it.
So, how do you do this in real life? First, focus on the process. Instead of asking, “Will they like it?”
Ask: “Am I proud of this? Does it serve the purpose?”
Second, detach from the outcome. Present it and move on.
And third, redefine success. It’s not just about landing the job or nailing the project—it’s about the growth that comes from the work itself.
Rejections sting. Constructive criticism can break you down. And negative comments are hard to ignore. But when you focus on what’s within your control, you start to find a different kind of power. You’re less tied to the highs and lows, and more grounded in the work you’re doing every day.
So, whether you’re pitching ideas, crafting stories, or designing campaigns, take Cranston’s advice: “Know what your job is.” Show up. Do the work. Let go of the rest. There’s freedom in that.