Joan Is Awful: Black Mirror, Can It Happen?

Before we dive into the events of Joan Is Awful, let’s flash back to when this episode first aired: June 15, 2023.

In 2023, the tech industry faced a wave of major layoffs. Meta cut 10,000 employees and closed 5,000 open positions in March. Amazon followed, letting go of 9,000 workers that same month. Microsoft reduced its workforce by 10,000 employees in early 2023, while Google announced its own significant layoffs, contributing to a broader trend of instability in even the largest, most influential tech companies.

Netflix released Depp v. Heard in 2023. This three-part documentary captures the defamation trial between Johnny Depp and Amber Heard. The series explored the viral spectacle that surrounded it online, showing how social media, memes, and influencer commentary amplified every moment. 

Meanwhile, incidents of deepfakes surged dramatically. In North America alone, AI-generated videos and audio clips increased tenfold in 2023 compared to the previous year, with a 1,740% spike in malicious use.

In early 2023, a video began circulating on YouTube and across social media that seemed to show Elon Musk in a CNBC interview. The Tesla CEO appeared calm and confident as he promoted a new cryptocurrency opportunity. It looked authentic enough to fool thousands. But the entire thing was fake.

That same year, the legal system began to catch up. An Australian man named Anthony Rotondo was charged with creating and distributing non-consensual deepfake images on a now-defunct website called Mr. Deepfakes. In 2025, he admitted to the offense and was fined $343,500.

Around the world, banks and cybersecurity experts raised alarms as AI manipulation began to breach biometric systems, leading to a new wave of financial fraud. What started as a novelty filter had become a weapon capable of stealing faces, voices, and identities.

All of this brings us to Black Mirror—Season 6, Episode 1: Joan Is Awful.

The episode explores the collision of personal privacy, corporate control, and digital replication. Joan’s life is copied, manipulated, and broadcast for entertainment before she even has a chance to tell her own story. The episode asks: How much of your identity is still yours when technology can exploit and monetize it? And is it even possible to reclaim control once the algorithm has taken over?

In this video, we’ll unpack the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether they are still plausible in our tech-driven, AI-permeated world. 

Streaming Our Shame

In Joan Is Awful, we follow Joan, an everyday woman whose life unravels after a streaming platform launches a show that dramatizes her every move. But the show’s algorithm doesn’t just imitate Joan’s life; it distorts it for entertainment. Her friends and coworkers watch the exaggerated version of her and start believing it’s real.

The idea that media can reshape someone’s identity isn’t new—it’s been happening for years, only now with AI, it happens faster, cheaper, and more convincingly.

Reality television has long operated in this blurred zone between truth and manipulation. Contestants on shows like The Bachelor and Survivor have accused producers of using editing tricks to create villains and scandals that never actually happened.

One of the most striking examples comes from The Bachelor contestant Victoria Larson, who accused producers of “Frankenbiting,” a technique that splices together pieces of dialogue from different moments to make it appear she was spreading rumors or being manipulative. She said the selective editing destroyed her reputation and derailed her career.

Then there’s the speed of public judgment in the age of social media. In 2020, when Amy Cooper—later dubbed “Central Park Karen”—called the police on a Black bird-watcher, the footage went viral within hours. She was fired, denounced, and doxxed almost overnight.

But Joan Is Awful also goes deeper, showing how even our most intimate spaces are no longer private.

In 2020, hackers breached Vastaamo, a Finnish psychotherapy service, stealing tens of thousands of patient records—including therapy notes—and blackmailing both the company and individual patients. Finnish authorities eventually caught the hacker, who was sentenced in 2024 for blackmail and unauthorized data breaches.

In this episode, Streamberry’s AI show thrives on a simple principle: outrage. It turns Joan’s humiliation into the audience’s entertainment. The more uncomfortable she becomes, the more viewers tune in. It’s not far from reality.

A 2025 study published in ProMarket found that toxic content drives higher engagement on social media platforms. When users were shielded from negative or hostile posts, they spent 9% less time per day on Facebook, resulting in fewer ads and interactions.

By 2025, over 52% of TikTok videos featured some form of AI generation—synthetic voices, avatars, or deepfake filters. These “AI slop” clips fill feeds with distorted versions of real people, transforming private lives into shareable, monetized outrage.

Joan Is Awful magnifies a reality we already live in. Our online world thrives on manipulation—of emotion, of data, of identity—and we’ve signed the release form without even noticing.

Agreeing Away Your Identity

One of the episode’s most painful scenes comes when Joan meets with her lawyer, asking if there’s any legal way to stop the company from using her life as entertainment. But the lawyer points to the fine print—pages of complex legal language Joan had accepted without a second thought. 

The moment is both absurd and shockingly real. How many times have you clicked “I agree” without reading a word?

In the real world, most of us do exactly what Joan did. A 2017 Deloitte survey conducted in the U.S. found that over 90% of users accept terms and conditions without reading them. Platforms can then use that data for marketing, AI training, or even creative content—all perfectly legal because we “consented.”

The dangers of hidden clauses extend far beyond digital services. In 2024, Disney attempted to invoke a controversial contract clause to avoid liability for a tragic allergic reaction that had led to a woman’s death at a Disney World restaurant in Florida the year before. The company argued that her husband couldn’t sue for wrongful death because—years earlier—he had agreed to arbitration and legal waivers buried in the fine print of a free Disney+ trial.

Critics called the move outrageous, pointing out that Disney was trying to apply streaming service terms to a completely unrelated event. The case exposed how corporations can weaponize routine user agreements to sidestep accountability.

The episode also echoes recent events where real people’s stories have been taken and repackaged for profit.

Take Elizabeth Holmes, the disgraced founder of Theranos. Within months of her trial, her life was dramatized into The Dropout. The Hulu mini-series was produced in real time alongside Holmes’s ongoing trial. As new courtroom revelations surfaced, the writers revised the script. The result was a more layered, unsettling portrayal of Holmes and her business partner Sunny Balwani—a relationship far more complex and toxic than anyone initially imagined.

In Joan Is Awful, the show’s AI doesn’t care about Joan’s truth, and in our world, algorithms aren’t so different. Every click, every “I agree,” and every trending headline feeds an ecosystem that rewards speed over accuracy and spectacle over empathy.

When consent becomes a checkbox and stories become assets, the line between living your life and licensing it starts to blur. And by the time we realize what we’ve signed away, it might already be too late.

Facing the Deepfake

In Joan Is Awful, the twist isn’t just that Joan’s life is being dramatized; it’s that everyone’s life is. What begins as a surreal violation spirals into an infinite mirror. Salma Hayek plays Joan in the Streamberry series, but then Cate Blanchett plays Salma Hayek in the next layer. 

The rise of AI and deepfake technology is reshaping how we understand identity and consent. Increasingly, people are discovering their faces, voices, or likenesses used in ads, films, or explicit content without permission.

In 2025, Brazilian police arrested four people for using deepfakes of celebrity Gisele Bündchen and others in fraudulent Instagram ads, scamming victims out of nearly $3.9 million USD. 

Governments worldwide are beginning to respond. Denmark’s copyright amendment now treats personal likeness as intellectual property, allowing takedown requests and platform fines even posthumously. In the U.S., the 2025 TAKE IT DOWN Act criminalizes non-consensual AI-generated sexual imagery and impersonation.

In May 2025, Mr. Deepfakes, one of the world’s largest deepfake pornography websites, permanently shut down after a core service provider terminated operations. The platform had been online since 2018 and hosted more than 43,000 AI-generated sexual videos, viewed over 1.5 billion times. Roughly 95% of targets were celebrity women, but researchers identified hundreds of victims who were private individuals.

Despite these legal advances, a fundamental gray area remains. As AI becomes increasingly sophisticated, it is getting harder to tell whether content is drawn from a real person or entirely fabricated. 

An example is Tilly Norwood, an AI-generated actress created by Xicoia. In September 2025, Norwood’s signing with a talent agency sparked major controversy in Hollywood. 

Her lifelike digital persona was built using the performances of real actors—without their consent. The event marked a troubling shift, as producers continue to push AI-generated actors into mainstream projects.

Actress Whoopi Goldberg voiced her concern, saying, “The problem with this, in my humble opinion, is that you’re up against something that’s been generated with 5,000 other actors.”

“It’s a little bit of an unfair advantage,” she added. “But you know what? Bring it on. Because you can always tell them from us.”

In response to the backlash, Tilly’s creator Eline Van der Velden shared a statement:
“To those who have expressed anger over the creation of our AI character, Tilly Norwood: she is not a replacement for a human being, but a creative work – a piece of art.”

When Joan and Salma Hayek sneak into the Streamberry headquarters, they overhear Mona Javadi, the executive behind the series, explaining the operation. She reveals that every version of Joan Is Awful is generated simultaneously by a quantum computer, endlessly creating new versions of real people’s lives for entertainment. Each “Joan,” “Salma,” and “Cate” is a copy of a copy—an infinite simulation. And it’s not just Joan; the system runs on an entire catalog of ordinary people. Suddenly, the scale of this entertainment becomes clear—it’s not just wide, it’s deep, with endless iterations and consequences.

At the 2025 Runway AI Film Festival, the winning film Total Pixel Space exemplified how filmmakers are beginning to embrace these multiverse-like AI frameworks. Rather than following a single script, the AI engine dynamically generated visual and narrative elements across multiple variations of the same storyline, creating different viewer experiences each time.

AI and deepfake technologies are already capable of realistically replicating faces, voices, and mannerisms, and platforms collect vast amounts of personal data from our everyday lives. Add quantum computing, algorithmic storytelling, and the legal gray areas surrounding consent and likeness, and the episode’s vision of lives being rewritten for entertainment starts to feel less like fantasy.

Every post, every photo, every digital footprint feeds algorithms that could one day rewrite our lives—or maybe already are. Maybe we can slip the loop, maybe we’re already in it, and maybe the trick is simply staying aware that everything we do is already being watched, whether by the eyes of the audience or the eyes of creators still seeking inspiration.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Rachel, Jack and Ashley Too: Black Mirror, Can It Happen?

Before we dive into the events of Rachel, Jack, and Ashley Too, let’s flash back to when this episode first aired: June 5, 2019.

At CES 2019, a diverse range of innovative robots captured attention, from practical home assistants like Foldimate, a laundry-folding robot, to advanced companions such as Ubtech’s Walker and the emotionally expressive Lovot. Together, these robots laid the groundwork for future developments in consumer robotics.

When Charli D’Amelio joined TikTok in May 2019, she was just another teenager posting dance clips. But within weeks, her lip-sync and choreography videos were going viral. By July, her duets were spreading across the platform, and by the close of 2019, she had transformed from an unknown high schooler into a digital sensation with millions of followers.

On February 2, 2019, Fortnite hosted Marshmello’s virtual concert at the in-game location Pleasant Park. The event drew over 10.7 million concurrent players, breaking the game’s previous records. 

In 2019, Taylor Swift’s public fight with Big Machine Records over the ownership of her master recordings exposed deep systemic issues, as Swift’s masters were sold without her consent, preventing her from controlling the use of her own music. In response, she began re-recording her early albums under the Taylor’s Version banner, starting with Fearless (Taylor’s Version) in 2021.

In January 2019, Britney Spears abruptly canceled a highly anticipated show in Las Vegas. In April, Spears entered a mental health facility, sparking public concern and amplifying the #FreeBritney movement amid allegations of emotional abuse linked to her conservatorship. 

All of which brings us back to this episode of Black Mirror—Season 5, Episode 3: Rachel, Jack, and Ashley Too. 

The episode dives into the mechanics of digital fame—where algorithms hold the power, artists blur into avatars, and identity bends under the weight of technology. It asks: What happens when the spotlight is no longer earned but assigned? When music is stripped down and musicians reduced to assets? And, in the end, can we lose ourselves to the very machine that makes us visible?

In this video, we’ll explore the episode’s themes and investigate whether these events have already happened—and if not, whether they are still plausible. Let’s go.

Connection by Algorithm

In this episode, we follow Rachel, a teenager struggling with the loss of her mother and looking for connection. In her search for belonging, Rachel grows attached to Ashley Too—a talking doll modeled after pop star Ashley O. She clings to it as both a friend and a channel to her idol.

AI companion apps have exploded in 2025, with more than 220 million downloads and $120 million in revenue projected for the year. Popular platforms now include Character.AI, Replika, Chai, and Kindroid, all offering lifelike interactions.

Beyond companionship, AI can now help detect depression by analyzing data like daily activity patterns recorded by wearable devices.

A 2025 study in JMIR Mental Health found that a machine-learning model built with XGBoost, a gradient-boosting algorithm, could correctly identify whether someone was depressed about 85% of the time by looking at changes in sleep and activity rhythms. However, even with these advances, AI still struggles to read subtle emotions or the context behind what a person is feeling.
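For a sense of what that kind of model looks like in practice, here is a minimal sketch in Python. It assumes the standard xgboost and scikit-learn libraries, and the wearable-style features, labels, and data are synthetic stand-ins invented for illustration, not the study’s actual dataset or pipeline.

```python
# Hypothetical sketch: an XGBoost classifier trained on synthetic
# wearable-style features (sleep and activity rhythms).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Invented features: mean nightly sleep (hours), day-to-day sleep
# variability, and a 0-1 circadian rhythm stability score.
X = np.column_stack([
    rng.normal(7.0, 1.2, n),
    rng.normal(0.8, 0.4, n),
    rng.normal(0.6, 0.2, n),
])
# Toy labels: less sleep and more irregularity mean higher risk.
risk = -0.8 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point isn’t the score on toy data; it’s that the model only ever sees behavioral rhythms, never how the person actually feels, which is exactly the blind spot the study flags.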

In this episode, Rachel’s sister Jack—driven by jealousy, or perhaps genuine concern—hides Ashley Too, worried it’s “filling her head with crap.” Her skepticism mirrors a real-world fear: that leaning on digital companions can warp the grieving process.

Recent regulatory actions have begun addressing risks around AI companion apps. New York passed a law effective November 2025 requiring AI companion operators to implement safety measures detecting suicidal ideation or self-harm and to clearly disclose the non-human nature of the chatbot to users. 

In the end, Rachel and her sister discover that the doll’s personality is intentionally restricted by an internal limiter, and when it is removed, the AI reveals a deeper consciousness trapped inside. 

ChatGPT and similar AI models are increasingly used as therapy tools. A 2025 randomized controlled trial of the AI therapy chatbot “Therabot” reported clinically significant reductions in depression and anxiety symptoms, with effect sizes comparable to or exceeding some traditional treatments. 

However, a study presented at the American Psychiatric Association’s 2025 meeting found human therapists still outperform ChatGPT in key therapy skills like agenda-setting, eliciting feedback, and applying cognitive behavioral techniques, due to their greater empathy and flexibility. Another thematic study of ChatGPT users found it provides helpful emotional support and guidance but raised concerns about privacy and emotional depth.

As technology grows more immersive and responsive, these digital bonds may deepen. Whether that’s a source of comfort or a cause for concern depends on how we balance connection, privacy, and the question at the heart of the episode: what does it really mean to be known?


Creativity, Rewired

Ashley O is a pop icon suffocated by the demands of her aunt and record label. She feels trapped as her true voice is silenced and her image squeezed into a marketable mold.

When Ashley is put into a coma, the producers crank up a machine to decode her brainwaves and extract new songs, pumping out tracks without her consent. A literal case of cookie-cutter artistry. 

The Velvet Sundown is an AI-generated music project that emerged in 2025, debuting with two albums on Spotify and quickly sparking global discussion about the future of artificial creativity.

The project, created by an anonymous human creator, used AI tools like Suno for music generation, with style descriptions crafted by language models such as ChatGPT. 

In June 2024, major record labels—including Sony Music, Universal Music Group, and Warner Records—filed lawsuits against AI music companies Suno and Udio, accusing them of large-scale copyright infringement. The labels alleged that the startups used their recordings without permission to train AI systems capable of generating new songs. Both companies denied wrongdoing, claiming their models create original works rather than copying existing recordings. The case remains ongoing as of 2025.

Legal and ethical challenges around AI-generated music are mounting. Unauthorized use of vocal clones or deepfakes has sparked heated debates on consent, ownership, and copyrights. Legal systems struggle to keep up. If a person shapes the AI’s output, copyright might apply—but it’s unclear how much input is enough. This gray area makes artist rights, licensing, and royalties more complicated.

Can creativity actually be replicated by machines, or does something essential get lost when all they do is measure patterns and output? As Ashley’s story shows, automated artistry might never replace the real thing—but it can easily outpace it.

Celebrity in a Cage

In Rachel, Jack, and Ashley Too, we see the dark side of fame through Ashley O’s story: she is drugged into compliance and eventually placed in a coma, while her aunt schemes to replace her with a holographic version built for endless future tours.

This holographic pop star can instantly change outfits, scale in size, appear simultaneously in thousands of locations, and perform endlessly without the vulnerabilities of a human artist. 

In 2024–2025, virtual K-pop idols like SKINZ and PLAVE emerged as a new wave of celebrity branding that extends beyond music into virtual merchandise and digital idols.

PLAVE is a five-member group, powered by real performers using motion capture. They have racked up over 470 million YouTube views, charted on Billboard Global 200, and sold out virtual concerts while engaging fans with digital fan meetings. 

SKINZ, a seven-member virtual boy band produced by South Korean singer-songwriter EL CAPITXN, blends rock, hip-hop, and funk and has performed at iconic venues like the Tokyo Dome.

This surge in AI and virtual stardom opens extraordinary possibilities, but what about the humans who now have to compete in this new arena? 

This brings to mind Britney Spears, whose long conservatorship battle captivated the world. In total, Britney performed hundreds of shows during the 13-year conservatorship from 2008 to 2021, but always under heavy restrictions and control. 

While AI and holograms can perform endlessly without burnout or loss of control, traditional live tours remain a lucrative but fragile model heavily dependent on a single artist’s health and agency. 

In late 2024, indie-pop artist Clairo faced significant backlash after postponing three highly anticipated concerts in Toronto at the last minute due to “extreme exhaustion.” The cancellations came just as doors were about to open for the first show at Massey Hall, leaving fans frustrated and inconvenienced, especially those who had traveled and faced challenges getting refunds.

In contrast, virtual concerts and holographic tours don’t rely on a single performer. Groundbreaking shows like ABBA’s Voyage, which made its long-anticipated debut on May 27, 2022, at the purpose-built ABBA Arena in London’s Queen Elizabeth Olympic Park, depend instead on the coordinated work of many teams. The residency’s hyper-realistic avatars of the band as they appeared in 1979 were created through motion capture, stage design, lighting, production, and visual effects by Industrial Light & Magic.

While performances are getting more digital, many artists are aiming to bring the audience back to the moment.

Phone-free concerts have grown in popularity as artists seek to create more immersive, distraction-free live experiences. Ghost, a Swedish rock band, has pioneered this approach by requiring fans to secure their phones in lockable pouches called Yondr bags, which can only be opened after the show or in designated areas. 

Yet even as performers reclaim control over the audience’s attention, the question remains: How much control do today’s celebrities really have, and how much of their image and choices are shaped by algorithms, managers, and market trends?

Virtual and hybrid performances blur the line between genuine presence and manufactured spectacle, leaving us to wonder whether we’re watching artists or carefully engineered illusions. 

As fame, creativity, and even friendship are being reshaped, the episode explores the tension between what can be automated and what should remain authentic.

Programs already guide our choices, digital idols fill our feeds, and synthetic voices mingle with human ones. In that haze, where artist becomes asset and companion becomes artificial, the story feels like a glimpse of what’s already unfolding.

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

Smithereens: Black Mirror, Can It Happen?

Before we dive into the events of Smithereens, let’s flash back to when this episode first aired: June 5, 2019.

In 2019, guided meditation apps like Headspace and Calm surged in popularity. Tech giants like Google and Salesforce began integrating meditation into their wellness programs. By the end of the year, the top 10 meditation apps had pulled in nearly $195 million in revenue—a 52% increase from the year before.

That same year, Uber made headlines with one of the decade’s biggest IPOs, debuting at $45 a share and securing a valuation north of $80 billion. But the milestone was messy. Regulators, drivers, and safety advocates pushed back after a fatal 2018 crash in Tempe, Arizona, where one of the company’s self-driving cars struck and killed a pedestrian during testing.

Inside tech companies, the culture was shifting. While perks like catered meals and gym memberships remained, a wave of employee activism surged. Workers staged walkouts at Google and other firms, and in 2019, the illusion of the perfect tech workplace began to crack.

Meanwhile, 2019 set the stage for the global rollout of 5G, promising faster, smarter connectivity. But it also sparked geopolitical tensions, as the U.S. banned Chinese company Huawei from its networks, citing national security threats. 

Over it all loomed a small circle of tech billionaires. In 2019, Jeff Bezos held the title of the richest man alive with a net worth of $131 billion. Bill Gates followed, hovering between $96 and $106 billion. Mark Zuckerberg’s wealth was estimated between $62 and $64 billion, while Elon Musk, still years away from topping the charts, sat at around $25 to $30 billion.

And that brings us to this episode of Black Mirror, Season 5, Episode 2: Smithereens.

This episode pulls us into the high-stakes negotiation between personal grief and corporate power, where a rideshare driver takes an intern hostage—not for ransom, but for answers.

What happens when the tools meant to connect us become the things that break us?

It forces us to consider:  Do tech CEOs hold too much power, enough to override governments, manipulate systems, and play god?

And are we all just one buzz, one glance, one distracted moment away from irreversible change?

In this video, we’ll unpack the episode’s key themes and examine whether these events have happened in the real world—and if not, whether they are still plausible. Let’s go!

Addicted by Design

In Smithereens, we follow Chris, a man tormented by the loss of his fiancée, who died in a car crash caused by a single glance at his phone. The episode unfolds in a world flooded by noise: the pings of updates, the endless scroll, the constant itch to check in. And at the center of it all is Smithereen, a fictional social media giant clearly modeled after Twitter.

Like Twitter, Smithereen was built to connect. But as CEO Billy Bauer admits, “It was supposed to be different.” It speaks to how platforms born from good intentions become hijacked by business models that reward outrage, obsession, and engagement at all costs.

A 2024 study featured by TechPolicy Press followed 252 Twitter users in the U.S., gathering over 6,000 responses—and the findings were clear: the platform consistently made people feel worse, no matter their background or personality. By 2025, 65% of users aged 18 to 34 say they feel addicted to its real-time feeds and dopamine-fueled design.

Elon Musk’s $44 billion takeover of Twitter in 2022 was framed as a free speech mission. Musk gutted safety teams, reinstated banned accounts, and renamed the platform “X.” What was once a digital town square transformed into a volatile personal experiment.

This accelerated the emergence of alternatives. Bluesky, a decentralized platform created by former Twitter CEO Jack Dorsey, aims to avoid the mistakes of its predecessor. With over 35 million users as of mid-2025, it promises transparency and ethical design—but still faces the same existential challenge: can a social app grow without exploiting attention?

In 2025, whistleblower Sarah Wynn-Williams testified before the U.S. Senate that Meta—Facebook’s parent company—had systems capable of detecting when teens felt anxious or insecure, then targeted them with beauty and weight-loss ads at their most vulnerable moments. Meta knew the risks. They chose profit anyway.

Meanwhile, a brain-imaging study at China’s Tianjin Normal University found that short-video apps like TikTok activate the same brain regions linked to gambling. Infinite scroll. Viral loops. Micro-rewards. The science behind addiction is now product strategy.

To help users take control of their app use, Instagram, TikTok, and Facebook offer screen-time dashboards and limit-setting features. But despite these tools, most people aren’t logging off. The average user still spends more than 2 hours and 21 minutes a day on social media, with Gen Z clocking in at nearly 4 hours. It appears that self-monitoring features alone aren’t enough to break the cycle of compulsive scrolling.

What about regulations? 

A 2024 BBC Future article explores this question through the lens of New York’s SAFE for Kids Act, set to take effect in 2025. The law will require parental consent for algorithmic feeds, limit late-night notifications to minors, and tighten age verification. But experts warn: without a global, systemic shift, these measures are just patches on a sinking ship.

Of all the Black Mirror episodes, Smithereens may feel the most real—because it already is. These platforms don’t just consume our time—they consume our attention, our emotions, even our grief. Like Chris holding Jaden, the intern, at gunpoint, we’ve become hostages to the very systems that promised connection.

Billionaire God Mode

When the situation escalates in the episode, Billy Bauer activates God Mode, bypassing his own team to monitor the crisis in real time and speak directly with Chris.

In doing so, he reveals the often hidden power tech CEOs wield behind the scenes, along with the heavy ethical burden that comes with it. It hints at the master key built into their creations and the control embedded deep within the design of modern technology.

No one seems to wield “God Mode” in the real world quite like Elon Musk—able to bend markets, sway public discourse, and even shape government policy with a single tweet or private meeting.

The reason is simple: Musk has built an empire.

In 2025, Tesla secured the largest U.S. State Department contract of the year: a $400 million deal for armored electric vehicles. 

Additionally, through SpaceX’s satellite network Starlink, Musk played an outsized role in Ukraine’s war against Russia, enabling drone strikes, real-time battlefield intelligence, and communication under siege. 

Starlink also provided emergency internet access to tens of thousands of users during blackouts in Iran and Israel, becoming an uncensored digital lifeline—one that only Musk could switch on or off.

But with that power comes scrutiny. Musk’s involvement in the Department of Government Efficiency—ironically dubbed “Doge”—was meant to streamline bureaucracy. Instead, it sowed dysfunction. Critics argue he treated government like another startup to be disrupted. Within months—after failing to deliver the promised $2 trillion in savings and amid mounting chaos—Donald Trump publicly distanced himself from Elon Musk and ultimately removed him from the post, temporarily ending the alliance between the world’s most powerful man and its richest.

It’s not just Musk. Other tech CEOs like Mark Zuckerberg have also shaped public discourse in quiet, powerful ways. In 2021, whistleblower Frances Haugen exposed Facebook’s secret “XCheck” system—a program that allowed approximately 6 million high-profile users to bypass the platform’s own rules. Celebrities and politicians—including Donald Trump—were able to post harmful content without facing the same moderation as regular users, a failure that ultimately contributed to the January 6 Capitol riots.

Amid the hostage standoff and the heavy hand of tech surveillance, one moment stands out: Chris begs Billy to help a grieving mother, Hayley. And Billy listens. He uses his “God Mode” to offer her closure by giving her access to her late daughter’s Persona account. 

In Germany, a landmark case began in 2015 when the parents of a 15-year-old girl who died in a 2012 train accident sought access to her Facebook messages to determine whether her death was accidental or suicide. A lower court initially ruled in their favor, stating that digital data, like a diary, could be inherited. The case saw multiple appeals, but in 2018, Germany’s highest court issued a final ruling: the parents had the right to access their daughter’s Facebook account.

In response to growing legal battles and emotional appeals from grieving families, platforms like Meta, Apple, and Google have since introduced “Digital Legacy” policies. These allow users to designate someone to manage or access their data after death, acknowledging that our digital lives don’t simply disappear when we do.

In real life, “God Mode” tools exist at many tech companies. Facebook engineers have used internal dashboards to track misinformation in real time. Leaked documents from Twitter revealed an actual “God Mode” that allowed employees to tweet from any account. These systems are designed for testing or security—but they also represent concentrated power with little external oversight.

And so we scroll.

We scroll through curated feeds built by teams we’ll never meet and governed by CEOs who rarely answer to anyone. These platforms know what we watch, where we go, and how we feel. They don’t just reflect the world—we live in the one they’ve built.

And if someone holds the key to everything—who’s watching the one who holds the key?

Deadly Distractions

In Smithereens, Chris loses his fiancée to a single glance at his phone. A notification. An urge. A reminder that in a world wired for attention, even a moment of distraction can cost everything.

In 2024, distracted driving killed around 3,000 people in the U.S.—about eight lives lost every single day—and injured over 400,000 more.

Of these, cellphone use is a major factor: NHTSA data shows that cellphones were involved in about 12% of fatal distraction-affected crashes. That translates to roughly 300 to 400 lives lost annually in the U.S. specifically to phone-related distracted driving.

While drunk driving still causes more total deaths, texting while driving is now one of the most dangerous behaviors behind the wheel—raising the risk of a crash by 23 times.

In April 2014, Courtney Ann Sanford’s final Facebook post read: “The Happy song makes me so HAPPY!” Moments later, her car veered across the median and slammed into a truck. She died instantly. Investigators found she had been taking a selfie and updating her status while driving.

Around the world, laws are evolving to address the dangers of distracted driving. In the United States, nearly every state, along with Washington, D.C., and several territories, bans texting for all drivers, and hands-free laws are expanding to more jurisdictions.

In Europe, the UK issues £200 fines and six penalty points for distracted driving. Spain and Italy have fines starting around €200—and in Italy, proposed hikes could push that up to €1,697. The highest current fine is in Queensland, Australia, where drivers caught texting or scrolling can face fines up to $1,250.

To combat phone use behind the wheel, law enforcement in Australia and Europe now deploys AI-powered cameras that scan drivers in real time. Mounted on roadsides or mobile units, these high-res systems catch everything from texting to video calls. If AI flags a violation, a human officer reviews it before issuing a fine.
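Conceptually, the workflow is a human-in-the-loop pipeline: a model scores each roadside image, only high-confidence flags reach an officer, and a human makes the final call. Here is a toy sketch of that triage logic in Python; the names, threshold, and confidence scores are invented for illustration, not any agency’s real system.

```python
# Hypothetical human-in-the-loop triage for AI traffic-camera flags.
from dataclasses import dataclass

@dataclass
class Detection:
    plate: str
    phone_use_score: float  # model confidence between 0.0 and 1.0

REVIEW_THRESHOLD = 0.8  # assumed cutoff for sending a flag to review

def triage(detections):
    """Forward only high-confidence flags to the human review queue."""
    return [d for d in detections if d.phone_use_score >= REVIEW_THRESHOLD]

def officer_review(detection, confirmed):
    """A fine is issued only after a human confirms the AI's flag."""
    return f"Fine issued to {detection.plate}" if confirmed else "Dismissed"

queue = triage([Detection("ABC123", 0.93), Detection("XYZ789", 0.41)])
for d in queue:
    print(officer_review(d, confirmed=True))
```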

As for the role of tech companies? While features like Apple’s “Do Not Disturb While Driving” mode exist, they’re voluntary. No country has yet held a tech firm legally accountable for designing apps that lure users into dangerous distractions. Public pressure is building, but regulation lags behind reality.

In Smithereens, the crash wasn’t just a twist of fate—it was the inevitable outcome of a system designed to capture and hold our attention: algorithms crafted to hijack our minds, interfaces engineered for compulsion, and a culture that prizes being always-on, always-engaged, always reachable. And in the end, it’s not just Chris’s life that’s blown to smithereens—it’s our fragile illusion of control, shattered by the very tech we once believed we could master.

We tap, scroll, and swipe—chasing tiny bursts of dopamine, one notification at a time. Chris’s story may be fiction, but the danger it exposes is all too real. It’s in the car beside you. It’s in your own hands as you fall asleep. We can’t even go to the bathroom without it anymore. No hostage situation is needed to reveal the cost—we’re living it every day.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Striking Vipers: Black Mirror, Can It Happen?

Before we discuss the events in Striking Vipers, let’s flash back to when this episode was first released: June 5, 2019.

In 2019, the Movember Foundation ran a global campaign for men’s health with celebrities like Stephen Fry, Bear Grylls, Stephen Merchant, and Nicole Scherzinger using humorous videos and social media to encourage men to talk about their health. 

Back in 2019, consumer VR was caught between promise and practicality. Premium headsets like the Oculus Rift demanded expensive, high-powered PCs, pushing total setup costs over $1,500. Meanwhile, budget-friendly options like Samsung Gear VR delivered underwhelming performance. With few blockbuster games to drive demand, mainstream adoption stalled. As a result, companies like IMAX closed their VR divisions.

Still, VR found new life in enterprise applications. Walmart used VR training modules to boost employee retention and immerse staff in real-world scenarios, while sectors like healthcare and manufacturing also adopted VR for training simulations.

At the same time, 2019 marked significant milestones for LGBTQ+ visibility. Elliot Page (then Ellen) was a vocal advocate for gender-affirming care, Lil Nas X came out as gay during the peak of “Old Town Road”, and Pete Buttigieg launched his historic campaign as the first openly gay U.S. presidential candidate.

And that brings us to this episode of Black Mirror. Episode 1 of Season 5: Striking Vipers.

This episode welcomes us into a digital world where friendship, desire, and identity collide. Through the lens of a VR fighting game turned emotional crucible, the episode explores how immersive tech can both reveal and distort our deepest needs, leaving us with some unsettling questions: 

What happens when technology offers a more fulfilling life than reality? Can a digital body expose truths we’re too afraid to face in the physical world? And as virtual experiences grow more vivid, are we prepared for the emotional and ethical consequences they bring?

In this video, we’ll unpack the episode’s key themes and examine whether these events have happened in the real world—and if not, whether they are still plausible. Let’s go!

Blurred Realities

When Karl gives Danny a birthday gift—Striking Vipers X, a hyper-realistic VR fighting game—their casual nostalgia takes an unexpected turn. In this game, players don’t just control avatars; they fully inhabit them, experiencing every physical sensation their characters feel. 

As their in-game battles escalate into a sexual relationship, the emotional intensity of their connection begins to strain Danny’s marriage and forces both men to confront their desires, identities, and the blurry lines between reality and fantasy.

While today’s VR systems don’t yet plug directly into our brains, the separation between real and virtual intimacy is growing increasingly thin. New technology like haptic suits and internet-connected sex toys (a field known as teledildonics) lets people feel touch and physical sensations from far away. Companies like Kiiroo offer Bluetooth-enabled devices that sync with a partner’s movements or online media, making remote intimacy physically real.

In a 2023 survey, a staggering 41% of users said they’ve fallen in love in virtual reality—and it’s not about looks. In fact, two-thirds of them said their partner’s physical sex doesn’t even matter.

However, the darker side of immersive technology is getting harder to overlook. Many VR platforms quietly collect personal data—like your heart rate, facial expressions, and even brain activity—often without users fully understanding or consenting. 

According to the American Association for Marriage and Family Therapy, up to a third of internet users go online for sexual reasons—and nearly 1 in 5 can become addicted to it. As internet use becomes more common, more couples are running into serious issues like trust problems, emotional distance, and even breakups because of online infidelity.

A 2017 Deseret News survey revealed striking gender and generational divides in what people consider cheating. Women were significantly more likely than men to label both online and offline behaviors as “always cheating”—59% of women, compared to just 42% of men, said that sending flirty messages crosses the line, while 70% of women said simply having an online dating profile counts as infidelity.

In a survey of 91 women and 3 men affected by a partner’s cybersex addiction, 68% described sexual problems in their relationship directly related to the addiction. About 22% said the addiction was a major reason for separation or divorce. 

Age also played a role in how people view cheating. Surprisingly, millennials were more likely than Gen Xers to say that watching porn alone is cheating. These changing opinions show how modern technology is making the line between loyalty and betrayal harder to define. 

For Danny, the escape wasn’t just into a game. It was into a version of himself he couldn’t find in daylight. And maybe that’s the real question Striking Vipers leaves us with: when the fantasy fits better than the life we’ve built—what do we choose to come home to?

As the truth comes to light, Danny and Theo strike an agreement: once a year, he returns to the virtual world, and she explores real-life connections of her own. It’s not the first time they’ve played pretend—earlier in the episode, they flirted with role-play to revive their spark. But this time, the game is real. Their compromise isn’t a happy ending so much as a new set of rules.

In the United States, polygamy is extremely rare (less than 0.5% of households), but public acceptance is growing. Approval of polygamy as morally acceptable has risen from 7% in 2003 to 23% in 2024, especially among younger, unmarried, and less religious Americans. Interestingly, men are six times more likely than women to be open to polygynous relationships, according to recent UK research.

We already live at the edges of intimacy—crafting curated selves, clinging to parasocial ties, chasing comfort in the glow of a screen. VR, AI, and immersive worlds only pull us deeper, fusing intimacy and illusion into something hard to untangle.

Bodies in the Mirror

In the game, Karl chooses to play as a female fighter named Roxette, not just as a disguise—but as a truth he hasn’t yet admitted. What unfolds is less about sex and more about the fluidity of self in a world where identity can be downloaded and worn like clothing.

The episode reflects the real-world experience of exploring names, pronouns, and appearances in digital spaces before coming out in everyday life. It captures the emotional challenges that many LGBTQ+ individuals face during their coming-out journeys.

In 2023 alone, more than 30 new laws targeting LGBTQ-related education were enacted, reshaping the 2023–24 school year. These measures include bans on discussing sexual orientation and gender identity in classrooms, limits on pronoun use, and mandates for parental notification or opt-in before students can access LGBTQ-inclusive curricula.

Simply put, the physical world is not a welcoming one for exploration, which is why so many turn to digital spaces to discover who they are.

A 2025 study on ZEPETO—a social app where people interact through avatars—found that female users who took on male avatars felt more connected to their virtual characters and more confident in their real-life gender identity. 

Inclusive design has been shown to boost mental health and promote a sense of empowerment. A 2024 study of 79 trans and gender-diverse adults found that customizable avatars in games were associated with increased enjoyment, empowerment, and authentic self-representation, while restricted customization reduced engagement and could trigger distress or dysphoria. 

Trans and gender-diverse youth face far higher rates of rejection, discrimination, and violence than their cisgender peers. As a result, around 61% experience suicidal thoughts, and nearly one in three have attempted suicide—more than four times the rate of cisgender youth.

In this context, the digital world becomes a lifeline. Research shows that having just one online space where LGBTQ+ youth feel safe and understood is linked to a 20% lower risk of suicide attempts and a 15% drop in recent anxiety. 

Virtual bodies aren’t just avatars—they’re mirrors of inner truth. And for those navigating the margins of society’s acceptance, they can become windows into a more authentic future.

But here’s a deeper question: when does a safe space become a place to hide? 

The Digital High

It starts with two old friends staying up all night playing the game they loved in their twenties—laughing, trash-talking, reliving the past. But what begins as nostalgia slowly shifts. The game becomes a secret habit, a nightly escape that feels more thrilling and alive than the routine of Danny’s real life.

Soon, he’s forgetting his anniversary and growing distant from his wife. Striking Vipers isn’t just about sex or fantasy; it’s about how addiction can sneak in under the cover of comfort, and how escaping reality too often can leave the real world behind.

Between 2% and 20% of frequent VR users display compulsive behaviors, with addiction risk linked to the immersive feeling of embodiment inside an avatar.

Our attention spans have dropped to just 45 seconds on average—and video games are a major driver. Many of the most addictive titles keep us hooked with competitive and social features (like Fortnite or League of Legends), immersive escapism (Skyrim, Stardew Valley), and personalized role-play (World of Warcraft, The Sims). These experiences trigger dopamine hits, making everyday life feel dull, chaotic, or unrewarding in contrast.

Video game addiction affects an estimated 3–4% of gamers worldwide, with higher rates among adolescents and young adults, especially males. Addicted gamers can spend up to 100 hours a week immersed in play, sacrificing relationships, hobbies, and responsibilities along the way.

In Striking Vipers, the title itself becomes a metaphor: just like a viper’s deadly strike, addiction can sneak up unexpectedly, striking again and again as players hunt for that elusive thrill.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Black Museum: Black Mirror, Can It Happen?

Before we talk about Black Museum, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, the rise of dark tourism—traveling to sites tied to death, tragedy, or the macabre—became a notable cultural trend, with locations like Mexico’s Island of the Dolls and abandoned hospitals and prisons drawing attention. Specifically, Chernobyl saw a dramatic increase in tourists, with around 70,000 visitors in 2017, a sharp rise from just 15,000 in 2010. This influx of visitors contributed approximately $7 million to Ukraine’s economy.

Meanwhile, in 2017, the EV revolution was picking up speed. Tesla, once a trailblazer and now a company run by a power-hungry maniac, launched the more affordable Model 3.

2017 also marked a legal dispute between Hologram USA and Whitney Houston’s estate. The planned hologram tour, aimed at digitally resurrecting the iconic singer for live performances, led to legal battles over the hologram’s quality. Despite the challenges, the project was eventually revived, premiering as An Evening with Whitney: The Whitney Houston Hologram Tour in 2020.

At the same time, Chicago’s use of AI and surveillance technologies, specifically through the Strategic Subject List (SSL) predictive policing program, sparked widespread controversy. The program used historical crime data to predict violent crimes and identify high-risk individuals, but it raised significant concerns about racial bias and privacy.

And that brings us to this episode of Black Mirror, Episode 6 of Season 4: Black Museum. Inspired by Penn Jillette’s story The Pain Addict, which grew out of the magician’s own experience in a Spanish welfare hospital, the episode delves into a twisted reality where technology allows doctors to feel their patients’ pain.

Set in a disturbing museum, this episode confronts us with pressing questions: When does the pursuit of knowledge become an addiction to suffering? What happens when we blur the line between human dignity and the technological advancements meant to heal? And what price do we pay when we try to bring people back from the dead?

In this video, we’ll explore the themes of Black Museum and examine whether these events have happened in the real world—and if not, whether they are still plausible. Let’s go!

Pain for Pleasure

As Rolo Haynes guides Nish through the exhibits in the Black Museum, he begins with the story of Dr. Peter Dawson. Dawson, a physician, tested a neural implant designed to let him feel his patients’ pain, helping him understand their symptoms and provide a diagnosis. What started as a medical breakthrough quickly spiraled into an addiction.

Meanwhile, in the real world, scientists have been making their own leaps into the mysteries of the brain. In 2013, University of Washington researchers successfully connected the brains of two rats using implanted electrodes. One rat performed a task while its neural activity was recorded and transmitted to the second rat, influencing its behavior. Fast forward to 2019, when researchers linked three human brains using a brain-to-brain interface (BBI), allowing two participants to transmit instructions directly into a third person’s brain using magnetic stimulation—enabling them to collaborate on a video game without speaking.

Beyond mind control, neurotech has made it possible to simulate pain and pleasure without physical harm. Techniques like Transcranial Magnetic Stimulation (TMS) and Brain-Computer Interfaces (BCIs) let researchers manipulate neural activity for medical treatment.

AI is actively working to decode the complexities of the human brain. At Stanford, researchers have used fMRI data to identify distinct “pain signatures,” unique neural patterns that correlate with physical discomfort. This approach could provide a more objective measure of pain levels and potentially reduce reliance on self-reported symptoms, which can be subjective and inconsistent.
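As a rough illustration of the idea, here is a toy sketch in Python using scikit-learn: a linear classifier is trained on synthetic “voxel” activations, and its learned weight map plays the role of a pain signature. The data is random and invented for illustration; real fMRI analysis involves far more preprocessing and validation than this.

```python
# Hypothetical sketch: learning a "pain signature" from synthetic scans.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_scans, n_voxels = 200, 500
true_signature = np.zeros(n_voxels)
true_signature[:25] = 1.0  # pretend 25 voxels respond to pain

y = rng.integers(0, 2, n_scans)            # 1 = painful stimulus shown
X = rng.normal(0, 1, (n_scans, n_voxels))  # baseline neural noise
X += np.outer(y, true_signature) * 0.8     # add pain-linked activation

clf = LogisticRegression(max_iter=1000).fit(X, y)
weight_map = clf.coef_.ravel()  # the learned "signature" across voxels
print("top pain-linked voxels:", np.argsort(weight_map)[-5:])
```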

Much like Dr. Dawson’s neural implant aimed to bridge the gap between doctor and patient, modern AI researchers are developing ways to interpret and even visualize human thought. 

Of course, with all this innovation comes a darker side. 

In 2022, Neuralink, Elon Musk’s brain-implant company, came under federal investigation for potential violations of the Animal Welfare Act. Internal documents and employee interviews suggest that Musk’s demand for rapid progress led to botched experiments. As a result, many tests had to be repeated, increasing the number of animal deaths. Since 2018, an estimated 1,500 animals have been killed, including more than 280 sheep, pigs, and monkeys.

While no brain implant has caused a real-life murder addiction, electrical stimulation can alter brain function in unexpected ways. Deep brain stimulation for Parkinson’s has been linked to compulsive gambling and impulse control issues, while fMRI research helps uncover how opioid use reshapes the brain’s pleasure pathways. As AI enhances neuroanalysis, the risk of unintended consequences grows.

When Dr. Dawson pushed the limits and ended up experiencing the death of a patient, his neural implant was rewired in the process, blurring the line between pain and pleasure.

At present, there’s no known way to directly simulate physical death in the sense of replicating the actual biological process of dying without causing real harm. 

However, Shaun Gladwell, an Australian artist known for his innovative use of technology in art, has created a virtual reality death simulation. It is on display at the Melbourne Now event in Australia. The experience immerses users in the dying process—from cardiac failure to brain death—offering a glimpse into their final moments. By simulating death in a controlled virtual environment, the project aims to help participants confront their fears of the afterlife and better understand the emotional aspects of mortality. 

This episode of Black Mirror reminds us that the quest for understanding the mind might offer enlightenment, but it also carries the risk of unraveling the very fabric of what makes us human. 

In the end, the future may not lie in simply experiencing death, but in learning to live with the knowledge that we are always on the cusp of the unknown.

Backseat Driver

In the second part of Black Museum, Rolo recounts his involvement in a controversial experiment. After an accident, Rolo helped Jack transfer his comatose wife Carrie’s consciousness into his brain. This let Carrie feel what Jack felt and communicate with him. In essence, this kept Carrie alive. However, the arrangement caused strain—Jack struggled with the lack of privacy, while Carrie grew frustrated by her lack of control—ultimately putting the saying “’til death do you part” to the test.

The concept of embedding human consciousness into another medium remains the realm of fiction, but neurotechnology is inching closer to mind-machine integration. 

In 2016, Ian Burkhart, a 24-year-old quadriplegic patient, made history using the NeuroLife system. A microelectrode chip implanted in Burkhart’s brain allowed him to regain movement through sheer thought. Machine-learning algorithms decoded his brain signals, bypassing his injured spinal cord and transmitting commands to a specialized sleeve on his forearm—stimulating his muscles to control his arm, hand, and fingers. This allowed him to grasp objects and even play Guitar Hero.

Another leap in brain-tech comes from Synchron’s Stentrode, a device that bypasses traditional brain surgery by implanting through blood vessels. In 2021, Philip O’Keefe, living with ALS, became the first person to compose a tweet using only his mind. The message? A simple yet groundbreaking “Hello, World.” 

Imagine being able to say what’s on your mind—without saying a word. That’s exactly what Blink-To-Live makes possible. Designed for people with speech impairments, Blink-To-Live tracks eye movements via a phone camera to communicate over 60 commands using four gestures: Left, Right, Up, and Blink. The system translates these gestures into sentences displayed on the screen and read aloud.
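To make the encoding concrete, here is a toy sketch in Python: with four gestures combined into three-gesture sequences, you get 4 × 4 × 4 = 64 distinct codes, consistent with the “over 60 commands” figure. The gesture-to-sentence bindings below are invented for illustration, not the app’s real vocabulary.

```python
# Hypothetical decoder for an eye-gesture codebook like Blink-To-Live's.
from itertools import product

GESTURES = ["Left", "Right", "Up", "Blink"]

# Enumerate all 64 three-gesture sequences and give each a command slot.
codebook = {seq: f"command_{i}"
            for i, seq in enumerate(product(GESTURES, repeat=3))}

# A few invented bindings a caregiver might configure:
codebook[("Blink", "Blink", "Blink")] = "Help, please."
codebook[("Left", "Right", "Up")] = "I am thirsty."

def decode(sequence):
    """Translate a detected gesture sequence into a sentence to speak."""
    return codebook.get(tuple(sequence), "Unrecognized sequence")

print(decode(["Blink", "Blink", "Blink"]))  # -> Help, please.
```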

Technology is constantly evolving to give people with impairments the tools to live more independently, but relying on it too much can sometimes mean sacrificing privacy, autonomy, or even a sense of human connection.

When Jack met Emily, he was relieved to experience a sense of normalcy again. She was understanding at first, but everything changed when she learned about Carrie—the backseat driver and ex-lover living in Jack’s mind. Emily’s patience wore thin, and she insisted that Carrie be removed. Eventually, Rolo helped Jack find a solution by transferring Carrie’s consciousness into a toy monkey.

Initially, Jack’s son loved the monkey. But over time, the novelty faded. The monkey wasn’t really Carrie. She couldn’t hold real conversations anymore; she could only repeat her two preset phrases, “Monkey loves you” and “Monkey needs a hug.” And so, like many toys, she was left forgotten.

This raises an intriguing question: Could consciousness, like Carrie’s, ever be transferred and preserved in an inanimate object? 

Dr. Ariel Zeleznikow-Johnston, a neuroscientist at Monash University, has an interesting theory. He believes that if we can fully map the human connectome—the complex network of neural connections—we might one day be able to preserve and even revive consciousness. His book, The Future Loves You, explores whether personal identity could be stored digitally, effectively challenging death itself. While current techniques can preserve brain tissue, the actual resurrection of consciousness remains speculative. 

This means that if you want to transfer a loved one’s consciousness into a toy monkey’s body, you’ll have to wait. Legal systems, however, are already grappling with these possibilities.

In 2017, the European Parliament debated granting “electronic personhood” to advanced AI, a move that could set a precedent for digital consciousness. Would an uploaded mind have rights? Could it be imprisoned? Deleted? As AI-driven personalities become more lifelike—whether in chatbots, digital clones, or neural interfaces—the debate over their status in society is only just beginning.

At this point, Carrie’s story is purely fictional. But if the line between human, machine, and cute little toy monkeys blurs further, we may need to redefine what it truly means to be alive.

Not Dead but Hardly Alive

In the third and final tale of Black Museum, Rolo Haynes transforms human suffering into a literal sideshow. His latest exhibit? A holographic re-creation of a convicted murderer, trapped in an endless loop of execution for paying visitors to experience. 

What starts as a morbid fascination quickly reveals the depths of Rolo’s cruelty—using digital resurrection not for justice, but for profit. 

The concept of resurrecting the dead in digital form is not so far-fetched. In 2020, the company StoryFile introduced interactive holograms of deceased individuals, allowing loved ones to engage with digital avatars capable of responding with pre-recorded answers. This technology has been used to preserve the voices of Holocaust survivors, enabling them to share their stories with future generations.

But here’s the question: Who controls a person’s digital afterlife? And where do we draw the line between honoring the dead and commodifying them?

Hollywood has already ventured into the business of resurrecting the dead. After Carrie Fisher’s passing, Star Wars: The Rise of Skywalker repurposed unused footage and CGI to keep Princess Leia in the story. 

The show must go on, and many fans preferred not to see Carrie Fisher recast. But should production companies have control over an actor’s likeness after they’ve passed?

Robin Williams took preemptive legal action, restricting the use of his image for 25 years after his death. The line between tribute and exploitation has become increasingly thin. If a deceased person’s digital avatar can act, speak, or even endorse products, who decides what they would have wanted?

In the realm of intimacy, AI-driven experiences are reshaping relationships. Take Cybrothel, a Berlin brothel that markets AI-powered sex dolls capable of learning and adapting to user preferences. As AI entities simulate emotions, personalities, and desires, and as people form deep attachments to digital partners, our understanding of relationships and consent will be significantly altered.

Humans often become slaves to their fetishes, driven by impulses that can lead them to make choices that harm both themselves and others. But what if the others are digital beings?

If digital consciousness can feel pain, can it also demand justice? If so, then Nish’s father wasn’t just a relic on display—he was trapped, suffering, a mind imprisoned in endless agony for the amusement of strangers. She couldn’t let it stand. Playing along until the perfect moment, she turned Rolo’s own twisted technology against him. In freeing her father’s hologram, she made sure Rolo’s cruelty ended with him.

The idea of AI having rights may sound like a distant concern, but real-world controversies suggest otherwise. 

In 2021, the documentary Roadrunner used AI to replicate Anthony Bourdain’s voice for quotes he never spoke aloud. Similarly, in 2020, Kanye West gifted Kim Kardashian a hologram of her late father Robert Kardashian. These two notable events sparked backlash over putting words into a deceased person’s mouth. 

While society has largely moved beyond public executions, technology is creating new avenues to fulfill human fantasies. AI, deepfake simulations, and VR experiences could bring execution-themed entertainment back in a digital form, forcing us to reconsider the ethics of virtual suffering.

As resurrected personalities and simulated consciousness become more advanced, we will inevitably face the question: Should these digital beings be treated with dignity? If a hologram can beg for mercy, if an AI can express fear, do we have a responsibility to listen?

While the events of Black Museum have not happened yet and may still be a long way off, the first steps toward that reality are already being taken. Advances in AI, neural mapping, and digital consciousness hint at a future where identities can be preserved, replicated, or even exploited beyond death. 

Perhaps that’s the real warning of Black Museum: even when the human body perishes, reducing the mind to data does not make it free. And if we are not careful, the future may remember us not for our progress, but for the prisons we built—displayed like artifacts in a museum.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

What’s a Learning Agenda? Key Takeaway from The Digital Mindset

The Digital Mindset [Amazon] by Paul Leonardi and Tsedal Neeley is all about thriving in the digital age by learning smarter and experimenting better. One of the most applicable tools in the book is The Learning Agenda, a simple way to plan experiments and actually learn from them. It works by helping you frame 4 important questions:

  1. What do you want to figure out?
  2. What will you do to find the answers?
  3. Why do you think this will work?
  4. How will you know if it did?

By answering each of these questions before you start, you focus on what matters, so you’re not wasting time or energy. Then act, and compare what happens with what you predicted.

There you go! You’ve just run an experiment.

Metalhead: Black Mirror, Can It Happen?

Before we talk about the events in Metalhead, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, Boston Dynamics founder Marc Raibert took the TED conference stage to discuss the future of his groundbreaking robots. His presentation sparked a mix of awe and unease.

Boston Dynamics has a long history of viral videos showcasing its cutting-edge robots, many of which were mentioned during the talk:

BigDog is a four-legged robot developed by Boston Dynamics with funding from DARPA. Its primary purpose is to transport heavy loads over rugged terrain.

Then there’s Petman, a human-like robot built to test chemical protection suits under real-world conditions. 

Atlas, a 6-foot-tall bipedal robot, is designed to assist in search-and-rescue missions. 

Handle is a robot on wheels. It can travel at 9 mph, leap 4 feet vertically, and cover about 15 miles on a single battery charge.

And then there was SpotMini, a smaller, quadrupedal robot with a striking blend of technical prowess and charm. During the talk, SpotMini played to the audience’s emotions, putting on a show of cuteness. 

In November 2017, the United Nations debated a ban on lethal autonomous weapons, or “killer robots.” Despite growing concerns from human rights groups, no consensus was reached, leaving the future of weaponized AI unclear.

Simultaneously, post-apocalyptic themes gained traction in 2017 pop culture. From the success of The Walking Dead to Blade Runner 2049’s exploration of dystopian landscapes, this pre-COVID audience seemed enthralled by stories of survival in hostile worlds, as though mentally preparing for the worst to come.

And that brings us to this episode of Black Mirror, Episode 5 of Season 4: Metalhead.

Set in a bleak landscape, Metalhead follows Bella, a survivor on the run from relentless robotic “dogs” after a scavenging mission goes awry. 

This episode taps into a long-standing fear humanity has faced since it first began experimenting with the “dark magic” of machinery. Isaac Asimov’s Three Laws of Robotics were designed to ensure robots would serve and protect humans without causing harm. These laws state that a robot must not harm a human, must obey orders unless they conflict with the first law, and must protect itself unless doing so conflicts with the first two laws.

In Metalhead, however, these laws are either absent or overridden. This lack of ethical safeguards mirrors the real-world fears of unchecked AI and its potential to harm, especially in situations driven by survival instincts. 

So, we’re left to ask: At what point does innovation cross the line into an existential threat? Could machines, once designed to serve us, evolve into agents of our destruction? And, most importantly, as we advance technology, are we truly prepared for the societal consequences that come with it?

In this video, we’ll explore three key themes from Metalhead and examine whether similar events have already unfolded—and if not, whether they’re still plausible. Let’s go!

Killer Instincts

Metalhead plunges us into a barren wasteland where survival hinges on outsmarting a robotic “dog”. Armed with advanced tracking, razor-sharp senses, and zero chill, this nightmare locks onto Bella after her supply mission takes a hard left into disaster.

The robot dog’s tracking systems are similar to current military technologies. Autonomous drones and ground robots use GPS-based trackers and infrared imaging to locate targets. Devices like Lockheed Martin’s Stalker XE drones combine GPS, thermal imaging, and AI algorithms to pinpoint enemy movements even in dense environments or under cover of darkness. 

With AI-driven scanning systems that put human eyesight to shame, it can spot a needle in a haystack—and probably tell you the needle’s temperature, too. Think FLIR thermal imaging cameras, which reveal heat signatures in total darkness or through smoke and dense foliage, or Boston Dynamics’ Spot using Light Detection and Ranging (aka Lidar) and pattern recognition to map the world with precision.

Lidar works by sending out laser pulses and measuring the time it takes for them to bounce back after hitting an object. These pulses generate a detailed 3D map of the environment, capturing even the smallest features, from tree branches to building structures.
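That time-of-flight idea fits in a few lines of code. Here is a minimal sketch with made-up numbers: halve the round-trip time, multiply by the speed of light, and project along the beam direction to get a 3D map point.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_distance(round_trip_seconds):
    """Range to target: the pulse travels out and back, so halve the path."""
    return C * round_trip_seconds / 2

def to_point(distance_m, azimuth_deg, elevation_deg):
    """Project a range along the beam direction into an (x, y, z) map point."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (distance_m * math.cos(el) * math.cos(az),
            distance_m * math.cos(el) * math.sin(az),
            distance_m * math.sin(el))

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
d = lidar_distance(200e-9)
print(round(d, 2))                                   # 29.98
print(to_point(d, azimuth_deg=45, elevation_deg=10))
```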

One of the most unsettling aspects of the robot in Metalhead is its superior auditory abilities. In the real world, acoustic surveillance technology, such as ShotSpotter, uses microphones and AI to detect and triangulate gunfire in urban areas. While it sounds impressive, its effectiveness is debated, with critics, including researchers behind a University of Michigan study, pointing to false positives and uneven results.
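The core trick behind acoustic gunshot location is measuring how much later a sound reaches one sensor than another. As an illustration of that principle, not ShotSpotter’s actual pipeline, here is the classic far-field bearing estimate for a single microphone pair:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def bearing_from_pair(delay_seconds, mic_spacing_m):
    """Angle of a distant sound source from the arrival-time difference
    at two microphones (far-field approximation: sin(theta) = c*dt/d)."""
    ratio = SPEED_OF_SOUND * delay_seconds / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp measurement noise
    return math.degrees(math.asin(ratio))

# The sound reaches the second mic 1.2 ms later; the mics are 1 m apart.
print(round(bearing_from_pair(1.2e-3, 1.0), 1))  # ~24.3 degrees off-axis
```

Networks like ShotSpotter combine many such sensors so the intersecting estimates pin down a location rather than just a direction.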

Still, technology is quickly advancing in recognizing human sounds, and some innovations are already in consumer products. Voice assistants like Alexa and Siri can accurately respond to vocal commands, while apps like SoundHound can identify music and spoken words in noisy environments. While these technologies offer convenience, they also raise concerns about how much machines are truly able to “hear.”

This is especially true when advanced sensors—whether auditory, visual, or thermal—serve a darker purpose, turning their sensory prowess into a weapon.

Take robotics companies like Ghost Robotics, which have developed machines equipped with sniper rifles, dubbed Special Purpose Unmanned Rifles (SPURs). These machines, designed for military applications, move target identification and engagement ever further from human hands—raising profound ethical concerns about the increasing role of AI in life-and-death decisions.

Built for Speed

In this episode, the robot’s movement—fast, deliberate, and capable of navigating uneven terrain—resembles Spot from Boston Dynamics. 

Spot can sprint at a brisk 5.2 feet per second, which translates to about 3.5 miles per hour. While that’s fairly quick for a robot navigating complex terrain, it’s still slower than the average human running speed. The typical human can run around 8 to 12 miles per hour, depending on fitness level and sprinting ability. 

So while Spot may not outpace a sprinter, DARPA’s Cheetah robot can—at least on a treadmill. Back in 2012, a video was released of this robot running 28.3 miles per hour on a treadmill, leaving even Usain Bolt in the dust.

But while the treadmill is impressive, the current record holder for the fastest land robot is Cassie—and she’s got legs for it! Developed by Oregon State University’s Dynamic Robotics Lab, Cassie sprinted her way into the record books in 2022, running 100 m in 24.73 seconds. 
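The unit conversions behind these comparisons are easy to check yourself:

```python
def fps_to_mph(feet_per_second):
    return feet_per_second * 3600 / 5280  # seconds per hour / feet per mile

def mps_to_mph(meters_per_second):
    return meters_per_second * 3600 / 1609.344  # meters per mile

print(round(fps_to_mph(5.2), 1))          # Spot: ~3.5 mph
print(round(mps_to_mph(100 / 24.73), 1))  # Cassie's 100 m record: ~9.0 mph
# For reference, Usain Bolt's peak speed during his 100 m world record
# was about 27.8 mph, so the treadmill-bound Cheetah (28.3 mph) edges him out.
```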

While today’s robots may not yet match the speed, adaptability, and relentless pursuit seen in the episode, the rapid strides in robotics and AI are quickly closing the gap. Like the tortoise slowly gaining ground on the overconfident hare, these technological advances, though not yet flawless, are steadily creeping toward a reality where they might outrun us in ways we hadn’t anticipated.

Charged to Kill

At a pivotal point in the story, Bella’s survival hinges on exploiting the robot’s energy source. By forcing it to repeatedly power on and off, she aims to drain its battery. Advanced machines, reliant on sensors, processors, and actuators, burn through significant energy during startup.

Today’s robots, like Spot or advanced military drones, run on rechargeable lithium-ion batteries. While these batteries offer excellent energy density, their runtime is finite—high-demand tasks like heavy movement or AI processing can drain them in as little as 90 minutes.
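Bella’s power-cycling gambit maps onto a simple energy budget: if every startup burns a surge of energy, forced reboots eat into runtime. The numbers below are purely illustrative, not specs for Spot or any real machine:

```python
# A toy energy budget for Bella's tactic. Every value here is an
# assumption chosen for illustration.

CAPACITY_WH = 600.0  # battery capacity, watt-hours
RUN_POWER_W = 400.0  # steady draw while active, watts
STARTUP_WH  = 15.0   # extra energy burned per boot (sensors, actuators)

def runtime_hours(forced_reboots):
    """Active runtime remaining after a number of forced power cycles."""
    usable_wh = max(CAPACITY_WH - forced_reboots * STARTUP_WH, 0.0)
    return usable_wh / RUN_POWER_W

for reboots in (0, 10, 20, 30):
    print(f"{reboots:2d} reboots -> {runtime_hours(reboots):.2f} h left")
# 0 reboots -> 1.50 h ... 30 reboots -> 0.38 h
```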

However, the latest battery innovations are redefining what’s possible, and the automotive industry is leading the charge. Solid-state batteries, for example, offer greater capacity, faster charging, and longer lifespans than traditional lithium-ion ones. Companies like Volkswagen and Toyota have invested heavily in this technology, hoping it will revolutionize the EV market.

Self-recharging technologies, like Kinetic Energy Recovery Systems (KERS), are moving from labs to consumer products. KERS, used in Formula 1 cars, captures and stores kinetic energy from braking to power systems and reduce fuel consumption. It’s now being explored for use in consumer and electric vehicles.
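The physics behind KERS is just kinetic energy: a braking vehicle sheds ½mv², and the system banks some fraction of it. Here is a back-of-the-envelope sketch; the 60% round-trip efficiency is an assumption, not a published spec:

```python
def kers_energy_wh(mass_kg, v_start_ms, v_end_ms, efficiency=0.6):
    """Energy recoverable from one braking event: the kinetic energy shed
    (0.5 * m * v^2 difference) times an assumed round-trip efficiency."""
    joules = 0.5 * mass_kg * (v_start_ms**2 - v_end_ms**2) * efficiency
    return joules / 3600  # joules -> watt-hours

# A 1,500 kg car braking from 100 km/h (about 27.8 m/s) to a stop:
print(round(kers_energy_wh(1500, 27.8, 0.0), 1), "Wh recovered per stop")
```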

Battery innovation is challenging for several reasons: improving energy density often compromises safety, and developing new batteries requires expensive materials and complex manufacturing processes.

Modern robots are pretty good at managing their power, but even the smartest machines can’t escape the inevitable—batteries that drain under intense demands. While energy storage and self-recharging tech like solar or kinetic systems may help, robots will always face the dreaded low-battery warning. After all, as much as we’d love to plug them into an infinite, self-sustaining energy source, the laws of physics will always say, “Nice try!”

Information Flow

When Bella throws paint to blind the robot’s sensors and uses sound to mislead it, her plan works—briefly. But the robot quickly adapts, recalibrating its AI to interpret new environmental data and adjust its strategy. Similarly, when Bella shoots the robot, it doesn’t just take the hit—it learns, retaliating with explosive “track bullets” that embed tracking devices in her body. This intelligent flexibility ensures that, even when temporarily disabled, the robot can still alter its approach and continue pursuing its objective.

In real life, robots with such capabilities are not far-fetched. Modern drone swarms, such as those tested by DARPA, can coordinate multiple drones for collective objectives. In some instances, individual drones are programmed to act as decoys or to deliberately draw enemy fire, allowing the remaining drones in the swarm to carry out their mission.

In October 2016 at China Lake, California, 103 Perdix drones were launched from three F/A-18 Super Hornets. During this test, the micro-drones exhibited advanced swarm behaviors, including collective decision-making, adaptive formation flying, and self-healing.

While the events in Metalhead are extreme, they are not entirely outside the realm of possibility. Modern robotics, AI, and machine learning are progressing at a staggering rate, making the robot’s ability to adapt, learn, and pursue its objective all too real. 

The advancements in sensors, energy storage, and autonomous decision-making systems could one day allow machines to operate with the same precision seen in the episode. 

So, while we may not yet face such an immediate threat, the seeds are sown. A future dominated by robots is not a matter of “if,” but “when.” As we step into this new frontier, we must proceed with caution, for once unleashed, these creations could be as relentless as any natural disaster—except that nothing about this will be natural.

Hang the DJ: Black Mirror, Can It Happen?

Before we talk about the events in Hang the DJ, let’s flash back to when this episode was first released: December 29, 2017.

On September 25, 2017, Prince Harry and Meghan Markle made their debut as a couple at the Invictus Games in Toronto. Their relationship broke new ground for the British royal family, sparking discussions on cross-cultural relationships and the challenges of maintaining privacy in the spotlight.

Meanwhile, dating apps surged in popularity, with a Stanford study revealing that 39% of couples were meeting online through platforms like Tinder and Bumble. Tinder Gold’s “Likes You” feature, which let users see who had already swiped right on them, pushed the app’s popularity even further.

At the same time, Bumble expanded beyond romance into professional networking and friendship with Bumble BFF and Bumble Bizz. Yet, the rise of digital matchmaking wasn’t without critique. Studies highlighted its impact on mental health, with terms like “ghosting” and “breadcrumbing” capturing the emotional toll of algorithmic dating.

In 2017, Elon Musk and Mark Zuckerberg clashed over the future of artificial intelligence, with Musk warning about AI’s potential existential risks and advocating for proactive regulation, fearing AI could evolve beyond human control. Zuckerberg, on the other hand, was optimistic about AI’s potential to improve lives, emphasizing that responsible innovation would outweigh its risks.

The idea that reality could be a simulated construct gained significant media coverage in 2017, partly due to some high-profile endorsements. Elon Musk and other prominent figures suggested that the odds of us living in a “base reality”—the original, unaltered reality from which all other realities might stem—are minimal, given the rapid advancement of simulations and AI.

And that’s what brings us to this episode of Black Mirror, Episode 4 of Season 4: Hang the DJ. 

As Frank and Amy navigate the rigidly controlled world of The System, their budding connection forces them—and us—to question the purpose of algorithms in matters of the heart. While The System claims to optimize matches and ensure “perfect” relationships, it also strips away autonomy, leaving users trapped in a cycle of dictated romances.

So we ask: Can technology truly understand the complexities of human connection? At what point does relying on algorithms to find love begin to undermine the very nature of intimacy and self-discovery? Are we, in our quest for compatibility, sacrificing the serendipity that makes relationships meaningful?

In this video, we’ll explore three key themes from Hang the DJ and examine whether similar events have happened—and if they haven’t, whether or not they are still plausible. Let’s go! 

Data and Dating

Hang the DJ unfolds within a seemingly idyllic yet tightly controlled dating system, where Frank and Amy are paired together for a pre-determined length of time: in their case, just 12 hours. Their compatibility, like that of all users, is calculated through an extensive series of timed relationships, generating data to improve the algorithm. The goal? To find each user their ideal match.

The collection of emotional experiences and connections aims to reduce love to a science, yet it simultaneously raises doubts about the role of choice in human connection. 

The evolution of dating apps like Tinder has sparked debates around fairness, bias, and authenticity in matchmaking. Tinder’s once-secretive “Elo score” algorithm ranked users by perceived attractiveness and desirability, sparking allegations of discrimination. Critics noted that minority users often received lower scores, reducing their visibility to potential matches—a practice accused of perpetuating systemic biases.
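Tinder never published the internals of its system, but the textbook Elo update from chess, which the score was named after, looks like this: a “win” (being swiped right on) against a highly rated profile moves your score far more than a win against a low-rated one.

```python
# The classic Elo rating update, applied to swipes. This is the
# standard chess formula, not Tinder's actual implementation.

def expected_score(rating_a, rating_b):
    """Probability that A 'wins' the matchup under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, a_won, k=32):
    """A's new rating after one interaction (a right-swipe = a win)."""
    actual = 1.0 if a_won else 0.0
    return rating_a + k * (actual - expected_score(rating_a, rating_b))

# An underdog 1200-rated profile gets swiped right on by a 1600-rated one:
print(round(update(1200, 1600, a_won=True), 1))  # ~1229.1, a big jump
# The reverse "win" barely moves the favorite:
print(round(update(1600, 1200, a_won=True), 1))  # ~1602.9
```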

Relying on behavioral tracking, these platforms analyze user actions such as swiping patterns and response times to improve match recommendations. 

Research shows that women swipe right only 30% of the time, and 20% reject over 80% of male profiles. In a sample of 100 male profiles, just one was liked by more than 80% of women, while 38 were universally disliked. These statistics highlight the competitive nature of app-based dating, with women often feeling overwhelmed by message volume (54%), while men report frustration from receiving few responses (64%).

So how do you fight against an artificial intelligence that puts you at a disadvantage in the dating market? You use AI, of course. Tools like Rizz AI and Wing GPT help craft profiles and provide conversation tips. For example, Rizz AI is a chatbot that generates conversation starters and witty replies.

Photo-analysis platforms like PhotoFeeler suggest improvements to profile pictures, boosting user engagement rates. However, these systems prioritize surface-level appeal, reinforcing beauty standards at the expense of authenticity.

The line between trust in humans and reliance on technology is increasingly blurred, especially as dating and intimacy evolve into processes mediated by digital tools. With online dating becoming more unpredictable and concerns about safety growing in the wake of movements like #MeToo, technology has stepped in to provide checks and balances.

One notable area is consent, where apps like We-Consent and LegalFling offer clear, timestamped records of agreements, securely stored on blockchain. 

Did both parties consent? With these tools, there is now a timestamped record. But while they simplify the logistics of consent, they leave little room for the emotional complexity that often accompanies these situations.
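Under the hood, a timestamped agreement record can be as simple as hashing the terms together with a clock reading so that any later tampering is detectable. This is only a sketch of the general idea, not how We-Consent or LegalFling actually implement their products:

```python
import hashlib, json, time

def record_agreement(party_a, party_b, terms):
    """Create a tamper-evident, timestamped record of an agreement."""
    entry = {
        "parties": sorted([party_a, party_b]),
        "terms": terms,
        "timestamp": int(time.time()),  # seconds since the Unix epoch
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["digest"] = hashlib.sha256(payload).hexdigest()
    # In a blockchain-backed app, the digest is what gets anchored on-chain;
    # changing a single character of the terms changes the digest entirely.
    return entry

record = record_agreement("alice", "bob", "terms v1")
print(record["timestamp"], record["digest"][:16], "...")
```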

Swiping apps and algorithmic matchmaking have left many feeling overwhelmed, uncertain, and even distrustful. Concerns about rejection, compatibility, and navigating the nuances of communication have led to a growing demand for tools that address these anxieties directly.

The anxiety extends beyond the initial stages of dating. Maintaining communication in a relationship can also be daunting, leading couples to turn to apps like Maia, which provides voice-guided emotional check-ins, offering real-time support during tense moments.

Then there are apps like Smitten that incorporate mini-games like “Lie Detector” or compatibility quizzes to break the ice and create memorable interactions. These playful elements mirror trends in broader tech—like how Duolingo gamifies language learning—and can make dating feel approachable.

Much like Spotify curates playlists based on your listening patterns, dating apps analyze your preferences, from swiping habits to messaging behavior, to refine their suggestions over time.

However, just as Spotify occasionally suggests a song that doesn’t resonate, dating algorithms can misfire, presenting matches that feel disconnected or derivative.

In Hang the DJ, AI takes the concept of algorithmic matchmaking to an extreme. Our surrendering to algorithms reflects the growing trust—and trepidation—we place in technology to shape deeply personal experiences. Because of AI’s relentless ability to learn and curate, we may indeed find ourselves echoing the sentiment: Hang the DJ, for the algorithm knows better than we do, and will no longer take requests.

Expiration Date

Because every relationship in Hang the DJ comes with a set expiration date, instead of living in the moment, the characters are often consumed by the knowledge of how and when it will end. For Frank and Amy, this creates vastly different but equally isolating experiences.

Frank endures a long-term relationship that feels like a prison sentence, with no connection or joy to sustain it. Meanwhile, Amy is caught in a revolving door of short-lived partnerships. By imposing strict limits, the system denies its participants the ability to fully engage, leaving them waiting—not for love, but for the clock to run out.

This theme mirrors modern dating dynamics, particularly the incorporation of time-sensitive features in dating apps. For instance, apps like Happn, Hinge, and Tinder employ mechanisms such as expiring matches, boosts, or time-sensitive notifications to create urgency. 

Happn’s location-based model even introduces real-world encounters into the mix, encouraging users to act swiftly before potential connections vanish. Similarly, Tinder’s “Boost” feature amplifies a profile’s visibility for a limited window, leveraging scarcity to drive engagement. Additionally, eHarmony introduced an AI-driven feature that suggested optimal times for users to communicate.

These tools aren’t implementing anything innovative per se; after all, human behavior has always been influenced by deadlines. For example, studies show that time constraints in speed dating foster initial attraction by prioritizing first impressions.

Although they are manufactured for drama, reality shows like Married at First Sight and Love Is Blind serve as interesting samples of these experiments, testing the concept of expedited relationships. However, success rates vary.

Across 17 completed seasons of Married at First Sight, 69 couples have been matched. On “Decision Day,” 38 couples (55%) agreed to stay married. However, over two-thirds of those couples later divorced, filed for divorce, or publicly announced their separation. By August 2024, only 11 couples remained married, resulting in a long-term success rate of 15.9%.
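Those percentages check out with simple division:

```python
couples        = 69   # matched across 17 completed seasons
stayed_married = 38   # said yes on Decision Day
still_married  = 11   # as of August 2024

print(f"Decision Day success: {stayed_married / couples:.1%}")  # 55.1%
print(f"Long-term success:    {still_married / couples:.1%}")   # 15.9%
```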

The “seven-year itch,” backed by U.S. Census Bureau data, highlights that marital dissatisfaction peaks around the eight-year mark. About half of all first marriages end in divorce, and roughly 46% of marriages don’t last 25 years. On average, couples who divorce separate after seven years of marriage and finalize the divorce about a year later. For those who remarry, it typically happens around four years after their previous marriage ends.

During the COVID-19 pandemic, divorce rates spiked as couples grappled with the challenges of extended time together. In early 2020, divorce consultations increased by 50%, underscoring how prolonged proximity and external pressures can escalate conflicts and make relationships feel stifling.

Interestingly, studies on short-term sexual relationships suggest the awareness of a time limit reduces emotional attachment but can intensify physical intimacy. A survey by SELF magazine asked over 2,000 single women aged 18 to 64 about their experiences with casual sex. The results showed that 82% had at least one casual encounter, and only 19% expressed regret about it.

Modern relationships are often shaped—and strained—by invisible deadlines. These pressures, whether from dating apps, cultural milestones, or societal expectations to marry by a certain age, intensify the tension between savoring the present and bracing for the end.

Such time-bound systems can guide us toward action or trap us in hurried choices that lead to regret. Dating apps, for instance, don’t just facilitate connection—they frame it, shaping how and when we fall into or out of sync with others. Meanwhile, the fear of impermanence and unmet milestones feeds a cycle where love and time feel forever at odds.

Dangerous Devotion

In Hang the DJ, the matchmaking System promises a 99.8% success rate.

As other couples leave the System in blissful unions, the contrast deepens Frank and Amy’s growing skepticism about the algorithm’s efficacy. Their shared frustrations eventually lead them to rebel against the rigid rules, culminating in their decision to challenge the System’s authority and flee. Perhaps that rebellion was itself the final test, designed to demonstrate their compatibility.

In modern relationships, we are often encouraged to surrender to a process—whether guided by a system, a coach, or a higher power. Before making a vow in marriage, we first commit to the process itself. However, this openness also exposes us to risks, making us susceptible to bad actors who may exploit our trust, accumulate power, and cause harm.

Among the most notable relationship coaches and frameworks is the Gottman Method, developed by Drs. John and Julie Gottman. This method emphasizes communication, conflict resolution, and building trust through the “Sound Relationship House,” which consists of seven levels:

  • Building love maps (understanding each other deeply)
  • Sharing fondness and admiration
  • Turning toward each other for support
  • Maintaining a positive perspective
  • Managing conflict constructively
  • Making life dreams come true
  • Creating shared meaning through rituals and goals

Contrasting this research-backed methodology are controversial figures like Andrew Tate and Karla Elia. Tate’s teachings promote hyper-masculinity and dominance, and are often criticized as toxic and harmful, while Elia’s advice on TikTok advocates for transactional relationships that prioritize financial support over emotional connection, encouraging followers to state their material wants on the first date. The rise of these figures is partly fueled by algorithms on platforms like TikTok and YouTube, which favor engagement over content quality.

Cults like NXIVM and OneTaste exploit these same vulnerabilities under the guise of empowerment. NXIVM’s promises of self-improvement concealed abusive practices, while OneTaste’s focus on “orgasmic meditation” led to allegations of manipulation and exploitation. 

Similarly, the Twin Flames Universe preyed on its followers’ desire for love, encouraging obsessive behaviors in pursuit of “destined soulmates.” These examples underscore how systems of control can distort genuine emotional connections, much like the matchmaking System in Hang the DJ.

When Frank and Amy are given a second chance at romance, they decide to avoid looking at the expiration date, allowing their relationship to flourish organically.

However, Frank, consumed by curiosity and doubt, breaks the promise. In doing so, he alters their timeline, turning what might have been a chance for something meaningful into a doomed, shortened experience.

Technology increasingly governs how people commit to a process or a higher power, reinforcing accountability through data and automation. However, this reliance on technology often creates pressure to maintain consistency, with lapses leading to feelings of neglect or failure.

This episode paints a picture of love reduced to data points. In the real world, dating apps already deploy algorithms to analyze preferences, calculate compatibility, and influence decisions. Innovations like simulations, gamified matchmaking, and AI companions hint at a future where love feels both eerily orchestrated and profoundly uncertain. Yet, unlike the utopian undertones of Hang the DJ, where rebellion against the system sparks genuine connection, real-life algorithms often lack the nuance to capture human complexity.

As we inch closer to that future, the question lingers: will these tools guide us toward deeper intimacy or imprison us in an endless loop of swipes and time limits? But perhaps, as the episode reminds us, defying the rules and trusting our humanity may still lead us to our most meaningful connections.

Crocodile: Black Mirror, Can It Happen?

Before we talk about the events in Crocodile, let’s flash back to when this episode was first released: December 29, 2017.

On October 1, 2017, avid gambler Stephen Paddock opened fire from his room at the Mandalay Bay casino, killing 60 people and injuring over 400 concertgoers, making the Las Vegas shooting the deadliest mass shooting by a lone gunman in modern U.S. history. Although his motives remain unknown, eyewitness accounts and hotel surveillance footage played key roles in reconstructing events and tracking Paddock’s actions.

In February 2017, the Delphi Murders shocked Indiana when two teenage girls, Abigail Williams and Liberty German, were found dead following a hike. Liberty had managed to capture a photograph and audio recording of a man they encountered on the trail just before the tragedy, leaving behind crucial evidence that became central to the investigation.

One notable case involving the importance of witness testimony and technology was the 2016 Philando Castile Shooting, which gained national attention when Castile’s girlfriend, Diamond Reynolds, livestreamed the aftermath of the shooting on Facebook. Her video testimony went viral, contributing to the debate about police brutality and racial profiling. While Officer Jeronimo Yanez was ultimately acquitted, the case illustrates how digital witnesses can influence public discourse and investigations.

And that’s what brings us to this episode of Black Mirror, Episode 3 of Season 4: Crocodile.

According to critics and the creator of Black Mirror, Charlie Brooker — the title holds two significant meanings. Originally, the episode’s concept revolved around a virtual safari, where some passengers experience a serene ride while others are attacked by a crocodile, leaving them traumatized. This reflects how different people carry their past experiences through life, even though they are going through seemingly similar events. 

The title also refers to “crocodile tears,” symbolizing feigned remorse while continuing on a destructive path. This duality captures the episode’s central theme of guilt and deceit, where technology and memory tracking uncover hidden truths, showcasing the devastating consequences of evading accountability.

In this video, we’ll explore three key themes from Crocodile and examine whether similar events have happened—and if they haven’t, whether or not they are still plausible. Let’s go! 

The Illusion of Escape

In “Crocodile,” the episode opens with Mia and her boyfriend Rob navigating the aftermath of a tragic accident. What begins as a night of reckless fun turns into a nightmare when they accidentally kill a cyclist. Panicked and desperate to avoid prison, they make a chilling decision—to hide the body and move on with their lives.

Years later, Mia has built a successful career and family, but the weight of guilt lingers just beneath the surface. When Rob reappears, intent on confessing to clear his conscience, Mia’s instinct for self-preservation takes over, leading her down a darker path. The illusion of escape, so carefully constructed through denial and deceit, begins to unravel as Mia resorts to increasingly desperate measures to cover her tracks.

According to the National Highway Traffic Safety Administration, from 2017 to 2021, an average of 883 cyclists per year were killed in police-reported traffic crashes in the U.S. 

The conversation around a tragic cycling accident immediately brings to mind the deaths of Columbus Blue Jackets player Johnny Gaudreau and his brother Matthew, who were struck by a drunk driver, 43-year-old Sean Higgins, on August 29, 2024. According to records obtained by NBC Philadelphia, Higgins had a lengthy history of unsafe driving.

While culprits like Higgins stayed at the scene of the crime, many don’t, forcing investigators into lengthy efforts to find suspects and bring them to prosecution.

Hit-and-run incidents significantly contribute to fatalities among vulnerable road users. In 2021, 23% of cyclist deaths involved a hit-and-run driver. Pedestrians are even more at risk, at 62%.

In February 2021, Robert Maraj, the father of rapper Nicki Minaj, was killed in a hit-and-run accident in Long Island, New York. The driver, Charles Polevich, fled the scene. In an attempt to evade responsibility, Polevich hid his car in his garage. Despite his efforts, police were able to track him down using surveillance footage and he was arrested and later pleaded guilty to charges related to leaving the scene of a fatal accident.

According to numerous studies, it is estimated that only 8-10% of hit-and-run cases are solved, even as the number of hit-and-runs in the U.S. has increased by 7.2% annually since 2009.

The Vorayuth Yoovidhya hit-and-run case gained widespread attention in Thailand in 2012 when Yoovidhya, heir to the Red Bull fortune, fatally struck a police officer with his Ferrari.

After fleeing the scene, he avoided prosecution for years, fueling public outrage over his wealth and privilege. The case was reopened in 2020, leading to an eventual arrest warrant. In April 2022, Yoovidhya was reportedly apprehended, underscoring how wealth and influence can delay but not necessarily prevent accountability.

Yes, while the wealthy and powerful can use their status to evade justice, what about those less fortunate? They must act quickly, devise elaborate plans to outsmart technology tracking them, and weave intricate lies without becoming ensnared in their own deception.

In 2018, Chris Watts murdered his pregnant wife, Shanann, and their two daughters in Colorado. Initially, he reported them missing and made public pleas for their return. However, inconsistencies in his story led investigators to suspect foul play. Watts eventually confessed to the murders and was sentenced to life in prison without parole. 

Similarly, Jodi Arias was convicted of murdering her ex-boyfriend, Travis Alexander, in 2008. Arias initially denied involvement, then claimed self-defense after photos and DNA evidence placed her at the scene. Despite her manipulation of the narrative, she was convicted of first-degree murder in 2013.

Although it might seem impossible for anyone to evade the law after a crime as gruesome as murder, the FBI’s 2017 Uniform Crime Reporting (UCR) data shows that only approximately 62% of homicides in the U.S. are solved. That means 38% of cases remain unsolved, though advances in DNA and forensic technology can still lead to convictions years later.

The Golden State Killer case, which had been cold for over 40 years, was finally solved in 2018 with the arrest of Joseph James DeAngelo. Between 1974 and 1986, DeAngelo committed at least 13 murders, 50 rapes, and over 100 burglaries across California. 

In 2018, investigators uploaded DNA from the Golden State Killer’s crime scenes to GEDmatch, a public genetic database used by individuals seeking to trace their ancestry.

Using this database, authorities were able to identify distant relatives of the killer. By building a family tree and cross-referencing with other details (such as locations where crimes occurred), they eventually narrowed the search down to Joseph DeAngelo.

His arrest was a landmark moment in forensic science, demonstrating how advancements in DNA technology can solve even the longest-standing cases. DeAngelo later pleaded guilty and was sentenced to life in prison.
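The matching step that started it all can be sketched in miniature: genealogy databases rank candidate relatives by total shared DNA, measured in centimorgans (cM). The bands below are rough genealogical rules of thumb, not GEDmatch’s actual thresholds:

```python
# Candidate relatives ranked by total shared DNA, in centimorgans (cM).
# These bands are rough rules of thumb; real ranges overlap considerably.

RELATIONSHIP_BANDS = [  # (minimum shared cM, likely relationship)
    (3300, "parent/child"),
    (1900, "full sibling"),
    (550,  "first cousin range"),
    (150,  "second cousin range"),
    (40,   "third cousin range"),
    (0,    "distant or no meaningful match"),
]

def likely_relationship(shared_cm):
    for threshold, label in RELATIONSHIP_BANDS:
        if shared_cm >= threshold:
            return label

# Investigators start from hits like these, then build the family
# tree outward by hand using public records.
for cm in (3400, 850, 95):
    print(cm, "cM ->", likely_relationship(cm))
```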

Just like in Crocodile, where Mia’s actions lead to more crimes in an attempt to cover up the initial one, real-world cases show that the more someone tries to escape responsibility, the more entangled they become. Each new lie or action increases the risk of leaving behind evidence.

Yes, while there may be a 40% chance of getting away with murder, a more precise way to frame this is that there’s a 40% chance of getting away with it today. Advances in forensic science, like DNA technology and digital surveillance, continuously reduce the window of opportunity for criminals to evade justice, meaning that over time, the likelihood of getting caught increases significantly.

Layers of Investigation

In Crocodile, Shazia, an insurance investigator, is on a mission to establish who’s responsible for an accident involving a man and a pizza vending machine. Using the “Recaller” device, which retrieves memories from witnesses, she goes deeper into their recollections, unearthing details about the seemingly minor incident. 

Much like Shazia peeling back layers of memory, real-world authorities use a range of advanced technologies to catch suspects trying to evade justice.

The first is surveillance footage from CCTV cameras, especially in urban areas, highways, and near businesses. This tool is critical in capturing vehicles or individuals fleeing crime scenes.

After the twin bombings during the Boston Marathon on April 15, 2013, authorities combed through hours of footage from cameras near the race’s finish line. A breakthrough came when two brothers were spotted placing backpacks at the scene just before the explosions. The FBI released images of the suspects to the public, which helped confirm their identities.

According to the Insurance Information Institute (III), dashcams provide clear, indisputable evidence, helping to resolve conflicts quickly. In Russia, where fraudulent claims are prevalent, dashcam use is widespread, reducing fraud by over 50%.

For example, some scammers deliberately throw themselves onto car hoods or cause rear-end collisions, hoping to extort money from the driver or win a fraudulent insurance claim. Dashcam footage serves as critical proof to defend against such scams.

Installed on police vehicles or fixed locations such as traffic lights or toll booths, Automatic License Plate Readers (ALPRs) are a powerful tool for law enforcement, allowing them to scan and record the license plates of passing vehicles. 

A routine stop at a gas station in Indianapolis quickly escalated into a frantic hunt when a car thief sped off with a six-month-old baby still in the back seat. As panic set in, law enforcement scrambled to track the stolen vehicle. Using automatic license plate readers (ALPRs), officers were able to trace the car’s movements across the city. Hours later, the vehicle was found abandoned, and to everyone’s relief, the baby was safely reunited with the family, unharmed.

Law enforcement agencies frequently rely on cell phone data and GPS tracking to pinpoint the whereabouts of suspects and connect them to crime scenes. Phone records provide critical timestamps, while GPS tracking logs exact locations, creating a digital trail that’s nearly impossible to erase.

The case of Timothy Carpenter centers on a series of armed robberies that took place in Michigan between 2010 and 2011. Carpenter was convicted largely based on cell tower location data, which tracked his movements and placed him near the scenes of the crimes. This data was obtained by law enforcement without a warrant, and in Carpenter v. United States (2018), the Supreme Court ruled that accessing historical cell-site location records generally requires one, a landmark decision for privacy rights and the Fourth Amendment.

Not only can law enforcement scan your license plate or track your cell phone signals, they have the capability to recognize your face. In China, facial recognition technology has become widespread and integrated into daily life, making it a critical tool for catching criminals. 

A famous case occurred in 2018 when a man wanted for economic crimes, identified as Mr. Ao, was caught at a Jacky Cheung concert attended by over 60,000 people in Nanchang. 

Facial recognition cameras at the event identified him as a suspect as he was entering the stadium, leading to his immediate arrest by local police. The use of this technology in public spaces, combined with China’s vast network of surveillance cameras, has enabled authorities to catch fugitives even in large crowds.
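How does a camera pick one face out of 60,000? Modern face recognition doesn’t compare raw photos: a neural network first converts each face into an embedding, a vector of numbers, and two faces “match” when their vectors point in nearly the same direction. Here is a toy version of that comparison step; the embeddings and threshold are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """How closely two embedding vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.8  # tuned per system; arbitrary here

watchlist_face = [0.12, -0.44, 0.81, 0.05]  # stored suspect embedding (made up)
camera_face    = [0.10, -0.41, 0.85, 0.02]  # embedding from a live frame (made up)

score = cosine_similarity(watchlist_face, camera_face)
print(f"similarity = {score:.3f}",
      "-> ALERT" if score > MATCH_THRESHOLD else "-> no match")
```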

The list of tools available to investigators is growing, and many of them weren’t even originally designed for law enforcement. These are platforms already in use and accessible to the public.

In 2009, a Canadian woman named Nathalie Blanchard was on long-term disability leave due to depression, which her insurance company was covering. However, when she posted pictures of herself vacationing in sunny destinations, attending parties, and engaging in leisure activities on her Facebook profile, her insurer became suspicious.

Her insurance provider, Manulife, investigated her claim and subsequently cut off her benefits, citing her social media posts as evidence that her disability was not as severe as claimed. Blanchard sued, stating that these activities were part of her doctor’s advice to improve her mental health. But the case showed how insurers will use every means in their arsenal to investigate suspected fraud.

Alibis crumble, liability expands, and the more layers an investigation uncovers, the harder it becomes for criminals to evade justice. Whether through digital records, forensic analysis, or social media investigations, law enforcement agencies are using every technique available to identify, locate, and apprehend suspects.

The Witness Effect

When Shazia uses the “Recaller” on Mia, her past crimes come dangerously close to being exposed. In a desperate bid to silence anyone who could implicate her, Mia kills the investigator, then Shazia’s husband and infant child. However, her downfall comes when she overlooks Codger, the family guinea pig, whose memories are later harvested by authorities to uncover the truth.

The Recaller brings to mind a real machine used during investigations: the polygraph, better known as the lie detector. Invented in the 1920s, the polygraph has been a staple of modern investigations and has played pivotal roles in television crime shows.

But unlike the “Recaller”, polygraphs are unreliable because they measure physiological responses like heart rate and perspiration, which can be triggered by emotions such as anxiety rather than deception. This leads to false positives, where truthful individuals are flagged as deceptive, and false negatives, where liars go undetected. Courts often exclude polygraph evidence due to these issues.

Much like polygraphs, photographic memory, aka “eidetic memory,” is a controversial concept. While some people claim to have the ability to recall images, sounds, or objects in great detail after only brief exposure, scientific evidence supporting the existence of true photographic memory is limited.

Most researchers agree that while some individuals may have exceptional memory skills, they don’t possess a literal photographic memory. Many people who claim or appear to have “photographic memory” usually focus on specific areas they’ve practiced or are interested in, like detailed visual scenes, numbers, or structured information like music or maps.

One well-known person who claims to have photographic memory is Stephen Wiltshire, a British architectural artist. Wiltshire, who is on the autism spectrum, demonstrates his ability by memorizing vast cityscapes after brief observations, then accurately reproducing them in intricate detail. 

In a famous example, he viewed the skyline of Tokyo from a helicopter for a short period and then created an enormous, precise drawing of the entire landscape on a large canvas without further references. 

In Crocodile, we see Shazia opening a bottle of beer and playing background music during her interview to help activate the witness’s sensory recall and jog their memories. While this tactic may seem odd, there is ample evidence of real investigators using this approach, known in cognitive-interview research as context reinstatement.

This approach is effective because sensory experiences often evoke emotions. A song might remind you of a significant life event, such as a first dance or a breakup, because it carries emotional weight, making the memory more vivid.

The Hillsborough disaster occurred on April 15, 1989, during an FA Cup semi-final match between Liverpool and Nottingham Forest at Hillsborough Stadium in Sheffield, England. A crush in the overcrowded standing pens resulted in the tragic deaths of 97 people, with hundreds more injured. This disaster, caused by poor crowd control and inadequate safety measures, became one of the worst stadium tragedies in British history. 

During a re-investigation years later, investigators employed sensory recall techniques to help survivors and witnesses retrieve memories of that day. Survivors were encouraged to focus on sensory details like sounds, smells, and specific visual imagery, helping clarify the chaotic events. For instance, auditory triggers such as the crowd noise or the sound of the stadium were used to aid witnesses in piecing together a timeline of the disaster. 

If we think of ourselves as walking, talking cameras, with memories as data stored in a personal database, we might seem like surveillance devices open to unrestricted access by authorities. Although we’re not machines (yet), we carry multiple recording devices wherever we go, and legal precedents for accessing this personal data are already beginning to emerge.

In a high-profile case involving the FBI and Apple in 2016, the FBI sought access to the encrypted data on the iPhone of Syed Farook, one of the San Bernardino shooters. 

Without Apple’s assistance, the FBI faced difficulties in bypassing its security features, including a setting that would erase the phone’s data after too many incorrect password attempts.

Apple refused to create a backdoor or unlock the phone, arguing that it would compromise the security of all iPhone users, creating a precedent for future cases and potentially weakening encryption standards worldwide. 

While our memories can never be fully reliable, we may all soon be equipped with a little dash cam of our own, such as the Meta Ray-Ban smart glasses. And what happens then?

In 2013, a bystander wearing Google Glass was able to record part of a fight in New Jersey. The video, though not high quality, provided crucial evidence in the case, demonstrating the potential future use of wearable technology. Although Google Glass never became widely adopted, this case highlighted the possibilities of using real-time recording devices to assist in investigations.

While mind-reading devices may be a long way in the future, modern technology—such as surveillance cameras, digital footprints, and increasingly sophisticated forensic tools—has made it nearly impossible for criminals to evade detection. The presence of witnesses, be they human or technological, often plays a critical role in uncovering the truth. 

Crocodile warns us that each layer of investigation can cut through even the most elaborate cover-ups. One might feel they’ve escaped, yet every step adds another thread to their web of lies. As each layer is peeled back, small traces—the faintest breadcrumbs—are left behind, drawing closer to the truth and the eventual unraveling of their deception.

Would You Want to Live in a Utopia?

In Arthur C. Clarke’s Childhood’s End [Amazon], alien Overlords take control of Earth, bringing these utopian qualities to humanity:

  • Robot factories automate the production of goods, and industries and commerce have changed forever.
  • People no longer need to work, and they do so only for desired luxuries.
  • Religion is unnecessary and science is a waste of time.
  • Old creative works still exist for enjoyment, but there isn’t any new material.
  • A person can travel to the other side of the world in 24 hours, and everyone on Earth can now speak and read English.
  • There is surveillance everywhere and crimes have become needless and impossible.
  • Despite there always being a television set nearby, a utopia retains a familiar struggle: boredom.

What do you think? Still want to live in a utopia?
