Joan is Awful: Black Mirror, Can It Happen?

Before we dive into the events of Joan Is Awful, let’s flash back to when this episode first aired: June 15, 2023.

In 2023, the tech industry faced a wave of major layoffs. Meta cut 10,000 employees and closed 5,000 open positions in March. Amazon followed, letting go of 9,000 workers that same month. Microsoft reduced its workforce by 10,000 employees in early 2023, while Google announced its own significant layoffs, contributing to a broader trend of instability in even the largest, most influential tech companies.

Netflix released Depp v. Heard in 2023. This three-part documentary captures the defamation trial between Johnny Depp and Amber Heard. The series explored the viral spectacle that surrounded it online, showing how social media, memes, and influencer commentary amplified every moment. 

Meanwhile, incidents of deepfakes surged dramatically. In North America alone, AI-generated videos and audio clips increased tenfold in 2023 compared to the previous year, with a 1,740% spike in malicious use.

In early 2023, a video began circulating on YouTube and across social media that seemed to show Elon Musk in a CNBC interview. The Tesla CEO appeared calm and confident as he promoted a new cryptocurrency opportunity. It looked authentic enough to fool thousands. But the entire thing was fake.

That same year, the legal system began to catch up. An Australian man named Anthony Rotondo was charged with creating and distributing non-consensual deepfake images on a now-defunct website called Mr. Deepfakes. In 2025, he admitted to the offense and was fined $343,500.

Around the world, banks and cybersecurity experts raised alarms as AI manipulation began to breach biometric systems, leading to a new wave of financial fraud. What started as a novelty filter had become a weapon capable of stealing faces, voices, and identities.

All of this brings us to Black Mirror—Season 6, Episode 1: Joan Is Awful.

The episode explores the collision of personal privacy, corporate control, and digital replication. Joan’s life is copied, manipulated, and broadcast for entertainment before she even has a chance to tell her own story. The episode asks: How much of your identity is still yours when technology can exploit and monetize it? And is it even possible to reclaim control once the algorithm has taken over?

In this video, we’ll unpack the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether they are still plausible in our tech-driven, AI-permeated world. 

Streaming Our Shame

In Joan Is Awful, we follow Joan, an everyday woman whose life unravels after a streaming platform launches a show that dramatizes her every move. But the show’s algorithm doesn’t just imitate Joan’s life; it distorts it for entertainment. Her friends and coworkers watch the exaggerated version of her and start believing it’s real.

The idea that media can reshape someone’s identity isn’t new—it’s been happening for years, only now with AI, it happens faster, cheaper, and more convincingly.

Reality television has long operated in this blurred zone between truth and manipulation. Contestants on shows like The Bachelor and Survivor have accused producers of using editing tricks to create villains and scandals that never actually happened.

One of the most striking examples comes from The Bachelor contestant Victoria Larson, who accused producers of “Frankenbiting,” a technique of splicing together pieces of dialogue from different moments to make her appear to be spreading rumors or being manipulative. She said the selective editing destroyed her reputation and derailed her career.

Then there’s the speed of public judgment in the age of social media. In 2020, when Amy Cooper—later dubbed “Central Park Karen”—called the police on a Black bird-watcher, the footage went viral within hours. She was fired, denounced, and doxxed almost overnight.

But Joan Is Awful also goes deeper, showing how even our most intimate spaces are no longer private.

In 2020, hackers breached Vastaamo, a Finnish psychotherapy service, stealing hundreds of patient files—including therapy notes—and blackmailing both the company and individuals. Finnish authorities eventually caught the hacker, who was sentenced in 2024 for blackmail and unauthorized data breaches.

In this episode, Streamberry’s AI show thrives on a simple principle: outrage. It turns Joan’s humiliation into the audience’s entertainment. The more uncomfortable she becomes, the more viewers tune in. It’s not far from reality.

A 2025 study published in ProMarket found that toxic content drives higher engagement on social media platforms. When users were shielded from negative or hostile posts, they spent 9% less time per day on Facebook, resulting in fewer ads and interactions.

By 2025, over 52% of TikTok videos featured some form of AI generation—synthetic voices, avatars, or deepfake filters. These “AI slop” clips fill feeds with distorted versions of real people, transforming private lives into shareable, monetized outrage.

Joan Is Awful magnifies a reality we already live in. Our online world thrives on manipulation—of emotion, of data, of identity—and we’ve signed the release form without even noticing.

Agreeing Away Your Identity

One of the episode’s most painful scenes comes when Joan meets with her lawyer, asking if there’s any legal way to stop the company from using her life as entertainment. But the lawyer points to the fine print—pages of complex legal language Joan had accepted without a second thought. 

The moment is both absurd and shockingly real. How many times have you clicked “I agree” without reading a word?

In the real world, most of us do exactly what Joan did. A 2017 Deloitte survey conducted in the U.S. found that over 90% of users accept terms and conditions without reading them. Platforms can then use that data for marketing, AI training, or even creative content—all perfectly legal because we “consented.”

The dangers of hidden clauses extend far beyond digital services. In 2024, Disney attempted to invoke a controversial contract clause to avoid liability for a tragic allergic reaction that had led to a woman’s death at a Disney World restaurant in Florida the previous year. The company argued that her husband couldn’t sue for wrongful death because—years earlier—he had agreed to arbitration and legal waivers buried in the fine print of a free Disney+ trial.

Critics called the move outrageous, pointing out that Disney was trying to apply streaming service terms to a completely unrelated event. The case exposed how corporations can weaponize routine user agreements to sidestep accountability.

The episode also echoes recent events where real people’s stories have been taken and repackaged for profit.

Take Elizabeth Holmes, the disgraced founder of Theranos. Within months of her trial, her life was dramatized into The Dropout. The Hulu mini-series was produced in real time alongside Holmes’s ongoing trial. As new courtroom revelations surfaced, the writers revised the script. The result was a more layered, unsettling portrayal of Holmes and her business partner Sunny Balwani—a relationship far more complex and toxic than anyone initially imagined.

In Joan Is Awful, the show’s AI doesn’t care about Joan’s truth, and in our world, algorithms aren’t so different. Every click, every “I agree,” and every trending headline feeds an ecosystem that rewards speed over accuracy and spectacle over empathy.

When consent becomes a view or a checkbox and stories become assets, the line between living your life and licensing it starts to blur. And by the time we realize what we’ve signed away, it might already be too late.

Facing the Deepfake

In Joan Is Awful, the twist isn’t just that Joan’s life is being dramatized; it’s that everyone’s life is. What begins as a surreal violation spirals into an infinite mirror. Salma Hayek plays Joan in the Streamberry series, but then Cate Blanchett plays Salma Hayek in the next layer. 

The rise of AI and deepfake technology is reshaping how we understand identity and consent. Increasingly, people are discovering their faces, voices, or likenesses used in ads, films, or explicit content without permission.

In 2025, Brazilian police arrested four people for using deepfakes of celebrity Gisele Bündchen and others in fraudulent Instagram ads, scamming victims out of nearly $3.9 million USD. 

Governments worldwide are beginning to respond. Denmark’s copyright amendment now treats personal likeness as intellectual property, allowing takedown requests and platform fines even posthumously. In the U.S., the 2025 TAKE IT DOWN Act criminalizes non-consensual AI-generated sexual imagery and impersonation.

In May 2025, Mr. Deepfakes, one of the world’s largest deepfake pornography websites, permanently shut down after a core service provider terminated operations. The platform had been online since 2018 and hosted more than 43,000 AI-generated sexual videos, viewed over 1.5 billion times. Roughly 95% of targets were celebrity women, but researchers identified hundreds of victims who were private individuals.​

Despite these legal advances, a fundamental gray area remains. As AI becomes increasingly sophisticated, it is getting harder to tell whether content is drawn from a real person or entirely fabricated. 

An example is Tilly Norwood, an AI-generated actress created by Xicoia. In September 2025, Norwood’s signing with a talent agency sparked major controversy in Hollywood. 

Her lifelike digital persona was built using the performances of real actors—without their consent. The event marked a troubling shift as producers continue to push AI-generated actors into mainstream projects.

Actress Whoopi Goldberg voiced her concern, saying, “The problem with this, in my humble opinion, is that you’re up against something that’s been generated with 5,000 other actors.”

“It’s a little bit of an unfair advantage,” she added. “But you know what? Bring it on. Because you can always tell them from us.”

In response to the backlash, Tilly’s creator Eline Van der Velden shared a statement:
“To those who have expressed anger over the creation of our AI character, Tilly Norwood: she is not a replacement for a human being, but a creative work – a piece of art.”

When Joan and Salma Hayek sneak into the Streamberry headquarters, they overhear Mona Javadi, the executive behind the series, explaining the operation. She reveals that every version of Joan Is Awful is generated simultaneously by a quantum computer, endlessly creating new versions of real people’s lives for entertainment. Each “Joan,” “Salma,” and “Cate” is a copy of a copy—an infinite simulation. And it’s not just Joan; the system runs on an entire catalog of ordinary people. Suddenly, the scale of this entertainment becomes clear—it’s not just wide, it’s deep, with endless iterations and consequences.

At the 2025 Runway AI Film Festival, the winning film Total Pixel Space exemplified how filmmakers are beginning to embrace these multiverse-like AI frameworks. Rather than following a single script, the AI engine dynamically generated visual and narrative elements across multiple variations of the same storyline, creating different viewer experiences each time.

AI and deepfake technologies are already capable of realistically replicating faces, voices, and mannerisms, and platforms collect vast amounts of personal data from our everyday lives. Add quantum computing, algorithmic storytelling, and the legal gray areas surrounding consent and likeness, and the episode’s vision of lives being rewritten for entertainment starts to feel less like fantasy.

Every post, every photo, every digital footprint feeds algorithms that could one day rewrite our lives—or maybe already are. Maybe we can slip the loop, maybe we’re already in it, and maybe the trick is simply staying aware that everything we do is already being watched, whether by the eyes of the audience or the eyes of creators still seeking inspiration.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Black Museum: Black Mirror, Can It Happen?

Before we talk about Black Museum, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, the rise of dark tourism—traveling to sites tied to death, tragedy, or the macabre—became a notable cultural trend, with locations like Mexico’s Island of the Dolls and abandoned hospitals and prisons drawing attention. Chernobyl in particular saw a dramatic increase in tourists, with around 70,000 visitors in 2017, a sharp rise from just 15,000 in 2010. This influx of visitors contributed approximately $7 million to Ukraine’s economy.

Meanwhile, in 2017, the EV revolution was picking up speed. Tesla, once a trailblazer and now a company run by a power-hungry maniac, launched the more affordable Model 3.

2017 also marked a legal dispute between Hologram USA and Whitney Houston’s estate. The planned hologram tour, aimed at digitally resurrecting the iconic singer for live performances, led to legal battles over the hologram’s quality. Despite the challenges, the project was eventually revived, premiering as An Evening with Whitney: The Whitney Houston Hologram Tour in 2020.

At the same time, Chicago’s use of AI and surveillance technologies, specifically through the Strategic Subject List (SSL) predictive policing program, sparked widespread controversy. The program used historical crime data to predict violent crimes and identify high-risk individuals, but it raised significant concerns about racial bias and privacy.

And that brings us to this episode of Black Mirror, Episode 6 of Season 4: Black Museum. Inspired by Penn Jillette’s story The Pain Addict, which grew out of the magician’s own experience in a Spanish welfare hospital, the episode delves into a twisted reality where technology allows doctors to feel their patients’ pain.

Set in a disturbing museum, this episode confronts us with pressing questions: When does the pursuit of knowledge become an addiction to suffering? What happens when we blur the line between human dignity and the technological advancements meant to heal? And what price do we pay when we try to bring people back from the dead?

In this video, we’ll explore the themes of Black Museum and examine whether these events have happened in the real world—and if not, whether they’re still plausible. Let’s go!

Pain for Pleasure

As Rolo Haynes guides Nish through the exhibits in the Black Museum, he begins with the story of Dr. Peter Dawson. Dawson, a physician, tested a neural implant designed to let him feel his patients’ pain, helping him understand their symptoms and provide a diagnosis. What started as a medical breakthrough quickly spiraled into an addiction.

Meanwhile, in the real world, scientists have been making their own leaps into the mysteries of the brain. In 2013, University of Washington researchers successfully connected the brains of two rats using implanted electrodes. One rat performed a task while its neural activity was recorded and transmitted to the second rat, influencing its behavior. Fast forward to 2019, when researchers linked three human brains using a brain-to-brain interface (BBI), allowing two participants to transmit instructions directly into a third person’s brain using magnetic stimulation—enabling them to collaborate on a video game without speaking.

Beyond mind control, neurotech has made it possible to simulate pain and pleasure without physical harm. Techniques like Transcranial Magnetic Stimulation (TMS) and Brain-Computer Interfaces (BCIs) let researchers manipulate neural activity for medical treatment.

AI is actively working to decode the complexities of the human brain. At Stanford, researchers have used fMRI data to identify distinct “pain signatures,” unique neural patterns that correlate with physical discomfort. This approach could provide a more objective measure of pain levels and potentially reduce reliance on self-reported symptoms, which can be subjective and inconsistent.

Much like Dr. Dawson’s neural implant aimed to bridge the gap between doctor and patient, modern AI researchers are developing ways to interpret and even visualize human thought. 

Of course, with all this innovation comes a darker side. 

In 2022, Neuralink, Elon Musk’s brain-implant company, came under federal investigation for potential violations of the Animal Welfare Act. Internal documents and employee interviews suggest that Musk’s demand for rapid progress led to botched experiments. As a result, many tests had to be repeated, increasing the number of animal deaths. Since 2018, an estimated 1,500 animals have been killed, including more than 280 sheep, pigs, and monkeys.

While no brain implant has caused a real-life murder addiction, electrical stimulation can alter brain function in unexpected ways. Deep brain stimulation for Parkinson’s has been linked to compulsive gambling and impulse control issues, while fMRI research helps uncover how opioid use reshapes the brain’s pleasure pathways. As AI enhances neuroanalysis, the risk of unintended consequences grows.

When Dr. Dawson pushed the limits and ended up experiencing a patient’s death, his neural implant was rewired in the process, blurring the line between pain and pleasure.

At present, there’s no known way to directly simulate physical death in the sense of replicating the actual biological process of dying without causing real harm. 

However, Shaun Gladwell, an Australian artist known for his innovative use of technology in art, has created a virtual reality death simulation. It is on display at the Melbourne Now event in Australia. The experience immerses users in the dying process—from cardiac failure to brain death—offering a glimpse into their final moments. By simulating death in a controlled virtual environment, the project aims to help participants confront their fears of the afterlife and better understand the emotional aspects of mortality. 

This episode of Black Mirror reminds us that the quest for understanding the mind might offer enlightenment, but it also carries the risk of unraveling the very fabric of what makes us human. 

In the end, the future may not lie in simply experiencing death, but in learning to live with the knowledge that we are always on the cusp of the unknown.

Backseat Driver

In the second part of Black Museum, Rolo recounts his involvement in a controversial experiment. After an accident, Rolo helped Jack transfer his comatose wife Carrie’s consciousness into his brain. This let Carrie feel what Jack felt and communicate with him. In essence, this kept Carrie alive. However, the arrangement caused strain—Jack struggled with the lack of privacy, while Carrie grew frustrated by her lack of control—ultimately putting the saying “’til death do you part” to the test.

The concept of embedding human consciousness into another medium remains the realm of fiction, but neurotechnology is inching closer to mind-machine integration. 

In 2016, Ian Burkhart, a 24-year-old quadriplegic patient, made history using the NeuroLife system. A microelectrode chip implanted in Burkhart’s brain allowed him to regain movement through sheer thought. Machine-learning algorithms decoded his brain signals, bypassing his injured spinal cord and transmitting commands to a specialized sleeve on his forearm—stimulating his muscles to control his arm, hand, and fingers. This allowed him to grasp objects and even play Guitar Hero.

Another leap in brain-tech comes from Synchron’s Stentrode, a device that bypasses traditional brain surgery by implanting through blood vessels. In 2021, Philip O’Keefe, living with ALS, became the first person to compose a tweet using only his mind. The message? A simple yet groundbreaking “Hello, World.” 

Imagine being able to say what’s on your mind—without saying a word. That’s exactly what Blink-To-Live makes possible. Designed for people with speech impairments, Blink-To-Live tracks eye movements via a phone camera to communicate over 60 commands using four gestures: Left, Right, Up, and Blink. The system translates these gestures into sentences displayed on the screen and read aloud.
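The math behind that vocabulary is easy to picture. Here’s a minimal Python sketch of the idea: with only four gestures, short sequences multiply quickly, so three gestures in a row already yield 64 distinct codes, enough to cover 60+ commands. The gesture names come from the system’s description; the three-gesture encoding and the phrase book below are illustrative assumptions, not Blink-To-Live’s actual scheme.

```python
from itertools import product

# The four eye gestures the system recognizes.
GESTURES = ["Left", "Right", "Up", "Blink"]

# Fixed-length sequences act as codes: three gestures in a row
# give 4**3 = 64 distinct combinations, enough for 60+ commands.
codes = list(product(GESTURES, repeat=3))
print(len(codes))  # 64

# Hypothetical phrase book mapping a few sequences to sentences.
phrase_book = {
    ("Blink", "Blink", "Blink"): "I need help",
    ("Left", "Right", "Up"): "I am thirsty",
    ("Up", "Up", "Blink"): "Please call the nurse",
}

def translate(sequence):
    """Return the sentence for a detected gesture sequence."""
    return phrase_book.get(tuple(sequence), "Unknown command")

print(translate(["Left", "Right", "Up"]))  # I am thirsty
```

The real system adds the hard part on top of this lookup: reliably detecting the gestures themselves from a phone camera.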

Technology is constantly evolving to give people with impairments the tools to live more independently, but relying on it too much can sometimes mean sacrificing privacy, autonomy, or even a sense of human connection.

When Jack met Emily, he was relieved to experience a sense of normalcy again. She was understanding at first, but everything changed when she learned about Carrie—the backseat driver and ex-lover living in Jack’s mind. Emily’s patience wore thin, and she insisted that Carrie be removed. Eventually, Rolo helped Jack find a solution by transferring Carrie’s consciousness into a toy monkey.

Initially, Jack’s son loved the monkey. But over time, the novelty faded. The monkey wasn’t really Carrie. She couldn’t hold real conversations anymore; she couldn’t express her thoughts beyond the toy’s two preset phrases. And so, like many toys, she was left forgotten.

This raises an intriguing question: Could consciousness, like Carrie’s, ever be transferred and preserved in an inanimate object? 

Dr. Ariel Zeleznikow-Johnston, a neuroscientist at Monash University, has an interesting theory. He believes that if we can fully map the human connectome—the complex network of neural connections—we might one day be able to preserve and even revive consciousness. His book, The Future Loves You, explores whether personal identity could be stored digitally, effectively challenging death itself. While current techniques can preserve brain tissue, the actual resurrection of consciousness remains speculative. 

So if you want to transfer a loved one’s consciousness into a toy monkey’s body, you’ll have to wait. The legal systems, however, are already grappling with these possibilities.

In 2017, the European Parliament debated granting “electronic personhood” to advanced AI, a move that could set a precedent for digital consciousness. Would an uploaded mind have rights? Could it be imprisoned? Deleted? As AI-driven personalities become more lifelike—whether in chatbots, digital clones, or neural interfaces—the debate over their status in society is only just beginning.

At this point, Carrie’s story is purely fictional. But if the line between human, machine, and cute little toy monkeys blurs further, we may need to redefine what it truly means to be alive.

Not Dead but Hardly Alive

In the third and final tale of Black Museum, Rolo Haynes transforms human suffering into a literal sideshow. His latest exhibit? A holographic re-creation of a convicted murderer, trapped in an endless loop of execution for paying visitors to experience. 

What starts as a morbid fascination quickly reveals the depths of Rolo’s cruelty—using digital resurrection not for justice, but for profit. 

The concept of resurrecting the dead in digital form is not so far-fetched. In 2020, the company StoryFile introduced interactive holograms of deceased individuals, allowing loved ones to engage with digital avatars capable of responding to questions with pre-recorded answers. This technology has been used to preserve the voices of Holocaust survivors, enabling them to share their stories with future generations.

But here’s the question: Who controls a person’s digital afterlife? And where do we draw the line between honoring the dead and commodifying them?

Hollywood has already ventured into the business of resurrecting the dead. After Carrie Fisher’s passing, Star Wars: The Rise of Skywalker repurposed unused footage and CGI to keep Princess Leia in the story. 

The show must go on, and many fans preferred not to see Carrie Fisher recast. But should production companies have control over an actor’s likeness after they’ve passed?

Some celebrities have taken preemptive legal action: Robin Williams restricted the use of his likeness for 25 years after his death. The line between tribute and exploitation has become increasingly thin. If a deceased person’s digital avatar can act, speak, or even endorse products, who decides what they would have wanted?

In the realm of intimacy, AI-driven experiences are reshaping relationships. Take Cybrothel, a Berlin brothel that markets AI-powered sex dolls capable of learning and adapting to user preferences. As AI entities simulate emotions, personalities, and desires, and as people form deep attachments to digital partners, our understanding of relationships and consent will shift significantly.

Humans often become slaves to their fetishes, driven by impulses that can lead them to make choices that harm both themselves and others. But what if the others are digital beings?

If digital consciousness can feel pain, can it also demand justice? If so, then Nish’s father wasn’t just a relic on display—he was trapped, suffering, a mind imprisoned in endless agony for the amusement of strangers. She couldn’t let it stand. Playing along until the perfect moment, she turned Rolo’s own twisted technology against him. In freeing her father’s hologram, she made sure Rolo’s cruelty ended with him.

The idea of AI having rights may sound like a distant concern, but real-world controversies suggest otherwise. 

In 2021, the documentary Roadrunner used AI to replicate Anthony Bourdain’s voice for quotes he never spoke aloud. Similarly, in 2020, Kanye West gifted Kim Kardashian a hologram of her late father Robert Kardashian. These two notable events sparked backlash over putting words into a deceased person’s mouth. 

While society has largely moved beyond public executions, technology is creating new avenues to fulfill human fantasies. AI, deepfake simulations, and VR experiences could bring execution-themed entertainment back in a digital form, forcing us to reconsider the ethics of virtual suffering.

As resurrected personalities and simulated consciousness become more advanced, we will inevitably face the question: Should these digital beings be treated with dignity? If a hologram can beg for mercy, if an AI can express fear, do we have a responsibility to listen?

While the events of Black Museum have not happened yet and may still be a long way off, the first steps toward that reality are already being taken. Advances in AI, neural mapping, and digital consciousness hint at a future where identities can be preserved, replicated, or even exploited beyond death. 

Perhaps that’s the real warning of Black Museum: even when the human body perishes, reducing the mind to data does not make it free. And if we are not careful, the future may remember us not for our progress, but for the prisons we built—displayed like artifacts in a museum.


Metalhead: Black Mirror, Can It Happen?

Before we talk about the events in Metalhead, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, Boston Dynamics founder Marc Raibert took the TED conference stage to discuss the future of his groundbreaking robots. His presentation sparked a mix of awe and unease.

Boston Dynamics has a long history of viral videos showcasing its cutting-edge robots, many of which were mentioned during the talk:

Big Dog is a four-legged robot developed by Boston Dynamics with funding from DARPA. Its primary purpose is to transport heavy loads over rugged terrain.

Then there’s Petman, a human-like robot built to test chemical protection suits under real-world conditions. 

Atlas, a 6-foot-tall bipedal robot, is designed to assist in search-and-rescue missions. 

Handle is a robot on wheels. It can travel at 9 mph, leap 4 feet vertically, and cover about 15 miles on a single battery charge.

And then there was SpotMini, a smaller, quadrupedal robot with a striking blend of technical prowess and charm. During the talk, SpotMini played to the audience’s emotions, putting on a show of cuteness. 

In November 2017, the United Nations debated a ban on lethal autonomous weapons, or “killer robots.” Despite growing concerns from human rights groups, no consensus was reached, leaving the future of weaponized AI unclear.

Simultaneously, post-apocalyptic themes gained traction in 2017 pop culture. From the success of The Walking Dead to Blade Runner 2049’s exploration of dystopian landscapes, this pre-COVID audience seemed enthralled by stories of survival in hostile worlds, as though mentally preparing for the worst to come.

And that brings us to this episode of Black Mirror, Episode 5 of Season 4: Metalhead.

Set in a bleak landscape, Metalhead follows Bella, a survivor on the run from relentless robotic “dogs” after a scavenging mission goes awry. 

This episode taps into a long-standing fear humanity has faced since it first began experimenting with the “dark magic” of machinery. Isaac Asimov’s Three Laws of Robotics were designed to ensure robots would serve and protect humans without causing harm. These laws state that a robot must not harm a human, must obey orders unless it conflicts with the first law, and must protect itself unless this conflicts with the first two laws. 

In Metalhead, however, these laws are either absent or overridden. This lack of ethical safeguards mirrors the real-world fears of unchecked AI and its potential to harm, especially in situations driven by survival instincts. 

So, we’re left to ask: At what point does innovation cross the line into an existential threat? Could machines, once designed to serve us, evolve into agents of our destruction? And, most importantly, as we advance technology, are we truly prepared for the societal consequences that come with it?

In this video, we’ll explore three key themes from Metalhead and examine whether similar events have already unfolded—and if not, whether they’re still plausible. Let’s go!

Killer Instincts

Metalhead plunges us into a barren wasteland where survival hinges on outsmarting a robotic “dog.” Armed with advanced tracking, razor-sharp senses, and zero chill, this nightmare locks onto Bella after her supply mission takes a hard left into disaster.

The robot dog’s tracking systems are similar to current military technologies. Autonomous drones and ground robots use GPS-based trackers and infrared imaging to locate targets. Devices like Lockheed Martin’s Stalker XE drones combine GPS, thermal imaging, and AI algorithms to pinpoint enemy movements even in dense environments or under cover of darkness. 

AI-driven scanning systems put human eyesight to shame; they can spot a needle in a haystack and probably tell you the needle’s temperature, too. Think FLIR thermal imaging cameras, which let you see heat signatures through walls or dense foliage, or Boston Dynamics’ Spot using Light Detection and Ranging (aka Lidar) and pattern recognition to map the world with precision.

Lidar works by sending out laser pulses and measuring the time it takes for them to bounce back after hitting an object. These pulses generate a detailed 3D map of the environment, capturing even the smallest features, from tree branches to building structures.
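That round-trip timing reduces to a one-line formula: distance equals the speed of light times the echo time, divided by two (since the pulse travels out and back). A minimal sketch, with an illustrative echo time:

```python
# Time-of-flight ranging: a lidar pulse travels to the object and
# back, so distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds):
    """Distance in meters from a pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(round(lidar_distance(200e-9), 2))  # 29.98
```

Repeating this measurement millions of times per second across a sweeping laser is what turns single distances into a full 3D point cloud.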

One of the most unsettling aspects of the robot in Metalhead is its superior auditory abilities. In the real world, acoustic surveillance technology, such as ShotSpotter, uses microphones and AI to detect and triangulate gunfire in urban areas. While it sounds impressive, its effectiveness is debated, with critics including a study by the University of Michigan pointing to false positives and uneven results. 

Still, technology is quickly advancing in recognizing human sounds, and some innovations are already in consumer products. Voice assistants like Alexa and Siri can accurately respond to vocal commands, while apps like SoundHound can identify music and spoken words in noisy environments. While these technologies offer convenience, they also raise concerns about how much machines are truly able to “hear.”

This is especially true when advanced sensors—whether auditory, visual, or thermal—serve a darker purpose, turning their sensory prowess into a weapon.

Take robotics companies like Ghost Robotics, which have developed machines equipped with sniper rifles, dubbed Special Purpose Unmanned Rifles (SPURs). These machines, designed for military applications, are capable of autonomously identifying and engaging targets—raising profound ethical concerns about the increasing role of AI in life-and-death decisions.

Built for Speed

In this episode, the robot’s movement—fast, deliberate, and capable of navigating uneven terrain—resembles Spot from Boston Dynamics. 

Spot can sprint at a brisk 5.2 feet per second, which translates to about 3.5 miles per hour. While that’s fairly quick for a robot navigating complex terrain, it’s still slower than the average human running speed. The typical human can run around 8 to 12 miles per hour, depending on fitness level and sprinting ability. 

So while Spot may not outpace a sprinter, the DARPA-funded Cheetah robot can, at least on a treadmill. More than a decade ago, a video showed this robot running 28.3 miles per hour on a treadmill, leaving even Usain Bolt in the dust.

But while the treadmill is impressive, the current record holder for the fastest land robot is Cassie—and she’s got legs for it! Developed by Oregon State University’s Dynamic Robotics Lab, Cassie sprinted her way into the record books in 2022, running 100 m in 24.73 seconds. 

While today’s robots may not yet match the speed, adaptability, and relentless pursuit seen in the episode, the rapid strides in robotics and AI are quickly closing the gap. Like the tortoise slowly gaining ground on the overconfident hare, these technological advances, though not yet flawless, are steadily creeping toward a reality where they might outrun us in ways we hadn’t anticipated.

Charged to Kill

At a pivotal point in the story, Bella’s survival hinges on exploiting the robot’s energy source. By forcing it to repeatedly power on and off, she aims to drain its battery. Advanced machines, reliant on sensors, processors, and actuators, burn through significant energy during startup.

Today’s robots, like Spot or advanced military drones, run on rechargeable lithium-ion batteries. While these batteries offer excellent energy density, their runtime is finite—high-demand tasks like heavy movement or AI processing can drain them in as little as 90 minutes.

However, the latest battery innovations are redefining what’s possible, and the automotive industry is leading the charge. Solid-state batteries, for example, offer greater capacity, faster charging, and longer lifespans than traditional lithium-ion ones. Companies like Volkswagen and Toyota have invested heavily in this technology, hoping it will revolutionize the EV market.

Self-recharging technologies, like Kinetic Energy Recovery Systems (KERS), are moving from labs to consumer products. KERS, used in Formula 1 cars, captures and stores kinetic energy from braking to power systems and reduce fuel consumption. It’s now being explored for use in consumer and electric vehicles.
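The energy a KERS-style system taps is ordinary kinetic energy (E = ½mv²). A back-of-envelope sketch, with illustrative numbers rather than the specs of any particular vehicle:

```python
# Kinetic energy released during braking -- the budget a KERS-style system
# can harvest from. The mass and speeds below are illustrative assumptions.
def braking_energy_kj(mass_kg: float, v_start_ms: float, v_end_ms: float) -> float:
    """Energy in kilojoules released slowing from v_start to v_end."""
    return 0.5 * mass_kg * (v_start_ms**2 - v_end_ms**2) / 1000.0

# A 1,500 kg car braking from ~100 km/h (27.8 m/s) to a stop releases:
e = braking_energy_kj(1500, 27.8, 0.0)
print(round(e))  # → 580 (kJ); real systems recover only a fraction of this
```

Even with generous conversion losses, repeatedly capturing a slice of that energy is what lets a Formula 1 car trade braking heat for an extra power boost.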

Battery innovation is challenging for several reasons. Improving energy density often compromises safety, and developing new batteries requires expensive materials and complex manufacturing processes.

Modern robots are pretty good at managing their power, but even the smartest machines can’t escape the inevitable—batteries that drain under intense demands. While energy storage and self-recharging tech like solar or kinetic systems may help, robots will always face the dreaded low-battery warning. After all, as much as we’d love to plug them into an infinite, self-sustaining energy source, the laws of physics will always say, “Nice try!”

Information Flow

When Bella throws paint to blind the robot’s sensors and uses sound to mislead it, her plan works—briefly. But the robot quickly adapts, recalibrating its AI to interpret new environmental data and adjust its strategy. Similarly, when Bella shoots the robot, it doesn’t just take the hit—it learns, retaliating with explosive “track bullets” that embed tracking devices in her body. This intelligent flexibility ensures that, even when temporarily disabled, the robot can still alter its approach and continue pursuing its objective.

In real life, robots with such capabilities are not far-fetched. Modern drone swarms, such as those tested by DARPA, can coordinate multiple drones for collective objectives. In some instances, individual drones are programmed to act as decoys or to deliberately draw enemy fire, allowing the remaining drones in the swarm to carry out their mission.

In October 2016 at China Lake, California, 103 Perdix drones were launched from three F/A-18 Super Hornets. During this test, the micro-drones exhibited advanced swarm behaviors, including collective decision-making, adaptive formation flying, and self-healing.

While the events in Metalhead are extreme, they are not entirely outside the realm of possibility. Modern robotics, AI, and machine learning are progressing at a staggering rate, making the robot’s ability to adapt, learn, and pursue its objective all too real. 

The advancements in sensors, energy storage, and autonomous decision-making systems could one day allow machines to operate with the same precision seen in the episode. 

So, while we may not yet face such an immediate threat, the seeds are sown. A future dominated by robots is not a matter of “if,” but “when.” As we step into this new frontier, we must proceed with caution, for once unleashed, these creations could be as relentless as any natural disaster—except that nothing about this will be natural.

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

Hang the DJ: Black Mirror, Can It Happen?

Before we talk about the events in Hang the DJ, let’s flash back to when this episode was first released: December 29, 2017.

On September 25, 2017, Prince Harry and Meghan Markle made their debut as a couple at the Invictus Games in Toronto. Their relationship broke new ground for the British royal family, sparking discussions on cross-cultural relationships and the challenges of maintaining privacy in the spotlight.

Meanwhile, dating apps surged in popularity, with a Stanford study revealing that 39% of couples meet online through platforms like Tinder and Bumble. Tinder Gold’s “Likes You” feature, allowing users to see who had already swiped right on them, pushed the app’s popularity even further.

At the same time, Bumble expanded beyond romance into professional networking and friendship with Bumble BFF and Bumble Bizz. Yet, the rise of digital matchmaking wasn’t without critique. Studies highlighted its impact on mental health, with terms like “ghosting” and “breadcrumbing” capturing the emotional toll of algorithmic dating.

In 2017, Elon Musk and Mark Zuckerberg clashed over the future of artificial intelligence, with Musk warning about AI’s potential existential risks and advocating for proactive regulation, fearing AI could evolve beyond human control. Zuckerberg, on the other hand, was optimistic about AI’s potential to improve lives, emphasizing that responsible innovation would outweigh its risks.

The idea that reality could be a simulated construct gained significant media coverage in 2017, partly due to some high-profile endorsements. Elon Musk and other prominent figures suggested that the odds of us living in a “base reality”— the original, unaltered reality from which all other realities might stem — are minimal, given the rapid advancement of simulations and AI.

And that’s what brings us to this episode of Black Mirror, Episode 4 of Season 4: Hang the DJ. 

As Frank and Amy navigate the rigidly controlled world of The System, their budding connection forces them—and us—to question the purpose of algorithms in matters of the heart. While The System claims to optimize matches and ensure “perfect” relationships, it also strips away autonomy, leaving users trapped in a cycle of dictated romances.

So we ask: Can technology truly understand the complexities of human connection? At what point does relying on algorithms to find love begin to undermine the very nature of intimacy and self-discovery? Are we, in our quest for compatibility, sacrificing the serendipity that makes relationships meaningful?

In this video, we’ll explore three key themes from Hang the DJ and examine whether similar events have happened—and if they haven’t, whether or not they are still plausible. Let’s go! 

Data and Dating

Hang the DJ unfolds within a seemingly idyllic yet tightly controlled dating system, where Frank and Amy are paired together for a predetermined length of time: 12 hours. Their compatibility, like that of all users, is calculated through an extensive series of timed relationships, generating data to improve the algorithm. The goal? To find each user their ideal match.

The collection of emotional experiences and connections aims to reduce love to a science, yet it simultaneously raises doubts about the role of choice in human connection. 

The evolution of dating apps like Tinder has sparked debates around fairness, bias, and authenticity in matchmaking. Tinder’s once-secretive “Elo score” algorithm ranked users by perceived attractiveness and desirability, sparking allegations of discrimination. Critics noted that minority users often received lower scores, reducing their visibility to potential matches—a practice accused of perpetuating systemic biases.
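Tinder never published its exact formula, but the name points to the classic chess Elo system. Below is a minimal sketch of that textbook update rule, under the assumption (mine, not Tinder’s) that a right-swipe counts as a “win” for the profile being swiped on:

```python
# Classic Elo update: each rating shifts by how surprising the outcome was.
# Treating a swipe as a chess-style match is an illustrative assumption.
def expected_score(rating_a: float, rating_b: float) -> float:
    """Modeled probability that A 'beats' B under the Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one outcome."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# An "underdog" profile liked by a higher-rated user gains a large boost.
a, b = update(1200, 1400, a_won=True)
print(round(a), round(b))  # → 1224 1376
```

Note that the transfer is zero-sum: whatever one profile gains, the other loses, which is part of why ranking users against one another drew so much criticism.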

Relying on behavioral tracking, these platforms analyze user actions such as swiping patterns and response times to improve match recommendations. 

Research shows that women swipe right only 30% of the time, and 20% reject over 80% of male profiles. In a sample of 100 male profiles, just one was liked by more than 80% of women, while 38 were universally disliked. These statistics highlight the competitive nature of app-based dating, with women often feeling overwhelmed by message volume (54%) and men frustrated by receiving few responses (64%).

So how do you fight against an artificial intelligence that puts you at a disadvantage in the dating market? You use AI, of course. Tools like Rizz AI and Wing GPT help craft profiles and provide conversation tips. For example, Rizz AI is a chatbot that generates conversation starters or witty replies.

Photo-analysis platforms like PhotoFeeler suggest improvements to profile pictures, boosting user engagement rates. However, these systems only prioritize surface-level appeal, reinforcing beauty standards at the expense of authenticity.

The line between trust in humans and reliance on technology is increasingly blurred, especially as dating and intimacy evolve into processes mediated by digital tools. With online dating becoming more unpredictable and concerns about safety growing in the wake of movements like #MeToo, technology has stepped in to provide checks and balances.

One notable area is consent, where apps like We-Consent and LegalFling offer clear, timestamped records of agreements, securely stored on blockchain. 

Did both parties consent? With this technology, there is now documented, timestamped proof. But while these tools simplify the logistics of consent, they leave little room for the emotional complexity that often accompanies these situations.

Swiping apps and algorithmic matchmaking have left many feeling overwhelmed, uncertain, and even distrustful. Concerns about rejection, compatibility, and navigating the nuances of communication have led to a growing demand for tools that address these anxieties directly.

The anxiety extends beyond the initial stages of dating. Maintaining communication in a relationship can also be daunting, leading couples to turn to apps like Maia, which provides voice-guided emotional check-ins, offering real-time support during tense moments.

Then there are apps like Smitten that incorporate mini-games like “Lie Detector” or compatibility quizzes to break the ice and create memorable interactions. These playful elements mirror trends in broader tech—like how Duolingo gamifies language learning—and can make dating feel approachable.

Much like Spotify’s approach to curating playlists based on your listening patterns, dating apps analyze your preferences—whether it’s swiping habits or skipping songs—to refine their suggestions over time. 

However, just as Spotify occasionally suggests a song that doesn’t resonate, dating algorithms can misfire, presenting matches that feel disconnected or derivative.

In Hang the DJ, AI takes the concept of algorithmic matchmaking to an extreme. Our surrendering to algorithms reflects the growing trust—and trepidation—we place in technology to shape deeply personal experiences. Because of AI’s relentless ability to learn and curate, we may indeed find ourselves echoing the sentiment: Hang the DJ, for the algorithm knows better than we do, and will no longer take requests.

Expiration Date

Because every relationship in Hang the DJ comes with a set expiration date, instead of living in the moment, the characters are often consumed by the knowledge of how and when it will end. For Frank and Amy, this creates vastly different but equally isolating experiences.

Frank endures a long-term relationship that feels like a prison sentence, with no connection or joy to sustain it. Meanwhile, Amy is caught in a revolving door of short-lived partnerships. By imposing strict limits, the system denies its participants the ability to fully engage, leaving them waiting—not for love, but for the clock to run out.

This theme mirrors modern dating dynamics, particularly the incorporation of time-sensitive features in dating apps. For instance, apps like Happn, Hinge, and Tinder employ mechanisms such as expiring matches, boosts, or time-sensitive notifications to create urgency. 

Happn’s location-based model even introduces real-world encounters into the mix, encouraging users to act swiftly before potential connections vanish. Similarly, Tinder’s “Boost” feature amplifies a profile’s visibility for a limited window, leveraging scarcity to drive engagement. Additionally, eHarmony introduced an AI-driven feature that suggested optimal times for users to communicate.

These tools aren’t implementing anything innovative per se; after all, human behavior is influenced by deadlines. For example, studies show that time constraints in speed dating foster initial attraction by prioritizing first impressions.

Though manufactured for drama, reality shows like Married at First Sight and Love Is Blind are interesting samples of these experiments, as they test the concept of expedited relationships. However, success rates vary.

Across 17 completed seasons of Married at First Sight, 69 couples have been matched. On “Decision Day,” 38 couples (55%) agreed to stay married. However, over two-thirds of those couples later divorced, filed for divorce, or publicly announced their separation. By August 2024, only 11 couples remained married, resulting in a long-term success rate of 15.9%.

The “seven-year itch,” backed by U.S. Census Bureau data, highlights that marital dissatisfaction peaks around the eight-year mark. About half of all first marriages end in divorce, and roughly 46% of marriages don’t last 25 years. On average, couples who divorce separate after seven years of marriage and finalize the divorce about a year later. For those who remarry, it typically happens around four years after their previous marriage ends.

During the COVID-19 pandemic, divorce rates spiked as couples grappled with the challenges of extended time together. In early 2020, divorce consultations increased by 50%, underscoring how prolonged proximity and external pressures can escalate conflicts and make relationships feel stifling.

Interestingly, studies on short-term sexual relationships suggest the awareness of a time limit reduces emotional attachment but can intensify physical intimacy. A survey by SELF magazine asked over 2,000 single women aged 18 to 64 about their experiences with casual sex. The results showed that 82% had at least one casual encounter, and only 19% expressed regret about it.

Modern relationships are often shaped—and strained—by invisible deadlines. These pressures, whether from dating apps, cultural milestones, or societal expectations to marry by a certain age, intensify the tension between savoring the present and bracing for the end.

Such time-bound systems can guide us toward action or trap us in hurried choices that lead to regret. Dating apps, for instance, don’t just facilitate connection—they frame it, shaping how and when we fall into or out of sync with others. Meanwhile, the fear of impermanence and unmet milestones feeds a cycle where love and time feel forever at odds.

Dangerous Devotion

In Hang the DJ, the matchmaking System promises a 99.8% success rate.

As other couples leave the System in blissful unions, the contrast deepens Frank and Amy’s growing skepticism about the algorithm’s efficacy. Their shared frustrations eventually lead them to rebel against the rigid rules, culminating in their decision to challenge the System’s authority and flee, perhaps completing the final test of their compatibility.

In modern relationships, we are often encouraged to surrender to a process—whether guided by a system, a coach, or a higher power. Before making a vow in marriage, we first commit to the process itself. However, this openness also exposes us to risks, making us susceptible to bad actors who may exploit our trust, accumulate power, and cause harm.

Among the most notable relationship coaches and frameworks is the Gottman Method, developed by Drs. John and Julie Gottman. This method emphasizes communication, conflict resolution, and building trust through tools like the “Sound Relationship House,” which consists of seven levels: building love maps (understanding each other deeply); sharing fondness and admiration; turning toward each other for support; maintaining a positive perspective; managing conflict constructively; making life dreams come true; and creating shared meaning through rituals and goals.

Contrasting this research-backed methodology are controversial figures like Andrew Tate and Karla Elia. Tate’s teachings promote hyper-masculinity and dominance, often criticized as toxic and harmful, while Elia’s advice on TikTok advocates for transactional relationships that prioritize financial support over emotional connection by addressing personal wants on the first date. The rise of these figures is partly fueled by algorithms on platforms like TikTok and YouTube, which favor engagement over content quality.

Cults like NXIVM and OneTaste exploit these same vulnerabilities under the guise of empowerment. NXIVM’s promises of self-improvement concealed abusive practices, while OneTaste’s focus on “orgasmic meditation” led to allegations of manipulation and exploitation. 

Similarly, the Twin Flames Universe preyed on its followers’ desire for love, encouraging obsessive behaviors in pursuit of “destined soulmates.” These examples underscore how systems of control can distort genuine emotional connections, much like the matchmaking System in Hang the DJ.

When Frank and Amy are given a second chance at romance, they decide to avoid looking at the expiration date, allowing their relationship to flourish organically.

However, Frank, consumed by curiosity and doubt, breaks the promise. In doing so, he alters their timeline, turning what might have been a chance for something meaningful into a doomed, shortened experience.

Technology increasingly governs how people commit to higher powers by reinforcing accountability through data and automation. However, this reliance on technology often creates pressure to maintain consistency, with lapses leading to feelings of neglect or failure. 

This episode paints a picture of love reduced to data points. In the real world, dating apps already deploy algorithms to analyze preferences, calculate compatibility, and influence decisions. Innovations like simulations, gamified matchmaking, and AI companions hint at a future where love feels both eerily orchestrated and profoundly uncertain. Yet, unlike the utopian undertones of Hang the DJ, where rebellion against the system sparks genuine connection, real-life algorithms often lack the nuance to capture human complexity.

As we inch closer to that future, the question lingers: will these tools guide us toward deeper intimacy or imprison us in an endless loop of swipes and time limits? But perhaps, as the episode reminds us, defying the rules and trusting our humanity may still lead us to our most meaningful connections.


Crocodile: Black Mirror, Can It Happen?

Before we talk about the events in Crocodile, let’s flash back to when this episode was first released: December 29, 2017.

On October 1, 2017, avid gambler Stephen Paddock opened fire from his room at the Mandalay Bay casino, killing 60 people and injuring over 400 concertgoers in the deadliest mass shooting in U.S. history. Although his motives remain unknown, eyewitness accounts and hotel surveillance footage played key roles in reconstructing events and tracking Paddock’s actions.

In February 2017, the Delphi Murders shocked Indiana when two teenage girls, Abigail Williams and Liberty German, were found dead following a hike. Liberty had managed to capture a photograph and audio recording of a man they encountered on the trail just before the tragedy, leaving behind crucial evidence that became central to the investigation.

One notable case involving the importance of witness testimony and technology was the 2016 Philando Castile Shooting, which gained national attention when Castile’s girlfriend, Diamond Reynolds, livestreamed the aftermath of the shooting on Facebook. Her video testimony went viral, contributing to the debate about police brutality and racial profiling. While Officer Jeronimo Yanez was ultimately acquitted, the case illustrates how digital witnesses can influence public discourse and investigations.

And that’s what brings us to this episode of Black Mirror, Episode 3, Season 4: Crocodile. 

According to critics and Black Mirror creator Charlie Brooker, the title holds two significant meanings. Originally, the episode’s concept revolved around a virtual safari, where some passengers experience a serene ride while others are attacked by a crocodile, leaving them traumatized. This reflects how differently people carry their past experiences through life, even when they go through seemingly similar events.

The title also refers to “crocodile tears,” symbolizing feigned remorse while continuing on a destructive path. This duality captures the episode’s central theme of guilt and deceit, where technology and memory tracking uncover hidden truths, showcasing the devastating consequences of evading accountability.

In this video, we’ll explore three key themes from Crocodile and examine whether similar events have happened—and if they haven’t, whether or not they are still plausible. Let’s go! 

The Illusion of Escape

In “Crocodile,” the episode opens with Mia and her boyfriend Rob navigating the aftermath of a tragic accident. What begins as a night of reckless fun turns into a nightmare when they accidentally kill a cyclist. Panicked and desperate to avoid prison, they make a chilling decision—to hide the body and move on with their lives.

Years later, Mia has built a successful career and family, but the weight of guilt lingers just beneath the surface. When Rob reappears, intent on confessing to clear his conscience, Mia’s instinct for self-preservation takes over, leading her down a darker path. The illusion of escape, so carefully constructed through denial and deceit, begins to unravel as Mia resorts to increasingly desperate measures to cover her tracks.

According to the National Highway Traffic Safety Administration, from 2017 to 2021, an average of 883 cyclists per year were killed in police-reported traffic crashes in the U.S. 

The conversation around a tragic cycling accident immediately brings to mind the deaths of Columbus Blue Jackets player Johnny Gaudreau and his brother Matthew, who were struck by a drunk driver, 43-year-old Sean Higgins, on August 29, 2024. According to records obtained by NBC Philadelphia, Higgins had a lengthy history of unsafe driving.

While culprits like Higgins stayed at the scene of the crime, many don’t, prompting extensive efforts to find the suspects and bring them to justice.

Hit-and-run incidents significantly contribute to fatalities among vulnerable road users. In 2021, 23% of cyclist deaths involved a hit-and-run driver. Pedestrians are even more at risk, with 62% of pedestrian deaths involving one.

In February 2021, Robert Maraj, the father of rapper Nicki Minaj, was killed in a hit-and-run accident in Long Island, New York. The driver, Charles Polevich, fled the scene. In an attempt to evade responsibility, Polevich hid his car in his garage. Despite his efforts, police were able to track him down using surveillance footage and he was arrested and later pleaded guilty to charges related to leaving the scene of a fatal accident.

According to numerous studies, only an estimated 8-10% of hit-and-run cases are solved, even as the number of hit-and-runs in the U.S. has increased by an average of 7.2% annually since 2009.

The Vorayuth Yoovidhya hit-and-run case gained widespread attention in Thailand in 2012 when Yoovidhya, heir to the Red Bull fortune, fatally struck a police officer with his Ferrari.

After fleeing the scene, he avoided prosecution for years, fueling public outrage over his wealth and privilege. The case was reopened in 2020, leading to an eventual arrest warrant. In April 2022, Yoovidhya was reportedly apprehended, underscoring how wealth and influence can delay but not necessarily prevent accountability.

Yes, while the wealthy and powerful can use their status to evade justice, what about those less fortunate? They must act quickly, devise elaborate plans to outsmart technology tracking them, and weave intricate lies without becoming ensnared in their own deception.

In 2018, Chris Watts murdered his pregnant wife, Shanann, and their two daughters in Colorado. Initially, he reported them missing and made public pleas for their return. However, inconsistencies in his story led investigators to suspect foul play. Watts eventually confessed to the murders and was sentenced to life in prison without parole. 

Similarly, Jodi Arias was convicted of murdering her ex-boyfriend, Travis Alexander, in 2008. Arias initially denied involvement, then claimed self-defense after photos and DNA evidence placed her at the scene. Despite her manipulation of the narrative, she was convicted of first-degree murder in 2013.

Although it might seem impossible for anyone to evade the law after a crime as gruesome as murder, 2017 FBI Uniform Crime Reporting (UCR) data shows that approximately 62% of homicides in the U.S. are solved, which means 38% of cases remain unsolved. Even so, advances in DNA and forensic technology can still lead to convictions years later.

The Golden State Killer case, which had been cold for over 40 years, was finally solved in 2018 with the arrest of Joseph James DeAngelo. Between 1974 and 1986, DeAngelo committed at least 13 murders, 50 rapes, and over 100 burglaries across California. 

In 2018, investigators uploaded DNA from the Golden State Killer’s crime scenes to GEDmatch, a public genetic database used by individuals seeking to trace their ancestry.

Using this database, authorities were able to identify distant relatives of the killer. By building a family tree and cross-referencing with other details (such as locations where crimes occurred), they eventually narrowed the search down to Joseph DeAngelo.

His arrest was a landmark moment in forensic science, demonstrating how advancements in DNA technology can solve even the longest-standing cases. DeAngelo later pleaded guilty and was sentenced to life in prison.

Just like in Crocodile, where Mia’s actions lead to more crimes in an attempt to cover up the initial one, real-world cases show that the more someone tries to escape responsibility, the more entangled they become. Each new lie or action increases the risk of leaving behind evidence.

Yes, while there may be a roughly 40% chance of getting away with murder, a more precise way to frame it is that there’s a roughly 40% chance of getting away with it today. Advances in forensic science, like DNA technology and digital surveillance, continuously shrink the window of opportunity for criminals to evade justice, meaning that over time, the likelihood of getting caught increases significantly.

Layers of Investigation

In Crocodile, Shazia, an insurance investigator, is on a mission to establish who’s responsible for an accident involving a man and a pizza vending machine. Using the “Recaller” device, which retrieves memories from witnesses, she goes deeper into their recollections, unearthing details about the seemingly minor incident. 

Like digital forensics, authorities use a range of advanced technologies to catch suspects trying to evade justice. 

The first is surveillance footage from CCTV cameras, especially in urban areas, highways, and near businesses. This tool is critical in capturing vehicles or individuals fleeing crime scenes.

After the twin bombings during the Boston Marathon on April 15, 2013, authorities combed through hours of footage from cameras near the race’s finish line. A breakthrough came when two brothers were spotted placing backpacks at the scene just before the explosions. The FBI released images of the suspects to the public, which helped confirm their identities.

According to the Insurance Information Institute (III), dashcams provide clear, indisputable evidence, helping to resolve conflicts quickly. In Russia, where fraudulent claims are prevalent, dashcam use is widespread, reducing fraud by over 50%.

For example, some scammers deliberately throw themselves onto car hoods or cause rear-end collisions, hoping to extort money from the driver or win a fraudulent insurance claim. Dashcam footage serves as critical proof to defend against such scams.

Installed on police vehicles or fixed locations such as traffic lights or toll booths, Automatic License Plate Readers (ALPRs) are a powerful tool for law enforcement, allowing them to scan and record the license plates of passing vehicles. 

A routine stop at a gas station in Indianapolis quickly escalated into a frantic hunt when a car thief sped off with a six-month-old baby still in the back seat. As panic set in, law enforcement scrambled to track the stolen vehicle. Using ALPRs, officers were able to trace the car’s movements across the city. Hours later, the vehicle was found abandoned, and to everyone’s relief, the baby was safely reunited with the family, unharmed.

Law enforcement agencies frequently rely on cell phone data and GPS tracking to pinpoint the whereabouts of suspects and connect them to crime scenes. Phone records provide critical timestamps, while GPS tracking logs exact locations, creating a digital trail that’s nearly impossible to erase.

The case of Timothy Carpenter centers on a series of armed robberies that took place in Michigan between 2010 and 2011. Carpenter was convicted largely on the basis of cell tower location data, which tracked his movements and placed him near the crime scenes. Because law enforcement obtained this data without a warrant, the case sparked significant legal debate over privacy rights and the Fourth Amendment.

Law enforcement can not only scan your license plate and track your cell phone signals; they can also recognize your face. In China, facial recognition technology has become widespread and integrated into daily life, making it a critical tool for catching criminals.

A famous case occurred in 2018 when a man wanted for economic crimes, identified as Mr. Ao, was caught at a Jacky Cheung concert attended by over 60,000 people in Nanchang. 

Facial recognition cameras at the event identified him as a suspect as he was entering the stadium, leading to his immediate arrest by local police. The use of this technology in public spaces, combined with China’s vast network of surveillance cameras, has enabled authorities to catch fugitives even in large crowds.

The list of tools available to investigators is growing, and many of them weren’t even originally designed for law enforcement. These are platforms already in use and accessible to the public.

In 2009, a Canadian woman named Nathalie Blanchard was on long-term disability leave due to depression, which her insurance company was covering. However, when she posted pictures of herself vacationing in sunny destinations, attending parties, and engaging in leisure activities on her Facebook profile, her insurer became suspicious.

Her insurance provider, Manulife, investigated her claim and subsequently cut off her benefits, citing her social media posts as evidence that her disability was not as severe as claimed. Blanchard sued, arguing that these activities were part of her doctor's advice to improve her mental health. Either way, the case showed how insurers are using every means in their arsenal to investigate potentially fraudulent claims.

Alibis crumble, liability expands, and the more layers an investigation uncovers, the harder it becomes for criminals to evade justice. Whether through digital records, forensic analysis, or social media investigations, law enforcement agencies are using every technique available to identify, locate, and apprehend suspects.

The Witness Effect

When Shazia uses the “Recaller” on Mia, her past crimes come dangerously close to being exposed. In a desperate bid to silence anyone who could implicate her, Mia kills not only the investigator but also Shazia's husband and their infant child. However, her downfall comes when she overlooks Codger, the family guinea pig, whose memories are later harvested by authorities to uncover the truth.

The Recaller brings to mind a machine long used in real investigations: the polygraph, better known as the lie detector. Invented in the 1920s, the polygraph has been a staple of modern investigations and has played pivotal roles in television crime shows.

But unlike the “Recaller”, polygraphs are unreliable because they measure physiological responses like heart rate and perspiration, which can be triggered by emotions such as anxiety rather than deception. This leads to false positives, where truthful individuals are flagged as deceptive, and false negatives, where liars go undetected. Courts often exclude polygraph evidence due to these issues.

Much like polygraphs, photographic memory, aka “eidetic memory,” is a controversial concept. While some people claim to have the ability to recall images, sounds, or objects in great detail after only brief exposure, scientific evidence supporting the existence of true photographic memory is limited.

Most researchers agree that while some individuals may have exceptional memory skills, they don’t possess a literal photographic memory. Many people who claim or appear to have “photographic memory” usually focus on specific areas they’ve practiced or are interested in, like detailed visual scenes, numbers, or structured information like music or maps.

One well-known person who claims to have photographic memory is Stephen Wiltshire, a British architectural artist. Wiltshire, who is on the autism spectrum, demonstrates his ability by memorizing vast cityscapes after brief observations, then accurately reproducing them in intricate detail. 

In a famous example, he viewed the skyline of Tokyo from a helicopter for a short period and then created an enormous, precise drawing of the entire landscape on a large canvas without further references. 

In Crocodile, we see Shazia opening a bottle of beer and playing background music during her interview to activate the witness's sensory recall and jog their memories. While this tactic may seem odd, real investigations have used similar approaches.

The reason this approach is effective is because sensory experiences often evoke emotions. A song might remind you of a significant life event, such as a first dance or a breakup, because it carries emotional weight, making the memory more vivid.

The Hillsborough disaster occurred on April 15, 1989, during an FA Cup semi-final match between Liverpool and Nottingham Forest at Hillsborough Stadium in Sheffield, England. A crush in the overcrowded standing pens resulted in the tragic deaths of 97 people, with hundreds more injured. This disaster, caused by poor crowd control and inadequate safety measures, became one of the worst stadium tragedies in British history. 

During a re-investigation years later, investigators employed sensory recall techniques to help survivors and witnesses retrieve memories of that day. Survivors were encouraged to focus on sensory details like sounds, smells, and specific visual imagery, helping clarify the chaotic events. For instance, auditory triggers such as the crowd noise or the sound of the stadium were used to aid witnesses in piecing together a timeline of the disaster. 

If we think of ourselves as walking, talking cameras, with memories as data stored in a personal database, we might seem like surveillance devices open to unrestricted access by authorities. Although we’re not machines (yet), we carry multiple recording devices wherever we go, and legal precedents for accessing this personal data are already beginning to emerge.

In a high-profile case involving the FBI and Apple in 2016, the FBI sought access to the encrypted data on the iPhone of Syed Farook, one of the San Bernardino shooters. 

Without Apple’s assistance, the FBI faced difficulties in bypassing its security features, including a setting that would erase the phone’s data after too many incorrect password attempts.

Apple refused to create a backdoor or unlock the phone, arguing that it would compromise the security of all iPhone users, creating a precedent for future cases and potentially weakening encryption standards worldwide. 

While our memories can never be fully reliable, we may all soon be equipped with a little dashcam of our own, such as the Meta Ray-Ban smart glasses. And what happens then?

In 2013, a bystander wearing Google Glass was able to record part of a fight in New Jersey. The video, though not high quality, provided crucial evidence in the case, demonstrating the potential future use of wearable technology. Although Google Glass never became widely adopted, this case highlighted the possibilities of using real-time recording devices to assist in investigations.

While mind-reading devices may be a long way in the future, modern technology—such as surveillance cameras, digital footprints, and increasingly sophisticated forensic tools—has made it nearly impossible for criminals to evade detection. The presence of witnesses, be they human or technological, often plays a critical role in uncovering the truth. 

Crocodile warns us that each layer of investigation can cut through even the most elaborate cover-ups. One might feel they've escaped, yet every step adds another thread to the web of lies. As each layer is peeled back, small traces, the faintest breadcrumbs, are left behind, drawing investigators closer to the truth and the eventual unraveling of the deception.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Arkangel: Black Mirror, Can It Happen?

Before we talk about the events in Arkangel, let’s take a look back to when this episode was first released: December 29, 2017.

One of the most high-profile celebrity parenting moments came in June 2017 when Beyoncé gave birth to twins, Sir and Rumi Carter. This announcement went viral, showcasing how celebrities influence public discussions around pregnancy, motherhood, and parenting. 

Meanwhile, the ethical debates around gene editing intensified, particularly with CRISPR technology, “designer babies,” and parental control over genetics. According to MIT Technology Review, more than 200 people have been treated with experimental genome-editing therapies since the technology dominated headlines in 2017.

In December of that year, France enacted a landmark law banning corporal punishment, including spanking, marking a significant shift toward advocating for children’s rights and promoting positive parenting practices. With this legislation, France joined many of its European neighbors, following Sweden, which was the first to ban spanking in 1979, Finland in 1983, Norway in 1987, and Germany in 2000.

Earlier in the year, the controversial travel ban implemented by the Trump administration raised significant concerns, particularly regarding family separations among immigrants from several Muslim-majority countries. Later, the issue escalated with the separation of immigrant families at the U.S.-Mexico border, sparking heated discussions about children’s rights and the complexities of parenting in crisis situations. 

Moreover, the effectiveness of sex education programs came under scrutiny in 2017, particularly as some states continued to push abstinence-only approaches, potentially contributing to rising teenage pregnancy rates. This concern was again exacerbated by the Trump administration, specifically their cuts to Title X funding for teen pregnancy prevention programs.

In 2017, Juul e-cigarettes surged in popularity among teenagers. Social media played a significant role in this trend, with platforms like Snapchat and Instagram flooded with content depicting teens vaping in schools. This led to school bans and public health worries, particularly as Juul devices, shaped like a conventional USB flash drive, could deliver nicotine nearly three times faster than other e-cigarettes. In the years that followed, an outbreak of vaping-related lung injuries would be linked to more than 60 deaths in the U.S.

And that’s what brings us to this episode of Black Mirror, Episode 2 of Season 4: Arkangel. As Sara matures, her mother Marie's inability to overcome her fears, combined with her over-reliance on technology, ends up stifling Sara's growth. The episode leaves us questioning our own reality, as cameras, sensors, and monitors are now readily accessible, and strategically marketed, to a new generation of parents.

Can excessive control hinder a child’s independence and development? Where does one draw the line between protection and autonomy in parenting? What are the consequences of being overly protective, and is the resentment that arises simply a natural cost of loving a child? 

In this video, we'll explore three themes from this episode and determine whether these events have happened, and if not, whether they're still plausible. Let’s go! 

Love — and Overprotection

In “Arkangel”, the deep bond between Marie and her daughter Sara is established from the very beginning. After a difficult birth, Marie’s attachment is heightened by the overwhelming relief that followed. However, when young Sara goes missing for a brief but terrifying moment at a playground, Marie's protective instincts shift into overdrive. 

Consumed by fear of losing Sara again, Marie opts to use an experimental technology called Arkangel. This implant not only tracks Sara’s location but also monitors her vital signs and allows Marie to censor what she can see or experience. Driven by the anxiety of keeping her daughter safe and healthy, Marie increasingly relies on Arkangel. But as Sara grows older, the technology starts to intrude on her natural experiences, such as witnessing a barking dog or the collapse of her grandfather.

Perhaps the products that relate most closely to Arkangel are tracking apps like Life360, which have become popular by providing parents with real-time location data on their kids. In 2021, however, teens protested the app's overuse, arguing it promoted an unhealthy culture of mistrust and surveillance, creating tension between parents and children. In a number of cases, parents continue using Life360 to track their kids even after they turn 18. 

Now let’s admit it, parenting is hard — and expensive. A 2023 study by LendingTree found that the average annual cost of raising a child in the U.S. is $21,681. With all the new technology that promises to offer convenience and peace of mind, it would almost seem irresponsible not to buy a $500 product as insurance. 

The latest innovations in baby monitors include the Cubo AI, which uses artificial intelligence to provide parents with features such as real-time detection of potential hazards, including facial obstruction, crying, and dangerous sleep positions. It also offers a high-definition video feed, night vision, and the ability to capture and store precious moments. 

But these smart baby monitors and security cameras have created a new portal to the external world, and therefore, new problems. In 2020, for instance, iBaby monitors were hacked. Hackers not only accessed private video streams but also saved and shared them online. In some cases, horrified parents discovered strangers watching or even speaking to their children through these monitors.

For many years, manufacturers of smart baby monitors prioritized convenience over security, allowing easy access through simple login credentials that users often don’t change. Additionally, some devices use outdated software or lack firmware updates, leaving them open to exploitation. 

As technology advances, parenting methods evolve, with a growing trend towards helicopter parenting — a style marked by close monitoring and control of children’s activities even after they pass early childhood. 

Apps like TikTok introduced Family Pairing mode in 2020 to help parents set screen time limits, restrict inappropriate content, manage direct messages, and control search options. 

Child censorship and content-blocking tools can be effective in protecting younger children from inappropriate content. However, they can also foster resentment if overused, and no system is foolproof at filtering content. 

Many parents, however, are not using iPads simply as entertainment for their children; they are relying on the iPad as a babysitter. This hinders children from learning basic skills like patience, especially when managing tasks that require focus and attention. 

A 2017 study by Common Sense Media revealed that nearly 80 percent of children now have access to an iPad or similar tablet, making it more common for kids to be consistently online. 

Bark, Qustodio, and Net Nanny are just a few apps in a growing market that offer parents control over their children’s digital activities. While these tools provide protection by monitoring texts, emails, and social media, they also allow parents to intervene. But children, like hackers, are getting more savvy as well.

A recent survey by Impero Software, which polled 2,000 secondary school students, showed that 25 percent of them admitted to watching harmful or violent content online during class, with 17 percent using school-issued devices to do so. Additionally, 13 percent of students reported accessing explicit content, such as pornography, while 10 percent used gambling sites—all while in the classroom.

Parental involvement, communication, and gradual freedom are crucial for ensuring these new technologies work as intended. However, we’ve seen from real-world events and this episode, how overreliance on technology like Arkangel, driven by a maternal fear of losing control, can become problematic. This natural impulse to protect a child hasn’t kept pace with the power such technology grants, ultimately overlooking the child’s need for emotional trust and autonomy, not just physical safety.

Sex — and Discovery

In Arkangel, as Sara enters adolescence, she begins a romantic relationship with her classmate, Trick. Unbeknownst to her, her mother, Marie, uses the Arkangel system to secretly monitor Sara’s intimate moments. 

The situation reaches a breaking point when Marie uncovers the shocking truth: Sara is pregnant. Overcome with maternal love and anxiety, Marie feels compelled to act, sneaking emergency contraceptive pills into Sara’s daily smoothie, the decisive move that will forever change their relationship.

This episode highlights the conflict between natural curiosity and imposed restrictions, emphasizing the risks of interfering or suppressing someone’s sexual experiences and personal choices. In today’s world, this mirrors the ongoing struggle faced by parents, educators, and regulators navigating the balance between sexual education, community support programs, and the natural discovery of personal identity.

Bristol Palin, daughter of Sarah Palin, was thrust into the spotlight at 17 when her pregnancy was announced during her mother’s 2008 vice-presidential campaign. Because Sarah Palin had publicly supported abstinence-only education, Bristol's pregnancy struck many as hypocritical.

A year later, the TV series Teen Mom premiered, standing as a stark warning about the harsh realities of teenage pregnancy. Beneath its cheery MTV branding, the show depicted sleepless nights, financial desperation, and mental health struggles. The hypocrisy of a society that glorifies motherhood but fails to support these young women is evident as innocence is ripped from their lives. The show doesn't just reveal struggles; it exposes a broken system.

A 2022 study by the American College of Pediatricians found that nearly 54% of adolescents were exposed to pornography before age 13, shaping their early understanding of sex. With gaps in sex education, many adolescents turn to pornography to learn.

According to a report (last updated in 2023) by Guttmacher Institute, abstinence is emphasized more than contraception in sex education across the 39 US states and Washington D.C. that have mandated sex education and HIV education. While 39 states require teaching abstinence, with 29 stressing it, only 21 states mandate contraception information. 

Many argue that providing students with information about contraception, consent, and safe sex practices leads to better health outcomes, citing lower rates of unintended pregnancies and sexually transmitted infections (STIs) in places with comprehensive programs, such as the Netherlands.

As of 2022, the U.S. had a birth rate of around 13.9 births per 1,000 teens aged 15-19, a significant decline from previous years. In contrast, the Netherlands, which has among the lowest teen pregnancy rates globally, recorded just 2.7 births per 1,000 teens in the same age group. 

Nor can we overlook the effectiveness of the Dutch “Double Dutch” approach, which combines hormonal contraception with condoms. 

The provision of contraceptives, including condoms, for minors is a topic of significant debate. While some districts, such as New York City public schools, offer free condoms as part of their health services, many believe that such decisions should be left to the parents. 

However, many agree that teens who feel uncomfortable discussing contraception with their parents should still be able to protect themselves. A notable example is California's Confidential Health Information Act, which allows minors covered under their parents' insurance to access birth control without parental notification. 

On the other hand, critics contend that such programs may undermine parental authority and encourage sexual behavior. But such matters extend beyond teenagers. 

Globally, access to contraceptives is tied to reproductive rights, and therefore, women’s rights. In the U.S., following the Supreme Court’s 2022 decision to overturn Roe v. Wade, many states have enacted stricter abortion laws.  

In 2023, the abortion pill mifepristone faced legal challenges, with pro-life advocates seeking to restrict access to medication abortions in multiple states. 

The ongoing struggle to protect reproductive rights, and the risk of sliding toward a reality where personal choices are dictated by external authorities, is now upon us. Just as Marie's overreach in Arkangel results in dire consequences for Sara, this episode shows that society must remain vigilant in safeguarding the right to choose, ensuring that individuals maintain control over their own lives and bodies.

Drugs — and Consequences

Like sex and violence, this episode uses drugs as a metaphor for the broader theme of risky behavior and self-discovery, a process many teenagers go through. 

However, when Sara experiments with drugs, Marie becomes immediately aware of it through Arkangel’s tracking system.

By spying on her daughter, Marie takes away Sara’s chance to come forward on her own terms. Instead of waiting for Sara to open up when she’s ready, Marie finds out everything through surveillance. This knowledge weighs heavily on her, pushing her to intervene without considering what Sara actually needs.

But when it comes to drugs, is there really time for parents to wait? Does the urgency of substance abuse among teens demand immediate action? In a situation as life-threatening as drug use, doesn’t every second count? 

When rapper Mac Miller passed away from an accidental overdose in 2018, the shock rippled far beyond the music world. His death became a wake-up call, shining a harsh light on the silent struggles of teenage addiction. 

In 2022, a report from UCLA Health revealed that, on average, 22 teenagers between the ages of 14 and 18 died from drug overdoses each week in the U.S. This stark reality underscores a growing crisis, with the death rate for adolescents rising to 5.2 per 100,000, largely driven by fentanyl-laced counterfeit pills. 

This surge has led to calls for stronger prevention measures. Schools are expanding drug education programs to raise awareness of fentanyl in counterfeit pills, while many communities are making naloxone (Narcan), an opioid overdose reversal drug, more readily available in schools and public spaces.

The gateway drug theory argues that starting with something seemingly harmless and socially accepted, like marijuana or alcohol, may open the door to harder drugs over time. 

Teens who use e-cigarettes are more likely to start smoking traditional tobacco products, like cigarettes, cigars, or hookahs, within a short period. In a National Institutes of Health study comparing ninth-grade students, 31% of those who had used e-cigarettes transitioned to combustible tobacco within the first six months, compared to only 8% of those who hadn’t used e-cigarettes. 

Developed by Chinese pharmacist Hon Lik, the first e-cigarette was patented in 2003 with the intention of helping smokers quit by replicating the act of smoking while minimizing exposure to tar and other harmful substances. Vaping was promoted as a safer choice, attracting a new market of non-smokers drawn in by enticing flavors.

In 2014, NJOY, a vaporizer manufacturer later accused of infringing on Juul's patents, launched a campaign with catchy slogans like “Friends Don’t Let Friends Smoke.” They strategically placed ads in bars and nightclubs, embedding vaping into social settings to normalize the behavior and make it seem like a trendy choice.

Ten years later, this narrative has been significantly challenged, as vaping has become the most prevalent form of nicotine use among teenagers in the U.S. as of 2024.

But deep down, maybe we’re looking at drug use all wrong. Instead of just thinking about the risks, it’s worth asking why so many young people are turning to drugs in the first place. What drives them to make that choice? 

Nearly three-quarters (73%) of the 15,963 teenagers who participated in an online survey conducted by the National Addictions Vigilance Intervention and Prevention Program, about their motivations for drug and alcohol use from 2014 to 2022 reported that they used substances “to feel mellow, calm, or relaxed.” Additionally, 44% indicated they used drugs, such as marijuana, as sleep aids.

While drug use among teenagers is a growing concern, the primary challenges young people face might not be addiction, but rather anxiety, depression, and the crippling sense of hopelessness. It is possible that a parent’s overprotectiveness can sometimes misdirect focus towards the wrong problems, leading to a dangerous reliance on technology that fails to reveal the full picture.

Whether the threat is external or tied to self-exploration, this episode of Black Mirror demonstrates how parental fears can easily transform into controlling behaviors. It reflects real-life scenarios where teens, feeling trapped or misunderstood, may seek escape through drugs, sex, or even violence.

Parents, with the best intentions, often believe they're bringing home a protective shield for their children. Instead, the approach turns into a sword, cutting into their relationships and severing the bonds they've worked so hard to maintain. What they thought would keep their children safe only deepens the divide, a poignant reminder that the tools meant to protect can sometimes backfire and cause the most harm.


USS Callister: Black Mirror, Can It Happen?

Before we talk about the events in USS Callister, let’s flash back to when this episode was first released: December 29, 2017.

In March 2017, Nintendo shook up the gaming industry with the release of the Nintendo Switch, a hybrid console that could be used both as a handheld and a home system. Its flexibility and the massive popularity of games like The Legend of Zelda: Breath of the Wild catapulted it to success with over 2.74 million units sold in the first month. 

The same year, Nintendo also released the Super NES Classic, a mini version of their 90s console that left fans scrambling due to shortages.

In the realm of augmented and virtual reality, 2017 also marked important strides. Niantic introduced stricter anti-cheating measures in Pokémon GO, while Oculus revealed the Oculus Go—a more affordable, standalone VR headset designed to bring immersive experiences to more people. Games like Lone Echo pushed the limits of VR, showcasing futuristic gameplay with its zero-gravity world.

However, in the real world, there were significant conversations about the risks of excessive gaming, particularly in China, where new regulations were put in place to limit minors’ time and spending on online games. These shifts in culture raised awareness around the addictive potential of immersive digital environments.

But it was not all fun and games; in fact, there was plenty of work as well. The year was also defined by controversies in the workplace. In October 2017, the Harvey Weinstein scandal broke, igniting the #MeToo movement and leading to widespread discussions about abuse of power, harassment, and accountability. 

Uber was rocked by similar revelations earlier in the year, with a blog post by former engineer Susan Fowler shedding light on a toxic work environment, which ultimately led to the resignation of CEO Travis Kalanick. 

Google wasn’t exempt from these cultural reckonings either, with the firing of software engineer James Damore after his controversial memo questioning the company’s diversity efforts went viral. 

In his memo, titled “Google’s Ideological Echo Chamber,” Damore argued that the underrepresentation of women in tech isn’t simply due to discrimination but is also influenced by biological differences between men and women. He further claimed that Google should do more to foster an environment where conservative viewpoints like his can be freely expressed.

And that brings us to this episode of Black Mirror. Episode 1, Season 4 — USS Callister. This episode combines the excitement of virtual reality with a chilling exploration of power, control, and escapism. 

Much like the controversies of 2017, it asks hard questions: How do we balance the benefits of technology with the ethical implications of its use? What happens when someone with unchecked power has control to live out their darkest fantasies? And finally, how do we confront the consequences of our gradual immersion in digital worlds? 

In this video, we’ll explore three key themes from USS Callister and examine whether similar events have happened—and if they haven’t, whether or not they are still plausible. Let’s go! 

Toxic Workplace

In this episode, we follow Robert Daly, a co-founder and CTO of a successful tech company, Callister. Despite his critical role in the company, Daly is overshadowed by his partner, James Walton, the CEO. Daly’s lack of leadership skills is evident, creating a strained work environment where he is seen as ineffective.

However, in the modified version of the immersive starship game Infinity — a game developed by Callister — Daly lives out his darkest fantasy by assuming the role of a tyrannical captain in a replica of his favorite show, Space Fleet. Here, he wields absolute control over the digital avatars of his employees, who are trapped in the game and forced to obey his every command. This exaggerated portrayal of Daly’s need for power not only reflects his real-world impediments but also highlights his troubling intentions, such as his coercive demands and manipulative actions toward his employees.

USS Callister explores themes of resistance and empowerment as the avatars begin to recognize their situation and challenge Daly’s authority. Their collective struggle to escape the virtual prison serves as a powerful metaphor that underscores the broader issue of navigating workplaces with domineering and unsympathetic employers.

When Elon Musk took over Twitter (now rebranded as X) in October 2022, his management style quickly drew criticism for its harshness and lack of consideration for employees. Musk implemented mass layoffs, abruptly cutting about half of the company’s workforce. By April 2023, Musk confirmed that headcount had been reduced by roughly 80%.

He also implemented a demanding work culture, requiring employees to submit one-page summaries outlining their contributions to the company in order to retain their jobs. This expectation, coupled with long hours and weekend shifts under intense pressure, reflected a disregard for work-life balance and contributed to a high-stress environment.

The rapid and drastic changes under Musk’s tenure led to legal and operational challenges; as of January 2024, Fidelity estimated that X had lost 71% of its value since Elon Musk acquired the company.

In 2020, former staff members accused Ellen DeGeneres and her management team of creating a workplace culture marked by bullying, harassment, and unfair treatment, contradicting her public persona of kindness. Following the backlash and tarnished reputation, Ellen ended her 19-season run, airing her final episode on May 26, 2022, with guests Jennifer Aniston, Billie Eilish, and Pink.

In November 2017, Matt Lauer, a longtime host of NBC’s “Today” show, was fired after accusations of sexual harassment surfaced. Following his termination, more allegations emerged from female colleagues, revealing a pattern of misconduct. Perhaps the most damning detail was Lauer’s use of a secret button to lock his office door — from the outside—to keep other employees from walking in. 

As harassment in the physical world continues to receive widespread attention, it has also found new avenues in digital spaces. 

According to an ANROWS (Australia’s National Research Organisation for Women’s Safety) report from 2017, workplace harassment increasingly moved online, with one in seven people using tech platforms to harass their colleagues. Harassment via work emails, social media, and messaging platforms became a rising issue, showing the darker side of digital communication in professional environments.

In the same year, concerns about workplace surveillance and management practices emerged, particularly at tech companies. 

Amazon was a prime example of invasive productivity tracking, with employees’ movements and actions constantly monitored. If performance dropped below the expected productivity rate, workers risked being fired.

These challenges extended to remote work, where platforms like Slack encouraged a culture of constant availability, even after hours. 

The rise of automated tools, like HireVue’s AI-powered hiring platform and IBM’s AI-driven performance reviews, raised concerns about bias, unfair evaluations, and the lack of human empathy in the hiring and management processes.

These developments highlight broader trends in workplace dynamics, where toxic environments and power imbalances are increasingly magnified by the misuse of technology. This theme is echoed in USS Callister, where personal grievances and unchecked authority in a digital world allow one man to dominate and manipulate his employees within a disturbing virtual playground. The episode serves as a cautionary tale, illustrating how the abuse of power in both real and digital realms can lead to harmful consequences.

Stolen Identity

In USS Callister, Robert Daly’s method of replicating his colleagues’ identities in Infinity involves a disturbing form of theft. Daly uses biometric and genetic material to create digital clones of his coworkers. Specifically, he collects DNA samples from personal items, such as a lollipop discarded by a young boy and a coffee cup used by his colleague, Nanette Cole.

Daly’s access to advanced technology enables him to analyze these DNA samples and extract the personal information necessary to recreate his victims’ digital identities. These avatars, based on the DNA he collected, are trapped within the game, where Daly subjects them to his authoritarian whims.

The use of DNA in this context underscores a profound invasion of privacy and autonomy, turning personal genetic material into tools for exploitation.

Digitizing DNA involves converting genetic sequences into digital formats for storage, analysis, and interpretation. This process begins with sequencing the DNA to determine the order of nucleotides, then converting the sequence into binary code or other digital representations. The data is stored in databases and analyzed using advanced software tools. 
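As a rough illustration of that binary conversion, here is a minimal Python sketch — not any sequencing vendor’s actual file format — that packs a nucleotide string into two bits per base and unpacks it again:

```python
# Illustrative only: a toy 2-bit-per-base encoding of a DNA sequence.
# Real sequencing pipelines use richer formats (FASTQ, BAM, etc.).

ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def dna_to_bytes(seq: str) -> bytes:
    """Pack a nucleotide string into bytes, four bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | ENCODE[base]
        byte <<= 2 * (4 - len(chunk))  # pad a short final chunk with zeros
        out.append(byte)
    return bytes(out)

def bytes_to_dna(data: bytes, length: int) -> str:
    """Unpack bytes back into a nucleotide string of the given length."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # read two bits at a time, high to low
            bases.append(DECODE[(byte >> shift) & 0b11])
    return "".join(bases[:length])

seq = "GATTACA"
packed = dna_to_bytes(seq)
assert bytes_to_dna(packed, len(seq)) == seq
```

The point of the sketch is how small the footprint is: four letters need only two bits each, which is why whole genomes compress into files that are trivial to copy, store, and, as the episode imagines, steal.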

These technologies enable personalized medicine, genetic research, and ancestry analysis, advancing our understanding of genetics and its applications. Key players in this field include companies like Illumina and Thermo Fisher Scientific, as well as consumer services like 23andMe and Ancestry.com.

As more of our genetic data is stored in databases, our personal information becomes increasingly vulnerable. Hackers, scammers, and malicious actors are constantly seeking new ways to exploit data for profit. 

One example is the 2020 Twitter hack, which saw the accounts of major public figures like Elon Musk and Joe Biden hijacked to promote a cryptocurrency scam. The breach not only caused financial losses for unsuspecting followers but also raised alarms about the security of our most-used platforms. 

In 2022, a phishing attack targeted Microsoft Office 365, employing a tactic known as consent phishing to exploit vulnerabilities in multi-factor authentication. In some cases, the attackers impersonated the US Department of Labor and tricked users into granting access to malicious applications and exposing sensitive data such as emails and files. 

In 2024, a BBC investigation revealed an almost 60% increase in romance scams, where individuals used fake identities to form online relationships before soliciting money under false pretenses. 

Similarly, there has also been a rise in sextortion scams targeting college students, where scammers manipulated their victims into compromising situations and demanded ransoms, threatening to release the sensitive material if they didn’t comply.

Jordan DeMay, a 17-year-old high school student from Michigan, died by suicide in March 2022 after being targeted in a sextortion scam that was traced to two Nigerian brothers, Samuel and Samson Ogoshi, who were later arrested and extradited to the U.S. on charges of conspiracy and exploitation.

These instances of identity exploitation mirror another concerning trend: the misuse of genetic data. In 2019, GEDmatch—the database that helped catch the Golden State Killer—experienced a breach that exposed genetic data from approximately one million users who had opted out of law enforcement access. The breach allowed law enforcement to temporarily access private profiles without consent, raising significant privacy concerns about the security of sensitive personal data.

Some insurance companies, particularly in Canada, have been criticized for misusing genetic data to raise premiums or deny coverage, especially for life or disability insurance. This highlights the importance of understanding your policy and legal rights, as insurers do not always comply with newer regulations such as Canada’s Genetic Non-Discrimination Act (GNDA).

All this illustrates the terrifying possibilities shown in USS Callister, that our most intimate data — our identity — could be used against us in ways we never imagined. Whether through hacked social media accounts, phishing scams, or stolen genetic data, the digital age has given rise to new forms of manipulation.

Stuck in a Game

In USS Callister, the very avatars Daly dominates end up outwitting him in a thrilling turn of events. Led by Nanette Cole, the trapped digital crew formulates a bold plan to break free. While Daly is preoccupied, the crew triggers an escape through a hidden wormhole in the game’s code that forces an upgrade. They outmaneuver Daly by transferring themselves to a public version of the game and locking him out for good. As the avatars seize their freedom, Daly, once the ruler of his universe, is left trapped in isolation — doomed.

For anyone who has ever been drawn into the world of video games, “trapped” feels like a fitting description.

Some games, such as Minecraft or massively multiplayer online games (MMOs), have an open-ended structure that allows for infinite play. Without a defined ending, players can easily become absorbed in the game for hours at a time.

Games also tap into social connectivity. Multiplayer games like Fortnite and World of Warcraft foster relationships, forming tight-knit communities where players bond over shared experiences. Much like social media, this sense of connection can make it more difficult to disengage, as players feel a part of something bigger than themselves.

In both USS Callister and real-world video games, a sense of progression and achievement is built into the experience. Daly manipulates his world to ensure a constant sense of control and success that real life cannot match, where milestones and mastery can take weeks, months, or years.

Video games are highly effective at captivating players through well-designed reward systems, which often rely on the brain’s natural release of dopamine. This neurotransmitter, associated with pleasure and motivation, plays a key role in the cycle of gratification. This behavioral reinforcement is seen in other addictive activities, such as gambling.

Game developers employ a multitude of psychological techniques to keep players hooked — trapped. 

The World Health Organization’s (WHO) recognition of “gaming disorder” in the ICD-11, which took effect in 2022, underscores the growing concern surrounding video game addiction. Lawsuits against the makers of major games like Call of Duty, Fortnite, and Roblox have shown serious efforts to hold companies accountable for employing addictive mechanics similar to those found in casinos.

Real-world tragedies have also shed light on the dangers of excessive gaming. In Thailand, for instance, 17-year-old Piyawat Harikun died following an all-night gaming marathon in 2019, sparking debates over the need for better safeguards for young gamers. Cases like this hammer home the need for stronger regulations around how long players, especially minors, are allowed to engage in these immersive experiences.

The financial side of gaming, such as esports, has created incentives for players to turn their addiction into a vocation. Players who make money through competitive gaming or virtual economies may find themselves stuck in cycles of excessive play to maintain or increase their earnings.

This phenomenon is evident in high-profile cases like Kyle “Bugha” Giersdorf, who won $3 million in the Fortnite World Cup, or Anshe Chung, aka the Rockefeller of Second Life, a virtual real estate mogul. 

Then there is the rise of blockchain-based games like Axie Infinity, a colorful game powered by Ethereum-based cryptocurrencies, which introduces financial speculation into the gaming world. These play-to-earn models push players to engage excessively in the hopes of earning monetary rewards. However, they also expose players to significant financial risks, as in-game currency values fluctuate unpredictably, often leading to a sunk-cost fallacy where players feel compelled to continue investing despite diminishing returns.

This episode reminds us that we can often find ourselves imprisoned by our work. Yet, the cost of escapism can be high. While technology may seem to open doors to new worlds, what appears to be an endless realm of freedom can, in reality, be a staircase leading to an inevitable free-fall. USS Callister highlights the abyss that technology can create and the drain it has on our most valuable resource — time. This episode serves as a warning: before we log in at the behest of those in power, we should remember that what happens in the virtual world will ultimately ripple out into the real one.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Hated in the Nation: Black Mirror, Can It Happen?

Before we talk about Hated in the Nation, let’s flash back to when this episode was first released: October 21, 2016.

In 2016, the European Union teamed up with big tech companies like Facebook, Twitter, YouTube, and Microsoft to launch the “Code of Conduct on Countering Illegal Hate Speech Online.” This voluntary agreement aimed to fight the spread of illegal hate speech. The platforms committed to reviewing and removing such content within 24 hours of being notified. While it was a significant step forward, challenges remained regarding its effectiveness, accountability, and balancing free speech with regulation.

By 2016, smartphone manufacturers were integrating facial recognition features into their devices. This allowed users to unlock phones, authorize payments, and access secure apps using facial biometrics, adding an extra layer of security and convenience.

Also in 2016, the Robird, developed by Clear Flight Solutions, emerged as an innovative solution for bird control, particularly in environments where pest birds could cause significant damage or pose safety risks, such as airports.

The Mirai botnet attacks of 2016 were a series of cybersecurity incidents that targeted Internet of Things (IoT) devices such as security cameras, home routers, and smart home devices. The hackers exploited common vulnerabilities like default passwords and insecure configurations to infect a large number of IoT devices and launch massive distributed denial-of-service (DDoS) attacks. These attacks highlighted the security flaws in IoT devices, leading to more efforts to improve network defenses against such threats.

In 2016, costume companies and retailers got into trouble for selling offensive Halloween costumes. Some faced backlash for designs that were seen as perpetuating harmful stereotypes. Celebrities like Chris Hemsworth and Hilary Duff were also called out for their costume choices. However, they weren’t the most hated figures of the year.

In February 2016, Martin Shkreli, a businessman, appeared before a congressional hearing to testify about drug pricing practices. The hearing focused on his company, Turing Pharmaceuticals, and its pricing of Daraprim, which had been raised from $13.50 to $750 per pill, restricting patient access to affordable medication. Shkreli’s unapologetic demeanor and evasive answers during the hearing only made the public angrier, cementing his status as one of the most hated people in the world.

And that brings us to season 3, episode 6 of Black Mirror, “Hated in the Nation.” This episode isn’t just a crime drama; it dives into the dark side of social media and technology. It makes us think about how technology is used in environmental intervention and law enforcement, and how online anonymity, mob mentality, and viral outrage impact society. Who is responsible for the fallout from viral trends and public shaming? What are the ethical and moral implications of our actions online? And how do we handle the backfire of our good intentions?

In this video, we’ll explore three themes of this episode and determine whether similar events have happened — and if not, whether they are still plausible.

The Right to Offend

In this episode, Detective Karin Parke found herself plunged into one of the most disturbing cases of her career. It all started with the mysterious death of a journalist who had been targeted online. The hashtag #DeathTo seemed like a cruel joke, but it quickly became clear that this was no coincidence. Each victim had been publicly shamed and vilified on social media, and now they were turning up dead.

Most of the people who used the hashtag #DeathTo didn’t think it was serious. They saw it as just another way to vent their frustrations or join in on the latest online mob. They believed joking online shouldn’t have real-world consequences, and they never imagined their actions could lead to someone’s death. This mindset highlighted a disturbing hypocrisy.

The inspirations for this episode were plenty. Charlie Brooker, the creator of Black Mirror, had firsthand experience with public backlash after writing a satirical article for The Guardian in 2004. In the article, Brooker wrote, “John Wilkes Booth, Lee Harvey Oswald, John Hinckley Jr, where are you now that we need you?” — implying the assassination of then-U.S. president George W. Bush.

This led to a torrent of violent messages directed at Brooker, prompting him to apologize and The Guardian to remove the article from its website. Although this experience predated the rise of Twitter, even with early social media, people no longer needed an authoritative platform to share their unsavory thoughts. That was… until 2013, when the public had enough and the right to do whatever you want on the Internet officially ended. The warning signs came in the form of cancellations.

Kevin Hart faced significant backlash over homophobic tweets from nearly a decade earlier. These tweets resurfaced after he was announced as the host for the 2019 Oscars. The controversy led to Hart stepping down from the gig and issuing multiple apologies.

Roseanne Barr posted a racist tweet about Valerie Jarrett, a former advisor to President Obama. The tweet was widely condemned, leading to ABC canceling the 2018 reboot of her show “Roseanne” and relaunching it without her as “The Conners”.

Kathy Griffin faced intense backlash after posting a photo of herself holding a prop that looked like the severed head of President Donald Trump. The photo sparked outrage across social media and led to Griffin being fired from her role as co-host of CNN’s New Year’s Eve broadcast, as well as receiving death threats.

And the list continues. 

But this is not limited to public figures. When ordinary people do something the public feels is disrespectful, they, too, are often called out.

In 2014, Breanna Mitchell, a teenager from Brecksville, Ohio, posted a selfie of herself smiling at Auschwitz on Twitter. The photo was widely condemned. But Mitchell wondered whether the mob against her had in fact gone too far. Was she really not allowed to smile?

Following the popularity of the TV series “Chernobyl,” there was an increase in tourism to the Chernobyl Exclusion Zone in Ukraine. Some visitors were criticized for taking playful or inappropriate photos at the site of the nuclear disaster.

In this episode, we are asked to question the rights people have to express themselves and to fully understand what the freedom of speech actually means. 

Freedom of speech is not absolute and is subject to certain limitations, such as when speech incites violence, promotes hate, or jeopardizes public safety. In democratic societies, there are often laws and regulations that balance freedom of speech with other societal values.

While freedom of speech protects the expression of diverse viewpoints and critical discourse, it also entails accountability for the impact of one’s words on others and society at large. “Hated in the Nation” reminds us to be mindful of what we say and do online, because while we may take full liberty of our freedom of speech, we never know who might be watching, or better yet, recording.

Unpopularity Contest 

As the investigation unfolds in Hated in the Nation, it is revealed that the ADIs, those robotic bees, have been hacked by an individual seeking to punish those who were nominated by using a social media hashtag, #DeathTo.

Trending topics are familiar territory: the internet is known to use hate and anger as clickbait through sensational headlines and polarizing content. Media outlets and content creators capitalize on this to boost engagement metrics like likes, shares, and comments. Additionally, algorithms amplify the effect by promoting content that aligns with users’ beliefs, creating echo chambers that reinforce extreme viewpoints.
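A toy sketch of that amplification dynamic, with entirely hypothetical scoring (this is no platform’s real algorithm): posts earn a bonus for matching a user’s existing stance, so agreeable outrage can outrank higher-engagement neutral content.

```python
# Toy echo-chamber model: rank posts by raw engagement plus a bonus
# for agreeing with the user's existing stance. Purely illustrative.

def rank_feed(posts, user_stance, alignment_weight=2.0):
    """Order posts by engagement boosted by stance alignment.

    Each post is (title, engagement, stance); stance and user_stance
    run from -1.0 (one pole) to 1.0 (the other).
    """
    def score(post):
        title, engagement, stance = post
        # alignment is 1.0 when the post fully agrees, 0.0 when opposite
        alignment = 1.0 - abs(stance - user_stance) / 2.0
        return engagement * (1.0 + alignment_weight * alignment)
    return sorted(posts, key=score, reverse=True)

posts = [
    ("Balanced analysis", 100, 0.0),
    ("Outrage piece that agrees with you", 80, 0.9),
    ("Outrage piece that disagrees", 80, -0.9),
]
feed = rank_feed(posts, user_stance=0.9)
```

With these made-up numbers, the agreeable outrage piece tops the feed even though the balanced analysis has more raw engagement, which is the echo-chamber loop in miniature: the more you agree, the more of the same you are shown.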

While effective for driving traffic, these tactics undermine civil discourse and exacerbate societal tensions. For instance, during the 2020 U.S. presidential election, many right-wing users were frequently exposed to posts and news articles supporting claims of election fraud. This led to a stronger belief in misinformation and contributed to events like the Capitol riot on January 6, 2021. 

But it doesn’t begin and end in politics. Echo chambers are also prevalent in other communities as well, including health and wellness, sports, lifestyle and hobbies, and entertainment. 

Gamergate was a 2014 controversy that started in the gaming community but quickly became a broader cultural phenomenon. It involved harassment campaigns and online abuse targeting women in the gaming industry, and it facilitated the spread of misogynistic rhetoric and coordinated attacks against those perceived as threats to the status quo in gaming culture.

Trends are essentially the heartbeat of the Internet; they keep it alive and active. With an endless scroll of influential idiots to follow, it’s hard to predict what people will get caught up in next.

“Momo” was a controversial and widely discussed online trend that emerged in 2018. It involved a creepy-looking sculpture of a woman with exaggerated features, initially created by a Japanese artist. The image was used in online challenges on social media platforms, where users were reportedly encouraged to contact “Momo” and engage in dangerous tasks that could lead to self-harm or harm to others. The trend spread globally, causing panic among parents, educators, and Kim Kardashian.

Many experts and authorities suggested that the trend was largely a hoax or urban legend, with no confirmed cases of direct harm linked to it. Despite the ambiguity surrounding “Momo,” it highlighted broader concerns about the influence of online trends and challenges, particularly those targeting vulnerable individuals such as children and teenagers. 

But online challenges have long existed: 

The Tide Pod Challenge gained infamy in 2018, with participants, primarily teenagers, posting videos of themselves biting into or consuming laundry detergent pods. 

Originating in Russia between 2015 and 2016, the Blue Whale Challenge reportedly encouraged participants to complete a series of tasks over 50 days, culminating in self-harm, such as carving “F57” into their wrists, or suicide. The independent Russian newspaper Novaya Gazeta reported that about 130 children had killed themselves after participating in this game.

And let’s not forget the ever-fateful Choking Game. Although not exclusive to social media, the Choking Game is exactly what it sounds like: self-strangulation or suffocation to induce a temporary high or euphoria. In the U.S., 82 children aged 6 to 19 died from playing the Choking Game between 1995 and 2007, according to the Centers for Disease Control and Prevention. This ignited fear as the game gained popularity online in 2016, with over 36 million YouTube results, many of which provided instructions for the activity.

I can go on… but let’s stop there.

The reasons for following trends are as basic as human connection and community, but social media and digital platforms often play a role in what trends we follow, promoting content through algorithms. While humor and creativity can trigger the algorithm, nothing sparks engagement like content that promotes fear and loathing.

Fake Animals, Real Hacks 

RoboBees are tiny, insect-inspired robots developed by researchers at Harvard University’s Wyss Institute for Biologically Inspired Engineering. The project, which began in 2009, aims to create autonomous flying microrobots capable of performing tasks typically carried out by bees, such as pollination. 

Much like the ADIs (Autonomous Drone Insects) in “Hated in the Nation,” RoboBees are incredibly small, weighing just a fraction of a gram. They are constructed with components made from lightweight materials like carbon fiber. 

Early versions of RoboBees were tethered, receiving power and control signals through a wire. However, researchers are developing untethered versions with onboard power sources and advanced sensors for autonomous navigation and operation.

Developing fully autonomous RoboBees requires advanced sensors, control algorithms, and efficient onboard power systems. Current research includes creating lightweight micro-batteries and energy harvesting technologies. Another area of focus is developing swarm behavior, allowing RoboBees to work together, similar to how real bees operate in a hive, which involves sophisticated communication and coordination algorithms. 
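To make “swarm behavior” concrete, here is a minimal, purely illustrative Python sketch of one boids-style coordination step — the classic flocking model, not the Wyss Institute’s actual control code. Each agent drifts toward the group’s center of mass while pushing away from neighbors that crowd it:

```python
# One step of a toy boids-style swarm: cohesion pulls agents toward the
# group centroid; separation pushes apart agents that get too close.
# Illustrative model only; real microrobot control is far more involved.

def swarm_step(positions, cohesion=0.05, separation=0.3, min_dist=1.0):
    """Return updated (x, y) positions after one coordination step."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n  # centroid x
    cy = sum(y for _, y in positions) / n  # centroid y
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Cohesion: drift toward the swarm's center of mass.
        dx = (cx - x) * cohesion
        dy = (cy - y) * cohesion
        # Separation: push away from any neighbor inside min_dist.
        for j, (ox, oy) in enumerate(positions):
            if i != j and abs(ox - x) < min_dist and abs(oy - y) < min_dist:
                dx += (x - ox) * separation
                dy += (y - oy) * separation
        new_positions.append((x + dx, y + dy))
    return new_positions
```

Iterating this step makes scattered agents converge into a loose cluster without any central controller, which is the appeal for hive-like robots: complex group behavior from simple local rules.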

 “Hated in the Nation” illustrates the dangers of such technologies being misused for surveillance and targeted attacks. But real-world researchers must also assess their impact on ecosystems and ensure responsible use. 

Overall, RoboBees represent a fascinating intersection of biology and engineering, with the potential to address critical challenges in agriculture, environmental conservation, and disaster response. Despite the significant technical and ethical hurdles that remain, the ongoing research and development efforts hold promise for a future where RoboBees could play a vital role in various fields, just as ADIs were intended to do in “Hated in the Nation.” 

The stark comparison underscores the importance of careful consideration and regulation to prevent potential misuse and unintended consequences. As we saw in the episode, failure to do so will be dire. But the real world is also full of alarming examples.

In 2011, a cybersecurity breach at Creech Air Force Base, Nevada, compromised systems controlling US Predator and Reaper drones. Malware — including a keylogger, a program that records keystrokes to capture passwords — infected both classified and unclassified networks, raising security alarms and ultimately affecting control over drones operating in Afghanistan. The incident spurred efforts to bolster defenses against cyber threats targeting essential defense systems.

But alas, no technology can be considered completely unhackable, as the potential for exploitation often exists due to the evolving nature of cybersecurity threats. 

Governments allocate substantial portions of their budgets to cybersecurity. For example, the United States federal government proposed a $12.33 billion budget for fiscal year 2025 to secure federal networks and combat nation-state cyber threats and hacking campaigns. This represents a 10% increase from the previous year.

However, all that security didn’t prevent a global tech outage on July 19, 2024, which grounded airlines, knocked news channels off the air, brought banks offline, and disrupted 911 operators. People worldwide couldn’t boot up their computers because a faulty software update from cybersecurity firm CrowdStrike caused Windows machines to crash with the Blue Screen of Death. Worldwide chaos, all due to a simple error — not a malicious hack.

So it seems, when it comes to our technological infrastructure, we are only as strong as our weakest link.

Whether we are part of the swarm or trapped in it, “Hated in the Nation” reminds us of our personal and collective mistakes. As technology advances and our communication expands, so do our isolation and the power of the mob. Safety in numbers is an illusion, and acting alone is impossible. What happens to your neighbor will likely happen to you. We are playing with unstable toys — new and shiny, but precarious dominoes ready to fall. We are right on track for the events of this episode to unfold: it is harmful to stand alone and terrifying to join the crowd.


Men Against Fire: Black Mirror, Can It Happen?

Before we discuss the events in “Men Against Fire,” let’s rewind to the day it was released: October 21, 2016.

In 2016, unmanned aerial vehicles, commonly known as drones, underwent significant advancements worldwide, enhancing capabilities for reconnaissance, surveillance, and strike missions.

ISIS employed drones equipped with cameras to gather intelligence on enemy positions, troop movements, and strategic targets, including military installations and civilian areas. By late 2016, ISIS had started weaponizing drones, notably against Peshmerga and Western soldiers in northern Iraq.

This increased fear, as the terrorist group was responsible for over 40 major attacks in 2016, such as those in Brussels, Istanbul, Baghdad, and Nice, resulting in numerous casualties and highlighting their ability to carry out terrorism on a global scale.

But the US military had been using drones for over a decade, and it had some pretty powerful ones, including the MQ-9 Reaper. Equipped with advanced sensors and targeting systems, this $31 million aircraft is capable of conducting long-endurance missions and precision airstrikes against ground targets, particularly in counterterrorism operations.

These drones played a crucial role in disrupting ISIS’s operations and eliminating key leaders, such as Hafiz Saeed Khan, stopping him from expanding the terrorist group’s capabilities in Afghanistan and Pakistan. However, drone strikes have also led to the deaths of countless innocent lives. Transparency and accountability are lacking in drone operations, raising concerns about adherence to international law and human rights standards. 

Speaking of war and lack of transparency, we must speak of Russia. In December 2016, with Russian support, the Syrian government achieved substantial gains in eastern Aleppo, leading to the evacuation of rebel fighters and civilians to northern Syria, marking a pivotal moment in the conflict. 

Of course, Russia was involved in other conflicts, most notably in eastern Ukraine, initiated by the annexation of Crimea in 2014. While the Western world was focused elsewhere, Russia was supporting separatist forces despite ceasefire attempts and diplomatic negotiations.

In Afghanistan, the U.S. continued Operation Freedom’s Sentinel in 2016, focusing on counterterrorism efforts against groups like the Taliban and al-Qaeda, while supporting Afghan government stabilization efforts.

The Afghanistan War, beginning in 2001 in response to the 9/11 attacks, stands as the longest war in American history. Under the Biden Administration, the United States officially withdrew its military forces from Afghanistan on August 31, 2021, marking the end of its nearly 20-year military involvement. 

Estimates by the Costs of War project at Brown University’s Watson Institute for International and Public Affairs indicate the war’s total cost exceeded $2 trillion from 2001 to 2021, encompassing direct military spending, interest payments on war-related debt, and long-term care for veterans. 

And that’s what brings us to this episode of Black Mirror, season 3, episode 5, Men Against Fire. This episode doesn’t just trigger discussions about the use of technology in combat and the psychological toll on soldiers — it makes us question the reason for the war itself. Who is leading us down these paths? Why is the momentum of war so hard to stop? And, in the end, what does victory even look like?

In this video, we’ll explore three themes of this episode and determine whether similar events have happened — and if not, whether they are still plausible.

The Optimization of War

In “Men Against Fire,” we’re thrust into a war-torn landscape where soldiers like Stripe are deployed against a mysterious enemy known as the Roaches. The vague justification for exterminating them is that they allegedly carry a contagious sickness. It’s not until Stripe’s MASS implant malfunctions that the facade of duty crumbles, revealing a grim reality: the Roaches are people too.

This often-overlooked episode resonates with today’s global uncertainties. We’re teetering on the edge of World War III, and the enemy is increasingly ambiguous. Warfare is a recurring TV series, renewed each season as long as there’s public support. Governments, whether democracies or dictatorships, rely on propaganda to justify military interventions, stirring fear and rallying citizens against perceived threats from across the border.

Propaganda appeals to patriotism, nationalism, and loyalty as a duty to defend one’s country and uphold its values. Currently in the midst of an invasion, Russia employs a multifaceted approach to garner support for its military actions, both domestically and internationally. Through tightly controlled state-owned media outlets, the government disseminates supportive narratives, framing its attacks as necessary measures to protect national interests, counter external threats, and restore stability. 

By portraying Russia as a defender of traditional values and a bulwark against perceived Western aggression posed by NATO and geopolitical rivals, Vladimir Putin justifies military actions as preemptive measures to safeguard Russia’s interests. 

In 2022, the Kremlin spent approximately $1.9 billion on propaganda. It is not cheap to manipulate public opinion, spread misinformation, and undermine trust in Western institutions. But as we will learn, nothing about war is. 

According to the U.S. Department of Veterans Affairs, about 11-20% of veterans who served in Operations Iraqi Freedom and Enduring Freedom are diagnosed with PTSD in a given year. Additionally, approximately 12% of Gulf War veterans experience PTSD at some point in their lives, and about 30% of Vietnam veterans have had PTSD.

As of 2022, there were 16.2 million veterans in the United States; 20% of that comes out to 3.24 million people.

In 2023, the US government invested $139 million in Veterans Affairs research programs and $16.6 billion in the Medical Care program to improve access to mental healthcare. Furthermore, the budget allocated $559 million to preventing veteran suicide. There’s no argument about the need for these resources, but it’s also a big price to pay to keep the business running. 

In 1961, President Dwight D. Eisenhower coined the term “military-industrial complex,” which encompasses the symbiotic relationship between the military establishment, defense contractors, and government agencies involved in defense and national security affairs. It perpetuates wars through a combination of profit incentives, political influence, institutional inertia, and security imperatives. Defense contractors benefit financially from ongoing conflicts and arms sales, exerting pressure on policymakers to prioritize military interventions and increase defense budgets. 

In this episode, the psychologist Arquette tells Stripe that in the wars of the past, most soldiers didn’t fire their weapons on duty, thus prolonging the war and dampening the demand for bullets. The MASS was introduced to speed up the process of killing. 

The optimization of war is an ongoing development. However, it’s also a double-edged sword. No, wait: more like an atomic bomb.

Want to 10x war? Nuke them. 

The atomic bomb’s development intensified the arms race between the United States and the Soviet Union during the Cold War era, driving both superpowers to expand their nuclear arsenals and develop more advanced weapons systems. Fight fire with fire. 

Quite a conundrum. How can we optimize war without destroying ourselves? How do we find this perfect equilibrium? Surely great leadership will be required. Because the weapon exists, and should any of the nuclear powers be backed into a corner, there would hardly seem to be a reason not to use it to save itself. And so it goes with any new weaponry development. 

The challenge remains to hold governments and political powers accountable for their actions and reactions. The momentum of war can leave many victims and there’s not much that public protest can do when the enemies, out for revenge, are knocking at the gates. 

This episode of Black Mirror reminds us that when it comes down to it, we don’t get to pick our enemies. They are hardwired into our very existence. We are programmed to hate, fear, and be repulsed by them. But even when we see the light. Even if we become woke, so to speak, it’s too late to stop the waves from crashing. Because even if you are no longer controlled, your perceived enemies are. And in dystopia and war, it’s not a battle of bees. It’s a battle of hives. And sacrifice is necessary to make a return on investment.

Constructing the Others

When Stripe encounters a Roach family, the MASS implant distorts his perception, presenting them as savage creatures instead of individuals pleading for mercy. To protect himself, he slaughters them. This manipulation reduces the soldiers’ sense of guilt or remorse when committing violent acts against the Roaches, resembling real-world tactics where psychological conditioning is employed to rationalize atrocities.

Politicians regularly cultivate an “us vs. them” mentality to create a sense of solidarity among the populace. They use dehumanizing language, stereotypes, and negative portrayals to depict the adversary as inferior, barbaric, or morally reprehensible. 

During a speech in Wisconsin on April 2, 2024, presidential candidate Donald Trump demonstrated this perfectly by using such language to describe migrants from Mexico: “The Democrats say, ‘Please don’t call them animals. They’re humans.’ I said, ‘No, they’re not humans, they’re not humans, they’re animals.’” 

America is not alone. Europe has also been engulfed by social tensions, xenophobia, and intergroup conflicts. 

Perhaps the most dramatic example is the weaponization of migrants at the Belarus-Poland border in 2021. Migrants, predominantly from the Middle East and North Africa, amassed at the border seeking entry into EU territory, while Belarusian authorities were accused of enticing or coercing these migrants, aiding their movement to the border under false promises of easier access to the EU. This strategy, attributed to Belarus’s President Alexander Lukashenko, was viewed as a retaliatory measure against EU sanctions imposed on Belarus.

Fear is a great motivator. And diseases are scary. When we see another culture as a disease that can stain generations, then we begin to understand how followers of dictators think. 

Before any guns are fired, before any bombs are dropped, it starts as a battle of ideas:  fire against fire. Ideas too can spread like diseases. We don’t have to look far, just remember all the arguments we had during the height of COVID-19. 

Remember the politicians deflecting blame and responsibility for the spread of the coronavirus by scapegoating certain groups? This included blaming foreign countries, immigrants, ethnic minorities, religious communities, or political opponents for the outbreak or for failing to contain the virus effectively. By portraying these groups as the “other” responsible for the pandemic, politicians gained public support through fear and xenophobia. 

Blame is a powerful way of constructing the other. We saw that during COVID, and we saw that on March 22, 2024.

After terrorists attacked a Moscow concert hall that day, killing 137 people, Vladimir Putin pointed the finger at Ukraine, despite an offshoot of ISIS claiming responsibility. Even from afar, it was clear that Putin was relishing another pretext to maintain his invasion of Ukraine. 

However, the West is not above using the blame game for ulterior motives either. An example of this is how the USA used public anger after 9/11 to justify the war in Iraq in search of weapons of mass destruction. 

There are few motivations more powerful than revenge. We can trace every critical event through history as one domino falling upon another. We saw this after the Oct 7, 2023 attack on Israel by the Islamic militant group Hamas, which escalated the war in Gaza. We saw this in the decade-long war in the Central African Republic. We saw this in the Rohingya crisis in Myanmar. We see the cause and effect of revenge taking form in all parts of the globe.  

Yes, blame is easy when the enemy looks a certain way, but what do we do when the enemies look like us? This is a notable question that the episode poses. 

The resurgence of antisemitism in America is concerning, as it encapsulates a cultural descent into darkness. 

Despite vows to never allow the atrocities of WWII to happen again, 1 in 5 Americans don’t believe that 6 million Jewish people were murdered during the Holocaust. Some thought the number was lower. Others don’t think it happened at all. 

On Oct 8, 2022, a sleepy Kanye West went on Twitter and posted about going “death con 3” (his garbling of DEFCON 3, a military readiness level) on the Jewish people, and then confusingly implied that Jewish people have used their power to bring down anyone who opposes them. 

While Kanye’s comment was another piece of evidence of his mental breakdown and personal shortcomings, it also emboldened antisemitic groups that had been living in the shadows, lying dormant until the time was right for them to rise again. At last, they could openly blame their scapegoat for all their problems.  

Holocaust denial, minimization, and distortion contribute to the normalization of antisemitism and undermine efforts to combat prejudice and intolerance. Pro-Palestine protests across US university campuses are now receiving criticism for walking that fine line. 

Facts and lies are both ideas. And in a battle of ideas that is going to last generations, both have an equal chance of winning. 

There is no problem in this world that is caused by one group of people. None. Therefore, any solution based around getting rid of or holding captive a group of people will in the end fail to resolve anything more than some personal or political gain. 

A strategy built upon blame is often nothing more than a distraction tactic, diverting public attention away from domestic issues, governance failures, or systemic problems within the country.

This episode of Black Mirror is revealing. If you find yourself blaming others or attacking others for life not going your way, you must ask what the governing powers are saying. Are they blaming others to maintain power? Are they using hate to fuel personal gains? As long as we are fighting enemies, we don’t have time to challenge ourselves. We won’t be able to see the monsters we’ve become. We fail to realize that we have only been destroying ourselves the whole time. 

Pleasure Dreams and Tortured Memories   

In sleep, Stripe is rewarded with a dream sequence depicting intimate moments with his lover as positive reinforcement for his performance in combat. However, after his MASS is compromised, it becomes evident that these dreams are tools to maintain soldiers’ loyalty and compliance, blurring the lines between reality and manufactured fantasy.

Sigmund Freud, often regarded as the father of psychoanalysis, made significant contributions to the study of dreams with his groundbreaking work “The Interpretation of Dreams,” published in 1899. 

One of Freud’s central concepts in dream analysis is the idea of wish fulfillment. He proposed that dreams serve as a way for the unconscious mind to fulfill repressed wishes or fantasies that are unacceptable or unattainable in waking life. In that, there is great reward for one to tap into the ability to control their dreams. 

The term “lucid dreaming” was first coined by Dutch psychiatrist and writer Frederik van Eeden in 1913. Van Eeden used the term “lucid” to describe dreams in which the dreamer is aware of being in a dream state while the dream is occurring. He documented his own experiences with lucid dreaming and described various aspects of this phenomenon in his writings, one of which was a vivid flying experience.

One key aspect of lucid dreaming is reality testing, where individuals question their waking reality by performing checks like looking at their hands or trying to read text. This habit can extend into dreams, enabling lucid dreamers to recognize when they are dreaming and take control of their dream experiences. 

If we can learn to control our dreams, then what is the likelihood that machines can control our dreams? 

Research into the effects of non-invasive brain stimulation techniques, such as transcranial electrical stimulation (tES) and transcranial magnetic stimulation (TMS), on dreaming has shown potential modulation of dream recall, intensity, and emotional content. 

Organizations like The Dream Science Foundation and The Max Planck Institute of Psychiatry are actively involved in these studies, aiming to uncover the precise mechanisms and long-term implications of brain stimulation on dreaming. 

Electroencephalography (EEG) technology, used by institutions like The Society for Neuroscience (SfN), allows researchers to monitor sleep-related brain activity, providing insights into sleep stages and patterns, although it does not directly reveal dream content. 

Government-funded organizations such as the United States Army Medical Research and Development Command (USAMRDC) and the Defense Advanced Research Projects Agency (DARPA) focus on researching various aspects of sleep, including sleep disorders, sleep optimization, and the impact of sleep on cognitive and physical performance in military personnel. 

Sleep is critically important for soldiers due to its multifaceted impact on their physical recovery, cognitive function, emotional resilience, situational awareness, physical performance, and overall health. While the military wants to reward good sleep for their soldiers, they can also deprive sleep as a weapon. 

Sleep deprivation is a powerful torture technique. Prolonged sleep deprivation leads to physical and mental exhaustion, weakening resistance to stress and increasing vulnerability to coercion. Hallucinations and psychosis may occur, exacerbating the individual’s distress and compromising their ability to discern reality. 

During the Cold War, Soviet KGB agents were known to use sleep deprivation extensively as a method of breaking down detainees’ resistance and extracting information. Such methods to keep detainees awake include continuous interrogation, physical discomfort, noise and light exposure, temperature manipulation, threats, food and water deprivation, physical stress, and psychological manipulation.

Similarly, in more recent times, reports have emerged of sleep deprivation being used as a tactic in Guantanamo Bay and other detention facilities during the War on Terror.

If one can control our dreams to instill pleasure, then they can use nightmares for punishment. 

Nightmares can disrupt sleep, cause emotional distress upon waking, and may be associated with underlying psychological issues, stress, trauma, or anxiety disorders. While nightmares can be triggered by PTSD, the most plausible method a third-party can apply to control your nightmares would be with drugs. 

Some medications, including antidepressants, antipsychotics, and beta-blockers, may have side effects that contribute to the occurrence of nightmares or oneiroid syndromes, where a person is trapped in a dream-like experience often unable to move or distinguish between what is real and what is a hallucination. 

One example involved a 67-year-old woman with a history of prolonged depression, untreated for over a year, who was prescribed a daily dose of 20 mg of paroxetine. However, after 16 days, she required hospitalization due to behavioral disruptions and delusional beliefs that she was being pursued by malicious individuals. Further evaluation revealed that these delusions stemmed from recurring nightmares and an oneiroid state.

This episode is a great reminder that our brains are easily influenced. How we interact with others, what we see in the media, and other ways we feed our minds in wakefulness and rest may lead us to salvation or doom. 

While it may be a scary thought that technology can tap into our brains and manipulate our senses, looking at the world around us, and seeing the horrors committed on a daily basis, one can say that advanced technology isn’t even necessary. There are already many existing methods of controlling a person, radicalizing them to turn against their own. 

While Men Against Fire is an often forgotten episode, it is so relevant because it touches on the theme of dehumanization. In today’s context, where misinformation, social media echo chambers, and targeted content can distort reality and fuel polarization, the message about the power of perception and manipulation is particularly poignant. 

It serves as a cautionary tale about the dangers of unchecked influence, the importance of critical thinking, and the ethical considerations surrounding technology and perception management in our increasingly digital and interconnected world.

So before you sign the next contract, before you get a loan, before you vote, or make a commitment, ask yourself, who’s pushing you to do so? 

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

San Junipero: Black Mirror, Can It Happen?

Before we discuss the events of San Junipero, let’s first take a look back to when this episode was released: Oct 21, 2016.

In 2016, consumer-grade VR headsets like Oculus Rift, HTC Vive, and PlayStation VR became widely available, making virtual reality more accessible to the masses. 

In the same year, VR found applications in healthcare, particularly for pain management during medical procedures. This was often called “Virtual Reality Distraction” or “VR Distraction Therapy,” where VR headsets created immersive experiences to distract patients from pain.

Mobile dating apps became the dominant platform for online dating in 2016, making the dating process more convenient. The concept of casual dating and hookup culture gained popularity, with apps like Tinder associated with short-term, non-committal relationships.

However, Tinder and its algorithm faced criticism for allegedly perpetuating racial and gender biases in online dating, which raised concerns about fairness and inclusivity.

2016 was a notable year for global equality, with countries like Colombia legalizing same-sex marriage. However, in the U.S., debates on transgender rights and bathroom access intensified due to North Carolina’s “bathroom bill”.

Tragically, the Pulse nightclub shooting in Orlando, Florida on June 12, 2016, targeted the LGBTQ community, resulting in 49 deaths and numerous injuries, making it one of the deadliest mass shootings in America.

Brittany Maynard’s 2014 case continued to influence the euthanasia and right-to-die conversation as it inspired discussions on end-of-life autonomy. In June 2016, California’s “End of Life Option Act” went into effect, allowing terminally ill adults to request medical aid in dying, making California the fifth U.S. state to legalize physician-assisted suicide.

2016 marked a transition year, with technology becoming integrated into various aspects of life, offering opportunities and challenges. This year set the stage for a more connected and digital society, impacting dating, healthcare, and our ability to cope with loss.

And that’s what brings us to this episode of Black Mirror, episode 4 of season 3: San Junipero, an iconic episode that invites us to contemplate the implications of a digital afterlife. 

In this video, we’ll explore 3 themes of this episode and determine whether similar events have happened — and if not, whether they are still plausible. 

Til Death Do Us Part

The episode begins with Yorkie navigating the bustling Tucker’s nightclub, where she crosses paths with Kelly, who encourages her to dance. This encounter sets the stage for a deeper connection beyond the surface allure of San Junipero’s neon-lit nightlife.

The 1980s was an interesting time. While there was progress in the women’s rights movement, traditional gender roles still persisted in many areas. Women were often expected to balance a career with homemaking, and men faced pressure to conform to traditional masculinity. Those who didn’t conform to societal norms often faced stigmatization. This included individuals with alternative lifestyles, like the LGBTQ community.

The American Psychiatric Association declassified homosexuality as a mental disorder in its diagnostic manual in 1973, but the emergence of the HIV/AIDS epidemic in the 1980s sparked fear and amplified existing stigma. 

While HIV and AIDS remain a concern, medical advances and a better understanding of the virus have improved the outlook for those affected. HIV today is no longer a death sentence, with approximately 39 million people globally living with the virus in 2022.

Acceptance is a major theme in this episode. Acceptance is also the final stage of grief. While Yorkie’s family never accepted her after she came out and after her accident, Kelly struggles to accept her husband’s death and the end of their marriage, despite her enduring love for him.

Aging, loss, injustice, and differing viewpoints are all factors we must accept. While we may start with denial, anger, and depression, we can’t grow without eventually finding a way to accept and live with these realities.

In the 1980s, mental health issues were not openly accepted, and individuals facing such challenges were sometimes viewed as weak or even dangerous. People often believed that those with depression could overcome it by “cheering up” or “snapping out of it.” 

In the same vein, conversion therapy, also known as “reparative therapy” or “ex-gay therapy,” aims to change the sexual orientation of a person who identifies as LGBTQ. It was based on the belief that, like depression, sexual orientation was a choice and could be “cured.” 

There is evidence now to suggest that genetics may play a role. Studies of identical twins have shown a higher likelihood of shared sexual orientation compared to fraternal twins. 

Today, while not universal, many LGBTQ individuals experience greater acceptance and support from their families and social circles, with same-sex marriage legally recognized in much of the Western world.

In 2000, Vermont became the first U.S. state to introduce civil unions for same-sex couples, offering legal recognition and benefits but not full marriage rights. It took four more years before Massachusetts made history by legalizing same-sex marriage, granting equal rights and privileges to same-sex couples.

But the war for acceptance still continues. In 2023, the battleground is the education system, where acceptance and inclusion are key to a new generation of LGBTQ youth feeling safe. Across North America, protesters and counter-protesters clashed at the steps of suburban elementary schools over the teaching of gender and sexual orientation. Should such topics be excluded from school curricula, leaving such education solely to parents? One side demands it. But what if those parents hold intolerant beliefs, similar to Yorkie’s parents? Where can children find support?

Approximately 41% of transgender individuals have reported attempting suicide at some point in their lives. Additionally, the suicide attempt rate among LGBTQ adults is nearly 12 times higher than that of the general population.

The rise of social media has enabled us to share messages, raise awareness, and learn from others, but it has also made it more challenging to find contentment in our own lives due to constant comparisons. Research conducted by the Pew Research Center revealed that about 60% of social media users in the United States experience feelings of inadequacy when they see others’ posts showcasing their accomplishments.

Social media feeds are filled with idyllic depictions of flawless marriages, dream vacations, picture-perfect families, and enchanting love stories, creating an endless popularity contest. However, it is crucial to recognize that the notion of a flawless life is a fallacy; such flawlessness does not exist in the real world.

And that is something we need to accept. 

Ghosted 

During their dance, despite their contrasting personalities, Yorkie and Kelly share a joyful moment. However, when Yorkie becomes overwhelmed, she leaves the dance floor, and outside, Kelly makes a sexual proposition to her, to which Yorkie declines by telling her that she’s engaged.

The following week, Yorkie returns, and in the restroom, Kelly once again propositions her sexually. This time Yorkie accepts.

But on the third week, Kelly is nowhere to be found at Tucker’s. Yorkie searches for her at the Quagmire, a BDSM nightclub, and bumps into Wes, the man Kelly had been avoiding. Wes, like Yorkie, had hoped for a continued relationship with Kelly, but it becomes evident that Kelly has chosen to avoid the pain and complexity of such commitments by ghosting them.

Ghosting, which involves suddenly cutting off communication without explanation, became more prevalent in the mobile dating app scene, often leaving users hurt and frustrated. This phenomenon is emblematic of modern dating culture, where the ease of online connection and reduced face-to-face interaction can lead to less personal and sometimes inconsiderate approaches to ending relationships.

According to a 2023 Forbes study, 76% of participants have either ghosted someone or been ghosted in a dating context. Nearly 60% of individuals report having been ghosted, while 45% acknowledge ghosting someone else.

After their intimate night together, Kelly experiences complex emotions. Her initial encounter with Yorkie at the nightclub was more about enjoying the moment and having fun. While she has developed genuine feelings for Yorkie, she also grapples with her internal conflict. 

According to Business of Apps, as of 2022, over 337 million people worldwide are using dating apps. And not all of them are necessarily looking for a soulmate.  

The dating app market is nearly a $5 billion industry that caters to diverse needs. For instance, Grindr is there for the LGBTQ community, Bumble empowers women to initiate conversations, and OkCupid lets users specify their intentions, whether it’s casual flings, short-term dating, or long-term relationships.

Time was referenced often in this episode, and it does indeed play a crucial role in relationships and life. The intense passion in the early stages of romance is an evolutionary mechanism that helps individuals form connections and reproduce, but it typically fades as relationships mature, causing us to question whether it was real at all.

In San Junipero, at midnight, those who are trialing the platform must return to their physical bodies. This serves as a metaphor for how we must confront the physical and mental challenges that come with aging, as well as our capacity for love. Kelly’s hesitation in forming deep emotional connections is partly due to the guilt she carries from her years of devoted love to someone she lost. Letting go of that love is a daunting prospect for her. 

Meanwhile, Yorkie sees being with Kelly as a chance to finally experience the richness of life and love that had been withheld from her in her previous existence.

In this way, to love each other is to let their past lives die. 

People now have the freedom to choose their experiences and relationships. Apps like Tinder popularized the concept of swiping right to like or left to dislike profiles, turning the search for potential matches into a game. 

However, the fundamental human experience remains unchanged. We still grapple with the passage of time, knowing that every decision we make and every opportunity we miss may come back to affect us. And so, it raises questions: What if there’s something better out there? What if we never find something as good again? Dating apps are games, where we create our own characters and hope that the chosen one leads to a happy ending.

Second Life or After Life 

At the end of the episode, Kelly faces a significant choice. Her decision centers around joining her late husband and daughter in the afterlife or staying in San Junipero with Yorkie and embracing digital eternity. 

The Internet has made the preservation of memories complicated. For those still alive, managing their digital legacy is a growing concern, and various apps and platforms help them plan the distribution of their digital assets and online accounts after their passing. Services like Everplans and Cake offer such support.

Some people may choose to leave more than assets and accounts behind. They want to create digital versions of themselves. Technologies like Replika, an AI chatbot, engage in conversations with users, preserving their thoughts and stories for future generations.

But the notion of permanent existence in another world raises questions not just for the afterlife but for our present lives. Many living individuals now opt to spend time in entertainment realms, where they can create avatars that reflect their personality more than their physical appearance. This mirrors Yorkie’s born-again experience, transformed by TCKR’s technology.

One popular form of this digital second life is the aptly named game Second Life, where players create avatars and explore a user-generated 3D environment with various activities, from socializing to designing virtual items. As of 2022, Second Life had 64.7 million active users.

Online platforms provide safe spaces for people to explore and express their identities, and LGBTQ+ communities on social media, games, and apps have shown to offer support and acceptance.

While consumer hype for the Metaverse has simmered down, industries are still bullish about its potential. We see this with technology companies like Nvidia designing “digital twins”: virtual representations of physical objects for use in automobile manufacturing, infrastructure, energy, and more. 

The Metaverse is not going away, despite companies like Disney and Microsoft shutting down projects. In 2023, we saw Apple joining the market by announcing their headset, Apple Vision Pro. While it may seem laughable that we would be wearing those giant goggles all day, tech companies are betting that soon people will buy in.

The hope is by then, governments and policymakers will have a better understanding of the regulatory and ethical aspects of the Metaverse, especially concerning digital identity, data privacy, and virtual economies.

This leads to the topic of a second life as an afterlife. While digital immortality is not yet possible, it sparks debates, especially around a speculative concept called mind uploading. This involves transferring a person’s mental state, including consciousness and memories, from a biological brain to a non-biological or digital form. 

There are a number of companies already embarking on this venture, such as Nectome and Alcor Life Extension. But as of 2023, there doesn’t seem to be any advancement beyond preserving the bodies and brains of the deceased. 

If mind uploading ever becomes possible, its development will hinge on scientific progress, societal acceptance, and ethical frameworks, making it a complex and multifaceted journey.

James Hughes, American sociologist and bioethicist, raises a fundamental question: “The pursuit of digital immortality opens up a realm of ethical concerns. Who owns the digital copies of our minds, and what rights do they have?”

San Junipero is an emotional ride. The episode beautifully explores themes of love, identity, and the nature of existence. It’s known for its captivating blend of nostalgia, romance, and thought-provoking questions about life and death in a digital age. 

While rewatching this episode, I was surprised by how moved I got at the end. Perhaps now, I have gotten older. The past few years have revealed the potential bleakness of the world. Although I have made many commitments, the fear is not that time will stop, but that I may squander it by clinging to something fleeting.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.