Joan Is Awful: Black Mirror, Can It Happen?

Before we dive into the events of Joan Is Awful, let’s flash back to when this episode first aired: June 15, 2023.

In 2023, the tech industry faced a wave of major layoffs. Meta cut 10,000 employees and closed 5,000 open positions in March. Amazon followed, letting go of 9,000 workers that same month. Microsoft reduced its workforce by 10,000 employees in early 2023, while Google announced its own significant layoffs, contributing to a broader trend of instability in even the largest, most influential tech companies.

Netflix released Depp v. Heard in 2023. This three-part documentary captures the defamation trial between Johnny Depp and Amber Heard and explores the viral spectacle that surrounded it online, showing how social media, memes, and influencer commentary amplified every moment.

Meanwhile, incidents of deepfakes surged dramatically. In North America alone, AI-generated videos and audio clips increased tenfold in 2023 compared to the previous year, with a 1,740% spike in malicious use.

In early 2023, a video began circulating on YouTube and across social media that seemed to show Elon Musk in a CNBC interview. The Tesla CEO appeared calm and confident as he promoted a new cryptocurrency opportunity. It looked authentic enough to fool thousands. But the entire thing was fake.

That same year, the legal system began to catch up. An Australian man named Anthony Rotondo was charged with creating and distributing non-consensual deepfake images on a now-defunct website called Mr. Deepfakes. In 2025, he admitted to the offense and was fined $343,500.

Around the world, banks and cybersecurity experts raised alarms as AI manipulation began to breach biometric systems, leading to a new wave of financial fraud. What started as a novelty filter had become a weapon capable of stealing faces, voices, and identities.

All of this brings us to Black Mirror—Season 6, Episode 1: Joan Is Awful.

The episode explores the collision of personal privacy, corporate control, and digital replication. Joan’s life is copied, manipulated, and broadcast for entertainment before she even has a chance to tell her own story. The episode asks: How much of your identity is still yours when technology can exploit and monetize it? And is it even possible to reclaim control once the algorithm has taken over?

In this video, we’ll unpack the episode’s themes, explore real-world parallels, and ask whether these events have already happened—and if not, whether they are still plausible in our tech-driven, AI-permeated world. 

Streaming Our Shame

In Joan Is Awful, we follow Joan, an everyday woman whose life unravels after a streaming platform launches a show that dramatizes her every move. But the show’s algorithm doesn’t just imitate Joan’s life; it distorts it for entertainment. Her friends and coworkers watch the exaggerated version of her and start believing it’s real.

The idea that media can reshape someone’s identity isn’t new—it’s been happening for years, only now with AI, it happens faster, cheaper, and more convincingly.

Reality television has long operated in this blurred zone between truth and manipulation. Contestants on shows like The Bachelor and Survivor have accused producers of using editing tricks to create villains and scandals that never actually happened.

One of the most striking examples comes from The Bachelor contestant Victoria Larson, who accused producers of “Frankenbiting,” a technique of splicing together pieces of dialogue from different moments to make it appear she was spreading rumors or being manipulative. She said the selective editing destroyed her reputation and derailed her career.

Then there’s the speed of public judgment in the age of social media. In 2020, when Amy Cooper—later dubbed “Central Park Karen”—called the police on a Black bird-watcher, the footage went viral within hours. She was fired, denounced, and doxxed almost overnight.

But Joan Is Awful also goes deeper, showing how even our most intimate spaces are no longer private.

In 2020, hackers breached Vastaamo, a Finnish psychotherapy service, stealing tens of thousands of patient files—including therapy notes—and blackmailing both the company and individuals. Finnish authorities eventually caught the hacker, who was sentenced in 2024 for blackmail and unauthorized data breaches.

In this episode, Streamberry’s AI show thrives on a simple principle: outrage. It turns Joan’s humiliation into the audience’s entertainment. The more uncomfortable she becomes, the more viewers tune in. It’s not far from reality.

A 2025 study published in ProMarket found that toxic content drives higher engagement on social media platforms. When users were shielded from negative or hostile posts, they spent 9% less time per day on Facebook, resulting in fewer ads and interactions.

By 2025, over 52% of TikTok videos featured some form of AI generation—synthetic voices, avatars, or deepfake filters. These “AI slop” clips fill feeds with distorted versions of real people, transforming private lives into shareable, monetized outrage.

Joan Is Awful magnifies a reality we already live in. Our online world thrives on manipulation—of emotion, of data, of identity—and we’ve signed the release form without even noticing.

Agreeing Away Your Identity

One of the episode’s most painful scenes comes when Joan meets with her lawyer, asking if there’s any legal way to stop the company from using her life as entertainment. But the lawyer points to the fine print—pages of complex legal language Joan had accepted without a second thought. 

The moment is both absurd and shockingly real. How many times have you clicked “I agree” without reading a word?

In the real world, most of us do exactly what Joan did. A 2017 Deloitte survey conducted in the U.S. found that over 90% of users accept terms and conditions without reading them. Platforms can then use that data for marketing, AI training, or even creative content—all perfectly legal because we “consented.”

The dangers of hidden clauses extend far beyond digital services. In 2024, Disney attempted to invoke a controversial contract clause to avoid liability for a tragic allergic reaction that had led to a woman’s death at a Disney World restaurant in Florida. The company argued that her husband couldn’t sue for wrongful death because—years earlier—he had agreed to arbitration and legal waivers buried in the fine print of a free Disney+ trial.

Critics called the move outrageous, pointing out that Disney was trying to apply streaming service terms to a completely unrelated event. The case exposed how corporations can weaponize routine user agreements to sidestep accountability.

The episode also echoes recent events where real people’s stories have been taken and repackaged for profit.

Take Elizabeth Holmes, the disgraced founder of Theranos. Within months of her trial, her life was dramatized into The Dropout. The Hulu mini-series was produced in real time alongside Holmes’s ongoing trial. As new courtroom revelations surfaced, the writers revised the script. The result was a more layered, unsettling portrayal of Holmes and her business partner Sunny Balwani—a relationship far more complex and toxic than anyone initially imagined.

In Joan Is Awful, the show’s AI doesn’t care about Joan’s truth, and in our world, algorithms aren’t so different. Every click, every “I agree,” and every trending headline feeds an ecosystem that rewards speed over accuracy and spectacle over empathy.

When consent becomes a checkbox and stories become assets, the line between living your life and licensing it starts to blur. And by the time we realize what we’ve signed away, it might already be too late.

Facing the Deepfake

In Joan Is Awful, the twist isn’t just that Joan’s life is being dramatized; it’s that everyone’s life is. What begins as a surreal violation spirals into an infinite mirror. Salma Hayek plays Joan in the Streamberry series, but then Cate Blanchett plays Salma Hayek in the next layer. 

The rise of AI and deepfake technology is reshaping how we understand identity and consent. Increasingly, people are discovering their faces, voices, or likenesses used in ads, films, or explicit content without permission.

In 2025, Brazilian police arrested four people for using deepfakes of celebrity Gisele Bündchen and others in fraudulent Instagram ads, scamming victims out of nearly $3.9 million USD. 

Governments worldwide are beginning to respond. Denmark’s copyright amendment now treats personal likeness as intellectual property, allowing takedown requests and platform fines even posthumously. In the U.S., the 2025 TAKE IT DOWN Act criminalizes non-consensual AI-generated sexual imagery and impersonation.

In May 2025, Mr. Deepfakes, one of the world’s largest deepfake pornography websites, permanently shut down after a core service provider terminated operations. The platform had been online since 2018 and hosted more than 43,000 AI-generated sexual videos, viewed over 1.5 billion times. Roughly 95% of targets were celebrity women, but researchers identified hundreds of victims who were private individuals.

Despite these legal advances, a fundamental gray area remains. As AI becomes increasingly sophisticated, it is getting harder to tell whether content is drawn from a real person or entirely fabricated. 

An example is Tilly Norwood, an AI-generated actress created by Xicoia. In September 2025, the announcement that a talent agency was preparing to sign Norwood sparked major controversy in Hollywood.

Her lifelike digital persona was built using the performances of real actors—without their consent. The event marked a troubling shift, as producers continue to push AI-generated actors into mainstream projects.

Actress Whoopi Goldberg voiced her concern, saying, “The problem with this, in my humble opinion, is that you’re up against something that’s been generated with 5,000 other actors.”

“It’s a little bit of an unfair advantage,” she added. “But you know what? Bring it on. Because you can always tell them from us.”

In response to the backlash, Tilly’s creator Eline Van der Velden shared a statement:
“To those who have expressed anger over the creation of our AI character, Tilly Norwood: she is not a replacement for a human being, but a creative work – a piece of art.”

When Joan and Salma Hayek sneak into the Streamberry headquarters, they overhear Mona Javadi, the executive behind the series, explaining the operation. She reveals that every version of Joan Is Awful is generated simultaneously by a quantum computer, endlessly creating new versions of real people’s lives for entertainment. Each “Joan,” “Salma,” and “Cate” is a copy of a copy—an infinite simulation. And it’s not just Joan; the system runs on an entire catalog of ordinary people. Suddenly, the scale of this entertainment becomes clear—it’s not just wide, it’s deep, with endless iterations and consequences.

At the 2025 Runway AI Film Festival, the winning film Total Pixel Space exemplified how filmmakers are beginning to embrace these multiverse-like AI frameworks. Rather than following a single script, the AI engine dynamically generated visual and narrative elements across multiple variations of the same storyline, creating different viewer experiences each time.

AI and deepfake technologies are already capable of realistically replicating faces, voices, and mannerisms, and platforms collect vast amounts of personal data from our everyday lives. Add quantum computing, algorithmic storytelling, and the legal gray areas surrounding consent and likeness, and the episode’s vision of lives being rewritten for entertainment starts to feel less like fantasy.

Every post, every photo, every digital footprint feeds algorithms that could one day rewrite our lives—or maybe already are. Maybe we can slip the loop, maybe we’re already in it, and maybe the trick is simply staying aware that everything we do is already being watched, whether by the eyes of the audience or the eyes of creators still seeking inspiration.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Metalhead: Black Mirror, Can It Happen?

Before we talk about the events in Metalhead, let’s flash back to when this episode was first released: December 29, 2017.

In 2017, Boston Dynamics founder Marc Raibert took the TED conference stage to discuss the future of his groundbreaking robots. His presentation sparked a mix of awe and unease.

Boston Dynamics has a long history of viral videos showcasing its cutting-edge robots, many of which were mentioned during the talk:

BigDog is a four-legged robot developed by Boston Dynamics with funding from DARPA. Its primary purpose is to transport heavy loads over rugged terrain.

Then there’s Petman, a human-like robot built to test chemical protection suits under real-world conditions. 

Atlas, a 6-foot-tall bipedal robot, is designed to assist in search-and-rescue missions. 

Handle is a robot on wheels. It can travel at 9 mph, leap 4 feet vertically, and cover about 15 miles on a single battery charge.

And then there was SpotMini, a smaller, quadrupedal robot with a striking blend of technical prowess and charm. During the talk, SpotMini played to the audience’s emotions, putting on a show of cuteness. 

In November 2017, the United Nations debated a ban on lethal autonomous weapons, or “killer robots.” Despite growing concerns from human rights groups, no consensus was reached, leaving the future of weaponized AI unclear.

Simultaneously, post-apocalyptic themes gained traction in 2017 pop culture. From the success of The Walking Dead to Blade Runner 2049’s exploration of dystopian landscapes, this pre-COVID audience seemed enthralled by stories of survival in hostile worlds, as though mentally preparing for the worst to come.

And that brings us to this episode of Black Mirror, Episode 5 of Season 4: Metalhead.

Set in a bleak landscape, Metalhead follows Bella, a survivor on the run from relentless robotic “dogs” after a scavenging mission goes awry. 

This episode taps into a long-standing fear humanity has faced since it first began experimenting with the “dark magic” of machinery. Isaac Asimov’s Three Laws of Robotics were designed to ensure robots would serve and protect humans without causing harm. These laws state that a robot must not harm a human, must obey orders unless they conflict with the first law, and must protect itself unless doing so conflicts with the first two laws.

In Metalhead, however, these laws are either absent or overridden. This lack of ethical safeguards mirrors the real-world fears of unchecked AI and its potential to harm, especially in situations driven by survival instincts. 

So, we’re left to ask: At what point does innovation cross the line into an existential threat? Could machines, once designed to serve us, evolve into agents of our destruction? And, most importantly, as we advance technology, are we truly prepared for the societal consequences that come with it?

In this video, we’ll explore three key themes from Metalhead and examine whether similar events have already unfolded—and if not, whether or not it’s still plausible. Let’s go!

Killer Instincts

Metalhead plunges us into a barren wasteland where survival hinges on outsmarting a robotic “dog.” Armed with advanced tracking, razor-sharp senses, and zero chill, this nightmare locks onto Bella after her supply mission takes a hard left into disaster.

The robot dog’s tracking systems are similar to current military technologies. Autonomous drones and ground robots use GPS-based trackers and infrared imaging to locate targets. Devices like Lockheed Martin’s Stalker XE drones combine GPS, thermal imaging, and AI algorithms to pinpoint enemy movements even in dense environments or under cover of darkness. 

With AI-driven scanning systems that put human eyesight to shame, it can spot a needle in a haystack—and probably tell you the needle’s temperature, too. Think FLIR thermal imaging cameras, which let you see heat signatures through walls or dense foliage, or Boston Dynamics’ Spot using Light Detection and Ranging (aka Lidar) and pattern recognition to map the world with precision. 

Lidar works by sending out laser pulses and measuring the time it takes for them to bounce back after hitting an object. These pulses generate a detailed 3D map of the environment, capturing even the smallest features, from tree branches to building structures.
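
To make the time-of-flight idea concrete, here is a minimal Python sketch of the distance arithmetic. It is illustrative only, not any vendor’s firmware, and the 200-nanosecond return time is an invented example:

```python
# Time-of-flight in miniature: a lidar pulse's round-trip time gives distance.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Target distance in meters; halve the path because the pulse goes out and back."""
    return SPEED_OF_LIGHT * seconds / 2

# A pulse that returns after 200 nanoseconds hit something ~30 meters away.
print(f"{distance_from_round_trip(200e-9):.1f} m")  # -> 30.0 m
```

Repeat that calculation millions of times per second across a sweeping laser, and you get the detailed 3D point cloud described above.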

One of the most unsettling aspects of the robot in Metalhead is its superior auditory abilities. In the real world, acoustic surveillance technology, such as ShotSpotter, uses microphones and AI to detect and triangulate gunfire in urban areas. While it sounds impressive, its effectiveness is debated, with critics and a University of Michigan study pointing to false positives and uneven results.

Still, technology is quickly advancing in recognizing human sounds, and some innovations are already in consumer products. Voice assistants like Alexa and Siri can accurately respond to vocal commands, while apps like SoundHound can identify music and spoken words in noisy environments. While these technologies offer convenience, they also raise concerns about how much machines are truly able to “hear.”

This is especially true when advanced sensors—whether auditory, visual, or thermal—serve a darker purpose, turning their sensory prowess into a weapon.

Take robotics companies like Ghost Robotics, whose machines have been equipped with sniper rifles, dubbed Special Purpose Unmanned Rifles (SPURs). These machines, designed for military applications, are capable of identifying and engaging targets with minimal human oversight—raising profound ethical concerns about the increasing role of AI in life-and-death decisions.

Built for Speed

In this episode, the robot’s movement—fast, deliberate, and capable of navigating uneven terrain—resembles Spot from Boston Dynamics. 

Spot can sprint at a brisk 5.2 feet per second, which translates to about 3.5 miles per hour. While that’s fairly quick for a robot navigating complex terrain, it’s still slower than the average human running speed. The typical human can run around 8 to 12 miles per hour, depending on fitness level and sprinting ability. 

So while Spot may not outpace a sprinter, Boston Dynamics’ DARPA-funded Cheetah robot can — at least on the treadmill. More than a decade ago, a video was released of this robot running 28.3 miles per hour on a treadmill, leaving even Usain Bolt in the dust.

But while the treadmill is impressive, the current record holder for the fastest land robot is Cassie—and she’s got legs for it! Developed by Oregon State University’s Dynamic Robotics Lab, Cassie sprinted her way into the record books in 2022, running 100 m in 24.73 seconds. 
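
If you want to sanity-check those figures, the unit conversions take only a few lines of Python; the speeds below are the ones quoted in this section:

```python
# Putting the quoted speeds on a single scale (miles per hour).
FT_PER_MILE = 5280
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

spot_mph = 5.2 * SECONDS_PER_HOUR / FT_PER_MILE                   # Spot: 5.2 ft/s
cassie_mph = (100 / 24.73) * SECONDS_PER_HOUR / METERS_PER_MILE   # Cassie: 100 m in 24.73 s

print(f"Spot:   {spot_mph:.1f} mph")    # ~3.5 mph
print(f"Cassie: {cassie_mph:.1f} mph")  # ~9.0 mph
```

So Cassie’s record pace sits right at the bottom of that 8-to-12 mph human range, while the treadmill-bound Cheetah clears the top of it with room to spare.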

While today’s robots may not yet match the speed, adaptability, and relentless pursuit seen in the episode, the rapid strides in robotics and AI are quickly closing the gap. Like the tortoise slowly gaining ground on the overconfident hare, these technological advances, though not yet flawless, are steadily creeping toward a reality where they might outrun us in ways we hadn’t anticipated.

Charged to Kill

At a pivotal point in the story, Bella’s survival hinges on exploiting the robot’s energy source: by forcing it to repeatedly power on and off, she aims to drain its battery. Advanced machines, reliant on sensors, processors, and actuators, burn through significant energy during startup.

Today’s robots, like Spot or advanced military drones, run on rechargeable lithium-ion batteries. While these batteries offer excellent energy density, their runtime is finite—high-demand tasks like heavy movement or AI processing can drain them in as little as 90 minutes.
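
To see why Bella’s restart tactic works, consider a toy model. Every number here is invented for illustration; the point is only that a fixed energy cost per boot compounds quickly:

```python
# Toy model: forced reboots drain a battery much faster than steady operation.
BATTERY_WH = 600.0  # assumed battery capacity, watt-hours
IDLE_W = 50.0       # assumed steady draw while operating, watts
BOOT_WH = 5.0       # assumed extra energy burned by each startup, watt-hours

def hours_until_empty(boots_per_hour: float) -> float:
    """Runtime in hours given a forced reboot rate."""
    return BATTERY_WH / (IDLE_W + boots_per_hour * BOOT_WH)

print(f"No reboots:      {hours_until_empty(0):.1f} h")   # 12.0 h
print(f"10 reboots/hour: {hours_until_empty(10):.1f} h")  #  6.0 h
```

Under these made-up numbers, ten forced restarts an hour cuts the machine’s endurance in half, which is exactly the kind of attrition Bella is betting on.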

However, the latest battery innovations are redefining what’s possible, and the automotive industry is leading the charge. Solid-state batteries, for example, offer greater capacity, faster charging, and longer lifespans than traditional lithium-ion ones. Companies like Volkswagen and Toyota have invested heavily in this technology, hoping it will revolutionize the EV market.

Self-recharging technologies, like Kinetic Energy Recovery Systems (KERS), are moving from labs to consumer products. KERS, used in Formula 1 cars, captures and stores kinetic energy from braking to power systems and reduce fuel consumption. It’s now being explored for use in consumer and electric vehicles.
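
The physics behind KERS is the familiar kinetic energy formula, E = ½mv². As a back-of-envelope sketch, with an assumed 1,500 kg car and a hypothetical 70% recovery rate:

```python
# Rough KERS math: kinetic energy available when a vehicle brakes to a stop.
def kinetic_energy_kj(mass_kg: float, speed_kmh: float) -> float:
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v ** 2 / 1000  # joules -> kilojoules

energy = kinetic_energy_kj(1500, 100)  # assumed 1,500 kg car braking from 100 km/h
print(f"~{energy:.0f} kJ available; at a hypothetical 70% recovery, ~{0.7 * energy:.0f} kJ stored")
```

That works out to roughly 579 kJ per hard stop, a meaningful top-up when it would otherwise be lost as brake heat.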

Battery innovation is challenging for several reasons: improving energy density often compromises safety, and developing new batteries requires expensive materials and complex manufacturing processes.

Modern robots are pretty good at managing their power, but even the smartest machines can’t escape the inevitable—batteries that drain under intense demands. While energy storage and self-recharging tech like solar or kinetic systems may help, robots will always face the dreaded low-battery warning. After all, as much as we’d love to plug them into an infinite, self-sustaining energy source, the laws of physics will always say, “Nice try!”

Information Flow

When Bella throws paint to blind the robot’s sensors and uses sound to mislead it, her plan works—briefly. But the robot quickly adapts, recalibrating its AI to interpret new environmental data and adjust its strategy. Similarly, when Bella shoots the robot, it doesn’t just take the hit—it learns, retaliating with explosive “track bullets” that embed tracking devices in her body. This intelligent flexibility ensures that, even when temporarily disabled, the robot can still alter its approach and continue pursuing its objective.

In real life, robots with such capabilities are not far-fetched. Modern drone swarms, such as those tested by DARPA, can coordinate multiple drones for collective objectives. In some instances, individual drones are programmed to act as decoys or to deliberately draw enemy fire, allowing the remaining drones in the swarm to carry out their mission.

In October 2016 at China Lake, California, 103 Perdix drones were launched from three F/A-18 Super Hornets. During this test, the micro-drones exhibited advanced swarm behaviors, including collective decision-making, adaptive formation flying, and self-healing.
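
“Self-healing” means the formation repairs itself when a member is lost. The sketch below shows the idea in miniature; the circular formation and drone names are invented for illustration, and real swarm controllers are far more sophisticated:

```python
import math

# Toy self-healing formation: survivors re-space evenly when a drone drops out.
def formation_slots(drones: list[str], radius: float = 10.0) -> dict[str, tuple[float, float]]:
    """Assign each drone an evenly spaced (x, y) slot on a circle."""
    n = len(drones)
    return {
        d: (radius * math.cos(2 * math.pi * i / n),
            radius * math.sin(2 * math.pi * i / n))
        for i, d in enumerate(drones)
    }

swarm = ["d1", "d2", "d3", "d4"]
print(formation_slots(swarm))  # four slots, 90 degrees apart
swarm.remove("d3")             # one drone is lost
print(formation_slots(swarm))  # survivors re-space to 120 degrees apart
```

No drone is indispensable: the formation is recomputed from whoever remains, which is what makes a swarm so hard to defeat by attrition.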

While the events in Metalhead are extreme, they are not entirely outside the realm of possibility. Modern robotics, AI, and machine learning are progressing at a staggering rate, making the robot’s ability to adapt, learn, and pursue its objective all too real. 

The advancements in sensors, energy storage, and autonomous decision-making systems could one day allow machines to operate with the same precision seen in the episode. 

So, while we may not yet face such an immediate threat, the seeds are sown. A future dominated by robots is not a matter of “if,” but “when.” As we step into this new frontier, we must proceed with caution, for once unleashed, these creations could be as relentless as any natural disaster—except that nothing about this will be natural.

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

USS Callister: Black Mirror, Can It Happen?

Before we talk about the events in USS Callister, let’s flash back to when this episode was first released: December 29, 2017.

In March 2017, Nintendo shook up the gaming industry with the release of the Nintendo Switch, a hybrid console that could be used both as a handheld and a home system. Its flexibility and the massive popularity of games like The Legend of Zelda: Breath of the Wild catapulted it to success with over 2.74 million units sold in the first month. 

The same year, Nintendo also released the Super NES Classic, a mini version of their 90s console that left fans scrambling due to shortages.

In the realm of augmented and virtual reality, 2017 also marked important strides. Niantic introduced stricter anti-cheating measures in Pokémon GO, while Oculus revealed the Oculus Go—a more affordable, standalone VR headset designed to bring immersive experiences to more people. Games like Lone Echo pushed the limits of VR, showcasing futuristic gameplay with its zero-gravity world.

However, in the real world, there were significant conversations about the risks of excessive gaming, particularly in China, where new regulations were put in place to limit minors’ time and spending on online games. These shifts in culture raised awareness around the addictive potential of immersive digital environments.

No, it was not all fun and games — in fact, there was plenty of work as well. The year was also defined by controversies in the workplace. In October 2017, the Harvey Weinstein scandal broke, igniting the #MeToo movement and leading to widespread discussions about abuse of power, harassment, and accountability.

Uber was rocked by similar revelations earlier in the year, with a blog post by former engineer Susan Fowler shedding light on a toxic work environment, which ultimately led to the resignation of CEO Travis Kalanick. 

Google wasn’t exempt from these cultural reckonings either, with the firing of software engineer James Damore after his controversial memo questioning the company’s diversity efforts went viral. 

In his memo titled “Google’s Ideological Echo Chamber,” Damore argued that the underrepresentation of women in tech isn’t simply due to discrimination but is also influenced by biological differences between men and women. He further claimed that Google should do more to foster an environment where conservative viewpoints, like his, can be freely expressed.

And that brings us to this episode of Black Mirror. Episode 1, Season 4 — USS Callister. This episode combines the excitement of virtual reality with a chilling exploration of power, control, and escapism. 

Much like the controversies of 2017, it asks hard questions: How do we balance the benefits of technology with the ethical implications of its use? What happens when someone with unchecked power has control to live out their darkest fantasies? And finally, how do we confront the consequences of our gradual immersion in digital worlds? 

In this video, we’ll explore three key themes from USS Callister and examine whether similar events have happened—and if they haven’t, whether or not they are still plausible. Let’s go! 

Toxic Workplace

In this episode, we follow Robert Daly, a co-founder and CTO of a successful tech company, Callister. Despite his critical role in the company, Daly is overshadowed by his partner, James Walton, the CEO. Daly’s lack of leadership skills is evident, creating a strained work environment where he is seen as ineffective.

However, in the modified version of the immersive starship game Infinity — a game developed by Callister — Daly lives out his darkest fantasy by assuming the role of a tyrannical captain in a replica of his favorite show, Space Fleet. Here, he wields absolute control over the digital avatars of his employees, who are trapped in the game and forced to obey his every command. This exaggerated portrayal of Daly’s need for power not only reflects his real-world frustrations but also highlights his troubling intentions, such as his coercive demands and manipulative actions toward his employees.

USS Callister explores themes of resistance and empowerment as the avatars begin to recognize their situation and challenge Daly’s authority. Their collective struggle to escape the virtual prison serves as a powerful metaphor that underscores the broader issue of navigating workplaces with domineering and unsympathetic employers.

When Elon Musk took over Twitter (now rebranded as X) in October 2022, his management style quickly drew criticism for its harshness and lack of consideration for employees. Musk implemented mass layoffs, abruptly cutting about half of the company’s workforce. By April 2023, Musk confirmed that the company’s headcount had fallen by roughly 80%.

He also implemented a demanding work culture, requiring employees to submit one-page summaries outlining their contributions to the company in order to retain their jobs. This expectation, coupled with long hours and weekend shifts under intense pressure, reflected a disregard for work-life balance and contributed to a high-stress environment.

The rapid and drastic changes under Musk’s tenure not only led to legal and operational challenges but also eroded the company’s worth: as of January 2024, Fidelity estimated that X had lost 71% of its value since Elon Musk acquired it.

In 2020, former staff members accused Ellen DeGeneres and her management team of creating a workplace culture marked by bullying, harassment, and unfair treatment—contradicting her public persona of kindness. Following the backlash and her tarnished reputation, Ellen ended her 19-season run and aired her final episode on May 26, 2022, with guests Jennifer Aniston, Billie Eilish, and Pink.

In November 2017, Matt Lauer, a longtime host of NBC’s “Today” show, was fired after accusations of sexual harassment surfaced. Following his termination, more allegations emerged from female colleagues, revealing a pattern of misconduct. Perhaps the most damning detail was Lauer’s secret button, which allowed him to lock his office door from his desk without getting up, keeping other employees from walking in.

As harassment in the physical world continues to receive widespread attention, it has also found new avenues in digital spaces. 

According to an ANROWS (Australia’s National Research Organisation for Women’s Safety) report from 2017, workplace harassment increasingly moved online, with one in seven people using tech platforms to harass their colleagues. Harassment via work emails, social media, and messaging platforms became a rising issue, showing the darker side of digital communication in professional environments.

In the same year, concerns about workplace surveillance and management practices emerged, particularly at tech companies. 

Amazon was a prime example of invasive productivity tracking, with employees’ movements and actions constantly monitored. Workers whose performance dropped below their expected productivity rate risked being fired.

These challenges extended to remote work, where platforms like Slack encouraged a culture of constant availability, even after hours. 

The rise of automated tools, like HireVue’s AI-powered hiring platform and IBM’s AI-driven performance reviews, raised concerns about bias, unfair evaluations, and the lack of human empathy in the hiring and management processes.

These developments highlight broader trends in workplace dynamics, where toxic environments and power imbalances are increasingly magnified by the misuse of technology. This theme is echoed in USS Callister, where personal grievances and unchecked authority in a digital world allow one man to dominate and manipulate his employees within a disturbing virtual playground. The episode serves as a cautionary tale, illustrating how the abuse of power in both real and digital realms can lead to harmful consequences.

Stolen Identity

In USS Callister, Robert Daly’s method of replicating his colleagues’ identities in Infinity involves a disturbing form of theft. Daly uses biometric and genetic material to create digital clones of his coworkers. Specifically, he collects DNA samples from personal items, such as a lollipop discarded by a young boy and a coffee cup used by his colleague, Nanette Cole.

Daly’s access to advanced technology enables him to analyze these DNA samples and extract the personal information necessary to recreate his victims’ digital identities. These avatars, based on the DNA he collected, are trapped within the game, where Daly subjects them to his authoritarian whims.

The use of DNA in this context underscores a profound invasion of privacy and autonomy, turning personal genetic material into tools for exploitation.

Digitizing DNA involves converting genetic sequences into digital formats for storage, analysis, and interpretation. This process begins with sequencing the DNA to determine the order of nucleotides, then converting the sequence into binary code or other digital representations. The data is stored in databases and analyzed using advanced software tools. 
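
At its simplest, that conversion maps each of the four bases to two bits. Here is a toy Python sketch of the idea; real pipelines use richer formats such as FASTQ, which also record per-base quality scores:

```python
# Pack a nucleotide string into bytes at two bits per base.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(sequence: str) -> bytes:
    bits = 0
    for base in sequence:
        bits = (bits << 2) | ENCODE[base]   # append two bits per base
    n_bytes = (2 * len(sequence) + 7) // 8  # round up to whole bytes
    return bits.to_bytes(n_bytes, "big")

print(pack("GATTACA").hex())  # seven bases fit in two bytes
```

At that density, a whole human genome of roughly three billion base pairs packs into about 750 MB, which is why the hard problems are analysis and database security rather than raw storage.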

These technologies enable personalized medicine, genetic research, and ancestry analysis, advancing our understanding of genetics and its applications. Key players in this field include companies like Illumina and Thermo Fisher Scientific, as well as consumer services like 23andMe and Ancestry.com.

As more of our genetic data is stored in databases, our personal information becomes increasingly vulnerable. Hackers, scammers, and malicious actors are constantly seeking new ways to exploit data for profit. 

One example is the 2020 Twitter hack, which saw the accounts of major public figures like Elon Musk and Joe Biden hijacked to promote a cryptocurrency scam. The breach not only caused financial losses for unsuspecting followers but also raised alarms about the security of our most-used platforms. 

In 2022, a phishing attack targeted Microsoft Office 365, employing a tactic known as consent phishing to exploit vulnerabilities in multi-factor authentication. In some cases, the attackers impersonated the US Department of Labor and tricked users into granting access to malicious applications and exposing sensitive data such as emails and files. 

In 2024, a BBC investigation revealed an almost 60% increase in romance scams, where individuals used fake identities to form online relationships before soliciting money under false pretenses. 

Similarly, there has also been a rise in sextortion scams targeting college students, where scammers manipulated their victims into compromising situations and demanded ransoms, threatening to release the sensitive material if they didn’t comply.

Jordan DeMay, a 17-year-old high school student from Michigan, died by suicide in March 2022 after being targeted in a sextortion scam traced to two Nigerian brothers, Samuel and Samson Ogoshi, who were later arrested and extradited to the U.S. on charges of conspiracy and exploitation.

These instances of identity exploitation mirror another concerning trend: the misuse of genetic data. In 2020, GEDmatch—the database that helped catch the Golden State Killer—experienced a breach that exposed genetic data from approximately one million users who had opted out of law enforcement access. The breach allowed law enforcement to temporarily access private profiles without consent, raising significant privacy concerns about the security of sensitive personal data.

Some insurance companies—specifically those in Canada—have been criticized for misusing genetic data to raise premiums or deny coverage, especially in areas like life or disability insurance. This highlights the importance of understanding your policy and legal rights, as insurance companies do not always comply with newer regulations such as Canada’s Genetic Non-Discrimination Act (GNDA).

All this illustrates the terrifying possibilities shown in USS Callister, that our most intimate data — our identity — could be used against us in ways we never imagined. Whether through hacked social media accounts, phishing scams, or stolen genetic data, the digital age has given rise to new forms of manipulation.

Stuck in a Game

In USS Callister, the very avatars Daly dominates end up outwitting him in a thrilling turn of events. Led by Nanette Cole, the trapped digital crew formulates a bold plan to break free. While Daly is preoccupied, the crew triggers an escape through a hidden wormhole in the game’s code that forces an upgrade. They outmaneuver Daly by transferring themselves to a public version of the game and locking him out for good. As the avatars seize their freedom, Daly, once the ruler of his universe, is left trapped in isolation — doomed.

For anyone who has ever been drawn into the world of video games, “trapped” feels like a fitting description.

Some games, such as Minecraft or massively multiplayer online games (MMOs), have an open-ended structure that allows for infinite play. Without a defined ending, players can easily become absorbed in the game for hours at a time.

Games also tap into social connectivity. Multiplayer games like Fortnite and World of Warcraft foster relationships, forming tight-knit communities where players bond over shared experiences. Much like social media, this sense of connection can make it more difficult to disengage, as players feel a part of something bigger than themselves.

In both USS Callister and real-world video games, a sense of progression and achievement is built into the experience. Daly manipulates his world to ensure a constant sense of control and success, something real life rarely offers, where milestones and mastery can take weeks, months, or years.

Video games are highly effective at captivating players through well-designed reward systems, which often rely on the brain’s natural release of dopamine. This neurotransmitter, associated with pleasure and motivation, plays a key role in the cycle of gratification. This behavioral reinforcement is seen in other addictive activities, such as gambling.
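
The mechanism games borrow from gambling is the variable-ratio reward schedule: payoffs arrive after an unpredictable number of attempts. A toy Python simulation makes the pattern visible; the 10% drop rate is an invented figure:

```python
import random

# Variable-ratio schedule: each pull has the same small chance of a rare drop,
# so rewards arrive at unpredictable intervals (the slot-machine pattern).
def play_session(pulls: int, drop_rate: float = 0.10) -> list[int]:
    """Return the pull numbers on which a rare drop occurred."""
    return [i for i in range(1, pulls + 1) if random.random() < drop_rate]

random.seed(7)  # fixed seed so the example is reproducible
print("Rare drops on pulls:", play_session(50))
```

It is the irregular gaps between hits, not the size of any single reward, that keeps players pulling.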

Game developers employ a multitude of psychological techniques to keep players hooked — trapped. 

The World Health Organization’s (WHO) recognition of “gaming disorder,” which took effect with the ICD-11 in 2022, underscores the growing concern surrounding video game addiction. Lawsuits against the makers of major games like Call of Duty, Fortnite, and Roblox have shown serious efforts to hold companies accountable for employing addictive algorithms similar to those found in casinos.

Real-world tragedies have also shed light on the dangers of excessive gaming. In Thailand, for instance, 17-year-old Piyawat Harikun died following an all-night gaming marathon in 2019, sparking debates over the need for better safeguards to protect young gamers. Cases like this hammer home the need for stronger regulations around how long players, especially minors, are allowed to engage in these immersive experiences.

The financial aspects of gaming, such as esports, have created incentives for players to commit to their addiction as a vocation. Players who make money through competitive gaming or virtual economies may find themselves stuck in cycles of excessive play to maintain or increase their earnings.

This phenomenon is evident in high-profile cases like Kyle “Bugha” Giersdorf, who won $3 million in the Fortnite World Cup, or Anshe Chung, aka the Rockefeller of Second Life, a virtual real estate mogul. 

Then there is the rise of blockchain-based games like Axie Infinity, a colorful game powered by Ethereum-based cryptocurrencies, which introduces financial speculation into the gaming world. These play-to-earn models push players to engage excessively in the hopes of earning monetary rewards. However, they also expose players to significant financial risks, as in-game currency values fluctuate unpredictably, often leading to a sunk-cost fallacy where players feel compelled to continue investing despite diminishing returns.

This episode reminds us that we can often find ourselves imprisoned by our work. Yet, the cost of escapism can be high. While technology may seem to open doors to new worlds, what appears to be an endless realm of freedom can, in reality, be a staircase leading to an inevitable free-fall. USS Callister highlights the abyss that technology can create and the drain it has on our most valuable resource — time. This episode serves as a warning: before we log in at the behest of those in power, we should remember that what happens in the virtual world will ultimately ripple out into the real one.

Join my YouTube community for insights on writing, the creative process, and the endurance needed to tackle big projects. Subscribe Now!

For more writing ideas and original stories, please sign up for my mailing list. You won’t receive emails from me often, but when you do, they’ll only include my proudest works.