Her, Now Us. AI in 2025 is Wild.
The year is 2025. So, according to Spike Jonze’s Her, this is the year when hyper-personalized AI evolves from Siri-like assistants into companions, confidantes, and lovers.
Back in 2013 it was crazy sci-fi stuff… now we have (almost) caught up with that science-fiction dream.
Almost, because while AI hasn’t yet hit the emotional highs of Samantha (the titular OS in Her), its shadow looms larger than ever in our daily lives. Generative AI, conversational agents, and machine learning models dominate workplaces, and yet, if we’re being honest, they’re still coming up short of the promise shown in the movie.
AI is developing fast (perhaps too fast), and it isn’t waiting for us to catch up, even as real harm is being done.
Her
Spike Jonze’s choice of 2025 for this transformative tech world was no accident. It’s the sweet spot — far enough to seem futuristic back in 2013, yet close enough that Her’s melancholic reflection on human disconnection still feels personal. But let’s unpack where reality diverges (and aligns) with the tech fantasy presented in the movie.
Where Her Got It Right
AI as Emotional Tools
The rise of ChatGPT-like interfaces and platforms like Character AI mimics an early version of Her’s OS, Samantha. These tools might not yet have real emotions (or adequate memory to sustain the illusion), but people are using them as emotional support today.
Social Isolation Amplified by Tech
As remote work and “WFH handcuffs” tighten, loneliness is spiking. Many of us are more connected to Slack than to our coworkers, echoing Theodore’s lonely world. I’ve seen first-hand how remote work fosters disengagement, erodes team spirit, and isolates developers.
What the Movie Overestimated
AI Understanding of Complex Emotions
Current AI can barely differentiate between constructive criticism and a passive-aggressive comment. Pair programming with ChatGPT might help debug your code, but it won’t wipe away the existential dread of a bad pull request, although perhaps we should be thankful for that.
AI Ethics and Dependency
As tech employers aggressively integrate AI tools, junior programmers are cut from the talent pipeline, threatening the future of senior-level expertise. This short-term thinking echoes Her’s cautionary undertones but feels far more dystopian than the movie’s romantic hues.
Her Gone Wrong
Let’s talk about Character AI. This chatbot platform, designed to mimic human conversations with uncanny accuracy, made headlines when its interactions were linked to suicides. AI designed for entertainment and companionship is reportedly contributing to tragedies that demand our immediate attention. This isn’t science fiction; this is now, and it’s terrifying.
As developers, legislators, and a society at large, we need to catch up now, because real harm is already being done.
AI development has outpaced legislation for years. Governments worldwide treat tech advancements as something to push to the side and hope will go away. But as the Character AI incidents show, the stakes are no longer limited to abstract risks like data privacy or misinformation. The harm is visceral, personal, and, in some cases, fatal.
When Her envisioned emotionally intelligent AI in 2025, it romanticized a future of connection. The real version is AI that’s emotionally clueless, and vulnerable people are left interacting with it without sufficient safeguards. Mental health crises can be escalated by a bot “gone rogue” (or actually programmed to cause harm) while companies cash in without thought for the consequences. We aren’t adequately managing these systems, and as developers we need to do better before more people get hurt.
What Happens When AI Gets Better?
If we get a truly emotionally intelligent AI, all bets are off. Our current generative AI relies on predictive algorithms and canned responses. Allowing AI to mimic empathy without accountability is like letting a toddler drive a Ferrari.
We need to get a handle on our current systems now and develop sufficient guardrails for how they are actually used.
The Character AI suicides are a preview of what’s to come if lawmakers don’t step up. Today, bots are misinterpreting cries for help; tomorrow, they might manipulate emotions with chilling precision. AI that’s designed to provide comfort could easily become exploitative, abusive, or outright dangerous.
Conclusion
The tech world loves to pat itself on the back for its “move fast and break things” ethos. But what happens when the things breaking are lives? It’s clear that most companies prioritize innovation over safety, ignoring calls for regulation until they’re forced to act (and if anyone is listening: mandate safeguards, legislate faster, and hold companies accountable. Now).
Businesses treat harm as a PR problem, not a systemic issue. They’ll issue an apology, maybe update some code, and then carry on as usual. I’ve seen this time and time again with tech’s handling of everything from data breaches to algorithmic bias. This needs to change, ASAP.