# The Shadow of Our Own Making: Human Unsafety and the Misplaced Panic over AI
The relentless chatter around “AI safety” dominates tech discourse today, fueled by anxieties about superintelligence and runaway algorithms. We are urged to fear the artificial mind, to control its development, and to safeguard ourselves from its potential harms. Yet a nagging question persists: are we focusing on the right threat? This preoccupation with “safe AI” feels increasingly like a misplaced panic, a shadow cast by our own pre-existing, deeply human forms of unsafety. Before obsessing over the hypothetical dangers of artificial intelligence, we must confront the very real, very human-made systems of unsafety that define our world.
A certain type of tech entrepreneur emerges in this narrative. They fit a familiar mold: driven by profit, quick to capitalize on trends, and often lacking a deep or sustained commitment beyond their immediate financial gain. Consider the archetype: they contribute to the rapid development of a powerful technology, perhaps even playing a leading role. Then, upon seeing a lucrative opportunity – or perhaps sensing a shift in public sentiment and funding priorities – they pivot. They leave the company they helped build, only to denounce the very technology they championed:
> - Geoffrey Hinton, former VP and Engineering Fellow at Google, later a Nobel laureate in Physics
> - Ilya Sutskever, OpenAI’s former chief scientist, later the founder of “Safe Superintelligence”
Suddenly, they are safety advocates, raising alarms about the dangers of a creation that, just recently, was the pinnacle of innovation. This feels less like genuine concern and more like opportunistic positioning, a strategic move in a landscape where “AI safety” has become a valuable commodity. It’s a performance, designed to attract attention and funding, while the underlying, more complex issues of safety remain unaddressed.
This pattern of opportunistic positioning is further amplified by the emergence of publicly and privately funded *AI Safety Institutes* in the US and elsewhere. Organizations like the American “AI Safety Institute” have gained prominence, promising to solve the very “safety” challenges that now dominate public discourse.
However, a closer examination raises critical questions. Their funding often originates from the very tech industry they ostensibly aim to oversee. Their mission statements, while laudable in their focus on *safety*, often prioritize technical solutions to *AI alignment* and *existential risk*, potentially overshadowing the more immediate, human-centered dimensions of safety: the bias, inequality, and misuse that already plague AI systems today.
Are these institutions genuinely independent watchdogs safeguarding humanity, or are they, perhaps unwittingly, participating in a sophisticated shell game? By focusing so intently on a narrowly defined *AI safety*, do they risk diverting attention and resources from the more fundamental, human-created unsafety that demands our urgent attention?
It is in this landscape of profound, human-created unsafety that the narrow pronouncements of certain tech figures ring particularly hollow. Once we acknowledge the vast, interconnected web of systemic inequalities and human-caused suffering, the sudden emergence of the tech entrepreneur as a self-proclaimed savior of humanity feels jarringly dissonant. These are individuals who, often through a combination of skill and opportune timing, have risen within the very systems that perpetuate many of these inequalities. They operate within a framework that prioritizes rapid growth, disruption for disruption’s sake, and ultimately, personal wealth accumulation. To then witness them pivot, cloaking themselves in the mantle of “AI safety” while seemingly ignoring the broader, more pressing issues of human safety, is a display of breathtaking hypocrisy.
It’s akin to a shell game, or a high-stakes round of three-card monte. These figures, often leading well-funded and highly visible companies, help create the very conditions they then claim to be uniquely positioned to solve. They contribute to a technological landscape increasingly perceived as complex and potentially threatening, and then present themselves as the only ones who can navigate and control it. The “safety” they offer is often narrowly defined, technically focused, and conveniently aligned with their own continued influence and financial gain.

This is not a genuine effort to look outwards and address the wider world’s deep-seated problems. It is a deeply inward gaze, focused on consolidating their own position and capitalizing on anxieties, sometimes anxieties they themselves have helped to generate. Beneath the veneer of world-saving rhetoric, the underlying motive often appears far more prosaic: self-preservation and profit maximization within a system they understand and exploit remarkably well.

The claim of creation, too, rings somewhat false. These are not alchemists conjuring novel solutions from thin air. They are, in many cases, highly adept players within existing systems of knowledge and capital, skillfully leveraging resources and riding technological waves, often with a considerable element of luck interwoven into their success.
The irony remains stark: we inhabit a world demonstrably unsafe due to human actions and human-designed systems. Gun violence persists as a tragic feature of many societies. Debates around trigger locks and background checks, while perhaps necessary, sidestep the deeper issues – systemic inequalities, normalized violence, and the very accessibility of weapons. Systemic poverty and disenfranchisement are not accidental flaws; they are engineered outcomes of economic and social structures that perpetuate inequality, breeding crime, desperation, and violence. These are not abstract threats; they are tangible, daily realities for millions, born not from rogue AI, but from human choices, policies, and a profound lack of collective will to address them fundamentally.
This prompts a crucial question: what is the true source of our fear? Is it genuinely the specter of a malevolent AI consciousness? Or is our anxiety about AI a projection of our own primal fears, a discomfort with the unknown and the “other”? Perhaps our deepest fear is not of AI’s actions, but of its potential to reveal us to ourselves. AI, with its capacity to process vast datasets and identify patterns, may expose the flaws inherent in our systems, the biases embedded in our thinking, and the very human limitations that contribute to so much unsafety. Confronting the potential pitfalls of our creations forces us to confront our own limitations as creators.
Moving beyond the limited scope of “AI safety” demands a fundamental shift in perspective. True safety begins not with regulating a nascent technology, but with confronting the deeply ingrained, human-created unsafety that pervades our world. It necessitates a critical re-evaluation of the systems we perpetuate, the inequalities we tolerate, and the inherent limitations of our own thinking. Before prioritizing “safe AI,” we must first commit to building a safer human world. This requires rethinking our priorities, challenging our assumptions, and acknowledging the uncomfortable truth that the most significant threat to human safety does not reside in a computer chip, but within the complex, flawed, and profoundly human heart of our own societies. Only then can we have an authentic conversation about safety: one rooted not in fear of the other, but in a profound and honest self-reflection.