For decades, technology has carried with it a seductive promise. It was presented as neutral, objective, and above the messy fray of human politics. Engineers wrote code, scientists advanced research, corporations built platforms, and governments regulated from a distance. If problems emerged, the defense was always the same: we just build the tools. What people do with them is not our responsibility.
That claim, once powerful, now looks like the last great lie of the tech world.
Every technology, no matter how abstract, carries with it political, cultural, and moral choices. The idea of neutrality is comforting, but it hides the reality that someone, somewhere, is always deciding whose values are encoded, whose interests are served, and whose risks are ignored.
Why neutrality is a myth
Take the most discussed case today: artificial intelligence. Training data is not just numbers. It is scraped from human history, full of bias, stereotypes, and cultural assumptions. When an AI system decides which résumé to push to the top of a hiring list or which photo matches a suspect, it is not acting as a neutral referee. It reflects the decisions of engineers who chose what data to include, what objectives to optimize, and what errors are acceptable. Claiming neutrality erases the fact that these are deeply moral judgments.
The same applies to quantum computing. On paper, it looks like pure science, untouched by politics. Yet the race to build quantum machines is framed as a geopolitical competition. Governments pour billions into research not to satisfy curiosity, but to achieve breakthroughs in codebreaking, surveillance, and military advantage. Every funding decision tilts development toward certain applications and away from others. Even before a single device is commercialized, the values of security, secrecy, and control are woven into the fabric of the field.
Biotechnology offers another example. Gene editing technologies such as CRISPR are celebrated as medical miracles, but who decides what counts as an acceptable use? Enhancing crops to resist drought might be seen as beneficial, but enhancing embryos to increase intelligence raises profound ethical questions. To say the tool is neutral is to ignore the regulatory, cultural, and economic battles that shape how it is applied. The line between therapy and enhancement is not a scientific boundary. It is a social and moral one.
The corporate shield of “just building tools”
Why does neutrality persist as a narrative if it is so clearly a myth? One reason is that it functions as a shield for corporations. If a social media platform enables disinformation campaigns, the company can claim it is just a channel, not a publisher. If a software company sells facial recognition tools that lead to wrongful arrests, it can insist the police are misusing the technology. Neutrality allows powerful actors to profit while disclaiming accountability.
The tobacco industry once used a similar defense, insisting that cigarettes were a personal choice. Fossil fuel companies argued they simply provided energy, while governments and consumers decided how to use it. Tech firms have inherited this script, but in a world where platforms shape elections and algorithms decide who gets medical treatment, the pretense is collapsing.
Governments also benefit from the myth. By framing technology as neutral, they can fund research in sensitive areas without having to answer hard questions about ethical consequences. Military investments in AI are described as defensive, even when they enable autonomous weapons. Public investments in data infrastructure are described as efficiency projects, even when they expand surveillance. Neutrality, in other words, allows policymakers to act without fully owning the political dimensions of their choices.
Why neutrality is dangerous
The myth is not just false; it is dangerous. It blinds society to the real stakes of technological decisions. When we pretend that tools are neutral, we fail to ask whose voices are excluded from design rooms, whose communities bear the risks, and who reaps the rewards.
Consider predictive policing algorithms. They are marketed as neutral systems that analyze crime patterns. In practice, they reinforce over-policing in communities that already experience heavy surveillance. By treating these systems as objective, cities ignore the cycle of harm they perpetuate.
Or take content moderation. Platforms argue they are neutral hosts, but every decision about what speech is allowed reflects values. Choosing to allow violent propaganda in one country but not in another is not a technical choice. It is a political one. By claiming neutrality, platforms avoid accountability for decisions that affect democratic discourse worldwide.
Reframing technology as choice
If neutrality is a lie, then what is the alternative? We need to frame technology honestly: as a set of choices. Every stage, from research funding to product design to deployment, involves decisions about values and trade-offs. Acknowledging this does not mean rejecting innovation. It means owning responsibility for the moral and political weight of innovation.
For corporations, that means abandoning the shield of neutrality and facing the consequences of the systems they build. For governments, it means crafting regulation that treats technology as a matter of governance, not just commerce. For the public, it means demanding transparency about how technologies are shaped and whose interests they serve.
Toward a culture of accountability
The lie of neutrality thrived in an earlier era because technology seemed distant from everyday life. A semiconductor chip or a database could be imagined as a tool with no agenda. But today, when algorithms decide what we see, when genetic tools redefine health, and when quantum breakthroughs shift global power, neutrality is no longer believable.
We are living in an age where every major technology intersects with human values. The challenge is not to keep innovation “neutral,” but to guide it with intentionality. If we fail, the biggest decisions about our future will be made by a handful of corporations and governments hiding behind the last great lie.
The next time you hear “we just build the tools,” treat it as what it really is: an evasion. Technologies are never just tools. They are choices. And the responsibility for those choices cannot be outsourced to some abstract idea of neutrality.
This article also appears on Dave’s Demystify Data and AI LinkedIn newsletter.