The Existential AI Risk Nobody Is Talking About

Stephen Cobb
3 min read · Mar 31

--

Like every other instance of Artificial Intelligence, ChatGPT runs on computers, and all computers can be hacked, abused, and disabled.

[Image: Artificial Intelligence is made of chips, code, data, and power]

Public debate about whether Artificial Intelligence poses an existential risk to humanity seems to ignore the existential risk to AI posed by humans.

Lest we forget, AI is made of chips and code, fed by electricity and data; all four of these ingredients are highly vulnerable to abuse and sabotage. But nobody seems to be talking about this: not the “AI is our future” folks, nor the “AI-will-kill-us-all” crowd. Yet the fundamental technical vulnerability of AI should give comfort to those who fear AI, and serious pause to those who think AI is our ticket to a bright and shining future.
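
One practical upshot of that chips-code-data-power breakdown is worth showing rather than just asserting. Below is a minimal sketch, in Python, of a basic defence on the code-and-data front: checking a downloaded model file against a published SHA-256 digest before loading it. The file name and digest are hypothetical placeholders of my own, not any real vendor’s artifacts.

    # Minimal sketch: detect tampering with a model artifact by comparing
    # its SHA-256 digest to a known-good value published by the vendor.
    # MODEL_PATH and EXPECTED_SHA256 are hypothetical placeholders.
    import hashlib
    from pathlib import Path

    MODEL_PATH = Path("model_weights.bin")
    EXPECTED_SHA256 = "replace-with-the-published-digest"

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        # Hash the file in 1 MiB chunks so large weights don't exhaust memory.
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    actual = sha256_of(MODEL_PATH)
    if actual != EXPECTED_SHA256:
        raise SystemExit(f"Integrity check FAILED: got {actual}")
    print("Integrity check passed.")

A check like this catches crude tampering in transit; it does nothing, of course, against a compromised vendor, a subverted chip, or a cut power line, which is rather my point.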

To be clear, unless the trajectory of AI diverges from that of all prior digital technology, the more humans come to rely upon AI, the more AI will be targeted by bad actors for selfish ends, with consequences ranging from trivial to fatal, for both humans and AI itself.

In recent years we have seen computer-reliant hospitals turning away patients and cancelling treatments because of ransomware attacks exploiting coding vulnerabilities. Human societies, as currently configured, seem powerless to prevent this, and I see no reason to think criminals will not want to ransom AI systems, from those used for medical diagnosis to those telling self-driving cars how to drive.

I say this as someone who has spent four decades at the nexus of digital technology and ethically questionable behavior. The relentless waves of pro-AI hype to which we have been subjected in recent years have dulled us to this reality: both the positive dreams of AI’s benefits and the nightmare threat of AI taking over depend on AI avoiding the fate of all previous digital technology.

As a technologist with a focus on risk, I have spent decades watching new tech get heralded, fêted, and hyped, then hacked, attacked, and exploited. The result is the current landscape: tech products that don’t always work as well or as easily as they should; digital services abused, misused, and exploited for gain by the ethically challenged; rampant cybercrime, tech-enabled fraud, and a general mistrust of technology and its vendors.

Unless I am missing something—and please let me know if you think I am—there is no precedent for AI avoiding the malicious exploitation of vulnerabilities in its code or the chips on which that code runs; no grounds for thinking AI won’t be fed twisted data or deprived of electricity.
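
To make “twisted data” concrete, here is a minimal sketch of a classic label-flipping poisoning attack. The dataset (scikit-learn’s toy digits set), the model, and the 30% corruption rate are all illustrative choices of mine; the point is that an attacker who can quietly corrupt a slice of the training data degrades the model without ever touching its code or its chips.

    # Minimal sketch of a "twisted data" (label-flipping) poisoning attack.
    # Dataset, model, and the 30% poisoning rate are illustrative choices.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    def train_and_score(labels):
        # Train on the given labels, score on the untouched test set.
        model = LogisticRegression(max_iter=2000)
        model.fit(X_train, labels)
        return model.score(X_test, y_test)

    print(f"clean accuracy:    {train_and_score(y_train):.3f}")

    # The "attacker" silently reassigns 30% of training labels at random.
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
    poisoned[idx] = rng.integers(0, 10, size=len(idx))

    print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")

Run as written, the poisoned model typically scores measurably worse than the clean one, yet nothing in the code or the hardware changed; only the data did.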

Of course, it is quite possible I am not the only person with this perspective, but so far I have not found any public discussion of AI’s technical vulnerabilities beyond those surrounding the data on which AI systems are trained. That suggests most people are assuming AI’s code, chips, and power supply are not at risk. I can assure you they are, but I will give ChatGPT the last word:

“It’s unlikely that AI systems can achieve 100% protection against hacking or sabotage. While AI can help improve security measures, no security system is completely foolproof … the security of AI systems relies not only on the AI technology itself but also on the people and processes involved in their design, development, and deployment. Human error, insider threats, and other vulnerabilities can also contribute to security breaches … There are different types of attacks that could be used to compromise an AI system. For example, an attacker could use a malware or a virus to gain unauthorized access to the AI system, steal sensitive data, or alter the system’s behavior.” — ChatGPT, Mar 14 Version, 2023

--

Stephen Cobb

Independent researcher into risk, tech, gender, ethics, healthcare, and public policy. A life spent learning, writing, and teaching. Now based in Coventry, UK.