The Existential AI Risk Nobody Is Talking About

Stephen Cobb
3 min read · Mar 31, 2023

Artificial Intelligence is made from code running on chips, powered by electricity, processing data across connections; and all five of these components are vulnerable to hacking and abuse.

[Diagram: Artificial Intelligence as an information system composed of chips, code, data, connections, and electricity]

Most public debate about whether Artificial Intelligence poses an existential risk to humanity ignores the existential risk that humans pose to AI.

(Note: this article was updated after original publication to include Connections as the fifth AI ingredient and provide links to examples cited.)

Lest we forget, an AI is an information system made out of chips and code, fed by electricity and data, communicating across connections; and all five of these ingredients are vulnerable to abuse and sabotage. Despite these facts, and their obvious implications for AI’s ability to perform as intended, nobody seems to be talking about them: neither the “AI is our future” folks nor the “AI-will-kill-us-all” crowd.

In my opinion, the fundamental technical vulnerability of AI should give comfort to those who fear AI, and serious pause to those who think AI is our ticket to a bright and shining future. Furthermore, unless the trajectory of AI diverges from that of all prior digital technology, the more heavily humans come to rely on AI, the more AI will be targeted by bad actors for selfish ends, with consequences ranging from trivial to fatal, for both humans and AI itself.

Remember, we have seen digitally-dependent hospitals turning away patients and cancelling treatments because of ransomware attacks exploiting coding vulnerabilities. We have seen malicious code deployed to turn off electricity. Critical digital infrastructure like air traffic control has a troubling history of chaos-causing coding errors. Human societies, as currently configured, seem unable to eliminate vulnerabilities from digital technology, or dissuade some people from abusing them for selfish ends.

Therefore, I see no reason to think criminals will not want to ransom or disable AI systems, from those used for diagnostic purposes in medicine to those telling self-driving cars how to drive. I say this as someone who has spent four decades at the nexus of digital technology and ethically questionable behavior. The relentless waves of pro-AI hype to which we have been subjected in recent years have dulled us to this reality: both the positive dream of AI’s benefits and the nightmare of AI taking over depend on AI avoiding the fate of all previous digital technology.

As a technologist with a focus on risk, I have spent decades watching new tech get heralded, fêted, and hyped, then hacked, attacked, and exploited. The result is the current landscape: tech products that don’t always work as well or as easily as they should; digital services abused, misused, and exploited for gain by the ethically challenged; rampant cybercrime, tech-enabled fraud, and a general mistrust of technology and its vendors.

Unless I am missing something—and please let me know if you think I am—there is no precedent for AI avoiding the malicious exploitation of vulnerabilities in its code or the chips on which that code runs; no grounds for thinking AI won’t be fed twisted data or deprived of electricity and connections.
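To make the “twisted data” point concrete, here is a minimal, hypothetical sketch in Python (a deliberately toy model, with invented numbers, not drawn from any real incident) of training-data poisoning: an attacker who can slip a handful of mislabelled records into a training feed can drag a data-driven model far enough off course that an entire class of inputs is misclassified.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two synthetic clusters: class 0 around (-1, -1), class 1 around (+1, +1)
    X0 = rng.normal(-1.0, 0.5, size=(n, 2))
    X1 = rng.normal(+1.0, 0.5, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def train(X, y):
    # "Model" = per-class centroids; a stand-in for any data-driven system
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def accuracy(model, X, y):
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    predictions = (d1 < d0).astype(int)  # nearer centroid wins
    return (predictions == y).mean()

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# Train on clean data
clean_model = train(X_train, y_train)

# An attacker slips 40 far-off records, mislabelled as class 0, into the
# training feed; the class-0 centroid is dragged well away from its cluster.
X_bad = np.full((40, 2), 20.0)
X_poisoned = np.vstack([X_train, X_bad])
y_poisoned = np.concatenate([y_train, np.zeros(40, dtype=int)])
poisoned_model = train(X_poisoned, y_poisoned)

print(f"accuracy after clean training:    {accuracy(clean_model, X_test, y_test):.2f}")
print(f"accuracy after poisoned training: {accuracy(poisoned_model, X_test, y_test):.2f}")
```

The point is not the toy classifier. It is that the model never “knows” its training feed has been tampered with, any more than a hospital system knows its code has been trojaned, which is why data integrity belongs on the same risk register as code, chips, connections, and power.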

Of course, it is quite possible I am not the only person with this perspective, but so far I have not found any public discussion of AI’s technical vulnerabilities, other than those surrounding the data on which AI systems are trained. That suggests most people are assuming AI’s code and chips and connections and power supply are not at risk. I can assure you they are, but I will give ChatGPT the last word:

“It’s unlikely that AI systems can achieve 100% protection against hacking or sabotage. While AI can help improve security measures, no security system is completely foolproof … the security of AI systems relies not only on the AI technology itself but also on the people and processes involved in their design, development, and deployment. Human error, insider threats, and other vulnerabilities can also contribute to security breaches … There are different types of attacks that could be used to compromise an AI system. For example, an attacker could use a malware or a virus to gain unauthorized access to the AI system, steal sensitive data, or alter the system’s behavior.” — ChatGPT, Mar 14 Version, 2023

Written by Stephen Cobb

Independent researcher into risk, tech, gender, ethics, healthcare, and public policy. A life spent learning, writing, and teaching. Now based in Coventry, UK.
