AI problem awareness grew in 2020, but 46% still “not aware at all” of problems with artificial intelligence
The general public’s awareness of problems with AI grew in 2020, yet 46% of US adults are still not aware of those problems. At least, that’s what the survey results indicated in December of last year when I asked several hundred internet-using Americans to choose which of three statements best described their awareness of problems with artificial intelligence.
As you can see from the graph, when it comes to awareness of problems with AI, twice as many respondents said they were “not aware at all” compared to the number who answered “very aware.” Most interesting to me was that close to one third (32%) of those responding were “More aware now than a year ago.” (The survey was conducted the week beginning December 6, 2020, using Google Survey, n=387).
To be honest, I’m not sure if proponents of AI will see these numbers as good news or bad news. As a technologist with serious reservations about the way AI is being developed and deployed, I was pleased to see that AI problem awareness has grown over the last 12 months. Indeed, the initial impetus for the survey was to check my own observations, that:
a) not many people seem to be aware of problems with AI; but also
b) awareness of those problems is on the rise.
AI is the future, so what’s the problem?
Enthusiastic endorsements of AI and headlines announcing AI successes persisted throughout 2020 (The Year Of Artificial Intelligence For Your Business, ‘The game has changed.’ AI triumphs at solving protein structures, The State of AI in 2020).
Some endorsements were triggered by the technology’s role in countering the COVID-19 pandemic (Using AI responsibly to fight the coronavirus pandemic); other plaudits were prompted by more mundane AI-enabled developments (Watch Tesla Full Self-Driving Beta in a full 30-minute realistic trip). But plenty of red flags were also raised.
“This year was marked by ethical issues of AI going mainstream, including, but not limited to, gender/race bias, police and military use, face recognition, surveillance, and deep fakes.” — The State of AI in 2020
Concerns were also raised in 2020 about attacks on the way AI systems work and the ways that AI can be used to commit harm, notably in the report titled Malicious Uses and Abuses of Artificial Intelligence produced by Trend Micro Research in conjunction with the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol’s European Cybercrime Centre (EC3).
However, I didn’t manage to find much work in 2020 that went deeper into “attack-related” AI problems than two key reports that appeared in prior years:
- Attacks on AI: Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It, [summary with link to report PDF] Marcus Comiter, August 2019
- Attacks with AI: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, [PDF] Brundage and Avin et al. (2018)
From my perspective, those two documents and the UN/Trend Micro report should be required reading for everyone interested in, working on, or concerned about AI. While Comiter focused on input attacks and poisoning attacks, grouping them as “AI Attacks,” Brundage et al. elaborated a whole range of ways in which AI could be used for malicious ends, for example, by cybercriminals seeking more efficient ways of gaining unauthorized access to information systems.
The UNICRI report covers attacks both with and on AI. However, none of these reports directly addresses what strikes me as the most fundamental weakness of AI: its inherent vulnerability as a chip- and code-based technology (see: AI’s most troubling problem? It’s made of chips and code).
Who thinks AI poses a risk to human health, safety, or prosperity?
Unfortunately, I don’t have the budget right now to dig into which AI problems “enjoy” the most awareness (I will report back here if that changes). I did find that in December 2020, the World Economic Forum reported on “how opinions on the impact of artificial intelligence differ around the world.” Interestingly, 47% of Americans thought development of artificial intelligence was a bad thing (versus 44% saying it was a good thing — there’s more coverage here on Tech Monitor and on Pew’s announcement of the findings).
I am continuing to explore awareness of AI problems and ethical issues, as well as the underlying vulnerabilities of AI. And I may do this in combination with a deeper dive into demographic differences in the perception of the risks posed by AI.
For example, research that my colleague Lysa Myers and I conducted back in 2017 showed that the so-called White Male Effect (WME) in risk perception does exist in the digital realm, and that it extends to AI. First documented in the 1990s, WME refers to the phenomenon of white males consistently reporting, on aggregate, less concern about a wide range of health and technology risks than white females, non-white males, and non-white females.
In our survey of over 700 US adults, documented here and elsewhere, we used the language and format of previous risk perception studies, asking respondents “how much risk you think the following items pose to human health, safety, or prosperity.” In each case, respondents could answer from “No risk at all” to “Very high risk.” Along with risks like global warming, we included AI and other potential digital technology risks. That’s how we got answers to this question: “How much risk do you believe artificial intelligence poses to human health, safety, or prosperity?”
While AI was not seen as being as high a risk as network failures or companies accumulating personally identifiable information (PII), the demographic that perceived the most risk in AI back in 2017 was non-white females.
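To make the analysis concrete, here is a minimal Python sketch of how Likert-style answers like ours could be mapped to numeric scores and averaged by demographic group. The intermediate answer labels and all of the data below are invented for illustration; only the scale endpoints (“No risk at all,” “Very high risk”) and the four demographic groups come from the survey described above.

```python
# Hypothetical sketch: averaging Likert-style risk responses by group.
# The answer labels between the two endpoints, and the sample data,
# are assumptions for illustration only.

LIKERT = {
    "No risk at all": 0,
    "Slight risk": 1,
    "Moderate risk": 2,
    "High risk": 3,
    "Very high risk": 4,
}

responses = [
    # (demographic group, answer to the AI risk question) — invented data
    ("white male", "Slight risk"),
    ("white male", "No risk at all"),
    ("white female", "Moderate risk"),
    ("non-white male", "Moderate risk"),
    ("non-white female", "High risk"),
    ("non-white female", "Very high risk"),
]

def mean_risk_by_group(rows):
    """Average the numeric Likert scores within each demographic group."""
    totals = {}
    for group, answer in rows:
        count, total = totals.get(group, (0, 0))
        totals[group] = (count + 1, total + LIKERT[answer])
    return {group: total / count for group, (count, total) in totals.items()}

print(mean_risk_by_group(responses))
```

With real survey data, the same grouping could be broken down further (for example, by age or education) to probe the demographic differences in risk perception discussed here.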
As you can see from the chart, females as a group saw more risk from AI than males. As far as I am concerned, these findings about AI risk perception, which were not the focus of our study back then, are very significant. They are also a potentially hopeful sign, given the number of female AI experts who were vocal awareness raisers on a range of AI issues in 2020, especially non-white female experts (also see: 100 Brilliant Women in AI Ethics and Black women in AI). Indeed, I am cautiously optimistic that AI will be the first digital technology to be deeply questioned before it is fully deployed.
If you’ve read what I have written elsewhere about successive waves of digital technology failure and the lack of awareness of their implications for AI, you know I have been very pessimistic about the way things have gone over the last few decades. Here’s hoping that the female voices questioning the pace of AI adoption are heeded in 2021 and the white male effect averted.
Let me close this article by saying how pleased I am that more people seem to be aware of AI problems now than they were a year ago. I thank you for reading this far and hope you found it worth your time.
Coming up: Next month I will present my findings from a companion AI problem awareness survey carried out in the UK. Can you guess who is “more AI problem aware,” the US or the UK?
Want to help? I left my day job to produce truly independent research that is free of corporate bias and freely available to all. If you found this Medium article interesting and/or helpful, please take a moment to clap and share it.
Also, a cup of coffee would help me continue my vendor- and institution-neutral research work, and be very much appreciated.