Not even 10% of us trust tech firms to protect our personal information

20 May 2020 — A recent Google survey suggests less than one in 10 adults in the US and UK trust tech firms to protect their personal information.

According to the survey’s author, UK-based independent researcher Stephen Cobb: “This could be a big problem for current efforts to recruit technology to solve a range of problems created by the COVID-19 pandemic.”

Cobb points out that some of the prime examples of how technology might help us, for example smartphone apps that track our location to let us know if we’ve been in close contact with someone who tests positive for COVID-19, “may be doomed to failure if not enough people trust the technology companies involved in enabling them.”

The survey, conducted May 10-19 using Google’s Consumer Survey tool, asked one simple question: “Do you trust tech firms to protect your personal information?” Respondents could answer Yes, No, or Not sure.

Fewer than one in 10 respondents answered Yes, they trusted tech firms to protect their personal information. More than half said No (55%). Just over one third said Not sure (36%). Who were these people? Adults in the US (n=756) and the UK (n=514).

“Frankly, I was shocked that so few people said yes,” says Cobb, who commissioned the survey because, as he put it: “I kept seeing all this anecdotal evidence that people had lost faith in tech company statements like ‘we value your privacy’ and ‘we protect your personal information.’”

According to Cobb, an award-winning technologist who has studied public attitudes to data privacy and cybercrime for many years, this latest research was prompted by British prime minister Boris Johnson’s decision to make a smartphone “track-and-trace” app a key part of the country’s strategy for recovering from COVID-19.

“I was worried that the British public would be too distrustful of the technology—and the companies behind it—for this to work,” says Cobb, “but I didn’t have any recent data.” The responses to Cobb’s survey, which asked the same question of Brits and Americans, are remarkably consistent between the two countries. For example, the percentage answering no was 55.8% in the UK and 55.4% in the US.
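To see just how consistent those two figures are, it helps to estimate the sampling error for each. The sketch below (an illustrative normal-approximation calculation, not part of Cobb's published analysis) shows that the 0.4-point gap between the UK and US "No" percentages is well inside the margin of error for samples of this size:

```python
import math

def moe_95(p, n):
    """95% margin of error for a sample proportion (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# "No" responses from the survey: 55.8% of 514 UK adults, 55.4% of 756 US adults
uk = moe_95(0.558, 514)   # ~0.043, i.e. roughly +/- 4.3 percentage points
us = moe_95(0.554, 756)   # ~0.035, i.e. roughly +/- 3.5 percentage points

print(f"UK: 55.8% +/- {uk:.1%}   US: 55.4% +/- {us:.1%}")
```

In other words, each estimate carries an uncertainty of several percentage points, so a 0.4-point difference between the two countries is statistically indistinguishable from no difference at all.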

While acknowledging that these results are “just a snapshot,” Cobb thinks they justify more extensive research, saying, “This does not look good for tech firms who right now need all the trust and goodwill they can muster, especially those firms that are on the defensive over everything from questionable privacy practices to data breaches, monopolistic behaviour, and tax avoidance; I’m thinking Facebook (FB), Amazon (AMZN), Apple (AAPL), Netflix (NFLX), Alphabet (GOOG), and Microsoft (MSFT).”

Note to readers and editors: I decided to style the first half of this article like a press release because what it conveys is press-worthy, fact-based, and speaks to what I see as a seriously under-acknowledged and under-researched problem. However, I’m not a well-funded tech company with money to spend on press releases. I’m just an unfunded independent researcher trying to shed some light by means of a very modest self-funded survey. (I can be reached on Twitter where I am @zcobb, DMs are open.)

Context and counter-arguments

So, it’s May 2020, and we humans are struggling to cope with a global crisis of unprecedented scope and scale, despite having unprecedented levels of technology at our disposal. Also unprecedented, as far as I know: the finding that less than 10% of adults in the US and UK trust tech firms to protect their personal information. Yet the state of affairs that these findings reflect was entirely predictable.

In 2017, I persuaded my employer at the time, ESET, to fund a survey of 750 US adults that found “Americans Rank Criminal Hacking as Their Number One Threat,” as illustrated in this chart:

Chart shows “Americans Rank Criminal Hacking as Their Number One Threat.” Courtesy of ESET.

In 2018, I did another survey of 750 US adults that showed “Seven out of ten see criminal hacking as big risk to health, safety, prosperity,” as seen in this chart:

Later in 2018, ESET funded my survey of 2,500 US adults that found nine out of 10 saw cybercrime as a bigger challenge to their country’s security than drug trafficking, money laundering, and several other serious crimes.

With 30% of respondents reporting that they had experienced identity theft, and 27% having experienced ransomware, it was perhaps not surprising that, when asked about their government’s response to cybercrime, fewer than half (45%) thought that the police and other law enforcement authorities were doing enough to fight cybercrime. Furthermore, a worrying 87% said that they thought the risk of becoming a victim of cybercrime was increasing. (A PDF of the report can be downloaded here, while this article provides background on the research.)

In light of these statistics, and the constant drumbeat of data breach headlines in the media, I’m inclined to think that one reason—maybe even the main reason—that people don’t trust tech companies to protect personal information is their apparent inability to do so.

Whether that lack of ability results from a shortage of concern or an overabundance of criminal activity may not be a question most people pause to ask. My own opinion is that the governments of the world have seriously failed their citizens when it comes to combating and deterring malware-enabled cybercrime, a failure so great that it is now compounding the problems created by the pandemic (something I have written about in some depth here: The Covid Effect means we can no longer ignore the Malware Factor).

Another leading contender for why people don’t trust tech companies to protect personal information is the sad catalogue of deceptive practices around data protection that have dogged these firms since the US Federal Trade Commission’s settlement with Microsoft in 2002.

Since then, the FTC has fined both Google and Facebook—in amounts ranging from tens of millions to billions of dollars—for failing to take the privacy of personal information seriously enough (cf. this list of over 80 related cases). Hundreds of millions of dollars in FCC fines are also being sought from AT&T, Sprint, T-Mobile, and Verizon for their failure to safeguard information about customers’ real-time locations.

It seems that many companies just can’t resist the temptation to gather as much personal information as they can, as opaquely as they can, and then monetize it in as many ways as they can, all without spending as much as they should to defend it against outsiders—and let’s not forget insiders—who seek to misuse and abuse it at scale.

Of course, the impact of the lack of trust in tech firms revealed by this survey is moderated by several factors, not least of which is necessity. Everyone who participated in this survey — including those who expressed a lack of trust in tech firms — was using technology from such firms to engage in the survey.

Humans have a long history of engaging with technology about which they have mixed or even negative feelings. Many of us own and drive cars while distrusting car makers (not to mention stressing over the way cars are choking our planet). I certainly don’t trust my phone provider not to add unwarranted charges onto my monthly bill.

The history of human ambivalence toward technology is too long and complex to review and debate here, but it is my considered opinion that levels of trust in tech firms as low as these survey results indicate can and will undermine some applications of the technology that are potentially of great benefit.

I think we’re in an unprecedented situation, where fewer than one in 10 of us trust tech companies to protect our personal information, even as unprecedented levels of wealth are being accumulated by just a few thousand individuals, a significant number of whom made their money in or through or with technology that gathers, processes, and stores personal information.

Ok, but what can we do about this?

My own opinion is that the overabundance of criminal activity, while not the whole problem, is a huge part of the problem. Yes, it’s true that many organizations could do better at cybersecurity, but it’s also true that the governments of the world have massively failed their citizens when it comes to malware-enabled cybercrime. This failure is so huge that it is now compounding the problems created by a deadly pandemic. Maybe, now that lives are very clearly on the line, more people in positions of power and influence will begin to take the Malware Factor more seriously.

But what would that look like? How does taking the Malware Factor more seriously at the highest levels translate into action? I’m going to list three suggestions. You may not like them. You may even scoff at some or all of them. But I’m already used to that, as I mentioned in this blog post and Medium article from 2017 (same story, two different places). FYI, I’m still fairly sure I’m right.

1. International cooperation and global treaties are the only way to make a serious dent in cybercrime and cyberconflict, and the citizens of the world should push their governments in this direction. I realize this is going to be hard while three of the biggest malware-making countries are still run by Trump, Putin, and Xi, respectively—but that is no reason not to try.

2. Cybersecurity products and services should be made available at lower or no cost. As I’ve been saying for more than a decade now, information system security is the healthcare of IT/ICT. Just as profit-based healthcare is, in my opinion and practical experience, a bad idea, so is people making large fortunes from protecting the world’s digital infrastructure — as opposed to a decent wage. Besides, a profit-based approach to securing ICT has thus far failed to make any lasting dent in the cybercrime growth curves. (See graph above, from this IEEE blog post by Chey Cobb and myself).

3. We need to consider an end to broadcasting and bragging about new and interesting ways to gain illegal access to information systems. Justifying this as a way to improve security and reinforce the message that it needs to be taken more seriously might have been valid at some point in the past, but that validity has been seriously eroded. Fully open, freely accessible, in-depth research on things that enable ethically-challenged individuals or governments to seriously undermine our collective future is not, in my opinion, a good idea. (Think of someone making and distributing a version of COVID-19 that doesn’t give victims a tell-tale cough — cool?)

I’m happy to hear more suggestions, or your thoughts on what’s wrong with these. And I’m also happy to hear about any moves already being made in these three directions. (I am already familiar with the work of the Global Commission on the Stability of Cyberspace — and still hoping they will take up the idea of a Comprehensive Malware Test Ban Treaty.)

If you found this article interesting and/or helpful consider buying me a coffee to fuel more writing like this.

Notes:


There are numerous technical reasons why a Bluetooth-based, smartphone-enabled contact tracing app might fail to provide meaningful help in the fight against COVID-19, even before the trust factor comes into play. (Here’s what Bluetooth’s inventors say.)

A narrated version of my article on The Malware Factor can be found here, and on my YouTube channel (where you can also find evidence of my claim to be an award-winning technologist).

This Wikipedia page seems to be doing a decent job of tracking the contact tracing apps. And this excellent article by Ross Anderson puts many of the issues around this technology in perspective.

#technology #trust #survey #cybercrime #dataprivacy #privacy #infosec #FTC #FCC #COVID19 #Covid19UK $FB $AMZN $AAPL $GOOG $MSFT

Independent researcher into risk, tech, gender, ethics, healthcare, and public policy. A life spent learning, writing, and speaking. Based in Coventry, England.