Privacy Fails, the Public Revolts – Companies Lack Response
I’m just back from the RSA conference, where “privacy” was the hot topic. The kerfuffle over privacy at RSA is outlined nicely in a blog piece in the New York Times, so I won’t rehash it here, but it is worth revisiting the whole “privacy is dead” mantra, which seems to have been given new life.
Note: Check out coverage of my RSA session
Privacy is big news these days, and the news is full of fear, speculation, and conspiracy. It has set the public on edge, and there is no shortage of coverage validating that statement, but that is only one side of the coin. Few are discussing how companies need to respond in this new era, one where the old mechanisms for protecting the public are no longer effective but regulators are still acting to protect it.
In the late 1960s and into the 1970s, with the advent of computers, maintaining privacy was suddenly no longer just about the ability to do things without other people knowing; it grew into what we now call information privacy. New laws and regulations were formed to govern what could and should be done with personal information by those who gathered it on computers.
This era is dominated by the concept of notice and consent, and to this day most laws and regulations worldwide are still based on it. It is simple: before you use someone’s personal information, you are obligated to let them know how you will use it, and you must get their permission to do so. In a free-market society you would think notice and consent would work. If a person hands over their information with a full understanding of what it will be used for, there should be no reason to prohibit that, right? This is ’merica, the Land of the Free and all.
Add to that smartphones and “internet of things” devices. These have small screens, very small screens, or possibly no screens at all. How can you give notice to someone when there is no place to display it? And these are just the most obvious reasons why notice and consent is failing. There are many more (which is actually a great topic for another post; maybe I’ll write that next).
So with laws based on an outdated concept, our regulatory mechanisms are failing. But it is not as if the public doesn’t want privacy protections. A growing group of early adopters use Tor, anonymous VPNs, and a growing number of other tools to obfuscate their identities or block trackers, but credit card purchases, conversations with our doctors, and countless other activities are still being used to track us. Remaining anonymous is a near impossible task.
The FTC, and other regulatory bodies around the world, are trying their best to help the public feel secure. They are limited in what they can do, but that doesn’t mean they aren’t trying. They are using their authority under Section 5 of the FTC Act to bring legal action against those who use personal information in an “unfair” manner:
The Commission is hereby empowered and directed to prevent persons, partnerships, or corporations, [except certain specified financial and industrial sectors] from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.
They are going after companies that violate the public’s expected right to privacy by exercising their authority under the above sentence (being “unfair”). This is the authority they used against Google when its Street View cars were caught capturing unsecured router traffic, and it is the authority they used to go after Wyndham Hotels and Resorts for lax security practices and Aaron’s for installing key-loggers and spyware on the computers it leased. These are all cases in which there were no clear regulations, but the regulatory authority, in order to protect the public, found a way to act.
There is also the ever-present risk of bad publicity. Bad publicity from privacy violations can be damaging. Take a look at what bad PR did to Target in 2013 and more recently to RSA. No one wants bad publicity.
So increasingly aggressive regulatory action against otherwise “legal” practices that happen to surprise the consumer (being “unfair”), combined with the risk of bad publicity, is forcing companies to treat privacy as a serious risk. The big question becomes: who is responsible for mitigating this risk?
You’re thinking that leadership needs to be paying attention to this. And that is most definitely the case, but someone needs to bring the matter to their attention, whether that be the Google engineer who made the decision to capture wifi traffic or the Aaron’s employee who installed the spyware.
Can you see where this is leading? Who in the organization is best situated to uncover these kinds of potential privacy risks? Not the legal team. Not the compliance manager. It is those who manage the organization’s streams of data. That may be IT administrators, developers, or information security staff, but it is clearly someone in an information technology function.
And not only are they best situated to spot the issues, they have the skills, training, and experience to understand the complexities of how data is collected, stored, and processed in a modern organization. I could spend another 1,000 words detailing this, and maybe I will in another post, but I hope I have made the case: as regulatory action moves away from the traditional model of following guidelines toward one where organizations are held accountable for acting reasonably, responsibility for mitigating the risk of regulatory action will move to those who best understand the technology that is creating the risk in the first place.
I predict we will see a huge expansion of opportunities in the information technology space as IT pros move into more strategic, less tactical roles.