Flipping the Bit on Online Privacy

“Privacy on the internet? That’s an oxymoron.” — Catherine Butler.

I’ve pretty much stopped participating in social media these days. Like many others, I tend to lurk instead. Sure, I check in with Facebook, Twitter, and LinkedIn once in a while on my laptop, but I’ve removed all social media apps from my phone to reduce the temptation and, with the exception of announcing the publication of my latest blog post, I rarely post anything. I also rarely like, retweet, or share anything I see. [ Author’s note: As of mid-2021, I’ve deleted all my social media accounts. ]

I suspect many of you are in the same boat. We all became addicted to the internet at some point over the past decade or so and are only now starting to ask the hard questions about the information we share online. Who sees it? Who can see it? What can they see? How long is it kept? And so on.

But you’d be wrong to worry only about the drunken Spring Break photos you posted from Miami last year. Information about you is not limited to pictures you post or articles you share when you’re actively participating on social media. Data about you is constantly being collected by many actors as you browse around the web.

Maybe you just searched for some words online. Maybe you clicked on a few news articles and scanned them quickly or, more interestingly, perhaps you spent half an hour carefully reading one. Maybe you watched a YouTube video or two. All of that is crucial and valuable data, information about who you are and what your preferences are.

We all know that this type of data is being gathered for legitimate advertising purposes, but most of us rarely think about the fact that, in the wrong hands, the same data can also be used for subliminal messaging or other nefarious purposes. The Russian election meddling fiasco is just the first example of what can go wrong.

Apple, Google, Microsoft, Amazon, Verizon, AT&T, Comcast, Facebook, Twitter, Instagram, Telegram, WeChat, and a few other companies out there know far more about us than most of us care to acknowledge. I’m reminded of the classic New Yorker cartoon of a dog surfing the web but I think it’s time to update the caption: “On the internet, everybody knows you’re a dog.”

We’ve been leaking information about ourselves at an alarming rate over the past decade as we all eagerly jumped on the internet bandwagon for everything we do. Meanwhile, very few people, mostly nerds, bother to cover their tracks on the internet. Most people don’t use VPNs (Virtual Private Networks) or tools like Tor (The Onion Router) to hide their personal identity as they browse.

Most people outside the computer industry don’t even understand what I just wrote in that last sentence, which is exactly my point. Most consumers haven’t a clue how to protect their privacy on the internet. At best, they go into “incognito” mode in their browser when they watch porn, not understanding that there’s more to privacy than the browser cache.
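To make that last point concrete, here is a toy sketch (not any real tracker) of what a web server still sees on every request, even in “incognito” mode. Private browsing clears local state on your machine — the cache, cookies, and history — but it hides none of the fields below from the sites you visit or from your network provider. The addresses and URLs are made-up examples.

```python
# A toy, server-side view of a single "incognito" page request.
# Private browsing emptied the cookie jar, but everything below
# still travels with the request and can be logged.

def server_side_view(request: dict) -> dict:
    """Return the identifying signals a server can log from one request."""
    return {
        # Network-level: the IP roughly locates you and names your ISP.
        "ip_address": request["ip"],
        # The exact page requested, including any search terms in the URL.
        "url": request["url"],
        # Browser + OS version string: one ingredient of a device fingerprint.
        "user_agent": request["headers"].get("User-Agent"),
        # The page you came from, if the browser sends it.
        "referer": request["headers"].get("Referer"),
    }

incognito_request = {
    "ip": "203.0.113.7",  # documentation-range example address
    "url": "https://example.com/search?q=private+topic",
    "headers": {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)",
        "Referer": "https://news.example.org/article",
        # Note: no Cookie header -- incognito started with a clean jar.
    },
}

logged = server_side_view(incognito_request)
```

The point of the sketch: the absence of a Cookie header is the only thing incognito mode changed here; the search terms, the originating page, and the network address all still reach the server.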

You can thank the European Union and the GDPR for the rash of popups you’ve been seeing recently, as you surf the web, explaining how websites use cookies to keep track of you. Honestly, though, how many of you outside the tech industry even read those statements or understand their implications? The vast majority (myself often included) just click “I accept” and move on, not because doing so actually changes anything but simply to get the popup out of the way. Read between the lines and the message is clear:

“We’re going to collect all kinds of information about you that you probably wouldn’t share with your closest friends, regardless of whether or not you click on this popup. If you don’t like it, stop visiting our site or using our services. Okay?”
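Those cookie banners exist because of the mechanism sketched below: a third-party cookie that links your visits across otherwise unrelated sites. The “AdNetwork” class here is hypothetical, but real ad networks embedded on many sites work on the same principle — the first visit sets an ID cookie, and every later page that embeds the network sends that ID back.

```python
# A minimal simulation of cross-site tracking via a third-party cookie.
import uuid

class AdNetwork:
    """Hypothetical ad network embedded on many unrelated websites."""

    def __init__(self):
        self.profiles = {}  # visitor_id -> list of (site, page) visits

    def serve(self, cookie_jar: dict, site: str, page: str) -> None:
        # Set the tracking cookie on first contact...
        if "ad_id" not in cookie_jar:
            cookie_jar["ad_id"] = str(uuid.uuid4())
        # ...then log every page view against that one ID.
        visitor = cookie_jar["ad_id"]
        self.profiles.setdefault(visitor, []).append((site, page))

tracker = AdNetwork()
browser_cookies = {}  # one jar, shared across every site you visit

# Three "unrelated" sites all embed the same ad network:
tracker.serve(browser_cookies, "news.example", "/gun-control-op-ed")
tracker.serve(browser_cookies, "shop.example", "/camping-gear")
tracker.serve(browser_cookies, "health.example", "/symptom-checker")

# One ID now links your politics, your shopping, and your health queries.
profile = tracker.profiles[browser_cookies["ad_id"]]
```

No individual site told the network anything; the shared cookie jar did all the work of stitching the three visits into one profile.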

So, is it any wonder that someone figured out how to weaponize this same data to influence our opinions? No, it’s only a wonder that it took so long! After all, that very quest is at the heart of the business model for most social media companies.

The Russians, too, already know far more about you and me than we care to admit: our political preferences, our stance on gun control and abortion, on Trump and Bernie and Hillary. To single out Cambridge Analytica as the culprit in this mess misses the point and ignores the much bigger problem facing us.

The question we should be asking is not whether the Russians weaponized data about our personal preferences and used it to influence our elections, to sow dissent and foment chaos. We already know the answer to that question. They did. Get over it. That was to be expected, and they were successful mostly because no one expected it.

The question we should be asking, instead, is how many others out there have (or could get) access to this kind of data about you, your likes and dislikes, your preferences, your personal information. And what will it take for us, as a society, to realize why that’s a bad thing and why we should fix it before things really get out of hand.

It’s only now, in the aftermath of the 2016 election, that we’re suddenly, finally, scratching our heads and asking for the first time: “Who exactly knows what I just clicked on? Or… who would want to know and how much would it be worth to them?”

We used to say in the security business that we can make our software as secure as humanly possible (and we should) but that doesn’t change the fact that the weakest link in online security for most people is usually between their fingers and the keyboard, not in any software they use. The same is true of privacy.

The first thing we need to do is educate people about what personal data is being collected and by whom. But that’s not enough, as the current rash of annoying GDPR popups has clearly shown. We also need to educate them on the implications, on what that data can be used for. Thankfully, that should be easier now, in the wake of the Russian meddling.

It would be a mistake, however, to think that political machinations are all we have to worry about in this respect. Financial motivations are often more powerful and there’s no end to the amount of data mining that can be done by private parties looking to influence our opinions.

If you’ve ever taken a Psychology class, I’m sure you remember the painful history of subliminal advertising. It didn’t take Hollywood long after the advent of mass media to figure out that they could subtly influence people’s decisions, without them even being consciously aware of it happening, by flashing pictures or words in front of them for a few milliseconds.

Movie theaters reportedly boosted their concession stand sales by briefly flashing pictures of popcorn and soda, and they didn’t even need to know much about the audience to do so. It took regulatory pressure to get companies to stop using subliminal messaging in movies, TV shows, and radio broadcasts. It’s time we realize that we’re facing a similar crisis online today, that the data being collected about us can open us up to a wide range of influences, and that the potential financial and political implications are substantially larger this time around.

It’s finally time to flip the bit on the currently prevalent tracking behavior on the internet and to reverse the default privacy settings in the industry: Assume that I don’t want you to collect any information about me unless I explicitly tell you to do so, and you’d better ask me separately for each type of data you collect.
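That flipped default can be stated precisely. Here is a minimal sketch of a consent store in which every category of data collection starts out denied and each must be granted explicitly and separately; the category names are illustrative, not taken from any real framework or regulation.

```python
# A sketch of "privacy by default": nothing is collected until the
# user opts in, one category at a time.

class ConsentStore:
    CATEGORIES = {"analytics", "ads", "location", "contacts"}

    def __init__(self):
        # Flipped default: the set of granted categories starts empty.
        self.granted = set()

    def grant(self, category: str) -> None:
        """Record one explicit, per-category opt-in."""
        if category not in self.CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.granted.add(category)

    def may_collect(self, category: str) -> bool:
        # Anything not explicitly granted is denied -- no blanket consent.
        return category in self.granted

consent = ConsentStore()
assert not consent.may_collect("ads")  # denied by default
consent.grant("analytics")             # one explicit opt-in, one category
```

Contrast this with today’s practice, where the equivalent of `granted` starts out containing everything and a buried settings page lets you whittle it down.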

Apple is the only company that even comes close to implementing the right approach here. I would welcome similar efforts from the leadership of other internet companies, but I’m not holding my breath waiting for them to “do the right thing” for consumers, given that their entire business model depends on harvesting customer information today.

Let’s face it: On the internet, information is the currency and you are the product.
