Privacy is not about you (mostly)
Privacy is a political issue. The lack of privacy puts our societies in danger by allowing an unprecedented level of control in the hands of a few. In this article, we explore the consequences of mass surveillance and how privacy is essential to avoid them.
Key points:
- Advertising is the main source of revenue for most social media companies.
- Social media companies are excellent advertising platforms because they practice massive data collection, allowing them to profile their users and sell products by targeting them specifically.
- Political content is no exception: users are targeted with extremely precise, purpose-crafted political propaganda. The discretionary nature of targeted content opens the door to the de facto programming of an election.
- Mass surveillance is incompatible with a functioning democracy. This means that business models that rely on mass surveillance should be simply illegal, for they put in danger the societies from which they profit.
When talking about privacy, many people reply simply that "they do not have anything to hide", that "you should not worry if you have not done anything wrong", or that "all of our data is out there anyway, so there is no need to bother". While there are many ways to answer this kind of argument, what I want to convey in this article is that privacy, like many other societal issues, is not about you. Privacy is a political matter - probably one of the most crucial political issues of our time.
But wait. Isn't privacy precisely related to the individual in an obvious way? It's "my visibility", "my data". Well, it is - but only in the same sense that individual freedoms are individual: they apply to individuals, yet they are definitely not a mere private concern. Individual freedoms are a political matter.
Disclaimer: I will not cover every instance of surveillance here; I will focus on surveillance carried out by private companies. I will cover State surveillance in future articles.
Why surveillance?
We have surveillance because it is incredibly profitable for the companies that base their businesses on it. Plain and simple. But what is surveillance to begin with? We can define it as the practice of tracking, monitoring and harvesting citizens' data by default, whether carried out by private or public entities. This means that somebody is monitored not because the State, for instance, considers that person a threat, nor on the basis of any particular criterion: demographic data is collected regardless of the demographic at play. It is collected massively, by default.
A good example of this practice is Meta. Meta holds data on millions of people - gender, race, sexual orientation, religious ideas and political tendencies, among other things - and this happens whether you use its platforms or not. Meta, of course, is not the only company indulging in these kinds of practices. To give another example, ByteDance-owned TikTok also harvests massive amounts of data. What about Google? You probably guessed it. To give you an idea, already in 2010 former Google CEO Eric Schmidt said in an interview that "we know where you are. We know where you've been. We can more or less know what you're thinking about". And if that was the case back in 2010, Google's surveillance practices are far more invasive now - for instance, through fingerprinting, a technique that uses browser settings, tracking cookies or device settings to identify and track users and their devices. Not even Apple, which tends to use privacy as a selling point, abstains from such activities: the inclusion of an AI agent has worried privacy experts for some time now. Microsoft also collects vast amounts of data through its services, Microsoft Teams being a good example.
As if that were not enough, the Windows 11 "Recall" feature, which essentially screenshots everything you do on your computer, ships installed by default on every computer - although, after sustained criticism of its security flaws, it can now be disabled or even completely uninstalled. These are, unfortunately, just a few examples.
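To make the fingerprinting technique mentioned above concrete, here is a minimal, illustrative sketch in Python. The attribute names and values are hypothetical; real fingerprinting scripts combine dozens of signals (canvas rendering, installed fonts, audio-stack quirks, and so on), which is what makes the resulting identifier nearly unique per device.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser/device attributes into a single stable identifier.

    No cookie is needed: the same identifier can be recomputed on every
    visit from the same observable attributes.
    """
    # Sort the keys so identical attribute sets always hash identically.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two visitors who differ in a single attribute still get distinct IDs.
visitor_a = fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0",
    "screen": "1920x1080",
    "timezone": "Europe/Madrid",
    "language": "en-GB",
})
visitor_b = fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",  # only the timezone differs
    "language": "en-GB",
})
```

The point of the sketch is that the "identifier" is derived, not stored: blocking cookies does nothing against it, because it is recomputed from signals the browser exposes anyway.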
You reap what you track
Now, the question is: why are these companies so invasive? The answer is very simple and, by now, familiar: because it is extremely profitable. To understand how, we must first look at the business model of companies like Meta. What we find is that these companies, more than technology companies, as they are often branded and valued, are advertising companies. Ads constitute Meta's main source of revenue. With an accurate picture of its pool of users, Meta can serve purpose-crafted ads, and it does so extremely effectively, because your tastes and preferences can be registered and inferred in detail. The same holds for most of the online platforms we use daily. Take Spotify, for instance, which notably started advertising Godiva chocolate to customers who were actively listening to heartbreak songs. Now, it could be argued that advertising tourist destinations to holiday seekers is rather innocent. Or even chocolate to desperate lovers. The advertising business takes a darker turn, however, when political advertisers enter the game.
Political content moves massive amounts of money. Between 2023 and 2024 - the year of the last US presidential election - promoted online political content accounted for 619 million dollars in spending on Google and Meta alone. Just as a ski shop will appeal to ski fans, who can be easily identified thanks to the large amounts of data available about pretty much everybody, what about those who are keen on conspiracy theories? And those who promote racism and xenophobia? Can they be converted into loyal customers, too?
We like to think that the massive consumption of political content, mainly through social media like Facebook, YouTube or X, has produced a highly politicised society. But nothing could be further from reality. A consumerist attitude towards political issues only makes us more passive and more dependent on which products - which videos we watch, whom we follow - can be effectively sold to us. So if it is profitable for a company like Meta to promote hate-speech content, it will simply do so. And, unfortunately, we tend to engage more readily with content that features extreme views, hate speech and the like.
Play it again
I said that companies like Meta make their money mainly through advertising. A significant fraction of that advertising consists of politically related content. And, usually, the angrier such content makes us feel, the more likely we are to engage with it. Combine these three things and it is easy to see that these companies have a huge incentive to keep us on their platforms for as long as possible: the more time we spend consuming content, the more revenue we help to generate. And how is this done?
Through a feed (if we were wondering about the passiveness of our political attitudes, the name says it all). A feed can be a landing page, or an integrated series of suggestions (YouTube's sidebar in its desktop version, for instance). TikTok is notorious for the addiction it causes among its users. Making such platforms engaging is so crucial to the business that we already speak of an "attention economy", with Forbes reporting that in 2023 alone "The value of attention has never been more apparent than in the staggering $853 billion in global net advertising revenue".
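To see why a feed optimised purely for engagement tends to surface inflammatory material, here is a deliberately simplified toy ranker. Everything in it is invented for illustration: the post titles, the scores, and the assumption that a single "predicted engagement" number drives ranking. Real recommender systems are vastly more complex, but the incentive structure is the same.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # model's estimate of clicks/watch time

def build_feed(posts: list[Post], slots: int = 3) -> list[str]:
    """Rank purely by predicted engagement: the only objective is
    time-on-platform, not accuracy, civility or civic value."""
    ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    return [p.title for p in ranked[:slots]]

candidate_posts = [
    Post("Local council approves new bike lanes", 0.11),
    Post("You won't BELIEVE what this politician said", 0.87),
    Post("Quarterly inflation report, explained", 0.09),
    Post("THEY are hiding the truth from you", 0.74),
]

feed = build_feed(candidate_posts)
# Outrage-style items dominate the feed simply because they score
# higher on the one metric the system optimises for.
```

Nobody has to decide to promote extreme content; the objective function does it on its own.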
Again, the massive data collection carried out by these companies means they hold very accurate profiles of their users - and, as noted, of non-users too - to which they can then serve targeted ads. Mix this with a spiral of addictive feeds that promote political content purely for profit, and it becomes evident that these companies' gains threaten our societies. This is not entirely new: the mainstream press has often been accused of sensationalism and of promoting the ideologies of its owners. But the way content can now be tailored to each individual - with a feed at our disposal, we do not even have to "make the effort" of choosing a particular tabloid - and the precision with which this is achieved mark a specific difference of our times. So does the scale of the profits. As I mentioned in another article, Meta earned millions by monetising political violence.
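The profiling-to-targeting step just described can be sketched as a trivial audience filter. The profiles, regions and "inferred interests" below are entirely hypothetical; real ad platforms infer hundreds of such attributes per person, but the selection logic is conceptually this simple.

```python
# Hypothetical inferred profiles - in reality these are built from
# browsing history, likes, location data and similar signals.
profiles = [
    {"id": 1, "region": "Bavaria", "inferred_interests": {"immigration", "crime"}},
    {"id": 2, "region": "Berlin",  "inferred_interests": {"gardening"}},
    {"id": 3, "region": "Bavaria", "inferred_interests": {"immigration", "football"}},
]

def select_audience(profiles: list[dict], region: str, interest: str) -> list[int]:
    """Return the users a campaign would pay to reach: those whose
    inferred profile already matches the message."""
    return [p["id"] for p in profiles
            if p["region"] == region and interest in p["inferred_interests"]]

# A political advertiser buys exactly the eyeballs it wants.
audience = select_audience(profiles, region="Bavaria", interest="immigration")
```

The asymmetry is the point: each targeted person sees a message crafted for their profile, while nobody else, including journalists and regulators, sees the same campaign.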
One thing must be clear from now on: democracy and the rule of law are incompatible with mass surveillance. Surveillance enables profiling, and profiling enables the serving of targeted content with astonishing precision. This means that elections can be de facto bought and programmed: you just have to place the product you want to promote in the feed of the people it has to reach, at the right time. The "memefication" of politics makes this easier than ever - propaganda and entertainment are no longer distinguishable, because everything dissolves into the category of "content". And since these platforms have to be engaging, they end up confirming the views in which we are trapped and attaching us to the products we may have accidentally consumed.
And the best part is that this is all legal. Nor is it actually new. The Cambridge Analytica - Facebook scandal, mentioned quite a few times here in The Debugger, was instrumental in the first election of Donald Trump in the US almost 10 years ago. It might be objected that the Internet is, after all, just a part of our lives, and that what has been said so far is an exaggeration. But the truth is that we rely on the Internet for most of our access to reality. Whether we like it or not, that is a fact. The Cambridge Analytica - Facebook case is a good example of how massive data collection was put at the service of political advertising, and it is just one among many.
If ads were not enough...
So far, I have said that the main reason companies like Google behave this way is basically that it is immensely profitable. Now, what happens if money is not the only driving force behind mass surveillance and tailored online content?
We tend to think of social media platforms as public spaces, virtual squares in which citizens gather to discuss the most pressing issues of their societies. But this is not true. Most social media platforms are private spaces, and everybody who participates in them is subject to the rules established by their owners - and by rules I mean preferences, interests and agendas. A notorious example is Elon Musk's X, which was instrumental in the second election of Donald Trump and in the rise of the AfD in Germany. Of course, someone like Musk would undeniably benefit from promoting parties that endorse a strong laissez-faire vision of the State. That said, X has reported losses for most of the time since Musk acquired it. Now, in X's defence, it has been pointed out that its recommendation algorithm is open-source and publicly available for anybody to inspect. In reality, the last commit to that code was made on July 13th, 2023, while on December 19th, 2024 it was announced that a new version of the recommendation algorithm would be deployed - which means we have no way of verifying the algorithm currently in use. Moreover, what was released in 2023 has been described as a partial disclosure:
Twitter said it was withholding code dealing with ads, as well as trust and safety systems in an effort to prevent bad actors from gaming it. The company also opted to withhold the underlying models used to train its algorithm, explaining in a blog post last week that this was “to ensure that user safety and privacy would be protected.” That decision is even more consequential, according to Messing. “The model that drives the most important part of the algorithm has not been open-sourced,” he tells me. “So the most important part of the algorithm is still inscrutable.”
[...] none of the code Twitter released tells us much about potential bias or the kind of “behind-the-scenes manipulation” Musk said he wanted to reveal. “It has the flavor of transparency,” Messing says. “But it doesn’t really give insight into what the algorithm is doing. It doesn't really give insight into why someone's tweets may be down-ranked and why others might be up-ranked.”
Indeed, just before the German elections of 2025, the European Commission ordered X to provide documentation regarding its recommendation methods. More than that, Romania's last presidential election was mired in controversy over Georgescu's exclusive - and extremely effective - campaign on TikTok, which allegedly also received foreign funding. Even TikTok recently acknowledged that its platform contributed to manipulating the elections. Again: if you have mass surveillance, and platforms that can tailor a political preference for you, then democracy is simply not possible. This is not a matter of endorsing or criticising democracy. Democracy simply becomes materially unachievable under such circumstances.
Back to privacy
Nothing reported here is a conspiracy or anything of the sort. The story is actually more vulgar, and older: a story of profit, of how the interests of a few conflict with those of the rest of society and its prospects of maintaining at least a relative balance. Of course, before the irruption of social media we did not live in a civic paradise, in a republican ideal - in the classic sense - where virtuous citizens engaged in rational discussion for the sake of their societies. But mass surveillance has taken us further, turning the population into an asset that can be exploited at will. This is severe, and its effects go beyond the possibility of programming elections. It goes without saying that privacy is essential to keep investigative journalism alive, which in turn is crucial to holding both governments and companies accountable, at least to some extent. Moreover, people tend to self-censor when we are all potentially being tracked and recorded. This is known as the "chilling effect", and, as can be imagined, its consequences for political action are hard to overstate. If a society starts to systematically exert self-censorship upon itself, political activity and mobilisation become unlikely, to say the least. We will explore this in a separate article.
Everything described here has been made possible by mass surveillance. This is not new. Think of the notorious Snowden case, which barely needs an introduction and which exposed, back in 2013, the massive US surveillance program, one that stretched way beyond the US. When we say that we do not care about privacy, or that we have nothing to hide, we are basically consenting to companies continuing to profit despite the tremendous social damage they cause. I wonder to what extent personal belief is even possible at a moment when belief is manufactured and sold with extreme efficiency. To say that we do not care about privacy is to be content with corporations and political agents effectively dismantling the rule of law, which aims, above all else, to avoid an excessive concentration of power in the hands of a few. None of this has anything to do with us "having something to hide". All of it is possible because we have collectively given up on privacy. Mass surveillance is the environment in which contemporary tyranny can thrive. The good news is that this is not the final word, nor does it have to be.
The EU's regulatory efforts, such as the Data Act, the AI Act or the General Data Protection Regulation, are, as said previously in The Debugger, valuable. But they are not enough. Any mitigation will fall short as long as such business models remain legal. To use an analogy, the only effective way to protect oneself from the harms of smoking is simply not to smoke. You can always add a better filter or use better paper, but the damage is still there. Social media companies and advertisers knew the harm they were about to unleash, and they went for it anyway, making billions upon billions in the process. The result is a society more polarised and more politically passive than ever. The problem is not social media per se. The problem is the business model behind most of them. That is why the only model of social media that should be legal is one that collects no data, offers no recommended content, and cannot be owned. That is, the only solution is to have proper social media, not advertising companies running on social media. I will discuss solutions in a separate article, an effective continuation of this one. But we can start by choosing privacy-respecting alternatives to the main services we use and by encouraging others to do the same. Then we can continue by pressuring our governments to back open-source, privacy-minded initiatives and to enact policies that protect privacy.
Privacy is not just a concern for a few specialists. It is one of the most pressing societal issues we face, since the lack of privacy is the condition of possibility of mass surveillance and political manipulation. And privacy, or its absence, is one of the things that will determine what kind of society we are heading towards. The prospects do not look great, but there is still time to act.
Further reading:
- Google Knows You Better Than You Know Yourself. Carmichael, J. The Atlantic.
- Internet surveillance, regulation, and chilling effects online: a comparative case study. Penney, J. W. Internet Policy Review.
- How Political Campaigns Use Your Data to Target You. Klosowski, T. Electronic Frontier Foundation (EFF).
- Memes as an online weapon. An analysis into the use of memes by the far right. National Coordinator for Counterterrorism and Security. Ministry of Justice and Security, The Netherlands.
- Online Behavioral Ads Fuel the Surveillance Industry—Here’s How. Cohen, L. Electronic Frontier Foundation (EFF).
- Social media is polluting society. Moderation alone won’t fix the problem. Lubin, N. & Gilbert, T. K. MIT Technology Review.
- To Save the News, We Must Ban Surveillance Advertising. Doctorow, C. Electronic Frontier Foundation (EFF).