I recently watched The Social Dilemma, a new documentary-drama released on Netflix. I would highly recommend it as an excellent introduction to what social networking platforms have become, their current effect on society, and the extreme dangers brought by their monetisation models. I think adults and teenagers alike should urgently discuss these topics: the vast power large technology companies wield, how the digital advertising industry works, behavioural manipulation through machine learning, and the role we want these technologies and companies to have within our society’s future. Similar topics have been drifting around in my head for years, but I thought I’d take the opportunity to write down some of those relating to the film.
The old saying goes that there’s no such thing as a free lunch, and with the possible exception of some altruistic acts, I think it’s important to remember this with all the software and services we use. The Social Dilemma nicely covers the idea that with these social networks, we are effectively the product being sold. I found it particularly interesting that, as the film notes, it’s less our data itself than the potential to influence our actions that is valuable. This certainly deserves more thought. Many people don’t seem to care too much about their data being sold, likely because they’re unaware of the scope of the data collected, its value, and how it’s being used. But perhaps people would think more seriously about what’s going on if it were framed in terms of social networks selling the ability to manipulate our behaviour. I myself have become increasingly wary of ‘free’ services, and am slowly beginning to opt for paid services where possible, as that not only shifts the balance in the monetisation strategy, but also allows me to reward smaller companies who are working hard to tackle these issues and manage my data responsibly.
It’s been known for years that some social networks not only aggressively target advertising based on a deep knowledge of our interactions, but also actively manipulate our feelings and actions. In 2014, Facebook experimented with users’ news feeds to manipulate their emotions—a large-scale scientific experiment affecting hundreds of thousands of people without their consent, and showing a total disregard for ethical guidelines. As recently as a couple of years ago, I was still surprised that this hadn’t caused widespread outrage—but I’ve been slowly coming to realise that there are many other factors at play that cause people to shrug this off and carry on using Facebook regardless. I certainly don’t think Facebook was unique in this regard, and this type of manipulation has seemingly happened many times since, including profile-targeted advertising during the 2016 UK Brexit referendum, the 2019 UK general election, and the 2020 US presidential election. The Social Dilemma mentions our susceptibility to these behavioural manipulation techniques, even when we’re aware of them: the co-creators of those technologies, despite a detailed understanding of how they work and what they’re being used for, still find themselves struggling against them.
just the beginning
I think it’s a big mistake to assume that the numerous social-networking-related problems currently affecting society—whether social network addiction or political interference—are simply going to go away. Whilst I think there is merit in the argument that this is a new occurrence of an old problem previously affecting other media such as television, I think that also vastly underestimates the reach, accuracy, and cost-effectiveness of these new systems. Whilst it’s important to investigate cases such as foreign-state manipulation in elections, to a certain extent I think obsessing too much over the ‘who’ risks missing some of the point: these are cracks showing in the landscape of the digital society, deserving of a much broader conversation about our personal and online susceptibility. In just a few years, there have been numerous real-world effects of influence from social networks, ranging from mental-health crises to the incitement of genocide—yet regulation remains sparse, and the profits of the industry continue to grow astronomically. Unchecked, I see no reason why these negative effects will not also continue to grow.
In order to solve a puzzle, it’s first necessary to understand it. We’re living in a brave new world, where the technological innovations of past decades give unparalleled potential for both good and harm. But in the wider society, there seems to be very little understanding of what’s going on, or even of how these social platforms make money. The 2018 US Senate hearing comes to mind, in which Mark Zuckerberg was asked how Facebook would sustain a business model in which users don’t pay for the service; the immortal reply, ‘Senator, we run ads’, continues to amuse and scare me to this day. But although highly unfortunate, it’s perhaps understandable that an 84-year-old senator might not at first understand the monetisation strategy of a social media company operating in an age where everything is ‘free’. What I find more worrying is how many people decades younger—some even working in tech—seem to show a similar lack of understanding about who is ultimately paying for these services, and how little people seem to care even when numerous scandals hit the international news. Since I have to draw the line somewhere, I myself haven’t had a Facebook account for years, as in my view, Facebook is a ‘bad’ company showing very little social responsibility and a woeful lack of conscience. Nevertheless, I can probably count on one hand the people I’ve met personally who don’t have a Facebook account—or who have even seriously considered closing it.
a modern utility
I’m increasingly hearing the argument that the internet is a utility, much like water, gas, or electricity, to which all people should have unfettered access. I’m very supportive of this idea, particularly as numerous statistical analyses indicate that those without access to the internet, or with only low-speed access, are put at an educational and economic disadvantage. What I think is an interesting corollary, however, is that in considering the internet as some kind of public resource, we should also be having more of a say about the responsibilities large tech companies have. I’m aware that some platforms are now making efforts to reduce the spread of misinformation, but ultimately I don’t think those efforts will go far enough. As things stand, it’s largely not in their financial interests, and the game is optimised against any meaningful changes. Because of this, I find my opinion shifting to be more in favour of responsible regulation—not misinformed, dangerous regulation like mandatory backdoors in encryption products, but regulation informed by meaningful public feedback and experts. As a minimum, I’d like such experts to include representatives from technology, mathematics, psychology, economics, healthcare, education, and ethics.
At the end of The Social Dilemma, there is talk of deleting our accounts from the social networking platforms. My feelings about this are mixed: whilst I recognise that this is basically voting with our feet, I think it’s unrealistic to assume that a sizeable number of people will see this route as feasible. I mentioned above that for me, Facebook crosses a line. However, I do have Twitter, despite some of the criticisms applying equally there. I also have LinkedIn, although I use that only minimally—not least because of the risk of finding myself penalised by an algorithm or forced to close the account. Instagram I recently left, mostly as a clean-up, since I hadn’t used it much other than for posting running photos. On a broader scale, though, I’ve been thinking increasingly about how Instagram and similar platforms can activate and amplify narcissistic traits.
One thing worth mentioning is that pretty much all of these social networking platforms have alternatives—many of them free and open-source—without behavioural manipulation or anti-privacy tracking. Some of them are actually quite advanced and usable, which is a huge credit to those projects considering the comparatively small budgets on which they’ve been developed. But for social networking, perhaps such alternatives are disproportionately disadvantaged: the social nature of the software means that it’s not easy for one person in a social group to switch, since almost everybody else they know is still using the other system. That creates something of a Catch-22, where I’m not using an alternative social network because you’re not, and you’re not because I’m not. This is hugely problematic, and deserving of more consideration—especially when it comes to questions of market dominance and anti-competitive practices. Without significant progress in the possibility of bootstrapping viable alternatives, I’m not sure any real choice in the market will exist.