Following Monday’s six-hour outage that took down Facebook, Instagram and, rather more seriously, WhatsApp, we have a chance to re-evaluate our relationship with some of the internet’s most powerful platforms.
Facebook is having a torrid time of it at the moment, and the sense of schadenfreude around the internet at its travails is a strong one. Under fire in Washington and desperately trying to limit the damage from the Wall Street Journal’s incendiary Facebook Files, it borked its own software upgrade on Monday afternoon and effectively told the internet ‘we don’t exist, please stop trying to find us’.
It disappeared for six hours, taking Instagram and WhatsApp with it. This was a problem. Businesses that have moved their online presence onto Facebook have long been aware of how beholden their existence has become to it, dependent on the mysterious workings of its newsfeed algorithms to surface their presence to customers. But the disappearance of WhatsApp was perhaps the greatest shock, as everyone from family groups to, in some cases, national health services found they could no longer communicate when they needed to. The situation was worse still in some countries: in Brazil and India (and probably coming to a country near you before long), WhatsApp even processes payments.
Facebook is estimated to have lost around $60 million in ad revenue alone over the course of the six-hour outage. Add in the consequences of losing WhatsApp, and the cost to the wider global economy over the same period is probably far higher. And if that does not make you question the wisdom of putting too many eggs in Zuckerberg’s basket, not much else will.
That basket could soon be upset further, however, by a new push to let consumers take back control of their data, one that will affect the plans not only of Facebook but of Google, TikTok, Netflix, and every other tech company that scrapes your data either to sell you things directly or to lead advertisers to your door.
The great data heist
In her 2019 book The Age of Surveillance Capitalism, the US academic Shoshana Zuboff relates an anecdote about some of the early research into smart houses that took place a couple of decades ago. All the attributes of a modern smart home were there, with centralised control and automation covering functions such as heating, opening and closing windows and blinds, lighting, and more. What was radically different was that the house wasn’t connected in any meaningful way to the internet. Yes, it generated data about what happened so it could learn the habits of the residents, but that data was all processed on site and considered private.
Consider the difference in such systems now. They habitually connect to data centres where every aspect of the house’s operation is catalogued and analysed. From this analysis, even if we have told our system supplier nothing about who lives in the house, it can sift through the data to work out how many people live there, what ages and genders they likely are, and more. And it can, and does, use that information to target us with new products and services.
It’s a simple example. In real life the likes of Facebook and Google (and China’s Social Credit System while we’re at it) are much more sophisticated than that. But the concept of what happens to our data is the key here: we have moved from an assumption of privacy to an acceptance of intrusion.
Yes, there is an increasing amount of legislation to protect us. The EU’s GDPR (General Data Protection Regulation), which came into effect in 2018, has kickstarted a wave of similar lawmaking around the world and essentially means firms have to ask our permission to process our personal data. Enforcement is uneven, however, timeframes for action are lengthy, and there is a definite sense that we would be better and more efficient guardians of our own data.
Which, as The Register points out, is where Tim Berners-Lee and the BBC come in.
The personal data store
A personal data store is built around the idea that we may well want to trade some of our data for, say, better recommendations on Netflix, but that this doesn’t give external companies the right to all of it.
With it, your data is kept on an edge device (via a phone app, say) and its use is governed by three main concepts: legibility, agency, and negotiability.
Legibility is about seeing what your data is and what is happening to it; agency gives you more granular control than simply opting in or opting out; and negotiability allows you to specify what you want by way of return.
This is all based on Berners-Lee’s open source tool Solid. Berners-Lee has referred to it as turning the world the right way round again. “The idea of surveillance capitalism depends on your data going, by default, to somebody else, and Solid’s default is that it goes to you,” he told The Guardian.
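To make those three ideas a little more concrete, here is a deliberately simplified TypeScript sketch of how a store might enforce them. It is purely illustrative: the interfaces and names are invented for this article and bear no relation to Solid’s actual data model or APIs.

```typescript
// Illustrative only: a toy model of legibility, agency and negotiability,
// not Solid's actual data model.

// Agency: a grant names exactly which attributes a service may read, and why.
interface ConsentGrant {
  service: string;        // e.g. "bbc-sounds" (hypothetical identifier)
  attributes: string[];   // the only fields this service may read
  purpose: string;        // what the data may be used for
  expires: Date;          // grants are time-limited, not open-ended
  returnedValue: string;  // negotiability: what the user expects in exchange
}

// Legibility: every access is recorded so the user can see what happened to their data.
interface AccessLogEntry {
  service: string;
  attributes: string[];
  timestamp: Date;
}

class PersonalDataStore {
  private data = new Map<string, unknown>();
  private grants: ConsentGrant[] = [];
  private log: AccessLogEntry[] = [];

  set(key: string, value: unknown) {
    this.data.set(key, value);
  }

  grant(grant: ConsentGrant) {
    this.grants.push(grant);
  }

  // A service only ever receives the attributes covered by a live grant.
  read(service: string, requested: string[]): Record<string, unknown> {
    const grant = this.grants.find(
      g => g.service === service && g.expires > new Date()
    );
    const allowed = requested.filter(a => grant?.attributes.includes(a));
    this.log.push({ service, attributes: allowed, timestamp: new Date() });
    return Object.fromEntries(
      allowed.map(a => [a, this.data.get(a)] as [string, unknown])
    );
  }

  // Legibility again: the user can inspect exactly what has been shared, and with whom.
  history(): AccessLogEntry[] {
    return [...this.log];
  }
}
```

The point of the sketch is the shape of the relationship: nothing is released without a live grant, and the user can always replay the log to see exactly what went where.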
The BBC development team’s first experiment in the field is a recommendation engine that uses live personal data from Spotify, Netflix and the BBC’s own services to create a media profile for a user, allowing them to view and edit their entire media viewing history in one place. The team then sends a profile derived from this data to its research version of BBC Sounds to provide “enriched recommendations and suggestions of relevant local events.”
In many ways this is similar to what happens now anyway. The difference is the user’s agency in the whole process: they decide what is stored and what is shared, rather than having it decided for them. The service, in this case BBC Sounds, gets enough to do its job effectively, and no more.
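As a hypothetical sketch of what “enough and no more” might look like in practice (the BBC has not published its implementation, and everything below, from the type names to the endpoint, is invented for illustration), the raw viewing history could stay in the store while only a coarse, user-approved profile is sent on:

```typescript
// Hypothetical sketch of the "derived profile" idea: the raw viewing history never
// leaves the store; only an aggregate the user has approved goes to the service.

interface ViewingEvent {
  provider: "spotify" | "netflix" | "bbc";  // providers named in the article
  title: string;
  genre: string;
  watchedAt: Date;
}

// The derived profile is deliberately coarse: genre preferences, not individual titles.
interface MediaProfile {
  topGenres: string[];
  hoursPerWeek: number;
}

function deriveMediaProfile(history: ViewingEvent[]): MediaProfile {
  const counts = new Map<string, number>();
  for (const event of history) {
    counts.set(event.genre, (counts.get(event.genre) ?? 0) + 1);
  }
  const topGenres = [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 3)
    .map(([genre]) => genre);

  const weeks = 4; // assume the history covers roughly the last month
  const hoursPerWeek = Math.round(history.length / weeks); // crude proxy: one hour per item

  return { topGenres, hoursPerWeek };
}

// Only the derived profile is sent to the recommender; the full history stays local.
function shareWithRecommender(history: ViewingEvent[]) {
  const profile = deriveMediaProfile(history);
  return fetch("https://recommender.example/profile", {  // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(profile),
  });
}
```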
And of course it doesn’t stop at media services. The BBC team has developed what it calls “speculative prototypes” that sketch out a more comprehensive product it refers to as “My PDS”. This has multiple profiles, covering media, health, finance and social data, plus a central dashboard that lets users view, edit and manage their data, as well as enable a selection of services that sit alongside it.
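The team describes “My PDS” only at the level of profiles and a dashboard, so the following is nothing more than a speculative outline of how such a structure might be organised; the domain names mirror that description, and everything else is assumed:

```typescript
// Speculative outline only: one way the profiles-plus-dashboard structure might be laid out.

type ProfileDomain = "media" | "health" | "finance" | "social";

interface Profile {
  domain: ProfileDomain;
  data: Record<string, unknown>;
  connectedServices: string[];  // services the user has chosen to plug in alongside
}

interface Dashboard {
  profiles: Record<ProfileDomain, Profile>;
}

// The dashboard is the single place to view, edit and manage everything.
function summarise(dashboard: Dashboard): string[] {
  return Object.values(dashboard.profiles).map(
    p =>
      `${p.domain}: ${Object.keys(p.data).length} items, ` +
      `shared with ${p.connectedServices.join(", ") || "no one"}`
  );
}
```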
Sharing data rather than taking data
The result puts the relationship between people and companies on a far more even footing. It gives us as users the power to control what we share and what we expect to get in return, and it gives us as companies data we have explicit permission to process in specified ways to provide services. The difference is that there is no surplus: the minimum amount of data is used, and everyone can see precisely what is going on at any time.
As The Register memorably puts it: “We do need a revolution that puts the power in the hands of the people, but we probably don’t want to shoot the Tsar and his family.” Personalised services require a certain amount of data to operate and, as ongoing innovations in fintech and insurtech among other fields show, there is a huge advantage to consumers in granting that data. But it should be limited, and controlled by us as well. When we buy things we don’t just hand over our card and say ‘charge us what you feel like’; we want to know what the cost will be. And one of the first steps in taking back control is giving people the tools to see what is happening.
At the moment, few of these controls apply to Facebook and, as Frances Haugen’s testimony in Washington is making clear, the company values profit above all else. But with governments and legislatures around the world looking at the business practices of the big tech companies through steadily more jaundiced eyes, the days when those companies can make absolutely all the decisions themselves might soon be coming to a close.