Facebook’s struggle with abusive behavior today looks a lot like Microsoft’s struggles with malware 20 years ago: people take advantage of an open platform, and you have to work out how far you can close the holes, how much you can scan for bad things, and whether you need to change the whole concept from the ground up.
For Microsoft’s malware problem, however, closing holes and scanning for bad things was not the long-term answer: instead, the industry changed what security looked like by moving to SaaS and the cloud, and then to fundamentally different operating system models (ChromeOS, iOS) that make the malware threat close to irrelevant.
Facebook’s pivot towards messaging and end-to-end encryption is (partly) an attempt to do the same: changing the model so that the threat is irrelevant. But where the move to SaaS and new operating systems happened largely without Microsoft, Facebook is trying to drive the change itself.
Way back in 1995, when there were just a hundred and fifty million PCs on Earth, somebody had a wonderful idea. Or, as the Grinch would say, a wonderful, terrible idea.
Microsoft had put a huge amount of effort into turning Office into an open development platform. All sorts of large and small businesses had created programs (‘macros’) that were embedded inside Office documents and allowed them to create wonderful automated workflows, and there was a big developer community around creating and extending this.
But the Grinch realized that there was an API for looking at your address book, an API for sending email and an API for making a macro run automatically when you opened a document. If you put these together in the right order, then you had a virus that would email itself to everybody you knew inside an innocuous-looking Word document, and as soon as they opened it, it would spread to everyone they knew.
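The mechanic is worth seeing laid out: three individually benign capabilities compose into a worm. Here is a toy simulation of that composition. Every name in it (`User`, `open_document`, and so on) is illustrative, not any real Office API:

```python
# Toy model of the Concept/Melissa mechanic: an address-book lookup,
# programmatic email, and a macro that auto-runs on open.

class User:
    def __init__(self, name, contacts):
        self.name = name
        self.contacts = contacts      # the "address book API"
        self.infected = False
        self.inbox = []

def send_mail(network, recipient_name, doc):
    """The "send email API": deliver a document to someone's inbox."""
    network[recipient_name].inbox.append(doc)

def open_document(network, user, doc):
    """Opening a document runs its embedded macro automatically."""
    if doc.get("macro") == "mail_self_to_contacts" and not user.infected:
        user.infected = True
        for contact in user.contacts:
            send_mail(network, contact, doc)

# A four-person network; Alice opens the infected attachment first.
network = {
    "alice": User("alice", ["bob", "carol"]),
    "bob":   User("bob",   ["alice", "dave"]),
    "carol": User("carol", ["alice"]),
    "dave":  User("dave",  ["bob"]),
}
virus = {"macro": "mail_self_to_contacts"}

open_document(network, network["alice"], virus)
# Everyone who opens their mail in turn spreads it further.
for name in ["bob", "carol", "dave"]:
    for doc in list(network[name].inbox):
        open_document(network, network[name], doc)

print(sorted(u.name for u in network.values() if u.infected))
# → ['alice', 'bob', 'carol', 'dave']
```

One open by one user is enough to infect the entire network, and no step in the chain required breaking anything: every call was in the manual.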
This was the ‘Concept’ virus, and it actually only infected about 35,000 computers. But four years later ‘Melissa’, doing much the same thing, really did go viral: at one point it even shut down parts of the Pentagon.
I've been reminded of this ancient history a lot in the last year or two as I’ve looked at news around abuse and ‘hostile state activity’ on Facebook, YouTube and other social platforms, because much like the Microsoft macro viruses, the ‘bad actors’ on Facebook did things that were in the manual. They didn’t prise open a locked window at the back of the building - they knocked on the front door and walked in. They did things that you were meant to be able to do, but combined them in an order and with intent that hadn’t really been anticipated.
It’s also interesting to compare the public discussion of Microsoft and of Facebook before these events. In the 1990s, Microsoft was the ‘evil empire’, and a lot of the narrative within tech focused on how it should be more open, make it easier for people to develop software that worked with the Office monopoly, and make it easier to move information in and out of its products. Microsoft was ‘evil’ if it did anything to make life harder for developers. Unfortunately, whatever you thought of this narrative, it pointed in the wrong direction when it came to this issue. Here, Microsoft was too open, not too closed.
Equally, in the last 10 years many people have argued that Facebook is too much of a ‘walled garden’ - that it is too hard to get your information out and too hard for researchers to pull information from across the platform. People have argued that Facebook was too restrictive on how third-party developers could use the platform. And people have objected to Facebook’s attempts to enforce a single real identity per account. As with Microsoft, there may well have been merit in all of these arguments, but also as with Microsoft, they pointed in the wrong direction when it came to this particular scenario. For the Internet Research Agency, it was too easy to develop for Facebook, too easy to get data out, and too easy to change your identity. The walled garden wasn’t walled enough.
The parallel continues when we think about how these companies and the industry around them tried to react to this abuse of their platforms:
In 2002, Bill Gates wrote a company-wide memo entitled ‘Trustworthy Computing’, which signaled a shift in how the company thought about the security of its products. Microsoft would try to think much more systematically about avoiding creating vulnerabilities and about how ‘bad actors’ might use the tools it chose to create, to reduce the number of opportunities for abuse.
At the same time, there was a boom in security software (first from third parties and then from Microsoft as well) that tried to scan for known bad software, and to scan the behavior of software already on the computer for anything that might signal previously unknown malware.
Conceptually, this is almost exactly what Facebook has done: try to remove existing opportunities for abuse and avoid creating new ones, and scan for bad actors.
| | Microsoft | Facebook |
| --- | --- | --- |
| Remove openings for abuse | Close down APIs and look for vulnerabilities | Close down APIs and look for vulnerabilities |
| Scan for bad behavior | Virus and malware scanners | Human moderation |
It’s worth noting that these steps were precisely what people had previously insisted was evil - Microsoft deciding what code you can run on your own computer and what APIs developers can use, and Facebook deciding (people demanding that Facebook decide) who and what it distributes.
However, while Microsoft’s approach was all about trying to make the existing model safe from abuse, over the last two decades the industry has moved to new models that make the kinds of abuse that targeted Microsoft increasingly irrelevant. The development environment moved from Win32 to the cloud, and the client moved from Windows (and occasionally Mac) to the web browser and then increasingly to devices where the whole concept of viruses and malware is either impossible or orders of magnitude more difficult, in the form of ChromeOS, iOS and to some extent also Android.
If there is no data stored on your computer then compromising the computer doesn’t get an attacker much. An application can’t steal your data if it’s sandboxed and can’t read other applications’ data. An application can’t run in the background and steal your passwords if applications can’t run in the background. And you can’t trick a user into installing a bad app if there are no apps. Of course, human ingenuity is infinite, and this change just led to the creation of new attack models, most obviously phishing, but either way, none of this had much to do with Microsoft. We ‘solved’ viruses by moving to new architectures that removed the mechanics that viruses need, and where Microsoft wasn’t present.
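The sandboxing point can be made concrete with a minimal sketch. This is a hypothetical model, not any real OS API: the essential property is that the operating system scopes every read and write to the calling app’s own container, and no call exists that names another app’s data.

```python
# Minimal sketch of the sandboxed-app threat model: each app gets a
# private data container enforced by the OS, so compromising one app
# yields nothing from any other app's data.

class Sandbox:
    def __init__(self):
        self._containers = {}           # app_id -> that app's private store

    def write(self, app_id, key, value):
        self._containers.setdefault(app_id, {})[key] = value

    def read(self, app_id, key):
        # The OS, not the app, decides which container a read hits;
        # an app can only ever name keys inside its own container.
        return self._containers.get(app_id, {}).get(key)

os_sandbox = Sandbox()
os_sandbox.write("banking_app", "password", "hunter2")

# A malicious app asking for "password" only sees its own, empty, container:
stolen = os_sandbox.read("malware_app", "password")
print(stolen)   # → None: there is nothing there to steal
```

The banking app’s secret still exists, but from the attacker’s side of the wall the question “what is the password?” has no answer: the attack surface was removed rather than defended.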
In other words, where Microsoft put better locks and a motion sensor on the windows, the world is moving to a model where the windows are 200 feet off the ground and don’t open.
Last week Mark Zuckerberg wrote his version of Bill Gates’ ‘Trustworthy Computing’ memo - ‘A Privacy-Focused Vision for Social Networking’. There are a lot of interesting things in this, but in the context of this discussion, two things matter:
- Most Facebook use, he expects, will be person-to-person messaging, not one-to-many sharing.
- All of that messaging will use end-to-end encryption.
Much like moving from Windows to cloud and ChromeOS, you could see this as an attempt to remove the problem rather than patch it. Russians can't go viral in your newsfeed if there is no newsfeed. ‘Researchers’ can’t scrape your data if Facebook doesn't have your data. You solve the problem by making it irrelevant.
This is one way to solve the problem by changing the core mechanics, but there are others. For example, Instagram does have a one-to-many feed, but it does not suggest content from people you don’t yourself follow in the main feed and does not allow you to repost things into your friends’ feeds. There might be anti-vax content in your feed, but only if one of your actual friends decided to share it with you. Conversely, problems such as the spread of dangerous rumors in India rely on messaging rather than sharing - messaging isn’t a panacea.
Indeed, as it stands Mr Zuckerberg’s memo raises as many questions as it answers - most obviously, how does advertising work? Is there advertising in messaging, and if so, how is it targeted? Encryption means Facebook doesn’t know what you’re talking about, but the Facebook apps on your phone necessarily would know (before they encrypt it), so does targeting happen locally? Meanwhile, encryption in particular poses problems for tackling other kinds of abuse: how do you help law enforcement deal with child exploitation if you can’t read the exploiters’ messages (the memo explicitly talks about this as a challenge)? Where does Facebook’s Blockchain project sit in all of this?
There are lots of big questions, though of course there would also have been lots of questions if in 2002 you’d said that all enterprise software would go to the cloud. But the difference here is that Facebook is trying (or talking about trying) to do the judo move itself, and to make a fundamental architectural change that Microsoft could not.