

WhatsApp is a Threat to Society. Here’s How to Fix It.

This year, Facebook distracted the world with its stances on political ads and law enforcement access, drawing attention away from a significant problem: the destructive impact of organic content. Around the world, enormous harm is being done using online platforms, all without spending a single dollar on ads. No platform illustrates this better than WhatsApp, the messaging app that Facebook acquired in 2014. 

With an estimated 1.5 billion users sending more than 75 billion messages per day, WhatsApp is the world’s biggest messaging service (ad-free, for now). Users can call or message friends and family, but large groups (up to 256 people) are also a compelling feature of the app. Highly convenient and light on bandwidth, WhatsApp makes it very easy for users to send text, images, and videos to literally thousands of other users within seconds.

WhatsApp’s most defining feature—end-to-end encryption—means such user-generated or forwarded content can’t be read, let alone moderated, by outsiders. Mark Zuckerberg, when announcing “the future is private,” compared encrypted messaging to the “digital living room,” a place for private conversations with just a few people. In reality, though, WhatsApp groups can also resemble a “town square” filled with haters, racists, liars, and abusers. Enabling privacy is virtuous and essential. Enabling the rapid and large-scale spread of dangerous, distorted, and deceitful content is irresponsible and dangerous.

Very quickly, WhatsApp’s scale and speed have turned it into a vehicle for dangerous content. Messages that promote hate, distort the truth, call for violence, and seek to manipulate recipients move at unprecedented velocity and reach an extraordinary audience.

This is not theoretical: WhatsApp messages have incited a series of lynchings in India, swayed an election in Brazil, and sparked riots in Indonesia. The platform has also become a breeding ground for white supremacy. And the provenance of such messages is not only unverifiable; in many cases they have been traced to organized purveyors with dark campaign agendas.

WhatsApp and its parent company, Facebook, have been slow to respond to these and other abuses. In desperation, governments are resorting to the most amateur of IT fixes: flicking the switch off. But censorship is not justice, and the platform is still on the hook. WhatsApp can and must do more to address the dangerous behavior that its platform is enabling. 

Here’s how: 

First, to be clear, WhatsApp needs to stay strong on encryption. We live in an age of mass state-led and corporate surveillance. Encryption is critical to protect our human rights online. WhatsApp needs to stay the course, especially in the face of law enforcement demanding “backdoor” access to these services and spyware vendors like the NSO Group thwarting encryption to target vulnerable groups. 

Second, WhatsApp needs to make more product changes to stop its service from being so easily exploited. Like Facebook, WhatsApp was designed with minimal “friction” to maximize growth, creating a bonanza for bad actors. A recent study in the MIT Technology Review suggests that adding responsible limits, minor inconveniences, and slower transactions significantly slows the spread of disinformation. In this area, WhatsApp can go much further than its belated efforts to tweak group settings and forwarding limits. For example, the platform can cap spam and make all groups opt-in by default. Spamming the masses and adding hundreds of users to groups without their permission is antithetical to its privacy principles.
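The kind of friction described above can be sketched in a few lines. This is a hypothetical illustration, not WhatsApp’s actual implementation: a per-send forwarding cap (the limit value here is an assumption) and groups that require an explicit invitation to be accepted before a user becomes a member.

```python
# Hypothetical sketch of "friction" measures: a forwarding cap and
# opt-in groups. Not WhatsApp's real code; names and limits are assumed.

FORWARD_LIMIT = 5  # assumed per-send recipient cap for forwarded messages


def can_forward(recipients):
    """Reject a forward whose recipient list exceeds the cap."""
    return len(recipients) <= FORWARD_LIMIT


class Group:
    """Opt-in group: users are invited, never silently added."""

    def __init__(self, name):
        self.name = name
        self.members = set()
        self.pending_invites = set()

    def invite(self, user):
        # An invitation only creates a pending request; it does not
        # add the user to the group.
        if user not in self.members:
            self.pending_invites.add(user)

    def accept(self, user):
        # Membership requires the user's own consent.
        if user in self.pending_invites:
            self.pending_invites.remove(user)
            self.members.add(user)
            return True
        return False  # joining without an invite is refused
```

The design point is that consent and rate limits are enforced structurally, in the client, rather than left to user vigilance.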

Third, WhatsApp needs genuine innovation to address the most dangerous content and behavior on its platform. Facebook has developed various “trust and safety” systems to try to manage these issues, including automatic detection of pornography, child abuse, and inauthentic behavior, and initiatives to debunk and downrate false stories. Citing its end-to-end encryption, WhatsApp likes to say, “What content?” But as experts at a recent roundtable discussion at Stanford’s Internet Observatory suggested, there may be ways to develop safety tools to run automatically on users’ devices while preserving their privacy. Innovation along these lines might enable encryption to be maintained while tackling some of the most problematic content. 
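One shape such on-device safety tooling could take, purely as an illustrative sketch and not a real WhatsApp feature: the client hashes content locally and compares it against a locally stored list of hashes of known abusive material, so message content itself never leaves the end-to-end encrypted channel. (Real systems would use perceptual rather than exact hashes; the blocklist source here is assumed.)

```python
# Illustrative sketch of a privacy-preserving, on-device safety check.
# Content is hashed locally; only the hash is compared, and only against
# a list already stored on the device. Assumption: the app ships a
# periodically updated set of hashes of known abusive content.

import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example of known abusive content").hexdigest(),
}


def check_on_device(message_bytes):
    """Return True if the message matches a known-bad hash.

    The check happens entirely on the user's device; the plaintext is
    never sent anywhere, preserving end-to-end encryption.
    """
    digest = hashlib.sha256(message_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

Exact hashing is trivially evaded by changing a single byte; production systems would need robust perceptual hashing and careful governance of the blocklist, which is exactly where the stakeholder engagement discussed below comes in.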

Fourth, WhatsApp needs to dramatically increase its engagement with stakeholders. It’s not possible to understand and solve these problems from inside the Silicon Valley bubble. WhatsApp needs to connect and collaborate with child protection organizations, human rights defenders, minority groups, and other affected users as well as researchers and academics to understand the dynamics of dangerous content and test better ways to address it. This will require much more openness, transparency, and humility. In return, WhatsApp will gain a deeper understanding of harms, access better research, and source innovative solutions from a wider pool. 

Despite the damage that WhatsApp is enabling, so far the platform has largely escaped scrutiny. Mark Zuckerberg hasn’t had a “WhatsApp Myanmar moment” yet; he hasn’t been grilled by Congress about the trail of deaths and democratic distortions that WhatsApp enables. But it’s only a matter of time; last week’s Congressional hearings on encryption, while focused on the wrong solution, point to real risks. There’s growing concern in the US that white nationalists are increasingly using encrypted messaging to spread hate speech, and that the millions of pieces of child abuse content that Facebook reports every year would become invisible if Zuckerberg encrypts all messaging services. We predict that WhatsApp will be a powerful, targeted, and un-scrutinized misinformation tool in the 2020 elections. 

Legislators and regulators are paying much closer attention now—as they must. WhatsApp would be well-advised to spend less time on its new advertising and payment systems and more time fixing its current product.

David Madden is a senior advisor at Omidyar Network, founder of Phandeeyar, an innovation lab in Myanmar (Burma), and co-founder of Purpose, a digital strategy agency, and of the social movements Avaaz.org and GetUp.org. Anamitra Deb is senior director of the beneficial tech initiative at Omidyar Network, the philanthropic investment firm established by Pam and Pierre Omidyar, the founder of eBay.

3 Responses to “WhatsApp is a Threat to Society. Here’s How to Fix It.”

  1. Richard Lunsford says:

    Traditional telephone services have long enabled transmission of messages between criminals engaged in illegal activities. We don’t blame AT&T or Verizon when their networks are used to conduct drug deals.

    The article is an example of wanting to have your cake and eat it, too. On the one hand there is the desire for one’s own privacy, but others can only have privacy if their messages are acceptable according to 3rd party values, and they’re supposed to be screened for acceptability.

    Yet then I looked not long ago at an article on PCMagazine, another liberal-slanting publication online, and saw an article critical of facial recognition software. You know, that helps companies randomly screen crowds for known trouble-makers, criminals, etc…?

    So, facial recognition for known malefactors is bad, but burdening companies with screening my personal messages to friends is a moral mandate, because you just don’t know what I might be up to.

    Wow. Just ‘wow.’

  2. BOB says:

    Messages to large groups probably do not constitute a high percentage of WhatsApp traffic. It should be simple enough to limit the sizes of groups to, say, 20 or so. It would definitely be simple to incorporate an opt-in feature for group messages from people not in a receiver’s contact list.

    The content of messages between and among people who welcome senders and put them into their contacts list should continue to be encrypted and unscreened, as is the case with any private conversation. The article seems to concentrate on “white supremacists” and the like but not to be concerned with wildly liberal and dangerously communistic communications which are almost certainly on WhatsApp all the time.

  3. Michael D'Amico says:

    Facial recognition software has not been perfected and there is zero control over how a user works with it. It may have caught some criminals. But it still produces unacceptable levels of false IDs with all darker skinned races, but especially with black individuals. That’s simply not fair. It’s a denial of due process. In the United States, I would venture to guess facial recognition software is being developed solely by white male nerds in Palo Alto and Mountain View. No inherent bias, no identity politics going on there, right? And I chuckled when I read recently that Russia is heavily investing in and developing facial recognition s/w. There are no blacks in Russia.
