In the aftermath of 9/11, investigators, federal agencies, and Congress realized that the information we would have needed to detect and thwart the plot had been available. What was missing was a systemic ability to connect the dots, see patterns and view the big picture.
Since 9/11 the government has made great strides in sharing data among law enforcement, defense and intelligence organizations, but meanwhile, the world has moved on. The new battlefield is cyberspace. And much of the most critical data in cyberspace are controlled not by government, but by internet and social media companies.
The efforts that such companies, particularly Facebook and Twitter, are undertaking to battle orchestrated efforts to spread disinformation, hate, and extremism are admirable, but their approaches have a fatal flaw.
The platforms are each working in isolation, seeking out bad actors based on activity on their own platform, then removing them and the content they created. It is laudable that they want to halt the spread of these actors’ messages, but their approach is leading us down the same path that resulted in 9/11.
Sophisticated bad actors’ strategies are cross-platform. You may not even be able to recognize a bad actor if you are looking only at their posts on Facebook; no single platform can identify sophisticated adversaries by examining only its own data. Critical patterns emerge only when data from a wide range of sources are combined. Limiting the search to one (or even a few) sources is like trying to examine an elephant through a soda straw.
The 9/11 Commission Report emphasized that the critical tool to implement was a better system of information sharing. Government entities clearly heard and implemented this message. But 17 years later, we are at another inflection point of equal importance that requires partnership and cooperation between the public and private sectors.
In the recent hearings on social media in the House and the Senate, the focus was mainly on the past election and identifying fake content. What was missing was any specific proposal that could improve how we see patterns, gain insights and protect our citizens, the kind of idea that would let us make the next big leap.
We have an idea that is very simple, powerful, and easy to implement. It doesn’t require social media companies to do anything extraordinary. It does require an attitude of cooperation, a willingness on all sides to tone down the rhetoric and a desire to build positive partnerships.
The idea is to ask each social media channel that attracts bad actors to build, and make available to certain partners, a “bad actor API,” or application programming interface. Currently, when social media providers identify a bad actor’s account, they delete it and all the data with it. This makes it impossible for others to study these accounts’ behavior and learn from it. A bad actor API would allow third parties to access extensive public data about these wrongdoers for research and, ultimately, prevention.
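To make the idea concrete, here is a minimal sketch, in Python, of the kind of record such an API might return. Everything in it is hypothetical: the field names, the example endpoint and the record structure are our illustrative assumptions, not any platform’s actual design.

```python
# Hypothetical sketch of a "bad actor API" record. No such endpoint exists today;
# all field names and the example endpoint below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RemovedAccountRecord:
    """Public data a platform could retain about a removed bad-actor account."""
    platform: str                # e.g. "facebook" or "twitter"
    account_id: str              # platform-internal identifier
    removal_reason: str          # e.g. "coordinated inauthentic behavior"
    created_at: str              # ISO 8601 date the account was created
    removed_at: str              # ISO 8601 date the account was removed
    public_posts: List[str] = field(default_factory=list)  # public content only
    linked_urls: List[str] = field(default_factory=list)   # URLs the account spread

# A vetted research partner might fetch such records from a paginated endpoint,
# say GET /v1/removed-accounts?since=2018-01-01, and deserialize the JSON into
# objects like this one:
record = RemovedAccountRecord(
    platform="twitter",
    account_id="1234567890",
    removal_reason="coordinated inauthentic behavior",
    created_at="2017-11-02",
    removed_at="2018-08-15",
    public_posts=["Example public post text"],
    linked_urls=["http://example.com/planted-story"],
)
print(record.removal_reason)
```

The point is not this particular schema but the principle: removed-account data would survive deletion in a form that researchers can query.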
It’s not a new concept, since APIs are already routinely used by social media channels to share user information with third parties. They help advertisers build plans and help an array of partners understand what customers or potential customers may be doing. It’s a widely accepted way to learn together.
When we want to promote or sell something, we fully embrace the use of APIs and the data that come with them. For some reason, however, we don’t do this for bad actors. Instead, we applaud social media platforms for merely deleting accounts and information, which is then never seen again.
This information should be retained, and the companies should make the API available to third parties whose mission would be to combine these data with other data sources to identify patterns.
Data scientists will be able to see those patterns more quickly, and those patterns should help us understand behavioral signatures, potential plans of action and other significant information.
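To illustrate the kind of pattern that only combined data can reveal, here is a small, hypothetical sketch that groups removed accounts from different platforms by the URLs they spread. The grouping key and the two-platform threshold are assumptions chosen for illustration; a URL amplified by accounts on several platforms is one plausible signature of a coordinated campaign that no single platform could see alone.

```python
# Illustrative cross-platform analysis: cluster removed accounts by shared URLs.
# The grouping key and threshold are assumptions, not a proven detection method.
from collections import defaultdict

# Each record is a (platform, account_id, linked_urls) triple -- the same public
# fields sketched in the hypothetical API above, simplified to tuples here.
def coordinated_clusters(records, min_platforms=2):
    """Group accounts by the URLs they spread; keep URLs seen on several platforms."""
    by_url = defaultdict(list)
    for platform, account_id, urls in records:
        for url in urls:
            by_url[url].append((platform, account_id))
    # A URL amplified by accounts on multiple platforms is one plausible
    # signature of coordination -- invisible to any single platform alone.
    return {url: accts for url, accts in by_url.items()
            if len({p for p, _ in accts}) >= min_platforms}

records = [
    ("facebook", "fb-001", ["http://example.com/planted-story"]),
    ("twitter", "tw-042", ["http://example.com/planted-story"]),
    ("twitter", "tw-043", ["http://example.com/unrelated"]),
]
print(coordinated_clusters(records))
# {'http://example.com/planted-story': [('facebook', 'fb-001'), ('twitter', 'tw-042')]}
```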
If the public and private sectors are to accomplish this goal, both will need to pay more attention to the power of doing something right together.
Deleting accounts, today’s primary tool, is not the answer. If fake content reaches us for a few days and then is stopped, does that negate its impact? The answer is no. People have already been disinformed. The damage cannot be undone.
We don’t buy more Kleenex to treat the flu. We do research and develop vaccines. Society needs to build systems that enable us to act more like an R&D team. The two of us are professors for the U.S. State Department’s marketing college, where we teach how to counter disinformation and confront hate and extremism in our world. We can make much more progress in battling hate if we work as one team.
Instead of grandstanding at hearings, congresspeople should pull up the 9/11 Commission Report and read the section that discusses “a different way of organizing government to unify the many participants in the counterterrorism effort and their knowledge in a network-based information sharing system that transcends traditional government boundaries.”
Don’t let the fake news discussion divide us. Let it inspire us to team up, innovate together and build a more civil and safe society. We owe it to ourselves, and we owe it to the memories of our friends and colleagues who didn’t make it home on September 11, 2001.
Bob Pearson is co-author of the book Countering Hate and a Senior Advisor at W2O Group, a digitally oriented communications and marketing firm. Dr. Victoria Romero is chief scientist at Next Century, a technology consulting firm formed in response to 9/11. Bob and Victoria are professors for the U.S. State Department’s marketing college, which started in 2008 in an effort to bring private sector learning into the public sector.