Frances Haugen: I’ve Come Not to Bury Facebook, But to Save It

A change in leadership, employees with backgrounds in humanities, and a switch to a “slower, smaller” approach – this whistleblower darling is on a crusade to save lives by saving Facebook.

About 10,000 people were packed into the Altice Arena in Lisbon, Portugal on opening night of the Web Summit. It was one of the first tech gatherings this size since the outset of the pandemic. And Web Summit’s CEO, Paddy Cosgrave, never one to shy away from thorny issues, was going to have a field day with it.

The opening speaker was no CEO or tech titan, but instead a young, genuine, even unassuming blonde woman who looked like she had been plucked from an Iowa cornfield. And in an instant, Frances Haugen became the trending topic and darling of Web Summit.

Haugen (who really is from Iowa) has been thrust into the spotlight as the Facebook whistleblower. The former Facebook employee left her job as a product manager in May, taking thousands of internal documents with her.

And then she wrestled with how to do the right thing with the information she had, she told the audience. It was information that paints a sad picture of a company that has repeatedly valued profits over ethical business, sown discord, and created unhealthy dependencies.

On stage, to her right, sat Libby Liu, the CEO of Whistleblower Aid, a powerful organization that chose to represent Haugen only after a careful and laborious vetting process that looked at everything from her motives to her evidence. Now, with her as their client, the group coaches her, covers her legal fees, and even provides bodyguards and protection. Liu said Whistleblower Aid looks for a “strong sense of conscience, and an inability to tolerate wrong” before taking on a client. “They should not need to be torn between their careers and what’s right.”

Haugen is a whistleblower for our time. She is the “every girl” who worked hard and had the dream of working at Facebook. Within moments on stage she told the audience that her mother is a priest, one she turned to often for advice and counsel before deciding to come forward.  Her mother, she said, told her that “every human being deserves the dignity of the truth.”

“There’s been a pattern of behavior at Facebook where they have consistently prioritized their own profits over our general safety,” said Haugen. Facebook, she said, “has tried to reduce the argument to a false choice between censorship and free speech.” But in fact, she explained, it’s the algorithmic decisions Facebook runs its business on that amplify political discord, spread misinformation, and contribute to addictive, unhealthy behaviors. The engagements Facebook creates are dangerous because they are based on a megaphone effect in which the most volatile posts are the most amplified.

Haugen has no shortage of remedies for what ails Facebook. She gives props to Twitter and Google, which she says more deftly and more transparently manage content moderation. “We need to make it slower and smaller, not bigger and faster,” she said of Facebook, explaining that smaller groups of family and friends are easier to moderate. She argued that computer scientists need to have a moral compass, and said that happens best when they have a background in the humanities as well as coding. And certainly algorithms can be improved, to change the dynamic of what information is presented to whom.

Finally, she said, “I’m a proponent of corporate governance,” reminding the audience that Mark Zuckerberg is the Chairman, CEO, and Founder of the company, holding well over half of the voting shares.  Things are “unlikely to change if that system remains,” she said.

“Maybe it’s a chance for someone else to take the reins. Facebook would be stronger with someone who’s willing to focus on safety,” she went on. She rued the fact that Facebook just announced hiring 10,000 engineers to build out its new Meta product, when that many engineers could have been deployed to fix Facebook’s obvious current problems. Though the company has publicly pushed back, declaring that her documents were a “cherry-picked” sample taken out of context, Haugen replied that she would welcome the company releasing more documents to the public.

Facebook Chief Product Officer Chris Cox kept his chin up in a virtual appearance at Web Summit after the company’s nemesis Frances Haugen had spoken.

At Web Summit, Facebook had ample opportunity to respond, but after Haugen’s authentically empathetic and smart delivery, its messages rang a bit hollow. Nick Clegg, who appeared at the conference via livestream, said, “There are always two sides to a story,” noting that Facebook’s artificial intelligence systems study the prevalence of hate speech on its platforms. “Out of every 10,000 views of content, five might include hate speech – that’s 0.05%,” he said.

Chris Cox, one of Facebook’s earliest employees, now head of product and a close friend of Zuckerberg, also appeared on stage via livestream, ostensibly to showcase the new Meta agenda. But he offered these words in response to questions about Haugen’s comments: “It’s been a tough period for the company.” He added that he welcomed “difficult conversations” around Facebook’s future.

The media, lawmakers and the Web Summit audience were enamored with Haugen and her message, which came peppered with wonderfully genuine, demurring comments like, “I don’t like to be the center of attention.”

But she came across as a force of nature when she said, “Facebook would be stronger with someone who’s willing to focus on safety.” She is committed to a crusade she believes is fundamental to saving lives. What keeps her from being afraid, she said, is that “I genuinely believe that there are a million, or maybe 10 million lives on the line in the next 20 years, and compared to that, nothing really feels like a real consequence.”

“I have faith that Facebook will change.”


Lilliputians Will Tie Up Gulliver in the Metaverse

The metaverse was supposed to be open, not owned by anyone, and interoperable between all platforms. Does that sound like Facebook to you? 

Audacious. That is the only word to explain Mark Zuckerberg’s play to become the self-anointed king of the metaverse. This week Facebook announced the formation of a new parent company, Meta, and a full-steam-ahead approach to so-called Web 3.0. He seems to want to create a next-gen, distributed and ubiquitously immersive online world, where you will not just passively view the web experience, but will enter and interact with it. 

The shift of course is not dissimilar from Google’s new name Alphabet, coined in 2015 to allow it to move beyond the confines of a mere search engine into new realms. It’s a typical play for businesses that evolve beyond a single brand. But like Moses on the mountain, it’s likely Facebook will look down on the promised land of the metaverse (billions of dollars later) yet never dominate it the way that it has with social media.

This move evokes many mis-timed moves in Facebook’s past–think cryptocurrencies (several times), plays to dominate news, repeated privacy blunders and repositionings, and heedless application-platform moves that enabled data leaks like Cambridge Analytica. In the rebranding business, timing is everything, and Zuckerberg seems surprisingly tone deaf to the cadence. Big surprise.

During his Facebook Horizon announcement this week he pulled out all the charm he could summon, from playing a surfing game and making sunscreen jokes to talking about Facebook’s commitment to partnership. He says he wants to make the metaverse more open and help the burgeoning creator economy.

The 80-minute video demonstration/celebration of what Facebook’s metaverse might look like was a mixture of technically exciting suggestions and cringeworthy vignettes. Overall, Zuckerberg, who aptly starred in the video, seemed like a guy who’d drunk the metaverse Kool-Aid and has the money and resources to define what gets built. But the announcement had a used-car-salesman feel to it. “Gee, we sold you a lemon–so sorry. But look what we’re doing now!” He got high marks for tech prowess, low marks for authenticity.

Facebook is not cleaning up its shit in the real world (the real social media world of today, where it has made such a mess) but is instead trying to move on to the next world without a care in the world. La di da.

What could the company be thinking as it faces the wrath of almost uncountable government investigations in the U.S. and abroad?  Does it expect it can just abandon the sinking ship it’s been running and head to a new, shiny life raft?  How does its track record at Facebook prepare it for the even bigger responsibilities of a new type of more immersive, complex, and integrated virtual interaction?  

Facebook is Late to the Party

Zuckerberg is far from stupid. But he is not alone, nor is he early. Companies like Epic Games, maker of Fortnite, have already invested millions, have a direct line to the new generation, and don’t have such a tarnished reputation. Niantic and Snap, among other companies, are creating pieces of the metaverse that blend the virtual and physical worlds in ways that are refreshing and less dystopian. And others launch every day.

Money Doesn’t Buy Agility

A few months ago I wrote early looks at NFTs and the metaverse for this column. Who knew the metaverse frenzy would soon top the tulip and dot-com manias, with so much stupid money being thrown at it? Estimates say the metaverse could be worth upwards of $82 billion by the end of the decade. Facebook is about to hire 10,000 workers to build its own, and plans to spend more than $10 billion on AR, VR, and metaverse tech this year alone. (As it spends about $5 billion annually on safety and security in its wobbly existing properties.) The question of who wins this battle–the pioneers, or the wealthy johnny-come-latelies–will be the David/Goliath story of the new internet.

Disruptors Tend Not to Be the Old Guys, Right?

We’ve seen how disruption happens. It’s the Teslas, the Stripes, the Airbnbs, those nimble enough to see the holes, connect the dots, and move quickly but with focus. That was the Facebook of yore. It’s got too much baggage to move quickly now.

Moving Away from the Old Model

Facebook won’t be able to get away from its advertising business model. My suspicion is its metaverse will be a shopping mall, with advertisers inventing a next generation of activations and ads. But so many young metaverse companies are developing more interesting schemes.

Ties to Hardware and Facebook

We don’t know all the details yet, but for the moment Facebook Horizon (the AR/VR/XR portion of Meta) is too tied to Facebook’s hardware and Facebook’s past approach. Zuckerberg alluded to deals and openness, but for the moment you need a Facebook Oculus headset and a Facebook account to be in the club.

The underlying tenet of the metaverse was supposed to have been that it would be open, not owned by anyone, and interoperable between all platforms. Does that sound like Facebook to you?

The Lilliputians will tie up Gulliver.


Facebook, the Imperialist Media Company

Techonomy’s Kirkpatrick continues to be appalled by developments at the company he wrote a book about back when. New name? Lipstick on a pig. This is a renegade media company, period, with horrific impact on the world.

The enormous set of unseemly revelations known as the Facebook Papers shows, among other things, that Facebook is a global media company that does not bear its associated responsibilities. (A full list of the now well over 100 articles, published in over a score of outlets, is here.)

A media company selects programming and presents it to viewers. Just about every one of the Facebook Papers stories is about how the company makes decisions about what content to show and to whom. Yet traditional media companies, unlike Facebook, are required by law and tradition to bear responsibility for the content they present viewers. Internet companies, by contrast, got an exemption in the now-notorious Section 230 of the Communications Decency Act of 1996, which says content created by users is not the responsibility of web platforms. Facebook has applied that logic of non-responsibility to speech and content distributed all over the world.

But Facebook doesn’t just provide a neutral window for content created by friends and organizations you follow. It curates that content and shows it to you selectively, in the order it chooses – in a way that maximizes revenue. What it shows you is its willful decision.

Many of the unseemly revelations in the Facebook Papers flow directly or indirectly from how the company’s algorithms work. It is in the software that decisions about what content is shown to one viewer versus another are actually made on Facebook. Such software, like human editors, makes subjective decisions. And the algorithms are themselves governed by human decision-making, either by the programmers who created the algorithm or by later decisions made by content moderators, reviewers, or public policy executives. As the Washington Post reported in one recent story, “The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities.”

It is in the poorer parts of the world where the company’s irresponsibility and inattention have the worst consequences. (Facebook revealed in its quarterly report this week that 3.5 billion people globally—nearly half the population of the planet—use its products.) Whistleblower Frances Haugen herself says she was primarily motivated by worries about FB harming the “global south.” Her worries are justified. Here is a detailed and hair-raising accounting of Facebook’s misdeeds in poorer parts of the world, as revealed in the Papers, from the great site Rest of World.

Many of the problems FB faces are presented by its more forgiving journalistic critics, like Casey Newton, as consequences of its sad and unfortunately vast size, with the implication that it thus cannot be expected really to deal properly with these myriad issues. “There is a pervasive sense that…no one [in the company] is entirely sure what’s going on,” Newton wrote Monday, Oct. 25 in his Platformer newsletter, a little sympathetically.  

But I have watched Facebook over the past decade and a half make decision after decision in the interests of growth that could have been predicted to lead to the problems we see today, had any thought been given to that possibility. No thought was given, because growth, and the associated power and profit that flowed from it, was by far the main thing Facebook cared about.

Take for example the rapid expansion over the past decade into country after country, in language after language. Facebook did this enthusiastically, and heedlessly. There was in most cases no thought given to the moderation challenges that would inevitably ensue. But the kinds of content, privacy, and political missteps Facebook has been accused of in the United States are equally a risk, and a likelihood, in every other country.

To this day there is essentially no oversight given by Facebook to most content or user speech in distant countries. For this, in my opinion, Facebook is absolutely culpable. I will repeat a statistic that I cannot stop talking about from the Facebook documents: the company only allocates 13% of its moderators’ time for combatting misinformation outside the United States, even though 90% of users live elsewhere.

Let’s call this what it is: a neo-imperialist mindset that values what happens in the U.S. more highly than what happens in other countries. This has the consequence of causing less damage in the U.S. and a few selected European allies while more seriously weakening societies elsewhere. These neglected countries–where literally billions of the company’s users live–are the same ones the company blundered its way into as it sought growth and global hegemony. It’s not that different from how, for decades, the U.S. methodically undermined and overthrew governments perceived unfriendly to U.S. interests, especially starting in the 1950s, regardless of the consequences for people in those countries.

It is regularly said by Facebook spokespeople that right-wing discord and civic strife in the U.S., India, and elsewhere did not begin on Facebook. That is an idiotic truism. The question is whether Facebook’s actions, oversights, and willful neglect have worsened it. The Facebook Papers offer example after example that the answer is yes.

Facebook, had it been led by people who had an empathetic and ethical concern for its impact, could have grown more deliberately. It could have taken care to only enter a country or support a new language if it had the resources to properly govern its service there. But that would have slowed down the profit engine, and might not have made Mark Zuckerberg the richest 37-year-old in human history.

In the avalanche of material emerging from the Facebook Papers, it’s easy to get lost, or overwhelmed. But another quote from a different Washington Post story offers clarity: “The documents…provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that in many instances, executives had declined to implement those steps.”

 

Note: The company also changed its name this week, to the arrogant-sounding “Meta.” How apt that Facebook would trumpet its move towards a fuller escape from reality for its users just as COP26 was about to begin, underscoring how devastated the actual world those users live in will be. The central idea conveyed by the company’s new direction–that life should inexorably become more and more virtual–feels wrong. So with things burning down around us, is the idea that we can quietly and guiltlessly escape into a Meta virtual world where the great Zuckerberg guides us to digital bliss?


The Name Facebook Needs to Change: Zuckerberg

Facebook is reported to be considering a name change. But its image will not be remedied by anything other than fundamental reforms.

Can you imagine any normal company keeping its CEO if it faced, like Facebook does, an endless litany of criticism for failures, mistakes, harms, omissions, distortions, lies, and delusions? A normal company would not. A normal company would replace its leadership.

Facebook, of course, cannot. Facebook, of course, has no governance. Facebook is a dictatorship of one man, who controls 58% of the company’s voting shares. Yet his judgment appears worse with every passing day. It is hard to recall another big company facing this many scandals, this relentlessly.

I would not have expected myself to say it, but Facebook would be much better off without its current CEO. Sadly for the world, the prospect of such a change happening is almost nil.

We’ve reached a cruel irony. The world faces a blight of autocrats, leading country after country into xenophobic, irrational, often violent ultranationalism, just as one company has come to dominate the world’s communication networks, itself ruled by an irrational autocrat who is incapable of subjecting himself to any meaningful scrutiny.

And the further, deeply painful, irony is this: that exact company’s conscious and willful empowerment of those autocrats and their political parties is a significant factor in their rise. That is true around the world. A combination of several factors has played a key role. First, Facebook’s lenient “cross check” system, as revealed by the Wall Street Journal in its Facebook Files series, grants political leaders in almost every country a pass to say pretty much anything they want. (The company’s own Oversight Board this week said company managers had essentially lied to the board about how that system worked, as they investigated how Facebook treated Donald Trump.)

But that’s not all. Closely related to the lenient cross check system is the fact that the company has multiple incentives to keep government leaders happy, no matter what they post and no matter how bad they are. As this year’s first Facebook whistleblower, Sophie Zhang, put it this week in testimony for a committee of the British Parliament, “The people [at Facebook] charged with making important decisions about what the rules are and how the rules are getting enforced are the same as those charged with keeping good relationships with local politicians and governmental members.” She’s referring in large part to the power of Joel Kaplan, vice president for global public policy. (He’s a Republican known best for his friendship with conservative Supreme Court Justice Brett Kavanaugh, behind whom Kaplan sat in the controversial hearings that led to Kavanaugh’s appointment to the court.)

Just Friday, Oct. 22, as this column was going to press, yet another new whistleblower complaint emerged in the Washington Post, alleging again that the company willfully disregarded warnings about the pernicious impact of its moderation policy failures, especially in the U.S. This still-unnamed person apparently has filed a sworn affidavit with the U.S. Securities and Exchange Commission alleging, among other things, that company executives deliberately misled investors by playing down risks from failures of governance. The Post says the affidavit includes this quote, made by a company communications official during the 2017 controversies over Russian government manipulation of voters on Facebook during the 2016 election: “It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

I continue to think that of all Facebook’s errors, both willful and inadvertent, the worst, by far, is that its policies regarding speech have enabled forces of division and political exploitation in country after country to spew lies, engender hate, and create social division, in a known pattern that ends up assisting evil autocrats and their allies to gain and hold political power. This is the criticism most stridently made by Zhang, whose job at Facebook was to try to combat “coordinated inauthentic behavior.”

In its coverage of her testimony this week in Britain, Time describes what happened when Zhang tried to take action against manipulative and dishonest networks of pro-government posts on Facebook in undemocratic countries like Honduras: “When Zhang raised her findings with senior management, she was told by Guy Rosen, Facebook’s vice president of integrity, that threat intelligence would only prioritize campaigns in ‘the US/Western Europe and foreign adversaries such as Russia/Iran/etc.’ The company did not have ‘unlimited resources,’ he said.”

This approach is, in my view, a crime. Facebook is failing in an egregious way to take responsibility for its social and media influence in numerous countries where it is, almost always, the de facto primary media. (And this has been long known. Here’s an article I wrote about it back in 2018.) As for not having “unlimited resources,” that may be technically true, but Facebook can afford to do a lot more. Its net profits are in the range of $40 billion per year, and its CEO, even with the stock down, is still worth $127 billion.

I continue to believe the biggest smoking gun revealed so far in the numerous revelations from the most prominent recent Facebook whistleblower, Frances Haugen, is that only 13% of the company’s efforts against misinformation and disinformation in 2020 were conducted in languages other than English. Facebook is approaching 3 billion users around the world, who speak hundreds of different languages. That shameful statistic and related issues may emerge even more pointedly in revelations in the coming weeks, I hear. I hope so.

That a company could allow this sort of social harm to emerge is inexcusable, no matter what other good stuff it might make possible. This kind of harm is more likely when one person has unlimited power to make decisions with no oversight. Unfortunately, Facebook’s board of directors serves entirely at Zuckerberg’s whim, so he will go only if he decides to go.

The company was reported this week to be considering a name change. It is doing that, at least in part, to try to repair its image. But that image will not be remedied by anything other than fundamental reforms. And one fundamental reform is most needed.

Zuckerberg should no longer serve as Facebook’s autocrat.

 

David Kirkpatrick is Techonomy’s founder, and published The Facebook Effect in 2010.


Facebook’s Tragic Obsession With Image

Facebook executives and its all-powerful CEO feel so self-righteous that they dismiss all criticism outright, including this week. It’s tragic, but it’s possible to explain.

How long must we wring our hands over the errors and arrogance of one powerful company? Sadly, probably a lot longer. That’s largely because, whenever it’s confronted with criticism, that company treats it as a mere image problem to be managed with public relations rather than an opportunity for introspection or engagement.

The past week has brought near-endless discussion about the revelations of Facebook whistleblower Frances Haugen, as revealed in the Wall Street Journal, on 60 Minutes, and in her testimony to Congress. (My summary of the Journal series and its import.) Yet when CEO Mark Zuckerberg and his confederates try to explain away Haugen’s criticism, their statements are routinely laden with self-justification, self-righteousness, lack of humility, and myopia.

For all the fascinating ins and outs of the documents she took, Haugen’s own eloquent statements, and her proposed policy solutions, it’s important to keep it all in context. Long before she ever came along, we knew this company had done much that’s irresponsible, over an extended period. We knew it had inadequate governance. None of Haugen’s revelations come as a surprise to anyone who has closely followed this renegade company, led by its unrestrained and reckless all-powerful leader. We’ve heard it all before. (For just one of innumerable examples, in late 2018 I wrote a piece for Techonomy called “Facing Facebook’s Failure.”)

Some might say, as Stephen Colbert did this week, “Wait a minute! Did you tell me a corporation chose money over the safety of consumers? That is so disturbing!” (About 5 minutes in.) But ruefully funny though that may be, many of us, and many in governments around the world, believe this company’s sociocultural and political role is so great that it has unique responsibilities.

Facebook and Zuckerberg fail to take their responsibilities seriously enough, though ironically, their reasons for doing so are themselves grounded in a sense of exceptionalism. We–its legions of critics everywhere–see a crying need for this company to rise to the occasion of its centrality in global communications and commerce with better governance and oversight. But Facebook’s leaders see that centrality itself as justifying a general disregard for what they see as trivial and minor blemishes on a near-heroic effort to connect the world. Far from thinking themselves in error or even, as some say, criminally liable for harms, they believe they do not get sufficient credit for all the good they do.

Endless controversies have enveloped this company from its inception, but the turn toward widespread condemnation of Facebook began with the election of Donald Trump. The months and years following saw a progressive series of revelations about the social network’s role in his victory.

A quick set of reminders: Cambridge Analytica got hold of illicitly-acquired personal data about tens of millions of American Facebook users. That data was used by the Trump campaign and others to target Facebook advertising to potential voters with devastating exactitude, by both location and political disposition. The data was further targeted, it appears, with help from detailed electoral-district polling data stolen by Russian hackers from the Democratic National Committee. That same data was almost certainly used by Russian agents to precisely target pro-Trump ads they themselves purchased on Facebook, with rubles in some cases. Similar illicit political machinations have happened with far less press coverage in scores of countries around the world. Finally, Facebook embedded employees in the Trump campaign to assist it in spending many tens of millions on Facebook ads. (This same kind of assistance was also given around that time to the neo-fascist Alternative for Germany (AfD) party, helping enable its rise.)

In all this are multiple errors of commission and omission by Facebook. It failed to govern its data, enabling the Cambridge Analytica hack. It failed to govern its advertising, enabling the Russian electoral interference. It acted amorally in several domains, including the design of its algorithms and in its advertising operations. For lots more damning detail about all these failures, read the recent book An Ugly Truth: Inside Facebook’s Battle for Domination, by Sheera Frenkel and Cecilia Kang.

What can easily lead an observer to despair is the near-endless series of other areas in which Facebook has made similar errors and oversights. The errors have emerged from its headlong and heedless search for global growth, regardless of region, language, or country.

Much has been made this week about Instagram’s pernicious effects on girls’ mental health, and it’s a serious problem. But the most damning piece of data in the Haugen documents, in my opinion, is that only 13% of Facebook’s efforts to combat misinformation and disinformation in 2020 were conducted in languages other than English, even though 90% of users are outside the US, overwhelmingly speaking other languages. People in most countries function on Facebook with literally no guardrails. This failure of oversight has contributed, by all evidence, to the rise of dishonest autocrats in country after country, including Duterte in the Philippines, Erdogan in Turkey, Orban in Hungary, and Bolsonaro in Brazil. In all those countries, Facebook’s services serve as the primary media, and corrupt fear-mongering politicians and political parties concentrate a huge share of their communications efforts on the platform, using both paid and unpaid messages.

In the many statements and appearances by company executives this week, none addressed this shameful statistic. And with the press obsessed over teenaged girls’ reaction to Instagram, they were not asked to. In Mark Zuckerberg’s own disingenuous and misleading Facebook post, he said a “false picture” of the company had been painted by the articles and testimony. Unless he can address issues like that statistic and show it to be false, he is lying.

For all the controversies aired in recent days, another very damning report, albeit unverified, emerged about two weeks ago. Bloomberg reporter Max Chafkin says, in his new biography of Peter Thiel, that in an early 2019 meeting with Trump and Jared Kushner at the White House, Zuckerberg agreed to continue his policy of not fact-checking political speech if Trump would hold off on regulating Facebook. Zuckerberg of course denies such a deal occurred.  But it would be in keeping with his self-righteous and self-centered governance of his company.

Always keep in mind that as he mouths high-minded defenses of his own behavior, Zuckerberg has through that behavior become the richest person his age in human history. He is now the sixth richest in the world, according to Bloomberg’s real-time rankings, with $123 billion, aged 37. As he moved fast and broke things, he got richer than anyone, ever. No wonder we feel disinclined to give him the benefit of the doubt. Also, in case you forgot, he absolutely and completely controls the company, in a manner also unprecedented for any organization this size in the history of business. This is the practical reason why he and other executives can so freely disregard criticism. He personally controls 58% of voting shares and cannot be overruled. When a group of directors began proposing significant steps towards governance of speech on Facebook about two years ago, he unceremoniously fired them all from the board.

The damning information could go on endlessly, and will, until Zuckerberg is forced to change. But for an eloquent analysis of the company’s ineffective image-centric crisis response this week, read this trenchant thread from political consultant Steve Schmidt, analyzing one of the only public appearances by a senior Facebook executive in the wake of Haugen’s revelations. As he calls it–“a blizzard of words and empty assertions delivered with confidence.”

David Kirkpatrick is founder of Techonomy and author of The Facebook Effect.


New Taxes Could Help Manage Big Tech

We have to find ways to level the playing field or this small group of vastly-powerful companies will continue to amass disproportionate wealth even as they wreak social havoc. Without major systemic changes society will remain the victim of their ever-growing power.

Historically, technology has undone the economic power it creates. IBM is a good example, still here, but living off its past and fumbling the future. Yet for the current generation of tech giants, this time may be different. It is hard to see what displaces the huge installed base that exists today for these companies. Artificial intelligence, mirror worlds, quantum computing—these will all play an important part in our future, but I am not sure they create a discontinuity. The current tech giants are already the leaders in these, across the board. So we must find ways to manage and restrain their power.

The technology giants benefit enormously from two factors.  First, they have been able to largely monopolize the market for top technology talent by being able to offer above-market but below-value compensation that others cannot match, aside from potential startup unicorns.  The vehicle for this has been equity-based compensation, aided and abetted by favorable tax treatment from the government along with short-term focused equity markets.  Second, their key raw material, for the most part, is aggregated personally-identifiable information (PII), which they collect by offering free services.  Again, the free services are above market but below value, as there is little value in one person’s data but enormous wealth in aggregating the data of many. 

Thus their dominance is the result of a combination of things. There is a virtuous circle here—for them: top talent yields good business results, which yield rising stock prices, which yield tax advantages and high compensation for employees.

Yet we have to find ways to level the playing field. Otherwise, this small group of vastly powerful companies will continue to amass disproportionate wealth even as they periodically wreak social havoc. Legal constraints like antitrust and conventional regulation just do not seem to work.  Taxes do.  We have what we have because corporate leaders optimize within the rules they are given. So we need new rules, and changing how and what we tax may be the most promising approach to genuinely-impactful positive progress.

To level the playing field, I propose five interlocking policy changes. Four of these tax changes are designed to alter behavior more than to raise revenue.  The last one alters how equity compensation awards are taxed, effectively raising taxes on the tech giants but lowering them for their employees.

1. An Excise Tax on Corporate Market Value

Here’s a strawman for this: a tax of 1% on market capitalizations over $100 billion, rising by 1% for each incremental $100 billion.  Thus, a corporation valued at $300 billion would pay $3 billion annually in tax.  This tax simply reflects the principle that diversity is good and increases economic robustness, but it lets the market decide what to value and who to punish. Competition is good for society and the economy.  The levels and amounts of such a tax will be a political decision.  This applies to all companies.  The social costs of excessive market and societal power are the same regardless of industry.
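For readers who want to see the arithmetic, here is a minimal Python sketch of how such a levy could be computed. It assumes, as my reading rather than anything the proposal spells out, that the 1% increments work like marginal brackets of $100 billion each; under that reading, the $300 billion company above owes exactly the $3 billion cited.

def market_cap_excise_tax(market_cap_bn, bracket_bn=100.0):
    # Annual excise tax in billions of dollars, under a marginal-bracket reading:
    # the first $100B of market cap is exempt, the next $100B is taxed at 1%,
    # the $100B after that at 2%, and so on.
    tax, rate, lower = 0.0, 0.01, bracket_bn
    while market_cap_bn > lower:
        taxable = min(market_cap_bn - lower, bracket_bn)
        tax += taxable * rate
        rate += 0.01
        lower += bracket_bn
    return tax

# The article's example: a $300B company pays $1B on its second $100B
# plus $2B on its third $100B, or $3B per year.
print(market_cap_excise_tax(300))  # -> 3.0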

2.  An Excise Tax on Personally Identifiable Information

PII is the core value for the tech giants, even more than their software. Just as we tax oil companies for the oil they extract from the ground, we need to tax the tech giants on the PII they collect. Since no individual’s data is very valuable on its own, but the value of large aggregated groups of such data is high, having people charge for use of their own data is not realistic. And since the exact value of such data is impossible to assess accurately, despite its obvious worth, the tax could instead be based on proxy metrics such as petabytes of PII accessed per month. The tax should be calibrated to collect, say, an annual amount equal to 20% of the profits of Facebook. Again, the level is a political decision. It could be as low as 5% or as high as 50% or even higher. There are ways for Facebook and similar companies to avoid aggregating PII but still conduct their business.
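Purely as a back-of-the-envelope illustration of that calibration, here is a short Python sketch. The roughly $40 billion annual profit figure appears elsewhere in these pages, and the 20% target is the one suggested above; the petabyte volume is a made-up placeholder, since no real usage figure is given.

# Hypothetical illustration only: back out a per-petabyte rate from a revenue target.
annual_profit_bn = 40.0              # Facebook's rough annual net profit, in $ billions
target_share = 0.20                  # collect an amount equal to 20% of that profit
assumed_pii_pb_per_year = 5_000_000  # petabytes of PII accessed per year (placeholder)

target_revenue_dollars = annual_profit_bn * 1e9 * target_share        # $8 billion
rate_per_petabyte = target_revenue_dollars / assumed_pii_pb_per_year  # dollars per PB
print(f"Implied excise rate: ${rate_per_petabyte:,.0f} per petabyte accessed")  # ~$1,600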

3.  An Excise Tax on Stock Trades

This is called a “Tobin tax” after its initial developer, Nobel Prize winner James Tobin. Essentially, it presumes that capitalism is too efficient and overweights the present; such a tax offsets this overweighting. It would also reduce the ability of managements to spike stock prices for their own advantage and focus our capital markets more on the longer term. Make it a low tax, maybe 0.5%, but even that would help refocus the market on economic rather than trading value. The goal is to change behaviors, not raise revenue.

4.  An Excise Tax on Stock Repurchases

Corporations’ repurchases of their own stock are often motivated by the desire to increase executive pay packages, which are tied to the price of the stock. So it is no surprise that corporate managements favor them. There are some legitimate purposes for such repurchases, so the tax might be set relatively low, something like 5%. Again, the intent is to drive behavior, not raise revenue.

5.  Tax Treatment of Equity Compensation

The tax code encourages companies to grant stock options by giving them a tax deduction for the actual amount received by employees after a stock has appreciated, even though companies do not include that amount in their reported earnings. Employees pay ordinary income tax on that amount. We should tax equity compensation for employees the same way we tax carried interest for venture capitalists, at capital gains rates. But we should allow corporations to deduct only the accounting cost of these awards, or maybe a multiple of that. No one would have foreseen the huge gaps we have seen between accounting cost and eventual value. Today this is a huge tax advantage for the tech giants.

Facebook and others did no wrong.  They did what our laws and courts ask them to do: maximize profits.  Maybe they made business judgments that were short-term biased, but our entire capital market is focused that way.  All these taxes should be phased in over time.  The biggest challenge to any policy change is the enormous amount of wealth their equity represents, with all the political pressures that implies. One consequence might be that we’d have more, smaller, companies. But letting the giants divide into multiple companies may ultimately be very beneficial to their shareholders.

If we want a different outcome, we must set different rules.  These tax ideas are but a start. The political reality may be that none of these are feasible, or perhaps only some.  But without major systemic change in how we give companies incentives, society will remain the victim of their ever-growing power.


Facebook “Has Known This Forever”

Facebook is soft-pedaling its own internal research about the effects of Instagram, and underestimating the severity of eating disorders, an expert says.

This article was originally published at Observer.com, where Meg Marco is editor-in-chief.

 

In 1995, 15 years before founders Kevin Systrom and Mike Krieger would obtain seed funding for the app that would become Instagram, a meta study of the mortality of anorexia nervosa was published in the American Journal of Psychiatry. The findings were alarming. The study showed that the mortality rate associated with anorexia nervosa was more than 12 times higher than the annual death rate for females 15-24 years old in the general population, and the risk of suicide more than 200 times higher. In the decades that followed, more research was conducted. The conclusions were similar. Eating disorders including anorexia, bulimia, and EDNOS (eating disorder not otherwise specified) were not only deadly, but had a range of mortality rates that, at the high end, were comparable to the abuse of cocaine. A meta study of all-cause mortality of mental disorders conducted in 2014 found that anorexia nervosa specifically was associated with a higher mortality rate than alcohol use disorder. Only opioid use was significantly more deadly.

It’s important to keep this context in mind when reading the internal Facebook research documents published by the Wall Street Journal on September 29.  In a presentation titled, “Teen Girls Body Image and Social Comparison on Instagram — An Exploratory Study in the US,” researchers at Facebook mapped out — in colorful diagrams and branded charts — the “downward spiral” that is both triggered and “exacerbated” by use of the Instagram platform. “Once in a spiral,” the document reads, “teens work through a series of emotions that in many ways mimic stages of grief.”

The stages of grief are presented as an ouroboros of brightly hued arrows pulled from the Instagram brand color palette. “Bargaining” is a deep royal purple. “Insecurity” is a lovely cornflower blue followed immediately by the bright kelly green of “Dysmorphia.” The result, according to Facebook, is that “aspects of Instagram exacerbate each other to create a perfect storm.”

What’s inside the storm? Facebook researchers concluded that “mental health outcomes related to this can be severe.” Below this headline, in bright red text, is a list of the outcomes. The first is “eating disorders.”

To say that explicitly connecting the use of a product to a category of disorders with mortality rates comparable to those of cocaine abuse is alarming would be an understatement. But eating disorders are primarily associated with — and disproportionately affect — women and girls. They are not treated with the same seriousness as substance use disorders. It is difficult to imagine a researcher at a tech company presenting a rainbow-colored “downward spiral” that ends with amphetamine abuse before moving on to design recommendations that include implementing more “fun” photo filters and experimenting with “mindfulness breaks.”

Renee Engeln is a professor at Northwestern, where she runs the university’s Body and Media Lab. Engeln studies the same relationships between social media, mental health and body image that Facebook is addressing in the leaked report. I sent her the report and called shortly after.

“We’ve known all this forever,” she said immediately. “They’ve known this forever, too.”

She told me, based on viewing the report, that Facebook is underestimating the severity of eating disorders as well as how widespread eating disordered behavior is. At the same time, Engeln said Facebook was also missing the larger point — the effect that Instagram has on its users. “You don’t have to have an eating disorder for it to matter,” Engeln said. “When a whole generation of girls spends a significant amount of time hating what they see in a mirror, that’s a mental health issue, even if they don’t meet the diagnostic criteria for a mental disorder.”

Facebook claims that the leaked research has been mischaracterized by the Journal, and has responded by publishing and annotating two internal presentations about the toxicity of Instagram. “This type of research is designed to inform internal conversations and the documents were created for and used by people who understood the limitations of the research,” reads the update to a statement attributed to Pratiti Raychoudhury, Vice President, Head of Research for Instagram.

In a Senate hearing on September 30, Sen. Richard Blumenthal of Connecticut read from documents provided to his office by a whistleblower that contradicted Facebook’s soft-pedaling of its own internal research. “Substantial evidence suggests that experiences on Instagram or Facebook make body dissatisfaction worse, particularly viewing attractive images of others, viewing filtered images, posting selfies and viewing content with certain hashtags,” Blumenthal quoted.

Testifying at the same hearing, Facebook’s global head of safety Antigone Davis said that the company believes that Instagram helps more teens than it harms, but added that the research led to “numerous” changes that include “a dedicated reporting flow for eating disorder content.”

Engeln rejected Facebook’s argument that Instagram was sometimes a positive experience for young people. “The fact that the platform can provide positive and negative experiences isn’t interesting. That’s just typical. When I see people downplay a report like this, I want to know how many people produced the report. How many people were in the meeting when it was presented? I want you to add up all those hours, and how much those people are paid, and then tell me you didn’t think it was a big deal.”

Facebook has, in recent days, paused its initiative to develop “Instagram Kids,” a version of the app for users under 13. “This will give us time to work with parents, experts, policymakers and regulators, to listen to their concerns, and to demonstrate the value and importance of this project for younger teens online today,” wrote Adam Mosseri, Head of Instagram.

Mental health experts were not called out specifically in Mosseri’s list. Whether or not Facebook’s internal research is accurate, the manner in which the conclusions were presented demonstrates an attitude that seems out of step with the seriousness of their findings. And there isn’t a lot of evidence to suggest that their internal researchers are either wrong or unqualified to study the problem.

“I know there are scientists working at Facebook and Instagram,” said Engeln. “We have people who’ve gotten PhDs from our department who work there. I know they have good scientists, so I know that they knew this stuff already.”

A study conducted by Engeln’s lab along with researchers from UCLA and the University of Oxford showed that Instagram is potentially more harmful to self-image than Facebook’s other products. Engeln and her team found that when study subjects used Instagram (but not Facebook), it led to a significant decrease in body satisfaction — after only seven minutes of use.

“They just messed around on their own Instagram account for seven minutes,” Engeln said. “And that was enough.”

I asked Engeln what social media companies could do to improve mental health outcomes for their users. She was not optimistic that Facebook would implement changes without intervention.

“I don’t trust social media companies to do anything to minimize the harm to young people. I think it is not in their best interest to do so. I think what they’re most interested in doing is minimizing harm to their reputation so that they can continue to make lots of money and garner lots of social influence and power. And I’m sorry if that’s obnoxious, but you can quote me on that.”

Meg Marco (@meghann on Twitter) is editor in chief at The Observer, published by New York-based Observer Media. This piece was originally published there.


Instagram for Kids: Doomed from the Start

Will Facebook continue wading into the quagmire of children’s digital content or double down on the next generation gold rush – the metaverse?

Under intense scrutiny from lawmakers and children’s advocates, Facebook has, once again, reluctantly pivoted to a promise that it will listen first and act only after that. This week the company, which owns Instagram, paused its controversial efforts to build an “Instagram for Kids” that would allow children under 13 years of age to share photos and comments with each other, provided they had parental permission. Facebook said that it is taking a timeout on Instagram for Kids because it is “the right thing to do,” adding that it would “work with parents, experts, and policymakers to demonstrate the value and need for this product.” (It made this decision after a devastatingly critical series of articles in The Wall Street Journal, with related follow-ons in The New York Times and Washington Post.)

Putting the Instagram for Kids project on pause is a big concession for Facebook. Those who give the company the benefit of the doubt will say this marks a big and thoughtful step from a company whose mantra has too often been “move fast and break things.” Harsher critics will say the embattled company will simply move on to more lucrative waters, like its new infatuation with the metaverse, in order to win the next generation of consumers. But whichever way it turns, Facebook has lost another round in its campaign to earn public trust.

Facebook’s Inability to Move into New Marketplaces

The company has a history of ill-timed rollouts as it struggles to remain the dominant social network and keep attracting new generations of users. For example, it downsized and then largely abandoned its 2019 plans to forge into the cryptocurrency space with its Libra token amidst public outcry.

Messenger Kids, now available in the U.S., Canada, Peru, and Mexico, suffered serious public backlash when it was introduced in 2017, though it has operated mostly without incident since. It is meant to be a safe way for parents to give children under 13 access to instant messaging. Anecdotal evidence suggests that despite creating a reasonable set of guardrails, most parents who tried the platform were either not tech-savvy enough or committed enough to use Messenger Kids routinely. As we publish, Facebook is facing extensive inquiries from Congressional lawmakers, looking at everything from its role in vaccine misinformation to whether it contributed to the January 6th insurrection. A Senate hearing on September 30 found a Facebook safety official unable to convincingly reply to legislators from both parties who had strong criticisms of how it manages Instagram.

Timing and Good Will

Is it Facebook’s missteps, timing, or lack of goodwill that makes its efforts to stake out the kids’ space so polarizing? Other video-sharing companies, notably YouTube and TikTok, have carved a relatively smooth path to their walled gardens for kids. While not perfect, YouTube for Kids makes it fairly straightforward for a parent to set up access to age-appropriate videos that have been vetted and are presented free of ads. And TikTok offers settings that provide additional safety and privacy features for kids under 13. Why does Facebook/Instagram get so much pushback? Timing is everything.

The conversation surrounding Instagram for Kids became much more frenzied when the Wall Street Journal released its series of reports called the Facebook Files. Based on a whistleblower’s internal documents and memos, one part of the exposé reveals that Facebook has for quite some time been conducting numerous studies into how its photo-sharing app affects its millions of young users. The documents suggest the company is well aware of Instagram’s potentially harmful effects, notably among teenage girls: the app can exacerbate negative body image issues, anxiety, depression, and even suicidal thoughts.

Facebook pushed back with a statement saying the WSJ took the documents out of context and that the data shows teens can benefit from photo-sharing sites. The WSJ has since published some of the documents, which do show plenty of positive effects for teens from Instagram. At the end of the day, it would be surprising if Facebook hadn’t done its research and identified the negatives. Nor should it come as a big surprise that Instagram might cause harm; like most other social media products, the double-edged sword is omnipresent. The question is: what will Facebook do to mitigate the most toxic effects? In all the controversy, the company has never really addressed that question. And it certainly wasn’t answered on September 30 to the senators’ satisfaction.

All Hands on the Metaverse?

The “for kids” digital content market has always been a bear. That’s even more true when the content is user-created and shared. From a revenue perspective, this market appears to be a losing proposition. You can’t advertise to these kids, for example. Under COPPA you can’t gather their personal information without verifiable parental consent. And kids don’t typically own credit cards to purchase things. Yet enterprising children everywhere seem to either cajole parents into loosening the purse strings or figure out workarounds, because they’re more tech-savvy than their parents. I’ve long thought that kids are so important to the future of the Internet that it might be wiser to create a nationally funded effort — a sort of PBS of the Internet — where kids’ safety was paramount.

Moving forward, it’s clear that Facebook is in the hot seat over Instagram for Kids, and generally far more subject to strong criticism than when it purchased Instagram in 2012. Instagram was initially run more independently, until its two founders, Kevin Systrom and Mike Krieger, left in late 2018. Adam Mosseri, the longtime Facebook executive who now runs Instagram, has vociferously defended the company’s decision to put the kids project on pause.

At the same time, Andrew Bosworth, Facebook’s newly appointed CTO, has been super-vocal on a related matter, cheerleading Facebook’s newest mission to become a metaverse company. And maybe that’s the direction Facebook/Instagram will move toward, as it plows millions of dollars into a campaign to build a socially responsible metaverse while rebranding itself from a social media company to a metaverse one. It’s a bold, long-horizon play, even though Facebook is a relative latecomer to the metaverse. (The Washington Post, however, portrays the metaverse pivot as partly a head fake intended to distract regulators and legislators from the company’s other missteps.)

Facebook’s past, the current media coverage of Instagram for Kids, and future metaverse issues are more closely tied than you might think. The question is whether Facebook will continue wading into the current quagmire of children’s digital content or double down on the next-generation gold rush — the kids who will inevitably be active in the metaverse. (Read a Techonomy summary of that trend here.) I think it would be best for the company to adopt a “tech-Darwinism” view that it will make more money reaching kids where they’ll be tomorrow, and ditch Instagram for Kids entirely.

Extra credit: Watch this recent Atlantic interview with Andrew Bosworth to understand the company’s ambitious metaverse plans.


Outlaw? Pariah? Renegade Empire? Digesting the WSJ Series About Facebook

How can one company make so many mistakes? An astonishing reportorial coup by the Wall Street Journal is rattling Zuckerberg’s empire.

The Wall Street Journal this week began publishing an extraordinary series of damning articles about Facebook. The five published so far consistently hit one theme–that the company is often very well informed about the harms its systems cause society, but nonetheless chooses either not to take action, or to lie in public about what it knows, or both. What makes the series unusually authoritative is that it derives from an extensive cache of internal company documents, many created by teams at the company whose job is to detect and prevent harms.

Here’s the Journal’s summary of its first four articles: “Facebook’s own research lays out in detail how its rules favor elites; its platforms have negative effects on teen mental health; its algorithm fosters discord; and that drug cartels and human traffickers use its services openly.” The most recent piece, published Friday Sept. 17, documents how the company has been unable to prevent its systems from being overrun by anti-vaccine propaganda. Again, all this comes not from dissident ex-employees or outsiders, but from the company’s own employees, as documented in internal presentations.

Sadly, for those like me who have followed Facebook closely for years, none of this is a surprise. But the Journal series may bespeak a major shift, even for those who are jaded and expect little other than evasion and apathy from this shockingly powerful company. The articles suggest it may start to be widely seen as an outlaw enterprise, one that cares little whether or not its activities bring harm to society. The series shows a company reflexively making choices about how to respond to challenges based not on a desire to reduce harm but rather on an obsession with user growth, user activity (which generates the page views that enable more advertising), and, especially, public perception.

One depressing and telling new statistic, revealed in the Thursday installment of the series, concerns global efforts against misinformation, which is rampant and has damaged country after country. Internal documents show that Facebook employees and contractors in 2020 spent 3.2 million hours finding and often removing false or misleading posts and ads. But only a shockingly small 13 percent of all that work was devoted to content outside the U.S. Yet, as the article too gently points out, 90 percent of Facebook usage is in other parts of the world. The reason for the discrepancy, based on the evidence presented here as well as accounts in the recent book The Ugly Truth, is that the company is primarily concerned about negative press coverage in the U.S.

The article makes the urgent point that Facebook does not even have the capacity to perform oversight in many places where the platform is enabling egregious harms. It gives the example of Ethiopia, where “armed groups use the site to incite violence against ethnic minorities.” The documents show that few, if any, company employees monitor such activity. The company has no speakers or reviewers for some of the languages used in Ethiopia, and the artificial intelligence tools Facebook touts as a silver bullet for harmful speech are not available in those languages.

In classic obfuscatory company-speak, spokesman Andy Stone responded to such concerns by telling the Journal this week the company has safety programs in “over 50 languages.” Sounds good, right? In fact, Facebook officially supports over 110 languages, and scores more are used on its platform. Facebook’s net profits after tax will exceed $40 billion this year. Could it afford to ensure hate is monitored and managed in all those languages?

The Journal notes the documents show that employees themselves feel “embarrassment and frustration, citing decisions that allow users to post videos of murders, incitements to violence, government threats against pro-democracy campaigners and advertisements for human trafficking.” The Journal also writes that these internal reports “offer an unparalleled picture of how Facebook is acutely aware that the products and systems central to its business success routinely fail and cause harm.”

“We’re not actually doing what we say we do publicly,” a company researcher says in one document. That statement summarizes much of the entire series, and even Facebook’s standard attitude.

Other articles in the series describe equally unsavory actions. The first installment revealed a vast list of 5.8 million prominent users worldwide, many of them politicians and public figures, who are exempt from the company’s normal enforcement policies regarding speech. “These people can violate our standards without any consequences,” according to an internal presentation. The list was created, according to the documents, specifically to prevent what the company calls “PR fires”–instances of bad publicity.

Another article that has generated tremendous reaction, including demands from U.S. senators for more information, focuses on Instagram’s effects on teen self-image, and on Facebook’s disingenuous public statements about what it knows regarding those harms. The senators are especially concerned because a letter sent from Mark Zuckerberg to Congress in response to a formal request for company data on this problem omitted mention of a key company study that found 32 percent of teen girls said Instagram made them feel worse if they already felt bad about their bodies.

Though the Journal has repeatedly sought comment from Facebook on all this, executives have shown little sign of contrition or even a willingness to admit the company has done anything untoward. One tiny exception is in the first article, about the program exempting 5.8 million users from policy enforcement on speech. Spokesperson Stone conceded that criticism of the program was “fair.”

Many people find it hard to explain how one company could make so many mistakes, often so willfully. Why doesn’t this company want to be a force for order and progress? The answer is pretty simple, it seems to me. Facebook has no governance. It doesn’t have a CEO. It has an emperor. What he cares about is growth, so growth priorities prevail over harm remediation.

His control over the company is total. If he doesn’t like a board decision, he fires its members. But everyone needs and benefits from oversight. No one is perfect. Zuckerberg may be the richest person his age in human history, and he may have unfailing confidence in his judgment and motives, but in central, fundamental, critical, obvious ways, he is failing as a leader.
