Friction gets a bum rap in tech.
Tech entrepreneurs have internalized an ethos of “move fast.” With this baked into their DNA, they have generally viewed friction as a speed bump to be removed. Whether we are sharing an article on Facebook, sending a tweet, or clicking a YouTube video that has been algorithmically recommended for us, the process is designed to be quick, frictionless, and therefore to require little thought. Friction, the conventional wisdom dictates, slows down users and lowers engagement.
Most websites, for instance, treat transparency as a barrier to entry, a form of friction. Transparency may mean serving up a simple explanation of what the user is trading away with his or her next click, pausing to get explicit buy-in, or simply presenting a clear view of what lies beyond the current page. And since so many digital revenue models are built around engagement, the lower the barriers, the better. Friction gets in the way of profits.
We Need More Friction—But It Will Not Happen Organically
It is friction, however, that could be the saving grace for the web as society works to curb misinformation and hate speech. Speed bumps that slow down how users write and share online are probably necessary to move us from reactive to reflective. The current structures of the major web platforms optimize for virality even as the platforms themselves enjoy immunity from the consequences. Is it really a surprise that users are served up a toxic stew of hate and misinformation?
Of course not. Right now, there is societal pressure on the networks to “do the right thing,” but it has not changed the cold, hard calculations that actually drive corporate behavior. As long as the platforms’ business model is based on selling users’ information to advertisers, things will not improve.
But what if companies could maximize their profits by creating an environment that is better for users and society at large? We’ve seen it work on the margins already. For example, Europe’s General Data Protection Regulation (GDPR) has incentivized digital businesses serving European users to increase friction. It prioritizes transparency and informed consent, and threatens financial penalties for those that fail to provide them. Slowing down a user’s behavior online in order to offer them greater transparency certainly makes sense if the potential financial penalty outweighs the engagement cost.
The side benefit is that we’ve seen friction in action. Slowing people down allows them to consider the consequences of their actions.
While there are obviously bad actors with ill intentions aiming to spread hate and misinformation online, the content’s influence is largely a creation of regular users. Bad actors need users (and occasionally bots) to spread their content by mindlessly sharing and commenting on it. Small roadblocks that slow users down could curb this mindless consumption and communication. By increasing the time from head-to-said and from seen-to-share, we can likely decrease hate speech and misinformation online.
Thinking about the appropriateness of our actions online requires time for reflection. Should I write this? Should I share this? The removal of friction has dramatically reduced this time to reflect. And, even worse, we reward those impulses with the instant gratification of likes and shares. This is dangerous.
How we act is often related to the amount of time we have to consider our action. Human behavior is not fixed. This is the very idea behind a cooling-off period, which allows for a greater level of consideration before a final action. It’s Jiminy Cricket sitting on our shoulder, prompting a moment of conscience. Platforms have the opportunity to insert small speed bumps into the process, making it more likely that a user’s Jiminy Cricket appears. One interesting example is the startup ReThink, which lowers the incidence of cyberbullying by giving users a chance to “rethink” their words before they are officially sent. It adds a layer of friction to improve user behavior: the technology scans the text’s sentiment before it is sent and gives the user the opportunity to alter or withhold an offensive message.
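A pre-send speed bump of this kind is simple to prototype. The sketch below is purely illustrative and assumes a naive word-list check rather than ReThink’s actual (unpublished) detection model; all names in it are hypothetical. The point is the shape of the friction: a flagged message is held for an explicit second decision instead of being sent instantly.

```python
# Hypothetical sketch of a ReThink-style pre-send speed bump.
# The word list and detection logic are illustrative placeholders,
# not ReThink's real technology.

OFFENSIVE_TERMS = {"idiot", "loser", "stupid"}  # placeholder list


def needs_rethink(message: str) -> bool:
    """Return True if the draft message should trigger a 'rethink' prompt."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not OFFENSIVE_TERMS.isdisjoint(words)


def send_with_friction(message: str, confirm) -> bool:
    """Send immediately unless flagged; if flagged, require a second decision.

    `confirm` is a callback (the user-facing prompt) that returns True
    only if the user insists on sending anyway.
    """
    if needs_rethink(message):
        return confirm(message)  # the speed bump: an explicit pause
    return True
```

Note that the design does not censor anything: the user can still send the message, but only after the deliberate extra step that the essay argues for.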
Users Won’t Apply Friction on Their Own
Unfortunately, this is not how we are currently approaching the problems of misinformation and hate speech online. In our pursuit of decreasing toxic behavior, we typically focus on educating users to “be kind,” even as platforms remove content deemed offensive (according to agreed-upon terms of service and content moderation policies). In our efforts to decrease misinformation, we have focused on telling users to fact-check the veracity of articles and on having platforms flag and remove dubious content.
I don’t believe platforms will move in a direction that is beneficial for society without smart regulation. Right now, in many ways, the public is trying to shame platforms into altering their behavior—basically asking companies with shareholders to make less money. That hasn’t worked, because what we are asking of the tech companies is in conflict with their duty to maximize profits. A company that wants to do the right thing would be at a competitive disadvantage. Regulation is needed to set a new baseline of acceptable behavior. Applying small levels of user friction could be a way to maximize profits if it reduced the likelihood of financial penalties.
Regulation will force the platforms to think about different business models. Instead of solely focusing on removing toxic content online, companies will be incentivized to lower the likelihood that it appears online in the first place. This all starts with friction.
AUTHOR’S NOTE: Perhaps the movement towards friction is catching on. On the same day this piece is being published (having been in editing since Thanksgiving), Kevin Roose of the New York Times published an opinion piece with similar themes, “Is Tech Too Easy to Use?”