Fighting Misinformation With Design Thinking

Lucas Mosele
8 min read · Feb 7, 2018

Recently there’s been quite a lot of discussion surrounding social media companies and their responsibility for political propaganda on their platforms. Meanwhile, the design community has been slowly but surely moving towards a more open-ended conversation about the ethics of being a designer. Design, in and of itself, is inherently political, so where do our responsibilities lie when the software we build is used for politically malicious purposes?

It’s often designers who first face the dilemma of how people will interact with a digital experience. Designers are the first to filter through a platform’s acceptable use cases, social and otherwise. This process starts as early as the whiteboarding sessions.

Whiteboarding sessions are typically used to address concepts for a new product. Whether it’s a web application like Facebook Messenger or a mobile application like Snapchat, a group of stakeholders gets together to ask hard questions and design around one or more “personas” that represent the product’s expected audience.

We’ve run into a problem though.

From what we’ve seen, social apps like Reddit, Facebook, and Twitter were designed mostly around benevolent personas. Recently, however, we’ve seen a number of state-sponsored (and private) actors using social media in a malicious manner. Reddit is plagued by astroturfing for products and political opinions, Twitter has a bot problem, and Facebook’s newsfeed algorithm is abused every day by all sides to spread biased information. In a nutshell, platforms designed to bring people together are being used to sow division.

That’s not all. WhatsApp, which initially seemed unlikely as a propaganda platform given its role as a one-on-one messaging app, has become one of the largest sources of misinformation in countries where it’s been adopted. In fact, WhatsApp is in an especially precarious position: the private nature of WhatsApp groups makes misinformation hard to track and debunk, so it often doesn’t surface until well after it has reached a saturation point within a community. Facebook’s recent efforts to address the same issue on the core newsfeed are essentially nullified in WhatsApp because of this. The likelihood of a homogenous political group calling out information that confirms its biases is effectively zero.

It’s not surprising, then, that China’s WhatsApp equivalent, WeChat, is capable of hosting highly targeted and effective propaganda campaigns. The Chinese-American political group CVA (Chinese Voice for America) used WeChat’s “official account” feature as a signal-boosting tool to help spread pro-Trump rumors during the 2016 elections. Impossible to confirm or deny within these small groups, rumors like pork being banned in the USA to please Muslim immigrants spread like wildfire. In her investigation of the effects of CVA’s techniques, Eileen Guo put it succinctly:

The litmus test for truthfulness has moved from, “is this argument supported by evidence?” to, “is this argument shared by someone whose judgment I trust?”

2016 was a case study in the effectiveness of social propaganda. From Ukraine to the USA, any political party that does not leverage some form of subversive media tactic is now effectively at a disadvantage. The Indian political party BJP has even set up trained employees to spread favorable news via 5,000 WhatsApp groups.

K Amresh, the convenor of the BJP IT cell, notes the effectiveness of these campaigns:

“They don’t have to do any meetings, hold protests or any other activity. The least expected [of them] is to spend half an hour everyday on mobile”

This type of political campaigning requires very little money, little to no effort, and just enough moral ambiguity that we’ll start to see parties on all sides adopt it. All it takes is knowing your audience’s threshold for outrage.

Our goal as designers, regardless of political opinion, is to make it harder for this to happen.

Changing our way of approaching the design of platforms that can be leveraged for abuse is the first step. It’s crucial to start designing not only around benevolent personas, but against malicious personas as well. Designers need to start thinking like white hat hackers building out safeguards against security threats.

Designers and developers have gone a few decades without being faced with major societal issues of this kind, but the onus is on us now to start thinking about ways to mitigate the spread of what is essentially a social virus on the platforms we helped create. Here are some tips for starting the conversation on your own teams:

1. Don’t jump to censor the conversation.

The easiest mistake to make when getting into the topic of “fake news” or online abuse is reaching straight for censorship. It gets worse if people find out you’ve removed, deleted, or blocked content that confirms their biases. That’s not to say organizations should not ban toxic behavior, nor that we should all suffer for the sake of some people’s opinions on what they believe constitutes Free Speech. In fact, censoring communities that breed toxicity is key to keeping more people from being swayed by divisive rhetoric.

The challenge lies in knowing the appropriate threshold for the nuclear option. Outrage is fuel to the fire, and being censored can and will be used against any platform to feed the persecution complex and validate people’s biases. Not only that, but once censorship becomes the norm, the processes put in place for it mean the bar will get lower and lower for what constitutes an offense, paving the way for communities dictated by the political opinions of the teams managing them.

2. Design against virality

One of the most common factors in abuse on social platforms is the ease of anonymity. “Alt” accounts facilitate this by letting one person manage multiple accounts and “signal boost” using retweets. Account verification (i.e. the “blue checkmark”) was an attempt to combat this, but it doesn’t actually target the root issue. It doesn’t matter whether the account you learned information from belongs to a verified journalist; because of the nature of sites like Twitter, biases are confirmed by the virality of the content.

Let’s look at Twitter’s trending hashtag page as the main offender. Say a user logs in and the first thing they see is #releasethememo (or some other notable hashtag). Without any context, it’s easy to fall under the assumption that the information it contains is factually correct, because 1) it’s in Twitter’s trending UI, which lends it validity, and 2) it’s being debated by numerous accounts with thousands of retweets.

Without ever actually engaging with the content of said viral “news story,” the user is already forming opinions simply by seeing the “Top” tab on Twitter’s trending page. The user’s newfound opinions then begin to form around their existing biases as they dive deeper into the top tweet’s debates, eventually solidifying when the user is inevitably forced to justify their bias or is challenged with factual information.

The abuse vector in the “Top” tab’s algorithm is the ease of molding a tweet’s virality. As we’ve seen recently, entire networks of bots are maintained to retweet and spread information tweeted out by small clusters of human-maintained accounts, often at all hours of the day. Networks of accounts with thousands of users (often bots within the same network) lead to a virality bias very similar to what I described above. Any Twitter user can relate to the experience of having an opinion shaped or reshaped by the conversations on the trending page; what many don’t realize is how easy it is to fall into a planned campaign around certain subjects.

While you can’t design away human nature, we can facilitate more responsible exposure to new information. In its simplest form, just switching the default tab from “Top” to “Latest” would give users the breathing room to seek out sources that validate a story, since it’s a lot less likely that anyone will take the average two-retweet Joe’s opinion at face value. Moving away from retweet-focused sorting would also ease the effects of “signal boosting” by malicious botnets. The likelihood of this happening is low, though, since it would force Twitter to move away from its main competitive advantage over traditional media: access to pre-digested information as it happens.
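To make the trade-off concrete, here’s a minimal sketch of how a client might order tweets under a trending topic. The Tweet shape and all three rankers are illustrative assumptions for this article, not Twitter’s actual data model or algorithm.

```typescript
// Hypothetical sketch: three ways a client could order tweets under a trending topic.
// None of this is Twitter's real API or ranking logic.

interface Tweet {
  id: string;
  text: string;
  retweetCount: number;
  createdAt: Date; // when the tweet was posted
}

// Engagement-first ordering ("Top"): raw retweet counts dominate, so a botnet
// that inflates retweets can push its content to the top.
function rankByVirality(tweets: Tweet[]): Tweet[] {
  return [...tweets].sort((a, b) => b.retweetCount - a.retweetCount);
}

// Recency-first ordering ("Latest"): retweet counts are ignored entirely,
// which blunts signal boosting but also removes the pre-digested summary.
function rankByRecency(tweets: Tweet[]): Tweet[] {
  return [...tweets].sort(
    (a, b) => b.createdAt.getTime() - a.createdAt.getTime()
  );
}

// A middle ground: dampen retweet counts (log scale) and decay older tweets,
// so a sudden spike of retweets from a coordinated network counts for less.
function rankWithDampedVirality(tweets: Tweet[], now: Date = new Date()): Tweet[] {
  const score = (t: Tweet) => {
    const ageHours = (now.getTime() - t.createdAt.getTime()) / 3_600_000;
    return Math.log1p(t.retweetCount) / (1 + ageHours);
  };
  return [...tweets].sort((a, b) => score(b) - score(a));
}
```

The point isn’t the exact math; it’s that any ranking which treats raw retweet counts as a proxy for importance hands botnets a lever, while recency-first or damped scoring takes some of that lever away.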

3. Shift the focus back to the human factor

One of the positive developments (depending on your perspective) in recent social media is the increasing usage of platforms like Snapchat and Instagram, thanks to their inherently apolitical content. While they’re not entirely invulnerable as a vector for biased information (looking at you, Snapchat news), the format of the media and the manner in which the content is consumed create a barrier to entry for any wannabe propagandist. It’s hard to spread false information with a fake account when these platforms revolve solely around cults of personality, unless it’s via traditional paid news content.

Facebook’s recent shift in newsfeed strategy is also a great example of the lessons learned from Instagram. It’s considerably harder to spread viral news if access to unmoderated content is limited to that one uncle sharing Infowars articles. This, combined with the fact that most people don’t want to be exposed to politics while browsing online, means users can focus on interacting with friends and family rather than entering the firehose of unfiltered news that the Facebook newsfeed has become over the past decade.

4. Create Design Patterns to surface misinformation

One of the most interesting methods of combating misinformation I’ve seen so far came in the form of Botcheck.me, which uses a set of algorithms and criteria to detect bot accounts and flag them via its Chrome extension’s UI. Botcheck.me looks at the following:

  1. Account Creation Dates
  2. Relationships to “botlike accounts”
  3. Unnatural post frequency/hours
  4. Moderated behavior
  5. Usernames
  6. Compromised account lists
Botcheck.me surfaces this analysis from inside the UI during conversations.
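As a rough illustration of how criteria like these might combine into a score, here’s a toy heuristic. The AccountSignals shape, the weights, and the threshold are all assumptions made for this sketch, not Botcheck.me’s actual model.

```typescript
// Illustrative only: a toy bot-scoring heuristic inspired by the criteria above.
// Every field, weight, and threshold here is an assumption, not Botcheck.me's model.

interface AccountSignals {
  accountAgeDays: number;          // from the account creation date
  botlikeFollowRatio: number;      // share of follows/followers flagged as bot-like (0-1)
  postsPerDay: number;             // average posting frequency
  lateNightPostShare: number;      // share of posts at unnatural hours (0-1)
  usernameLooksGenerated: boolean; // e.g. a name plus a long digit suffix
  onCompromisedList: boolean;      // appears on a known compromised-account list
}

function botScore(s: AccountSignals): number {
  let score = 0;
  if (s.accountAgeDays < 30) score += 0.2;   // very new accounts
  score += 0.25 * s.botlikeFollowRatio;      // clusters of bot-like relationships
  if (s.postsPerDay > 72) score += 0.2;      // roughly one post every 20 minutes, around the clock
  score += 0.15 * s.lateNightPostShare;      // odd posting hours
  if (s.usernameLooksGenerated) score += 0.1;
  if (s.onCompromisedList) score += 0.3;
  return Math.min(score, 1);
}

// A UI could flag accounts above some threshold rather than hiding them,
// surfacing the evidence and leaving judgment to the reader.
const likelyBot = (s: AccountSignals) => botScore(s) >= 0.6;
```

The design decision that matters is in the last line: flag and explain rather than silently remove, so users can weigh the evidence themselves.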

These are the kinds of criteria that can be used to surface misinformation in an unbiased way on any platform, if done correctly. With all that said, though, the most important aspect of all of this is:

5. Allow users room to form their own informed opinion

An interesting observation I’ve made recently is that Digg, a traditionally left-leaning content aggregator, added a “Trending Stories on Social” box to its homepage. While this may seem counterintuitive, it actually provides a good counterweight to the potentially biased information coming from its staff-curated media, since a good chunk of the content displayed there daily comes from viral news articles on Fox or similarly contrasting news sites. In spite of my opinion of Fox, it’s important to note the learning opportunity here. Social platforms will always benefit from link sharing and news articles being posted and discussed. That will never stop. What can be improved is how we design solutions that expose users to media beyond what they normally consume.

This isn’t me saying “we should start recommending news we agree with”; that’s a terrible slippery slope (if not a full-blown cliff). It’s a recommendation to use the immense amount of existing data on political affiliation as a kernel for new features that open a wider variety of content to users who might otherwise never be exposed to it.
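Purely as a hypothetical sketch of what that kernel could look like, a feed could reserve a fixed share of slots for sources outside a user’s inferred leaning. The Leaning type, Story shape, and quota below are my own illustration, not any platform’s real feature.

```typescript
// Hypothetical sketch: use a platform's inferred political-leaning data to widen,
// not narrow, the set of sources a user sees. All names here are illustrative.

type Leaning = "left" | "center" | "right";

interface Story {
  id: string;
  title: string;
  sourceLeaning: Leaning;
}

// Fill most of the feed with familiar sources, but reserve a fixed share of
// slots for stories from outside the user's usual bubble.
function diversifiedFeed(
  candidates: Story[],
  userLeaning: Leaning,
  feedSize: number,
  outsideQuota: number = 0.3 // ~30% of slots reserved for contrasting sources
): Story[] {
  const familiar = candidates.filter((s) => s.sourceLeaning === userLeaning);
  const contrasting = candidates.filter((s) => s.sourceLeaning !== userLeaning);

  const outsideSlots = Math.floor(feedSize * outsideQuota);
  return [
    ...familiar.slice(0, feedSize - outsideSlots),
    ...contrasting.slice(0, outsideSlots),
  ];
}
```

The quota is the design lever: large enough to puncture the bubble, small enough that the feed still feels like the user’s own.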

As designers, we should be allowed to be opinionated about how we want to educate users on our platforms. If that means creating features that surface media in a responsible way, and using that data to drive the development of features that enable conversations instead of reactions, then we’ll be doing our job as the user’s first line of defense against unpleasant experiences, which is why we’re here in the first place.

Lucas Mosele

Interaction Designer, Photographer & Developer. UX Developer @Appcues. Instructor at @Springboard. @StartupInstBOS Design Alumni. www.lmosele.com