A new Twitter account (@misinfonewsfeed) highlights one of the most difficult-to-follow conversations in media and politics today: how search engines, social platforms, and news organizations verify and share reports. It’s becoming more difficult to follow the news without seeing outdated or false information, and this is my small attempt to make it easier to find the gaps.
I hope the account is helpful for researchers, reporters and news consumers. For more about how the account works, check out the FAQ below.
What kind of tweets does it surface?
The conversations touch on many aspects of newsgathering and sourcing, including:
- Reports about how automation, bots, and algorithms are used to (mis)inform. This is the scope of the original idea behind the account. It includes studies from researchers as well as investigative reporting about the spread of falsified information.
- Journalists' commentary on the verification of sources (a common way reporters use Twitter). This includes news about individuals confirming or denying reports (such as comments by the accused).
- Reports about 1) the ongoing investigation into interference in the 2016 U.S. presidential election, and 2) legislative activity or policy measures by members of Congress weighing the role and responsibilities of tech companies in preventing the spread of falsified information, particularly propaganda or other information that harms the public or public and private institutions.
- Analysis, opinion, and commentary about misinformation, disinformation, and what is or is not considered “fake news.” This is classic “chatter” about what counts as important in the news.
Why does it share so many topics that seem unrelated?
They appear to be related topics, though I admit I didn’t expect to come to such a conclusion. The range of topics reflects the challenge of separating conversations about the verifiability of individual reports or sources, as seen in the tweet below…
…to the broader conversation about the imperfect aspects of journalism and public trust in traditional institutions:
Honoring the irreverent tone of most reporters on Twitter, some of the retweets are merely the flotsam of newsgathering.
Americans’ trust in institutions is changing, and that extends to trust in media and platforms. Here’s a grim example of the kind of messaging that can stoke fear of bias in media reporting, even when that reporting is widely respected. The Washington Post used 30 sources to develop its bombshell investigation of Roy Moore. Moore spent days vaguely alluding to dating minors, and denying it. Then there’s this unexplained pivot to Vladimir Putin, of all people.
The theme of trust in platforms, media, and government will continue to develop over the next decade, partially because there’s no good way for the public to participate in the decision-making about how social platforms spread news.
But that’s not for a lack of trying; a petition urging Facebook to disclose information about government propaganda on its platform has garnered nearly 90,000 signatures, and public officials are tasked with finding out how to stop the spread of propaganda and hate speech online.
How does it “find” tweets?
I’ve worked with Lissted founder Adam Parker (@AdParker) in years past on other Twitter research, after discovering the U.S. news-focused account @USTweetsDistill. Lissted accounts (of which @misinfonewsfeed is one) track influential activity among a set of users who, arguably, have greater control over what’s considered important about a given news story. It uses Twitter’s API to monitor Twitter as a social network of interactions, an approach that (oddly) most similar tools ignore.
How is this similar to and different from other tools that monitor Twitter trends?
There are two similar tools that monitor interactions: Explore (formerly Discover), part of Twitter itself, and Dataminr, a VC-backed business and an official Twitter partner. The main difference among the three is which metrics they value. Explore surfaces tweets based on trending topics, keywords, or hashtags, as well as your own activity and interactions as an individual user. These are the least meaningful metrics I can think of, especially when it’s easy to create bots that tweet something constantly, never mind bots that can falsify interactions.
Dataminr is built with prediction in mind, correlating the timing and location of interactions, as opposed to the connectivity of interactions within a specific set of users (say, experts), which Lissted’s Tweetsdistilled technology focuses on. I’ve used it to monitor potential breaking news, especially regarding natural disasters and acts of violence. Its geo-targeting works well for rapidly changing events.
Do you see something that’s not covered by the FAQ? Reach me at margafretter at gmail.
To add some personality to the account, I used a John Tenniel illustration of Alice and the Caterpillar as the avatar. The Caterpillar always speaks in riddles, takes automatic offense at any interaction, and recites a contrarian poetry appreciated only by himself.
Recommended Reading + Resources
- Campaign: Take Back Our Voter Data [Crowd Justice]
- Dashboard: Securing Democracy: “Data about Russian propaganda efforts on Twitter in near-real time.”
- Here’s the first evidence Russia used Twitter to influence Brexit [Wired UK]
- Facebook Is Ignoring Anti-Abortion Fake News [New York Times]
- When Russia met Silicon Valley [The Atlantic]
- RT agrees to register as an agent of the Russian government [Washington Post]
- Something is wrong on the internet [Medium]