

The middle machines that help us forget where our news comes from


The days of licking your finger before turning the pages of a newspaper are in rapid decline. The middlemen have been slotted back into our lives as machines, and they've changed the way we access news. These are the Googles, Facebooks and Twitters: the algorithm-based platforms that collect your data and then choose what you see based on it.

On the one hand, they're great. They've revolutionised the way we interact with information and with each other. A platform that helps me search for and find relevant content? What's not to like? The problem with these information middle machines is that they may be hindering our ability to remember a critical part of the information they provide us with, namely where it's from.

Kalogeropoulos, Fletcher and Nielsen's research into news brand attribution showed that only 47% of people could remember the news publication behind an article they had read after arriving at it through social media. This dropped to 37% when the news was accessed through a search engine. So can we be sure that what we're reading is credible if we don't even pay attention to where it's coming from?

There are a number of factors that affect one's ability to recall news accessed from algorithm-based platforms. Age is one of them: the study found that younger individuals were more likely to remember the news brand they found via social media, which could be put down to their above-average use of it. It also makes sense that those who had an interest in the news, and read more of a story, were more likely to correctly remember its source.

Unfortunately, we don't all have an interest in the news. Many of us don't read news stories at all, but this doesn't stop us from sharing them: a 2016 study showed that 59% of news media shared on Twitter had never been clicked on, and 1 in 5 millennials say they read only headlines when browsing social or content feeds. It seems, for many of us, the headlines have become the content. People (and probably bots, but that's another half dozen posts) blindly share headlines that catch their eye, and although it seems trivial, this share bait has a wider effect on what is consumed through these middle machines: it feeds the algorithms that drive them, shaping what trends and what's likely to be seen.
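To make the mechanism concrete, here is a minimal sketch, not any real platform's algorithm, of how an engagement-driven ranking can reward headline-only sharing. The story fields, weights and scoring function are all illustrative assumptions:

```python
# Illustrative toy example only: a naive "trending" score in which shares
# count heavily, whether or not the story was ever actually opened.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    shares: int   # times the link was shared
    clicks: int   # times the article was actually opened and (maybe) read

def trending_score(story: Story) -> float:
    # Hypothetical weights: shares matter twice as much as clicks, so a
    # story shared on the strength of its headline alone can outrank
    # one that people genuinely read.
    return story.shares * 2.0 + story.clicks * 1.0

feed = [
    Story("Outrageous headline, never opened", shares=590, clicks=0),
    Story("Careful report, widely read", shares=100, clicks=400),
]
ranked = sorted(feed, key=trending_score, reverse=True)
print([s.headline for s in ranked])
```

Under these made-up weights, the unread but much-shared story tops the feed, which is the feedback loop the paragraph above describes: blind shares become ranking signal, and ranking decides what the rest of us see next.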

Kalogeropoulos, Fletcher and Nielsen argue that in these algorithm-based environments, news brands seem to matter less, which potentially increases the risk of exposure to misleading information.

Whether it be through paid ads or someone sharing a suspicious post on your newsfeed, algorithm-based platforms are susceptible to the propagation of misinformation. Social media and search engines, in particular, have been under scrutiny to come up with ways to thwart misinformation and 'fake news'.

At the beginning of this year, Facebook changed its algorithm for this very reason, leaving media companies fretful about the impact it would have on their online readership. The changes favoured content promoted by friends and family over posts from brands and publishers, and although they arguably helped, the problem is far from eradicated.

Google, meanwhile, launched Project Owl in 2017 to try to combat problematic searches, implementing new tools that allow users to report inappropriate search predictions. It also changed its algorithm to put more weight on authoritative content, and allowed its thousands of contracted search quality raters to flag offensive content.

As the spread of inaccurate information becomes ever more prevalent online, numerous initiatives have set out to educate the public in media and information literacy and how to spot "fake news", urging us to be vigilant when assessing the information we come across in digital spaces.

Poor journalism, factual mistakes and misleading headlines or clickbait are a major concern for many who access news online; a recent study reported that 42% of people had come across this type of misinformation within the past week.

Yet these algorithmic platforms now outstrip every other route by which we access the news. Over half of us prefer to get the latest headlines through search engines, social media or news aggregators: interfaces curated by humans have been eclipsed by a preference for algorithmic ranking, which determines the options for our consumption.

In transitioning from flicking through paper pages to scrolling through web pages, we have gained the ability to interact with the news. We can critique it, save it for later, send it halfway around the globe and recreate it with the tap of a thumb. It's a blessing and a curse to have such vast amounts of easily accessible information at our fingertips, but it's never been more critical to keep a close eye on the platforms that give us access to it, as the unregulated nature of digital news makes it easy for us to fall victim to misinformation.

Sources for this article include: Reuters Institute, INRIA-MSR Joint Centre and Sharethrough.
