Facebook will no longer flag articles deemed "fake news," a decision the company announced in a Medium post yesterday.

The social media giant declared war on fake news after being accused of hosting bogus articles during the 2016 presidential campaign. One of its weapons was a feature that allowed third-party "fact checkers" to flag articles deemed misleading.

The feature was added to the platform this summer, and by September academics had already determined that the flags were not working. Facebook hoped users would be less likely to believe and share "disputed" news, but a Yale study found this was rarely the case: people were only 3.7 percent more likely to identify an article as fake when it was flagged, an effect so small that the flags did little to curb the spread of false news.

Another study, published in the journal Psychological Science in the Public Interest, showed that flagging a story could in some cases "further entrench someone's beliefs." In other words, by labeling an article "fake news," Facebook may have been reinforcing the convictions of people who already agreed with the article's claims.

"Research suggests that strong language or visualizations (like a bright red flag) can backfire and further entrench someone's beliefs."

Facebook also conducted its own research and found that other means, such as offering related articles for context, were more effective than flagging.

"In April of this year, we started testing a new version of Related Articles that appears in News Feed before someone clicks on a link to an article. In August, we began surfacing fact-checked articles in this space as well. During these tests, we learned that although click-through rates on the hoax article don't meaningfully change between the two treatments, we did find that the Related Articles treatment led to fewer shares of the hoax article than the disputed flag treatment."

Because of this, Facebook will stop flagging fake news and focus instead on curating Related Articles. It will continue to use third-party fact checkers to identify bogus content, but rather than marking a story as disputed, it will surface related content containing "fact-checked information" alongside it.

"While we've made many changes, we've also kept the strongest parts of the previous experience. Just as before, as soon as we learn that an article has been disputed by fact-checkers, we immediately send a notification to those people who previously shared it. When someone shares this content going forward, a short message pops up explaining there's additional reporting around that story. Using language that is unbiased and non-judgmental helps us to build products that speak to people with diverse perspectives."