News other 29 January 2019, 22:48

author: Bart Swiatek

YouTube plans to reduce the visibility of so-called borderline content

YouTube has announced that it intends to change the site’s recommendation system – the aim is to limit the visibility of so-called borderline content.

Changes are coming to YouTube.

IN A NUTSHELL:

  1. YouTube plans to reduce the visibility of so-called borderline content – including videos containing false information or conspiracy theories;
  2. Videos that come close to breaking the so-called community guidelines will no longer be suggested by the recommendation system;
  3. Moderators and AI algorithms are to be responsible for implementing the change.

Last Friday, YouTube announced that it intends to take action to limit the influence of videos containing "disinformation" and "conspiracy theories", in order to reduce the impact of extremist content on public opinion. YouTube is to stop recommending videos with so-called "borderline content" – content that does not break the community guidelines, but comes within a step of doing so. The change will reportedly affect approximately 1% of the materials posted on the site and will be tested in the USA first.

“We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11,” we read in the announcement.

Problematic "borderline content"

YouTube has often been accused of radicalizing opinions by promoting extreme or false material. According to a study by the Pew Research Center, as many as 64% of its users occasionally come across content with false or misleading claims, and, according to BuzzFeed, a person who creates a new account and starts watching videos about politics will encounter extremist content by roughly the sixth video.

The company’s latest move therefore seems to be a step in the right direction, but the devil’s in the details. It is worth asking which videos will be considered "borderline". The deciding factor is to be the aforementioned community guidelines, which include, for example, a clause concerning "hateful content".

The policy change may theoretically affect some online scandals.

“Our products are platforms for free expression. But we don't support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line,” we read in the guidelines.

The problem is that in many cases distinguishing criticism from attack can be difficult, as can be clearly seen in many philosophical disputes (e.g. about freedom of speech, gender identity, political correctness, and even phenomena from our own industry, such as the infamous GamerGate). Where does polemic end and hate begin?

Algorithms and people will keep order

Answering this last question will fall to a team of moderators supported by machine learning algorithms – or rather the other way around, since, interestingly, the humans’ main task is to train the machines so that they detect problematic content as effectively as possible. As a result, changes in the recommendation system are to be rolled out automatically and at scale.
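Purely as an illustration of the workflow described above – not YouTube’s actual system – the "humans educate the machines" idea resembles a standard supervised learning loop, where moderator labels become training data for a text classifier. The sketch below is a minimal, hypothetical example; the sample data, labels and field choices are all invented.

```python
# Minimal, hypothetical sketch of a human-in-the-loop classifier of the kind
# described above. This is NOT YouTube's system; all data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Moderators review a sample of videos and label them 1 ("borderline") or 0.
moderator_labels = [
    ("miracle cure heals any illness overnight", 1),
    ("the earth is flat and NASA hides it", 1),
    ("how vaccines are tested in clinical trials", 0),
    ("documentary on the history of spaceflight", 0),
]
texts = [text for text, _ in moderator_labels]
labels = [label for _, label in moderator_labels]

# The human labels "educate" the machine: a simple text classifier learns to
# estimate how likely a new video's metadata is to be borderline.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Once trained, the model can score new uploads automatically.
score = classifier.predict_proba(["flat earth proof compilation"])[0][1]
print(f"borderline probability: {score:.2f}")
```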

It is worth noting that videos "flagged" by the AI will not be removed from the platform – the only consequence will be that they no longer appear among recommendations. According to the company, this approach will make it possible to strike a balance between freedom of speech and responsibility. In practice, however, there will certainly be numerous accusations of censorship, and it is genuinely difficult to say whether they will be completely unfounded. Does freedom of expression have the same value when we are only allowed to speak in a closed, soundproofed room while others can do so in the middle of the street?
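Again only as a sketch of the mechanism described here ("demote, don’t delete"): a recommendation feed can simply skip flagged items while the videos themselves remain on the platform and reachable directly. The function, threshold and data structure below are hypothetical, not anything YouTube has published.

```python
# Hypothetical sketch of the "demote, don't delete" behaviour described above:
# flagged videos stay in the catalog but are skipped when building the
# recommendation feed. All names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    title: str
    borderline_score: float  # e.g. produced by a classifier like the one above

BORDERLINE_THRESHOLD = 0.8  # assumed cut-off, not a real YouTube parameter

def build_recommendations(candidates: list[Video]) -> list[Video]:
    """Return candidates minus flagged ones; nothing is deleted."""
    return [v for v in candidates if v.borderline_score < BORDERLINE_THRESHOLD]

catalog = [
    Video("a1", "phony miracle cure", 0.95),
    Video("b2", "space documentary", 0.05),
]
print([v.title for v in build_recommendations(catalog)])  # only "space documentary"
# The flagged video is still present in `catalog` and reachable by direct link.
```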

The topic of freedom of speech has been on everyone’s lips lately.

The fight against false information and online extremism seems to be one of the greatest challenges people face in the 21st century. The solution suggested by YouTube can help – but it can also be a source of great frustration and serious problems for many people. Any assessment will have to wait at least until the tests in the United States are over.

  1. YouTube – official website