Googling for the “truth” about Srebrenica

According to a survey carried out last year by Mediacentar Sarajevo, the main sources of information for young people in Bosnia and Herzegovina (BiH) are online media and social networks. But what if what you read on the Internet is not true?

When you type “Srebrenica is” into the Google search engine, one of the first suggestions automatically filled in is “Srebrenica is a lie”. Try it.

‘I’m not surprised’

In July 1995, during the war in Bosnia and Herzegovina, in just a few days more than 8,000 Bosnian men and boys were taken from Srebrenica, imprisoned and killed. More than 25,000 women, children and the elderly were expelled from the area then protected by the United Nations (UN).

“When I saw that it said ‘Srebrenica is a lie’, I was not surprised because, unfortunately, a large number of people still deny the genocide today, saying that it is a lie, that it did not take place, and so on,” says Azemina Suljic, a third-year secondary school student in Srebrenica.

How does the search work?

“To decide which predictions to display, our systems look for common queries that match what someone starts to enter into the search box,” reads Google’s explanation.

Filip Milosevic of the Share Foundation, an organization that specializes in promoting rights and freedoms online, offers a similar explanation.

To decide what to display, these systems analyze frequent and popular queries, but they also take other factors into account, such as the language and location of the user performing the search, in order to make the predictions more “relevant” and save as much time as possible.
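
To make that idea concrete, the logic Milosevic describes can be pictured with a heavily simplified sketch. This is not Google’s actual algorithm: the toy query log, the scoring weights and the function name below are all invented for illustration. Candidate completions are drawn from past queries that share the typed prefix, then ranked by frequency with a small boost for matching the searcher’s language and location.

```python
from collections import Counter

# Toy "query log": (query, language, region) tuples standing in for the
# aggregated search history a real system would mine. All entries are invented.
QUERY_LOG = [
    ("srebrenica is a town in bosnia", "en", "BA"),
    ("srebrenica is a town in bosnia", "en", "BA"),
    ("srebrenica genocide memorial", "en", "BA"),
    ("weather sarajevo", "en", "BA"),
    ("srebrenica is a lie", "en", "RS"),
    ("srebrenica is a lie", "en", "RS"),
    ("srebrenica is a lie", "en", "RS"),
]

def suggest(prefix, user_lang, user_region, log=QUERY_LOG, k=3):
    """Return up to k completions for `prefix`, ranked by a toy score:
    raw frequency, with an extra boost when the logged query came from
    the same language or region as the current user."""
    scores = Counter()
    for query, lang, region in log:
        if not query.startswith(prefix):
            continue
        score = 1.0                      # every matching past query counts once
        if lang == user_lang:
            score += 0.5                 # arbitrary "same language" boost
        if region == user_region:
            score += 0.5                 # arbitrary "same location" boost
        scores[query] += score
    return [query for query, _ in scores.most_common(k)]

if __name__ == "__main__":
    # Whatever dominates the query log dominates the suggestions.
    print(suggest("srebrenica is", user_lang="en", user_region="BA"))
```

The only point of the sketch is that the ranking reflects whatever the underlying query log contains: if denialist queries are typed often enough, they surface near the top, which is exactly the problem described here.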

Why such results?

But if the Tribunal ruled that genocide took place, and if countries such as BiH and Montenegro have laws prohibiting genocide denial, why is a lie among the first results offered? Why is false content served up?

“Such algorithm-based systems cannot be perfect and neutral due to the very nature of the data they process, which is created by humans, who are themselves not neutral,” Milosevic further explains.

Break the cycle – report inappropriate results

According to Google, when the community flags a prediction as inappropriate, the company assesses it and reacts.

Any user can flag inappropriate predictions directly in the autocomplete field. Google says it will then “either disable or remove certain predictions, or address reported issues with predictions.”

Milosevic also explains that Google “trains artificial intelligence systems, but also hires teams of people to recognize violations of company policies and of the rights of various social groups, and to remove them from the platform.”
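
The reporting flow that Google and Milosevic describe can be pictured, very roughly, as a moderation queue: user reports accumulate against a prediction, an automated check scores it against policy, and ambiguous cases are escalated to human reviewers. The sketch below is an invented illustration of that idea, not Google’s implementation; the thresholds, the keyword heuristic and the function names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    text: str
    reports: int = 0        # how many users flagged this prediction
    removed: bool = False

def automated_policy_score(text: str) -> float:
    """Stand-in for a trained classifier: returns a 0..1 score for how
    likely the prediction violates policy. Here, a crude keyword check."""
    denial_phrases = ("is a lie", "never happened", "is a hoax")
    return 1.0 if any(p in text.lower() for p in denial_phrases) else 0.0

def moderate(queue, report_threshold=3, score_threshold=0.8):
    """Remove predictions that are both heavily reported and clearly scored
    as violations; escalate heavily reported but ambiguous ones to humans."""
    needs_human_review = []
    for prediction in queue:
        if prediction.removed or prediction.reports < report_threshold:
            continue
        if automated_policy_score(prediction.text) >= score_threshold:
            prediction.removed = True              # clear-cut violation
        else:
            needs_human_review.append(prediction)  # ambiguous: human team decides
    return needs_human_review

if __name__ == "__main__":
    queue = [
        Prediction("srebrenica is a lie", reports=5),
        Prediction("srebrenica is a town in bosnia", reports=0),
    ]
    escalated = moderate(queue)
    print([p.text for p in queue if p.removed], [p.text for p in escalated])
```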

Trusting the first result

“We have known for a long time that people trust search results that rank higher more than those ranked lower,” explained Chirag Shah, a University of Washington professor, founding director of the InfoSeeking Lab and co-founder of the Center for Responsible Artificial Intelligence, adding that “most people don’t even bother to look at the results at the bottom of the page.”

And young people, like those who sent in their Google search results, get most of their information by “scrolling” and by sharing content with each other, says Anida Sokol, a researcher at Mediacentar Sarajevo.

“This leads to the possibility of encountering misinformation without verifying the sources of information. Given the low level of media and information literacy among young people in BiH, this can be dangerous,” Sokol tells RSE.
