I had an interesting discussion where the “Filter bubble” was declared dead and non-existent. That got my attention.
But first: “A filter bubble – a term coined by Internet activist Eli Pariser – is a state of intellectual isolation that allegedly can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history.” // English Wikipedia
I wondered where the opinion that the filter bubble was dead came from, so I searched for the phenomenon and found, a bit (but not too) surprisingly, that in Sweden the filter bubble seems to be dead, while in the English-speaking world it seems to be more of an ongoing debate.
I just looked at the Swedish and English Wikipedia pages for “Filter bubble” for that insight (and read a lot “between the lines”, since neither of them really gave a definite answer to the question).
However, the whole discussion of whether a phenomenon such as a filter bubble is dead is an alarming sign that you know little or nothing about how computer systems work or can be programmed.
Before we go on: I interpret “intellectual isolation” in the quote above to mean intellectually isolated by external means, not somehow being intellectually isolated within your own mind.
I.e. the filter bubble is an external phenomenon, and in order to determine its full impact you would also need to study its effect on the mind. E.g. does the filter bubble make us more fundamentalist in our opinions, etc.
Doing scientific studies, or experiments, to determine that the filter bubble does not exist is about as stupid and useless as performing scientific studies (and having lots of debates, of course) about whether it is possible for a Word document to contain the word “stupid”.
For all we know, there could be a configuration parameter at Google, Facebook, and the like, called “enable-filter-bubble”, that could have the value “false” today and “true” tomorrow.
Or, for that matter, it could be set to “false” whenever the system detects activity that could indicate you are trying to perform an experiment to determine whether a filter bubble exists.
I’m not saying this is how it’s done, I’m saying this is possible to do with computers. (Although, maybe not good enough to really fool us… maybe…)
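To make the point concrete, here is a deliberately silly sketch in Python, entirely my own invention (no platform has published anything remotely like this; the flag name, the heuristic, and the threshold are all made up), showing how trivially such a flag could be implemented, experiment-detection twist included:

```python
# Hypothetical feature flag, invented purely for illustration.

def looks_like_an_experiment(user: dict) -> bool:
    # Made-up heuristic: a burst of repeated identical queries
    # might mean someone is probing the system.
    return user.get("repeated_identical_queries", 0) > 100

def filter_bubble_enabled(config: dict, user: dict) -> bool:
    if looks_like_an_experiment(user):
        return False  # hide the bubble from suspected researchers
    return config.get("enable-filter-bubble", True)

config = {"enable-filter-bubble": False}  # "false" today...
config["enable-filter-bubble"] = True     # ..."true" tomorrow, one line later
```

Ten lines of code, and every published “we found no filter bubble” result is obsolete the moment someone flips the flag.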
So, if you publish a paper or article today saying that you have found no filter bubble, the evil social media platforms can change the value of that configuration parameter tomorrow (woahahaha!).
By all means, research confirmation bias, or how much effect a filter bubble (where such a thing does exist) has on people.
And obviously, keep looking for algorithms that seem to produce overly biased search results and content feeds that could prevent users from getting objective information.
But the only reasonable attitude towards specialized algorithms you do not know anything about (and in fact are likely forever prohibited by law from knowing enough about) is to assume you have to do a reality check now and again to verify that you’re not being manipulated.
Oh, and by the way, in the case of confirmation bias, the manipulator is you yourself, not an algorithm… in that case, the algorithm is just a tool to lull you into stupid satisfaction with your splendid, time-proven, world view… (Or maybe confirmation bias is a tool in the hands of the evil mind-controlling algorithm…)
Legislation that would prohibit mind-altering, manipulative computer algorithms is, of course, also a way to go… (imagine: the EU spent all that energy getting its citizens all these extremely tiresome “we’re-using-cookies” banners when it could have protected us against mind control!)
I am not saying social media platforms and search engines are necessarily evil or trying to manipulate us. I think the filter bubble was an unintended consequence of trying to prevent information overload and to stay interesting to users, or in fact not lose them (all, of course, in the name of the holy ad-sales revenue).
Imagine if you searched for “all the ways immigrants are ruining our country” and got results stating “immigrants are not ruining our country” (or vice versa). The chances that you would use that search engine again are very low, even if your search seemed to support an opinion that could need some counter-arguments.
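To show how such a bubble can emerge from pure relevance optimization, with no malice anywhere, here is a toy ranker in Python (my own illustration, not any real engine’s algorithm) that simply prefers results overlapping with the words in your past searches and clicks:

```python
def overlap(result: str, clicked_terms: set) -> int:
    # Crude relevance proxy: how many words the result shares
    # with what the user searched for and clicked on before.
    return len(set(result.split()) & clicked_terms)

def rank(results: list, clicked_terms: set) -> list:
    # Highest word overlap with past behavior comes first.
    return sorted(results, key=lambda r: overlap(r, clicked_terms), reverse=True)

history = set("all the ways immigrants are ruining our country".split())
results = [
    "immigrants are not ruining our country",
    "all the ways immigrants are ruining our country",
]
# The confirming result ranks first, simply because it matches
# past behavior better than the contradicting one does.
print(rank(results, history)[0])
```

Nothing in that code is evil; it just never has a reason to show you anything you have not already agreed with.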
Behavioral science or media studies are not able to tell us that there are no algorithms that produce search results and content feeds specialized to such an extent that we’re not exposed to new ideas.
They could, however, tell us whether such algorithms are harmful to us or not (and if we end up in a world where one social media platform or search engine company owns every service we get information from, or our choices of information services produce that result, the answer is a resounding yes!)
You should probably not get all your news from Facebook or Twitter (and I don’t even know how that would work or what “news” you can get from those two, but that is frankly a subject for another post) or any one place.
I think state-sponsored TV and websites might still be OK.
Unless, of course, you’re worried that you’re living in a “western filter bubble”. Then Aljazeera or the likes might be something to look into.
Or unless you are living in a country where state-sponsored TV and websites are subject to censorship. Then maybe Facebook is the way to go… or at least a step closer to good.
Otherwise, I think information consumption works pretty much like food consumption: don’t eat too much of just one thing… variation is key!
Header image: Piqsels.com, CC0