How do standard recommendation algorithms work?
Recommendation algorithms, particularly those based on deep learning, lack explainability: end users receive little insight into the latent factors guiding the recommendations they are given.
These algorithms typically rely on both individual user interests and collective preference patterns within a community, and they are trained on historical data collected in a specific application context. Personal recommendations are formed by using this data to build user profiles, compute similarities, and find correlations. Consequently, because the data is captured from human interactions, recommendations are prone to replicating structural and behavioural biases.
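To make this concrete, below is a minimal sketch of user-based collaborative filtering, one common way such systems build profiles and compute similarities. The interaction matrix and the `recommend` helper are hypothetical illustrations under simplified assumptions, not the method of any specific production system.

```python
import numpy as np

# Hypothetical interaction matrix: rows are users, columns are items,
# entries are implicit feedback (1 = watched/clicked, 0 = no interaction).
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
], dtype=float)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two user profile vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user: int, k: int = 2) -> list[int]:
    """Score unseen items by similarity-weighted popularity among other users."""
    sims = np.array([
        cosine_similarity(interactions[user], interactions[other])
        if other != user else 0.0
        for other in range(interactions.shape[0])
    ])
    # Weighted sum of other users' interactions; mask items already seen.
    scores = sims @ interactions
    scores[interactions[user] > 0] = -np.inf
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))  # items most popular among the most similar users
```

Note how the scoring step depends entirely on what similar users have already consumed: items with no interaction history from comparable users cannot surface, which is exactly how the biases in the training data are reproduced.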
In the context of content recommendation within TV services, these biased and opaque technologies perpetuate social discrimination against vulnerable groups, values, and cultures by emphasizing mainstream content. Such algorithms often dismiss an important part of the existing cultural offering, typically non-mainstream content, that might otherwise receive greater attention from end users and society in general.
In media, the best-known example is the phenomenon of the “filter bubble” (Pariser, 2011). A filter bubble can emerge when an algorithm learns users’ interests and opinions over time and displays only content matching those assumed interests and opinions. Ultimately, this can lead to self-reinforcing feedback loops, which may result in undesired societal effects such as opinion polarisation or the increased spread of one-sided information.
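The self-reinforcing loop can be illustrated with a toy simulation; this is a deliberately simplified sketch, and the five-category setup and the profile update rule are assumptions made purely for illustration, not a model of any real system.

```python
import numpy as np

# Toy setup: 5 content categories; the user starts with mildly mixed interests.
profile = np.array([0.3, 0.25, 0.2, 0.15, 0.1])

for step in range(50):
    # The system recommends only the category the profile currently favours most.
    recommended = int(np.argmax(profile))
    # The user consumes it, and the profile shifts towards that category,
    # reinforcing the very signal the next recommendation is based on.
    profile[recommended] += 0.1
    profile /= profile.sum()

print(np.round(profile, 3))  # interest mass concentrates on one category: the "bubble"
```

After a few dozen iterations the profile collapses onto a single category, even though the user started with interests spread across all five: the loop amplifies an initially small preference into exclusive exposure.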