Accepted Papers

  • Same, Same, but Different: Algorithmic Diversification of Viewpoints in News.  Nava Tintarev, Emily Sullivan, Dror Guldin, Sihang Qiu, Daan Odijk, Reza Aditya Permadi and Andreas Christian Pangaribuan.

    Abstract: Recommender systems for news articles on social media select and filter content through automatic personalization. As a result, users are often unaware of opposing points of view, leading to informational blindspots and potentially polarized opinions. They may be aware of a topic, but only be exposed to one viewpoint on it. However, recommender systems designed with diversity in mind have just as much potential to help users find a plurality of viewpoints. In this spirit, this paper introduces an approach to automatically identifying content that represents a wider range of opinions on a given topic. Our offline evaluation shows positive results for our distance measure with regard to diversification on topic and channel. However, our user study confirms that user acceptance of this diversification also needs to be addressed in tandem to enable a complete solution.
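
    The abstract does not spell out the distance measure, so the following is only a minimal sketch of what an article-to-article distance over topic and channel could look like; the field names, the cosine and 0/1 components, and the weights are assumptions for illustration, not the measure used in the paper.

```python
import numpy as np

def viewpoint_distance(a, b, w_topic=0.5, w_channel=0.5):
    """Hypothetical article-to-article distance mixing a topic term and a channel term."""
    # Cosine distance between topic embeddings (the field name "topic_vec" is assumed).
    ta = np.asarray(a["topic_vec"], dtype=float)
    tb = np.asarray(b["topic_vec"], dtype=float)
    topic_dist = 1.0 - float(ta @ tb) / (np.linalg.norm(ta) * np.linalg.norm(tb))
    # Simple 0/1 distance on the publishing channel (same outlet vs. different outlet).
    channel_dist = 0.0 if a["channel"] == b["channel"] else 1.0
    return w_topic * topic_dist + w_channel * channel_dist

# Example: two articles on a similar topic from different outlets.
a = {"topic_vec": [0.9, 0.1], "channel": "outlet_x"}
b = {"topic_vec": [0.8, 0.2], "channel": "outlet_y"}
print(viewpoint_distance(a, b))
```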


  • Compliance of Personalized Radio with Public Mandates.  Stefan Hirschmeier and Vanessa Beule.

    Abstract: So far, public radio broadcasters have not considered personalization when assessing compliance with their public mandate. However, personalization brings along the risk of filter bubbles, which conflicts with the ideas of the public mandate. We shed light on the interconnection between personalization and the public mandate of broadcasters, anchored in an analysis of the Interstate Treaty on Broadcasting and Telemedia. The contribution of this paper is two-fold. First, we propose an approach for selectively avoiding filter bubbles in personalized radio consumption. Second, we develop a framework that helps to assess the compliance of personalized radio offers with public mandates.


  • Analysing Biases in Perception of Truth in News Stories and their Implications for Fact Checking.  Mahmoudreza Babaei, Abhijnan Chakraborty, Juhi Kulshrestha, Elissa M. Redmiles, Meeyoung Cha and Krishna Gummadi.

    Abstract: A flurry of recent research has focused on understanding and mitigating the threat of “fake news” stories spreading virally on social media sites like Facebook and Twitter. In this work, we focus on how users perceive truth in viral news stories. To this end, we conduct online user surveys asking people to rapidly assess the likelihood of news stories being true or false. Our goal is to quantify the extent to which users can implicitly recognize (perceive) the accurate truth level of a news story (obtained from fact-checking sites like Snopes). Our analysis of users’ implicit perception biases (i.e., inaccuracies in estimating the truth level of stories) reveals many interesting trends. For instance, we observe that in the set of stories fact-checked by Snopes, the perception biases are not correlated with the actual truth level of the news stories. This finding implies that there exist as many true stories that users believe to be more false than they actually are as there exist false stories that users believe to be more true than they actually are. We argue that the stories most in need of fact checking are those for which users exhibit the largest perception biases. However, we show that existing fact-checking strategies that rely on users to report stories they suspect to be false would prioritize stories for fact checking based on their actual truth level rather than on perception biases. We propose an alternative strategy that selects stories with large perception biases for fact checking.
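
    As a rough, non-authoritative illustration of the prioritization strategy described in the abstract, the sketch below ranks stories by the absolute gap between the truth level users perceive and the truth level reported by a fact-checking site, rather than by the truth level itself. The field names and the 0–1 truth scale are assumptions, not the paper's data format.

```python
def prioritize_for_fact_checking(stories, top_k=10):
    """Rank stories by perception bias |perceived - actual|, largest first.

    Each story is a dict with (illustrative, assumed fields):
      "perceived_truth": mean truth level estimated by survey respondents (0..1)
      "actual_truth":    truth level taken from a fact-checking site (0..1)
    """
    def perception_bias(story):
        return abs(story["perceived_truth"] - story["actual_truth"])

    return sorted(stories, key=perception_bias, reverse=True)[:top_k]

# Example: a false story widely believed to be true outranks a false story
# that users also perceive as false, even though both have low actual truth.
stories = [
    {"id": "a", "perceived_truth": 0.8, "actual_truth": 0.1},  # large bias
    {"id": "b", "perceived_truth": 0.2, "actual_truth": 0.1},  # small bias
]
print([s["id"] for s in prioritize_for_fact_checking(stories)])  # ['a', 'b']
```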


  • Using image fairness representations in diversity-based re-ranking for recommendations.  Chen Karako and Putra Manggala.

    Abstract: The trade-off between relevance and fairness in personalized recommendations has been explored in recent works, with the goal of minimizing learned discrimination towards certain demographics while still producing relevant results. In this work, we present a fairness-aware variation of the Maximal Marginal Relevance (MMR) re-ranking method which uses representations of demographic groups computed from a curated, labeled image dataset with a pre-trained deep convolutional neural network (CNN). This method is intended to incorporate fairness with respect to these demographic groups. We perform a pilot study of this method on an internal dataset and examine the trade-off between relevance and fairness using different fractions of the curated labeled dataset. We show that our proposed method is robust against the amount of curated labeled data used to compute the representations. This implies that the method can be practically useful even with a limited amount of curated labeled data, and since it extends MMR, it can be used as a post-processing step for recommender systems and search.
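
    A minimal sketch of a fairness-aware MMR re-ranking along the lines described above is given below, assuming each item already comes with a vector that encodes its relation to the demographic groups (for example, similarities of its image embedding to group centroids). How those vectors are actually computed, the cosine redundancy term, and the lambda value are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def fairness_aware_mmr(relevance, fairness_reps, k, lam=0.7):
    """Greedy MMR re-ranking where the diversity term is computed on fairness representations.

    relevance:     per-item relevance scores from the base recommender
    fairness_reps: per-item vectors describing each item's relation to demographic
                   groups (an assumed input, not the paper's exact recipe)
    lam:           trade-off between relevance and fairness-oriented diversity
    """
    relevance = np.asarray(relevance, dtype=float)
    reps = np.asarray(fairness_reps, dtype=float)
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Redundancy = similarity to the most similar already-selected item.
            redundancy = max((cosine(reps[i], reps[j]) for j in selected), default=0.0)
            return lam * relevance[i] - (1.0 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```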


  • Fairness In Reciprocal Recommendations: A Speed‐Dating Study.  Yong Zheng, Tanaya Dave, Neha Mishra and Harshit Kumar. 

    Abstract: Traditional recommender systems suggest items by learning from user preferences. Recently, researchers have proposed considering the opinions of multiple stakeholders: not only the receiver of the recommendations, but also other stakeholders, such as the producers of the items or the system owner, may come into play. Reciprocal recommender systems, for example in dating or job recommendations, are one such case. However, we may have to simulate the utilities for each type of stakeholder, depending on how the utilities are defined. In this paper, we perform an exploratory analysis on a speed-dating data set in which the user expectations are clearly defined. We build a multi-dimensional utility framework by utilizing multi-criteria ratings, and demonstrate that we are able to obtain a successful trade-off between utility optimization and recommendation performance. Moreover, the proposed approach is able to beat existing reciprocal recommendation algorithms in precision, recall and overall utilities. Finally, we derive a promising way to define and optimize utilities that can be generalized to other applications or domains.
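
    The abstract does not give the utility definitions, so the sketch below only illustrates the general shape of such a trade-off: a candidate's score combines the predicted rating with utilities for both sides of the reciprocal recommendation, each derived from multi-criteria ratings. All field names, weights, scales, and functional forms are assumptions, not the paper's definitions.

```python
def utility(expectations, ratings):
    """Toy utility: mean closeness between expected and observed criterion levels,
    assuming both are on the same 1..10 multi-criteria scale (an assumption)."""
    diffs = [abs(expectations[c] - ratings[c]) for c in expectations]
    return 1.0 - sum(diffs) / (len(diffs) * 9.0)

def reciprocal_score(pred_rating, utility_receiver, utility_candidate,
                     alpha=0.5, beta=0.25, gamma=0.25):
    """Hypothetical trade-off between recommendation accuracy and stakeholder utilities.

    pred_rating:       predicted rating of the candidate for the receiver (0..1)
    utility_receiver:  how well the candidate matches the receiver's expectations (0..1)
    utility_candidate: how well the receiver matches the candidate's expectations (0..1)
    """
    return alpha * pred_rating + beta * utility_receiver + gamma * utility_candidate

# Example on a 1..10 multi-criteria scale (e.g., attractiveness, sincerity).
u = utility({"attractive": 8, "sincere": 6}, {"attractive": 7, "sincere": 6})
print(reciprocal_score(pred_rating=0.7, utility_receiver=u, utility_candidate=0.5))
```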


  • Diversity Checker: toward recommendations for improving journalism with respect to diversity.  Jeroen Peperkamp and Bettina Berendt.

    Abstract: The Diversity Checker is a tool that aims to make it easier for journalists to author their texts with diversity in mind. To provide helpful hints in this respect, it is necessary to define how diversity should be quantified so that it can be programmed into the tool. At this early stage in the development of the tool, we present a two-fold contribution. First, we offer an analysis of what we mean by “improving diversity”. Second, we present the first version of the Diversity Checker, along with some analysis of its current performance.
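
    The abstract leaves open how diversity is quantified. As one generic possibility, and not necessarily what the Diversity Checker implements, the diversity of a draft with respect to some attribute of the people quoted or sources cited can be scored with normalized Shannon entropy:

```python
import math
from collections import Counter

def diversity_entropy(labels):
    """Normalized Shannon entropy of a list of categorical labels
    (e.g., genders or affiliations of the people quoted in a draft article).
    Returns 1.0 when the labels are evenly spread, 0.0 when only one category occurs.
    The choice of entropy here is an illustrative assumption."""
    counts = Counter(labels)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(counts))

print(diversity_entropy(["male", "male", "female", "female"]))  # 1.0
print(diversity_entropy(["male", "male", "male", "female"]))    # ~0.81
```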

