Workshop Schedule (June 9, 2019):

09:15 – 09:30 Introduction & Overview
09:30 – 10:30 Paper Session 1: Fairness in Recommendation
  • Personalized Learning with Multi-Stakeholder Optimizations. Yong Zheng, Nastaran Ghane and Milad Sabouri.
  • Localized Fairness in Recommender Systems. Nasim Sonboli and Robin Burke.
10:30 – 11:00 Coffee Break
11:00 – 12:30 Paper Session 2: Privacy and Transparency in Personalization
  • The Need For Identifying Ways To Monetize Personalization and Recommendation. Eelco Herder.
  • Setting the Stage: Towards Principles for Reasonable Image Inferences. Severin Engelmann and Jens Grossklags.
  • On the Compatibility of Privacy and Fairness. Rachel Cummings, Varun Gupta, Dhamma Kimpara and Jamie Morgenstern.
12:30 – 14:00 Lunch Break
14:00 – 15:00 Invited Talk – Peter Brusilovsky, University of Pittsburgh: “Interfaces for User-Controlled and Transparent Recommendations” [See below for Abstract]
15:00 – 15:30 Coffee Break
15:30 – 16:30 Invited Talk – Nava Tintarev, Delft University of Technology: “Toward Measuring Viewpoint Diversity in News Consumption” [See below for Abstract]
16:30 – 17:30 Panel Discussion / Wrap Up


Peter Brusilovsky: "Interfaces for User-Controlled and Transparent Recommendations"


Abstract: As recommendation algorithms become more and more complex, the resulting recommendations become less evident to end users, decreasing their trust in recommender systems. Moreover, the increasing use of large volumes of available data to improve the overall precision of recommendations makes them less sensitive to the different contexts in which users seek recommendations. These two problems have been addressed by a growing volume of research in two new areas: making the recommendation process more transparent to users, and offering users some form of control over the recommendation process.
While these two directions of research are mostly independent, I consider them two sides of the same coin. Efficient user control is not possible without reasonable transparency of the process. In turn, a recommender system with a transparent recommendation process could benefit from some form of user control. In my talk, I will review lessons learned in several projects focused on creating interfaces for user-controllable and transparent recommendations in domains such as news recommendation, conference talk recommendation, and scholar recommendation. I will also present our most recent work on transparent interfaces for student modeling and learning content recommendation in an e-learning context.

Bio: Peter Brusilovsky is a Professor of Information Science and Intelligent Systems at the University of Pittsburgh, where he directs the Personalized Adaptive Web Systems (PAWS) Lab. He has been working in the fields of adaptive educational systems, user modeling, and intelligent user interfaces for more than 30 years. He has published numerous papers and edited several books on adaptive hypermedia, adaptive educational systems, user modeling, the adaptive Web, and social information access. He is a board member of several journals, including User Modeling and User-Adapted Interaction and ACM Transactions on Social Computing. In 2019 he is also serving as a program co-chair of the RecSys 2019 conference.

Nava Tintarev: “Toward Measuring Viewpoint Diversity in News Consumption”

Abstract: The growing volume of digital data stimulates the adoption of recommender systems in different socioeconomic domains, including the e-commerce, music, and news industries. While news recommenders help consumers deal with information overload and increase their engagement and satisfaction, their use also raises a growing number of societal concerns, such as "Matthew effects", "filter bubbles", and an overall lack of transparency. Considerable recommender systems research has been conducted on balancing diversification of content with relevance; however, this work focuses specifically on topical diversity. For readers, diversity of _viewpoint_ on a topic in the news is more relevant. It allows for measures of diversity that are multi-faceted and not necessarily driven by previous consumption habits. This talk introduces preliminary work conducted together with several Dutch news organizations (e.g., Blendle, Persgroep, and FDMediagroep) that aims to find ways to help users explore viewpoint diversity. The talk will describe our first steps toward informing diverse content selection in a way that is meaningful and understandable to both content providers and news readers.

Bio: Nava Tintarev is an Assistant Professor and Technology Fellow at Delft University of Technology. She completed her PhD at the University of Aberdeen in 2010 and was previously an Assistant Professor at Bournemouth University (UK). Her research looks at how to improve transparency in, and decision support for, recommender systems. She is on the management team of Delft Design for Values and an active member of Delft Data Science, where she contributes to bringing together, integrating, and expanding existing practices and expertise for responsible data science. She acts as a senior member of the program committee for the ACM Conference on Intelligent User Interfaces, the ACM Recommender Systems Conference, and the Conference on User Modeling, Adaptation and Personalization. She will be serving as a Program Chair for the Intelligent User Interfaces conference 2020 in Cagliari. This year she is also co-organizing a UMUAI special issue on "Fair, Accountable, and Transparent Recommender Systems".