Accepted Papers

  • Personalized Learning with Multi-Stakeholder Optimizations.  Yong Zheng, Nastaran Ghane and Milad Sabouri.
    Abstract: Recommender systems (RS) have been introduced to education as an effective technology-enhanced learning technique. Traditional RS produce recommendations by considering the preferences of the end users only. Multi-stakeholder recommender systems (MSRS) argue that the utility of other stakeholders must also be considered in order to balance the needs of multiple parties. Take book recommendations, for example: in addition to student preferences, the utility of parents, instructors and even publishers may also be important. In this paper, we propose and exploit utility-based MSRS for personalized learning. In particular, we address the challenge of over- and under-expectation in utility-based MSRS. Our experimental results on an educational dataset demonstrate the effectiveness of our proposed models and solutions.
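    The abstract does not spell out the scoring model, but the utility-based multi-stakeholder idea can be illustrated with a rough sketch: each stakeholder contributes a utility, and utilities falling outside an expected band are penalized to capture over- and under-expectation. All names, weights, utilities and expectation bounds below are hypothetical and are not the authors' actual model.

      # Illustrative sketch only (Python): utility-based multi-stakeholder scoring
      # with a simple penalty for over-/under-expectation.

      def stakeholder_score(utility, expected_low, expected_high):
          """Penalize utilities below (under-expectation) or above (over-expectation) the band."""
          if utility < expected_low:
              return utility - (expected_low - utility)   # under-expectation penalty
          if utility > expected_high:
              return utility - (utility - expected_high)  # over-expectation penalty
          return utility

      def msrs_score(item_utilities, weights, expectations):
          """Weighted sum of penalized stakeholder utilities for one candidate item."""
          return sum(
              w * stakeholder_score(item_utilities[s], *expectations[s])
              for s, w in weights.items()
          )

      # Hypothetical book recommendation with student, parent and instructor utilities.
      book = {"student": 0.9, "parent": 0.4, "instructor": 0.7}
      weights = {"student": 0.5, "parent": 0.2, "instructor": 0.3}
      expectations = {"student": (0.3, 1.0), "parent": (0.5, 1.0), "instructor": (0.4, 1.0)}
      print(msrs_score(book, weights, expectations))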


  • The Need For Identifying Ways To Monetize Personalization and Recommendation.  Eelco Herder.
    Abstract: Research on user modeling and personalization typically only serves the needs of end-users. However, when applied in real-world, commercial contexts, recommendations should also serve the (often monetary) interests of other parties, such as platform providers, sellers and advertisers. In this paper, I provide a brief historical perspective on the research field, contrast this with the commercial context, and investigate the topics currently addressed at the UMAP and RecSys conferences. The paper concludes with a discussion of the need for the research community to take multi-stakeholder (monetary) interests into account in the design and evaluation of adaptive systems. This would allow us to foresee unwanted effects, such as online filter bubbles, and to proactively find strategies to prevent them.


  • Localized Fairness in Recommender Systems. Nasim Sonboli and Robin Burke.
    Abstract: Recent research in fairness in machine learning has identified situations in which biases in input data can cause harmful or unwanted effects. Researchers in the areas of personalization and recommendation have begun to study similar types of bias. What these lines of research share is a fixed representation of the protected groups relative to which bias must be monitored. However, in some real-world application contexts, such groups are not defined a priori, but must be derived from the data itself. Furthermore, as we show, it may be insufficient in such cases to examine global system properties to identify protected groups. Thus, we demonstrate that fairness may be local, and the identification of protected groups only possible through consideration of local conditions.
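    The abstract's core observation, that fairness measured globally can hide disparities which only appear locally, can be illustrated with a small sketch. The exposure measure, the contexts and the data below are hypothetical and are not the authors' experimental setup.

      # Illustrative sketch only (Python): a protected provider group can receive
      # balanced exposure globally while being under-exposed in one local context.

      def exposure_gap(recs, protected):
          """Protected share of the recommendation list minus the non-protected share."""
          if not recs:
              return 0.0
          p = sum(1 for item in recs if item in protected) / len(recs)
          return p - (1 - p)

      protected = {"b1", "b2", "b3", "b4"}
      recs_by_context = {
          "context_A": ["b1", "b2", "b3", "x1"],  # protected items over-exposed here
          "context_B": ["x2", "x3", "x4", "b4"],  # protected items under-exposed here
      }

      all_recs = [r for recs in recs_by_context.values() for r in recs]
      print("global gap:", exposure_gap(all_recs, protected))   # 0.0 -> looks fair globally
      for ctx, recs in recs_by_context.items():
          print(ctx, "gap:", exposure_gap(recs, protected))     # +0.5 / -0.5 locally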


  • Setting the Stage: Towards Principles for Reasonable Image Inferences. Severin Engelmann and Jens Grossklags.
    Abstract: User modeling has become an indispensable feature of a plethora of different digital services such as search engines, social media or e-commerce. Indeed, decision procedures of online algorithmic systems apply various methods including machine learning (ML) to “learn” virtual models of billions of human beings based on large amounts of personal and other data. Recently, there has even been a call to introduce a “Right to Reasonable Inferences” into Europe’s General Data Protection Regulation (GDPR). But what exactly is a reasonable inference? Here, we explore a conceptualization of reasonable inferences in the context of image analytics that refers to the concept of evidence in theoretical reasoning. Given the inherent semantic ambiguity of images, the goal here is to start developing principles for image inferences that are eligible to be called reasonable. Based on an image analytics case study, we demonstrate that measures of accuracy and correctness are independent of the reasonableness of an inference. Finally, we discuss a fundamental trade-off between privacy preservation and “model fit” and touch upon the potential value of hidden quasi-semantics in image inferences.


  • On the Compatibility of Privacy and Fairness.  Rachel Cummings, Varun Gupta, Dhamma Kimpara and Jamie Morgenstern.
    Abstract: In this work, we investigate whether privacy and fairness can be simultaneously achieved by a single classifier in several different models. Some of the earliest work on fairness in algorithm design defined fairness as a guarantee of similar outputs for “similar” input data, a notion with tight technical connections to differential privacy. We study whether tensions exist between differential privacy and statistical notions of fairness, namely Equality of False Positives and Equality of False Negatives (EFP/EFN). We show that even under full distributional access, there are cases where the constraint of differential privacy precludes exact EFP/EFN. We then turn to ask whether one can learn a differentially private classifier which approximately satisfies EFP/EFN, and show the existence of a PAC learner which is private and approximately fair with high probability. We conclude by giving an efficient algorithm for classification that maintains utility and satisfies both privacy and approximate fairness with high probability.
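    For readers unfamiliar with the terms, the sketch below illustrates the two ingredients the abstract combines: a per-group false positive rate (the quantity equalized under EFP) and a Laplace-noised release of that rate, a standard differentially private mechanism. This is only an illustration of the tension, not the paper's learner or algorithm; the data, epsilon and noise calibration are hypothetical.

      # Illustrative sketch only (Python): per-group FPR and a rough DP release of it.
      import math
      import random

      def false_positive_rate(preds, labels):
          """FPR = FP / (FP + TN), computed over the true-negative examples."""
          negatives = [p for p, y in zip(preds, labels) if y == 0]
          return sum(negatives) / len(negatives) if negatives else 0.0

      def laplace(scale):
          """Sample Laplace(0, scale) noise via the inverse CDF."""
          u = random.random() - 0.5
          return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

      def dp_false_positive_rate(preds, labels, epsilon):
          """Add Laplace(1/epsilon) noise to the FP count (sensitivity 1) before taking the rate."""
          negatives = [p for p, y in zip(preds, labels) if y == 0]
          if not negatives:
              return 0.0
          noisy_fp = sum(negatives) + laplace(1.0 / epsilon)
          return min(1.0, max(0.0, noisy_fp / len(negatives)))

      # Hypothetical predictions/labels for two groups: equal FPRs before noise, but the
      # noise required for privacy perturbs each group's rate independently, which is
      # the source of the tension between exact EFP/EFN and differential privacy.
      group_a = ([1, 0, 1, 0, 0], [0, 0, 1, 0, 0])
      group_b = ([0, 0, 0, 1, 1], [0, 0, 0, 1, 0])
      for name, (preds, labels) in {"A": group_a, "B": group_b}.items():
          print(name, false_positive_rate(preds, labels),
                round(dp_false_positive_rate(preds, labels, epsilon=1.0), 3))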

