Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today’s complex, information-rich online environments. Machine learning, recommender systems, and user modeling are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users’ needs and preferences. However, there is growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. It has become apparent that a single-minded focus on user preferences obscures other important and beneficial outcomes such systems must deliver. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare considerations are not captured by the metrics against which data-driven personalization models are typically optimized. Indeed, widely used personalization systems on popular sites such as Facebook, Google News, and YouTube have been heavily criticized for personalizing information delivery too aggressively, at the cost of these other objectives.
Bias and fairness in machine learning have attracted considerable recent research interest. However, more work is needed to extend these results to algorithmic and modeling approaches where personalization is of primary importance. The goal of this workshop is to bring together a growing community of experts from academia and industry to discuss the ethical, social, and legal concerns related to personalization, and specifically to explore a variety of mechanisms and modeling approaches that help mitigate bias and achieve fairness in personalized systems.
Topics of interest include, but are not limited to, the following.
- Bias and discrimination in user modeling, personalization, and recommendation
- Computational techniques and algorithms for fairness-aware personalization
- Definitions, metrics, and criteria for optimizing and evaluating fairness-related aspects of personalized systems
- Data preprocessing and transformation methods to address bias in training data
- User modeling approaches that take fairness and bias into account
- User studies to evaluate the impact of personalization on fairness, balance, diversity, and other social welfare criteria
- Balancing the needs of multiple stakeholders in recommender systems and other personalized systems
- ‘Filter bubble’ or ‘balkanization’ effects of personalization
- Transparent and accurate explanations for recommendations and other personalization outcomes
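To make the kind of fairness metric mentioned above concrete, the sketch below (all names, data, and the choice of discount are hypothetical, not part of this call) measures one simple form of provider-side unfairness: the gap in average ranked exposure between two groups of item providers, using a standard logarithmic position-discount model.

```python
import math
from collections import defaultdict

def group_exposure(recommendations, item_group):
    """Mean per-item exposure for each provider group.

    recommendations: list of ranked item-id lists, one per user.
    item_group: dict mapping item id -> group label (e.g. 'A' or 'B').
    The exposure of an item at rank k is discounted as 1 / log2(k + 2),
    a common position-bias model (an illustrative choice here).
    """
    totals = defaultdict(float)   # summed exposure per group
    counts = defaultdict(int)     # number of items per group
    for item, group in item_group.items():
        counts[group] += 1
    for ranked in recommendations:
        for k, item in enumerate(ranked):
            totals[item_group[item]] += 1.0 / math.log2(k + 2)
    # mean exposure per item in each group
    return {g: totals[g] / counts[g] for g in counts}

# Hypothetical example: four items from two provider groups,
# three users' recommendation lists.
groups = {'i1': 'A', 'i2': 'A', 'i3': 'B', 'i4': 'B'}
recs = [['i1', 'i3'], ['i1', 'i2'], ['i3', 'i1']]
means = group_exposure(recs, groups)
gap = means['A'] - means['B']  # positive: group A is over-exposed
```

A metric like this gap can then serve either as an evaluation criterion or as a constraint during re-ranking; the workshop topics above cover both uses.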
We solicit research papers reporting original results, as well as position papers proposing novel, ground-breaking ideas pertaining to the workshop topics. See the Submission page for more details.
- Submission deadline: April 18, 2018 (23:59 American Samoa Zone – UTC-11)
- Notification of acceptance: May 15, 2018
- Camera-ready due: May 27, 2018 (23:59 American Samoa Zone – UTC-11)