EvalUMAP 2020


Towards comparative evaluation in user modeling,
adaptation and personalization


To be held in conjunction with the 28th Conference on User Modeling,
Adaptation and Personalization, UMAP 2020, June 2020, Genoa, Italy

**Full details on the EvalUMAP 2020 shared challenge/task and on how to participate are available at challenge details and participating**

Register here

Background:

Research in the areas of User Modelling, Adaptation and Personalization faces a number of significant scientific challenges. One of the most significant of these is comparative evaluation. It has always been difficult to rigorously compare different approaches to personalization, as the behavior of the resulting systems is, by its nature, heavily influenced by the users who trial them. A forum for comparative evaluation in this space would be a significant advance, as it would enable shared comparison across research groups. To date, this topic has received relatively little attention.

Taking inspiration from communities such as Information Retrieval and Machine Translation, we are offering the first UMAP Evaluation Track at UMAP 2020. This Evaluation Track will offer shared tasks to support the comparative evaluation of approaches to User Modelling, Adaptation and Personalization.

EvalUMAP 2020 Shared Task

**Full details on the EvalUMAP 2020 shared task & on how teams can take part are available at challenge details and participating**

The use case for the challenge is personalized mobile phone notification generation. Specifically, assuming individuals’ interactions with their mobile phones have been logged, the challenge is to create an approach that generates personalized notifications on individuals’ mobile phones, where personalization consists of deciding which events (emails, alerts, reminders, etc.) to show to the individual and when to show them. Given the number of steps involved in such personalization, this challenge focuses on the first step in the process: user model generation from the logged mobile phone interactions.

For this shared task, a dataset consisting of several individuals’ mobile phone interactions is provided, along with appropriate models, content, metadata, user behaviors, etc., which can be used to comprehensively compare how different approaches and systems perform. In addition, a number of metrics and observations that participants should report are outlined to facilitate comparison.
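To make the setup concrete, the sketch below illustrates one possible way a simple user model could be derived from logged interactions: aggregating per-user engagement rates by event type and hour of day. Note that the log fields, event categories and model structure shown here are hypothetical assumptions for illustration only; the actual dataset format and required user model are specified in the challenge details.

```python
# Illustrative sketch only: the real EvalUMAP dataset format is defined in the
# challenge details; the log fields and model below are hypothetical.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Interaction:
    user_id: str
    event_type: str   # e.g. "email", "alert", "reminder" (hypothetical categories)
    hour: int         # hour of day the notification was delivered (0-23)
    engaged: bool     # whether the user opened / acted on the notification


def build_user_models(log):
    """Aggregate logged interactions into a simple per-user engagement model:
    for each user, the rate at which they engage with each (event type, hour) pair."""
    # user -> (event_type, hour) -> [engaged_count, total_count]
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for it in log:
        stats = counts[it.user_id][(it.event_type, it.hour)]
        stats[0] += int(it.engaged)
        stats[1] += 1
    return {
        user: {key: engaged / total for key, (engaged, total) in per_user.items()}
        for user, per_user in counts.items()
    }


if __name__ == "__main__":
    sample_log = [
        Interaction("u1", "email", 9, True),
        Interaction("u1", "email", 9, False),
        Interaction("u1", "reminder", 20, True),
    ]
    # Expected output: {'u1': {('email', 9): 0.5, ('reminder', 20): 1.0}}
    print(build_user_models(sample_log))
```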

This shared task provides an opportunity for participants to test and tune their systems and complete the task, so that comparative results and associated publications can be prepared for and presented at the EvalUMAP Evaluation Track at UMAP 2020.

Timeline for EvalUMAP 2020:

31 October 2019 – 1 February 2020: Task registration open

31 October 2019 – March 2020: Training data released

28 February 2020: User-model development freeze (no further changes allowed) and submission of developed user-model

1 March 2020: Test data release

15 March 2020: Evaluation lab closes; participants submit their results on the test data

30 April 2020 @ 23:59 GMT: Participants submit their results overview papers via EasyChair

7 May 2020: Notification of acceptance

15 May 2020: Camera-ready paper submission deadline

17 July 2020: EvalUMAP runs at UMAP 2020

Workshop Chairs

Bilal Yousuf, Trinity College Dublin, Ireland
Kieran Fraser, Trinity College Dublin, Ireland
Liadh Kelly, Maynooth University, Ireland

Program Committee

Owen Conlan, Trinity College Dublin, Ireland
Eelco Herder, Radboud University, Netherlands
Stephan Weibelzahl, Private University of Applied Sciences Göttingen, Germany
Kevin Koidl, Trinity College Dublin, Ireland

Past Editions of the Workshop

2019 Workshop
2017 Workshop
2016 Workshop

Call for Proposals for New Shared Tasks

We also call for proposals for new shared challenges which proposers would like to organize during the academic year 2020-21. Selected challenges would then run at an EvalUMAP 2021 event.

The call for shared challenge proposals will open at the end of March 2020.

Details on preparing your shared challenge proposal:

Your shared challenge proposal should be no more than 6 pages long (there is no minimum length). Completed proposals should be submitted via EasyChair at: https://www.easychair.org/conferences/?conf=evalumap2020. The proposal should be for a challenge in the UMAP space and should include all details necessary to run the challenge. Details to include in your proposal are:

  1. Name of the challenge.
  2. Brief description of the challenge and its significance.
  3. Detailed description of the challenge, including the task participants would be expected to perform, the evaluation methodology and setup, how the challenge will run, the dataset that will be used, and the evaluation metrics that will be applied.
  4. Proposers’ (organizing committee) names, contact details and experience with regard to running the challenge.

See the EvalUMAP 2020 challenge description for guidance on the type of details to include when designing and describing your challenge.