EvalUMAP 2017 Workshop


Towards comparative evaluation in user modeling,
adaptation and personalization


To be held in conjunction with the 25th Conference on User Modeling,
Adaptation and Personalization, UMAP 2017, July 2017, Bratislava, Slovakia

EvalUMAP 2017 Agenda

Sunday 9th of July

Room 1.38


9:00 - 9:15

Workshop Introduction
(Welcome, purpose, goals, agenda, introduction of participants)
(15 Minutes)

Owen Conlan

9:15 - 9:55

Keynote Talk
(30 Minutes + 10 Minutes Q&A)

Ian Soboroff
User Modeling in the Cranfield Tradition

9:55 - 10:30

Paper Presentations I (each presentation 15 minutes + 2 minutes Q&A)

Seyyed Hadi Hashemi and Jaap Kamps
Reusability of Test Collections Over a Year

Radek Pelánek
Measuring Predictive Performance of User Models: The Details Matter

10:30 - 11:00

Coffee Break

11:00 - 12:30

Paper Presentations II (each presentation 15 minutes + 2 minutes Q&A)

William Wright
Personalities of web forum users

Kieran Fraser, Bilal Yousuf and Owen Conlan
Synthesis & Evaluation Of A Mobile Notification Dataset

Mirjam Augstein and Thomas Neumayr
Layered Evaluation of a Personalized Interaction Approach

Yasen Kiprov, Pepa Gencheva and Ivan Koychev
Generating Labeled Datasets of Twitter Users

Athanasios Staikopoulos and Owen Conlan
Proposing an Evaluation Task for Identifying Struggling Students in Online Courses

12:30 - 14:00

Lunch

14:00 - 15:30

Discussion Session
(plan and decide upon next workshop activities)






Call for Papers

Call for 2-6 page papers describing:

  1. available infrastructure that could be used to capture data for shared evaluation challenges;
  2. available datasets that could be exploited for shared challenge generation;
  3. challenges and potential solutions around shared challenge generation in the UMAP space.


Extended submission deadline: April 27, 2017

Research in the areas of User Modeling, Adaptation and Personalization faces a number of significant scientific challenges. One of the most significant of these is the issue of comparative evaluation. It has always been difficult to rigorously compare different approaches to personalization, as the behaviour of the resulting systems is, by its nature, heavily influenced by the users involved in trialling them. To date, this topic has received relatively little attention. Developing comparative evaluations in this space would be a major advance, as it would enable shared comparison across research efforts, which has so far been very limited.

Taking inspiration from communities such as Information Retrieval and Machine Translation, the EvalUMAP workshop series seeks to propose and design one or more shared tasks to support the comparative evaluation of approaches to User Modeling, Adaptation and Personalization. This year’s workshop will solicit presentations from key practitioners in the field on innovative datasets that meet specific requirements (e.g. ownership, accessibility, privacy) and that could form the basis for scoping and designing shared task-based challenges and evaluations in the area of user adaptation and personalization for next year. The resulting shared task(s) will be accompanied by appropriate models, content, metadata, user behaviours, etc., and will enable comprehensive comparison of how different approaches and systems perform. In addition, a number of evaluation metrics and methods will be outlined, which participants would be expected to apply in order to facilitate comparison. Finally, the proposed shared task(s) will be disseminated to the community, and the resulting outcomes will be presented at an EvalUMAP forum next year.

In particular, the planned outcomes of the EvalUMAP Workshop 2017 are as follows: (1) a clear understanding of the challenges and requirements related to the design of a shared task-based approach in the User Modeling, Adaptation and Personalization space; (2) the identification of specific issues and requirements around capturing user data and processing datasets in the context of personalization for a shared task; (3) the identification and description of suitable, publicly accessible datasets that overcome the previously identified challenges; and (4) the design of shared task-based evaluations using suitable datasets, which will take place throughout 2017 and early 2018 and be presented at UMAP 2018.


Workshop topics are evaluation-focused and include, but are not limited to:

  • Understanding UMAP evaluation
  • Defining tasks and scenarios for evaluation purposes
  • Identification of potential corpora (datasets) for shared tasks
  • Automated and semi-automated processes for creating appropriate datasets and simulating user behaviours in order to accommodate a shared task
  • Interesting target tasks and explanations of their importance
  • Critiques or comparisons of existing evaluation metrics and methods
  • Combining existing evaluation metrics and methods
  • Reusing or improving previously suggested metrics and methods
  • Reducing the cost of evaluation
  • Proposal of new evaluation metrics and methods
  • Technical challenges associated with design and implementation
  • Anonymization of datasets; privacy, ethics and security issues in the use of datasets

Workshop format:

This will be an interactive workshop structured to encourage group discussion and active collaboration among attendees. The workshop will feature a keynote talk, a lightning-round presentation session for position papers, multiple (parallel) breakout sessions, and a final discussion session to wrap up the event.


Paper Submissions

The workshop invites position papers of 2 to 6 pages (including references) describing approaches, ideas or challenges relating to the topics of the workshop. In particular, three types of papers are solicited: (1) papers describing available infrastructure that could be used to capture data for shared evaluation challenges; (2) papers describing available datasets that could be exploited for shared challenge generation; and (3) papers describing challenges and potential solutions around shared challenge generation in the UMAP space.

Submissions should be in the standard ACM SIGCONF format. LaTeX and Word templates are available at http://www.acm.org/publications/proceedings-template.

Papers should be submitted in PDF format through the EasyChair system (https://easychair.org/conferences/?conf=evalumap2017) no later than 11:59pm Hawaii time on April 27, 2017. Submissions will be reviewed by members of the workshop program committee. Accepted papers will be included in the extended UMAP 2017 Proceedings and will be available via the ACM Digital Library. In addition, the EvalUMAP workshop proceedings will be indexed with CEUR. Authors of selected papers may be invited to contribute to a journal publication describing the outcomes of the workshop.


Important Dates

April 27, 2017: Extended deadline for paper submission (11:59pm Hawaii time)

May 20, 2017: Notification to authors

May 28, 2017: Camera-ready paper due

July 9, 2017: EvalUMAP Workshop at UMAP

Further Information

Further information is available by emailing the workshop organizers at evalumap@adaptcentre.ie.


Workshop Organizers

Owen Conlan, Trinity College Dublin, Ireland

Liadh Kelly, Trinity College Dublin, Ireland

Kevin Koidl, Trinity College Dublin, Ireland

Séamus Lawless, Trinity College Dublin, Ireland

Athanasios Staikopoulos, Trinity College Dublin, Ireland


Programme Committee

Paul De Bra, Eindhoven University of Technology, The Netherlands

Iván Cantador, Universidad Autónoma de Madrid, Spain

David Chin, University of Hawaii, USA

William Wright, University of Hawaii, USA

Eelco Herder, L3S Research Center, Hannover, Germany

Geert-Jan Houben, Delft University of Technology, The Netherlands

Judy Kay, University of Sydney, Australia

Tsvi Kuflik, The University of Haifa, Israel

Alexandros Paramythis, Contexity, Switzerland

Alan Said, University of Skövde, Sweden

Vincent Wade, Trinity College Dublin, Ireland

Stephan Weibelzahl, Private University of Applied Sciences Göttingen, Germany



Confirmed Keynote Speaker: Ian Soboroff, National Institute of Standards and Technology


Talk Title:
User Modeling in the Cranfield Tradition


It might seem surprising, but User Modeling is a critical part of Information Retrieval test collection development. If all you see are query topics and ‘document’ relevance judgments, then the connection to the user might seem pretty tenuous. In fact, a conceptual user model comes first: from it we can imagine the user’s task, their goals, and how to define success. Those tasks and goals underlie the topics and the relevance judgments, and thus represent a form of personalization. This talk will explore the role of User Modeling in the Cranfield paradigm of evaluation, discuss how that has influenced the different test collections that have come out of the TREC program, and offer some thoughts on what evaluation in the UMAP context might look like.

EvalUMAP 2016 Website

evalumap.adaptcentre.ie/2016