Show simple item record

dc.contributor.author: Massimo, D.
dc.contributor.author: Elahi, M.
dc.contributor.author: Ricci, F.
dc.description.abstract: Recommender systems generate recommendations by analysing which items the user consumes or likes. In many scenarios, e.g., when visiting an exhibition or a city, users face a sequence of decisions, and the recommender should therefore suggest, at each decision step, a set of viable recommendations (attractions). In these scenarios the order and the context of the past user choices are a valuable source of data, and the recommender has to exploit this information effectively in order to understand the user preferences and recommend compelling items. To address these scenarios, this paper proposes a novel preference learning model that takes into account the sequential nature of item consumption. The model is based on Inverse Reinforcement Learning, which makes it possible to exploit observations of users' behaviour as they make decisions and take actions, i.e., choose the items to consume. The results of a proof-of-concept experiment show that the proposed model can effectively capture the user preferences and the rationale of the users' decision-making process when consuming items sequentially, and can replicate the observed user behaviours.
dc.title: Learning User Preferences by Observing User-Items Interactions in an IoT Augmented Space
dc.type: Book chapter
dc.publication.title: Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization
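To illustrate the kind of preference learning the abstract describes, the following is a minimal sketch, not the paper's actual model: it assumes a hypothetical one-step choice setting in which each item has a feature vector, the user picks from a candidate set with probability proportional to exp(w·features), and a reward weight vector is recovered by maximum-entropy-style gradient ascent, where the gradient is the chosen item's features minus the expected features under the current policy. All names, feature dimensions, and parameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative toy setup (not from the paper): each candidate item has a
# feature vector phi; the user's hidden preference is a weight vector w_true,
# and at each decision step the user picks an item with softmax(phi @ w_true).
rng = np.random.default_rng(0)
n_features = 4
w_true = np.array([1.5, -1.0, 0.5, 2.0])

def softmax(x):
    z = x - x.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def simulate_choices(w, n_steps=500, n_candidates=5):
    """Simulate observed sequential choices: a fresh candidate set per step."""
    observations = []
    for _ in range(n_steps):
        phi = rng.normal(size=(n_candidates, n_features))
        p = softmax(phi @ w)
        choice = rng.choice(n_candidates, p=p)
        observations.append((phi, choice))
    return observations

def learn_reward(observations, lr=0.1, epochs=300):
    """Recover reward weights by ascending the choice log-likelihood.

    Gradient per observation: features of the chosen item minus the
    expected features under the current softmax policy (feature matching).
    """
    w = np.zeros(n_features)
    for _ in range(epochs):
        grad = np.zeros(n_features)
        for phi, choice in observations:
            p = softmax(phi @ w)
            grad += phi[choice] - p @ phi
        w += lr * grad / len(observations)
    return w

obs = simulate_choices(w_true)
w_hat = learn_reward(obs)
```

With enough observed choices, `w_hat` points in roughly the same direction as `w_true`, i.e., the learned reward reproduces the simulated user's preference ordering; a full sequential treatment would additionally condition the policy on the state induced by past choices.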

Files in this item


There are no files associated with this item.
