Show simple item record

dc.contributor.author: Elahi M
dc.contributor.author: Ricci F
dc.contributor.author: Rubens N
dc.date.accessioned: 2018-05-08T13:21:58Z
dc.date.available: 2018-05-08T13:21:58Z
dc.date.issued: 2013
dc.identifier.issn: 2157-6904
dc.identifier.uri: http://dx.doi.org/10.1145/2542182.2542195
dc.identifier.uri: https://dl.acm.org/citation.cfm?doid=2542182.2542195
dc.identifier.uri: http://hdl.handle.net/10863/4625
dc.description.abstract: The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality data during training, that is, garbage in, garbage out. Active learning aims to remedy this problem by focusing on obtaining better-quality data that more aptly reflects a user's preferences. However, traditional evaluation of active learning strategies has two major flaws, which have significant negative ramifications on accurately evaluating the system's performance (prediction error, precision, and quantity of elicited ratings). (1) Performance has been evaluated for each user independently (ignoring system-wide improvements). (2) Active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). In this article we show that an elicited rating has effects across the system, so a typical user-centric evaluation which ignores any changes of rating prediction of other users also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system centric). We propose a new evaluation methodology and use it to evaluate some novel and state-of-the-art rating elicitation strategies. We found that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the rating elicitation process, and on the evaluation measures (MAE, NDCG, and Precision). In particular, we show that using some common user-centric strategies may actually degrade the overall performance of a system. Finally, we show that the performance of many common active learning strategies changes significantly when evaluated concurrently with the natural acquisition of ratings in recommender systems.
© 2013 ACM 2157-6904/2013/12-ART5 [en_US]
dc.language.iso: en [en_US]
dc.title: Active learning strategies for rating elicitation in collaborative filtering: A system-wide perspective [en_US]
dc.type: Article [en_US]
dc.date.updated: 2017-11-04T09:37:06Z
dc.language.isi: EN-GB
dc.journal.title: ACM Transactions on Intelligent Systems and Technology
dc.description.fulltext: reserved [en_US]


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)
