Show simple item record

dc.contributor.author: Lanzilotti R
dc.contributor.author: Ardito C
dc.contributor.author: Costabile MF
dc.contributor.author: De Angeli A
dc.date.accessioned: 2020-06-30T09:08:12Z
dc.date.available: 2020-06-30T09:08:12Z
dc.date.issued: 2011
dc.identifier.issn: 1071-5819
dc.identifier.uri: http://dx.doi.org/10.1016/j.ijhcs.2010.07.005
dc.identifier.uri: https://www.sciencedirect.com/science/article/pii/S1071581910000972
dc.identifier.uri: https://bia.unibz.it/handle/10863/14443
dc.description.abstract: Evaluating e-learning systems is a complex activity that requires consideration of several criteria addressing quality in use as well as educational quality. Heuristic evaluation is a widespread method for usability evaluation, yet its output is often prone to subjective variability, primarily due to the generality of many heuristics. This paper presents pattern-based (PB) inspection, which aims to reduce this drawback by exploiting a set of evaluation patterns that systematically drive inspectors in their evaluation activities. The application of PB inspection to the evaluation of e-learning systems is reported, together with a study comparing this method to heuristic evaluation and user testing. The study involved 73 novice evaluators and 25 end users, who evaluated an e-learning application using one of the three techniques. The comparison metric was defined along six major dimensions, covering concepts of classical test theory and pragmatic aspects of usability evaluation. The study showed that evaluation patterns, capitalizing on the reuse of expert evaluators' know-how, provide a systematic framework that reduces reliance on individual skills, increases inter-rater reliability and output standardization, permits the discovery of a larger set of different problems, and decreases evaluation cost. Results also indicated that evaluation in general depends strongly on the methodological apparatus as well as on the judgement biases and individual preferences of evaluators, supporting the conceptualisation of interactive quality as a subjective judgement, recently brought forward by the UX research agenda.
dc.language: English
dc.language.iso: en
dc.subject: E-Learning evaluation
dc.subject: Evaluation patterns
dc.subject: Usability evaluation techniques
dc.title: Do patterns help novice evaluators? A comparative study
dc.type: Article
dc.date.updated: 2020-06-30T03:01:01Z
dc.language.isi: EN-GB
dc.journal.title: International Journal of Human-Computer Studies
dc.description.fulltext: none


Files in this item


There are no files associated with this item.
