Abstract
What is a concept? This is a fundamental question in philosophy, cognitive science, psychology, and AI, yet answers rarely converge. Research in experimental and cognitive psychology has extensively pursued the question of the nature of concepts, giving rise to complex representation models. These models, however, often lack a clear or complete formalisation, which makes it difficult to capture them precisely in computational models. Knowledge Representation aims to represent knowledge about the world in a format suitable for reuse in computational systems, with the general goal of advancing Artificial Intelligence. Cognitive models of human conceptualisation are thus of pivotal importance for the field. However, formal work in AI and Knowledge Representation does not always consider these models, and the cognitive adequacy of computational systems is frequently sacrificed in favour of better performance. This thesis develops a computational and logical framework inspired and informed by theories of concepts as they can be found across the disciplines of Cognitive Science and Experimental Psychology. The main hypothesis is that including cognitive models in the equation may improve KR systems by providing a more faithful, as well as more understandable, representation of human knowledge. To this end, this work provides formal accounts of different cognitive models and phenomena related to categorisation and, especially, concept combination. On the one hand, this is done by designing novel ontology combination methodologies that reflect aspects of cognitive models. On the other hand, a new family of weighted Description Logics, called Perceptron Logic, is introduced, inspired by Prototype Theory. This logic is shown to render classification and combination tasks in a cognitively more adequate way.
Perceptron Logic can also provide a bridge between learning and formal modelling, and is therefore shown to contribute to neural-symbolic integration. Owing to this aspect and the logic's arguably intuitive semantics, we expect the new formalism to play an important role in future approaches to explainable AI.