Abstract
Validating and debugging conceptual models is a time-consuming task. Although separate software tools exist for model validation and machine learning, their integration to automatically support the debugging and validation process remains largely unexplored. The synergy between model validation, which finds intended and unintended instances of a conceptual model, and machine learning, which suggests repairs, promises to be fruitful. This paper provides a preliminary description of a framework that offers engineers and domain experts adequate automatic support in the proper design of a conceptual model. By means of a running example, the analysis focuses on two main aspects: i) the process by which formal, tool-supported methods can be effectively used to generate negative and positive examples from an input conceptual model; ii) the key role of a learning system in uncovering error-prone structures and suggesting conceptual modeling repairs.