Abstract
We report on the procedures followed to acquire a multimodal sensory corpus that will serve as the primary source of data retrieval, data analysis, and testing for the mobility assistive robot prototypes developed in the European project MOBOT. Analysis of this corpus across all sensory modalities will lead to the definition of a multimodal interaction model; gesture and audio data analysis is foreseen to be integrated into the platform to facilitate the communication channel between end users and the assistive robot prototypes expected as the project's outcomes. To allow an assessment of the full range of sensory data acquired, we describe the data acquisition scenarios followed to obtain the required multisensory data, as well as the initial post-processing outcomes currently available.