Multimodal perceptual congruence in an artificial shape recognition system.

Although tactile and visual data differ in nature, the brain adeptly connects their internal representations. This link, known as congruence or alignment, enables humans to effortlessly recognise by sight objects they have previously explored only by touch. While artificial unimodal recognition systems have been studied extensively in recent years, equipping them with a multimodal congruence mechanism remains an open problem with many unresolved questions. In this talk, we discuss the behavioural approach of biological psychology to this question and show how that methodology can be applied to artificial systems. We show under which conditions perceptual congruence emerges and how it relates to cross-modality. Finally, we present some initial results from a low-level (neural) approach to the same problem and compare them with previous findings.