Imperial College London

Dr Lorenzo Picinali

Faculty of Engineering, Dyson School of Design Engineering

Senior Lecturer

Contact

Email: l.picinali

Location

Studio 2, 10-12 Prince's Gardens, South Kensington Campus


Publications

41 results found

Levtov Y, Picinali L, D'Cruz M, Simeone L, et al., Audio Engineering Society Convention, 140th Audio Engineering Society Convention

CONFERENCE PAPER

Picinali L, D'Cruz M, Simeone L, 3D-Tune-In: 3D sound, visuals and gamification to facilitate the use of hearing aids, EuroVR Conference 2015

CONFERENCE PAPER

Picinali L, Rona A, Cockrill M, Panday P, Tripathi R, Ayub M, et al., Vibro-acoustic response of tympanic-membrane-like models, 22nd International Conference on Sound and Vibration

CONFERENCE PAPER

Isaac Engel J, Picinali L, 2017, Long-term user adaptation to an audio augmented reality system

Audio Augmented Reality (AAR) consists of extending a real auditory environment with virtual sound sources. This can be achieved using binaural earphones/microphones. The microphones, placed in the outer part of each earphone, record sounds from the user's environment, which are then mixed with virtual binaural audio, and the resulting signal is played back through the earphones. However, previous studies show that, with a system of this type, audio coming from the microphones (or hear-through audio) does not sound natural to the user. The goal of this study is to explore the capabilities of long-term user adaptation to an AAR system built with off-the-shelf components (a pair of binaural microphones/earphones and a smartphone), aiming to achieve perceived realism for the hear-through audio. To compensate for the acoustic effects of ear canal occlusion, the recorded signal is equalised on the smartphone. In-out latency was minimised to avoid distortion caused by the comb filtering effect. To evaluate the users' adaptation to the headset, two case studies were performed: the subjects wore an AAR headset for several days while performing daily tests to check the progress of the adaptation. Both quantitative and qualitative evaluations (i.e., localising real and virtual sound sources, and analysing the perception of pre-recorded auditory scenes) were carried out, finding slight signs of adaptation, especially in the subjective tests. A demo, which also integrates visual Augmented Reality functionalities, will be available to conference visitors.

CONFERENCE PAPER
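
The latency point above is worth unpacking. When the electronically delayed hear-through signal sums at the eardrum with sound leaking directly past the earphone, the pair behaves as a comb filter with spectral nulls at f = (2k+1)/(2*delay). The sketch below (illustrative only, not the authors' code) shows why even a few milliseconds of in-out latency audibly colours the hear-through audio:

    # Spectral nulls caused by summing a signal with a delayed copy of itself.
    import numpy as np

    def comb_null_frequencies(delay_s, f_max=20000.0):
        """Null frequencies f = (2k+1) / (2 * delay) up to f_max, in Hz."""
        k = np.arange(int(f_max * delay_s) + 1)
        nulls = (2 * k + 1) / (2 * delay_s)
        return nulls[nulls <= f_max]

    # A 5 ms in-out latency puts the first null at only 100 Hz; reducing the
    # latency to 0.5 ms moves it up to 1 kHz, where it is far less disruptive.
    print(comb_null_frequencies(0.005)[:3])   # [100. 300. 500.]
    print(comb_null_frequencies(0.0005)[:3])  # [1000. 3000. 5000.]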

Mascetti S, Gerino A, Bernareggi C, Picinali L, et al., 2017, On the evaluation of novel sonification techniques for non-visual shape exploration, ACM Transactions on Accessible Computing, Vol: 9, ISSN: 1936-7228

There are several situations in which a person with visual impairment or blindness needs to extract information from an image. For example, graphical representations are often used in education, in particular, in STEM (science, technology, engineering, and mathematics) subjects. In this contribution, we propose a set of six sonification techniques to support individuals with visual impairment or blindness in recognizing shapes on touchscreen devices. These techniques are compared among themselves and with two other sonification techniques already proposed in the literature. Using Invisible Puzzle, a mobile application which allows one to conduct non-supervised evaluation sessions, we conducted tests with 49 subjects with visual impairment and blindness, and 178 sighted subjects. All subjects involved in the process successfully completed the evaluation session, showing a high level of engagement, demonstrating, therefore, the effectiveness of the evaluation procedure. Results give interesting insights into the differences among the sonification techniques and, most importantly, show that after a short training, subjects are able to successfully identify several different shapes.

JOURNAL ARTICLE
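
The abstract above does not detail the six techniques, so the following is a purely illustrative sketch of one simple mapping in this family (the shape test, tone and parameters are assumptions, not the paper's design): a tone sounds while the finger is over the shape and falls silent elsewhere, letting the user trace the contour by ear:

    # Hypothetical inside/outside sonification of a circle on a touchscreen.
    import numpy as np

    def point_in_circle(x, y, cx=0.5, cy=0.5, r=0.25):
        return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

    def sonify_touch(x, y, sr=44100, dur=0.05, freq=440.0):
        """Return one short audio frame for a touch sample at (x, y)."""
        t = np.arange(int(sr * dur)) / sr
        gain = 1.0 if point_in_circle(x, y) else 0.0
        return gain * np.sin(2 * np.pi * freq * t)

    frame = sonify_touch(0.5, 0.6)  # inside the circle, so the tone sounds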

Picinali L, Wallin A, Levtov Y, Poirier-Quinot D, et al., 2017, Comparative perceptual evaluation between different methods for implementing reverberation in a binaural context

Reverberation has always been considered of primary importance for improving the realism, externalisation and immersiveness of binaurally spatialised sounds. Different techniques exist for implementing reverberation in a binaural context, each with a different level of computational complexity and spatial accuracy. A perceptual study was performed to compare the realism and localisation accuracy achieved using five different binaural reverberation techniques, including multichannel Ambisonic-based, stereo and mono reverberation methods. A custom web-based application was developed implementing the testing procedures and allowing participants to take the test remotely. Initial results with 54 participants show no major difference in perceived realism and spatialisation accuracy between four of the five proposed reverberation methods, suggesting that a high level of complexity in the reverberation process does not always correspond to improved perceptual attributes.

CONFERENCE PAPER
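
To make the complexity gradient concrete, here is a hedged sketch of two ends of the spectrum compared above (the actual renderers and impulse responses used in the study are not specified here, so treat this as an assumption-laden illustration): full binaural reverberation convolves each ear with its own binaural room impulse response (BRIR), whereas a mono method convolves once and feeds the same reverberant signal to both ears, discarding spatial cues:

    # Convolution reverb in two flavours: per-ear BRIRs versus shared mono.
    import numpy as np
    from scipy.signal import fftconvolve

    def binaural_reverb(dry, brir_left, brir_right):
        """Per-ear convolution: preserves interaural cues in the reverb."""
        return np.stack([fftconvolve(dry, brir_left),
                         fftconvolve(dry, brir_right)])

    def mono_reverb(dry, rir):
        """One convolution, duplicated: cheap, but no spatial information."""
        wet = fftconvolve(dry, rir)
        return np.stack([wet, wet])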

Mascetti S, Picinali L, Gerino A, Ahmetovic D, Bernareggi C, et al., 2016, Sonification of guidance data during road crossing for people with visual impairments or blindness, International Journal of Human Computer Studies, Vol: 85, Pages: 16-26, ISSN: 1071-5819

In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution instead addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages. Experimental evaluation shows that there is no guiding mode best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires more effort than decoding the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than two thirds of test subjects prefer one of the two guiding modes based on sonification, for two main reasons: first, with speech messages it is harder to hear the sounds of the environment; second, sonified messages convey information about the "quantity" of the expected movement.

JOURNAL ARTICLE
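
The two sonification modes are not specified in the abstract, so the sketch below is only a plausible example of a data-sonification mapping of this kind (the interval bounds and dead zone are invented): the user's angular misalignment with the crosswalk is encoded as the interval between beeps, with faster beeping for larger errors and silence when aligned:

    # Hypothetical misalignment-to-beep-rate mapping for road-crossing guidance.
    def beep_interval_s(misalignment_deg, min_int=0.1, max_int=1.0, max_deg=90.0):
        """Seconds between beeps, or None when the user is aligned."""
        error = min(abs(misalignment_deg), max_deg) / max_deg
        if error < 0.05:      # small dead zone: aligned, no guidance needed
            return None
        return max_int - (max_int - min_int) * error

    print(beep_interval_s(80))  # large error -> ~0.2 s, rapid beeps
    print(beep_interval_s(10))  # small error -> ~0.9 s, slow beeps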

Mascetti S, Rossetti C, Gerino A, Bernareggi C, Picinali L, Rizzi A, et al., 2016, Towards a natural user interface to support people with visual impairments in detecting colors, Pages: 171-178, ISSN: 0302-9743

A mobile application that detects an item's color is potentially very useful for visually impaired people. However, users could run into difficulties when centering the target item in the mobile device camera field of view. To address this problem, in this contribution we propose a mobile application that detects the color of the item pointed at by the user with one finger. In its current version, the application requires the user to wear a marker on the finger used for pointing. A preliminary evaluation conducted with blind users confirms the usefulness of the application and encourages further development.

CONFERENCE PAPER

Patel H, Cobb S, Hallewell M, D'Cruz M, Eastgate R, Picinali L, Tamascelli S, et al., 2016, User involvement in design and application of virtual reality gamification to facilitate the use of hearing aids, Pages: 77-81

The 3D Tune-In project aims to create an innovative toolkit based on 3D sound, visuals and gamification techniques to help different target audiences understand and use the varied settings of their hearing aid, so as to attain optimum performance in different social contexts. In the early stages of the project, hearing aid (HA) users participated in activities to identify user requirements regarding the difficulties and issues they face in everyday situations due to their hearing loss. The findings from questionnaire and interview studies, and the identification of current personas and scenarios of use, indicate that the project can clearly and distinctly address the requirements of people with hearing loss, as well as improve the general public's understanding of hearing loss. Five Future Scenarios of use have been derived to describe how the technologies and games to be developed by the 3D Tune-In project will address these requirements.

CONFERENCE PAPER

Battey B, Giannoukakis M, Picinali L, 2015, Haptic control of multistate generative music systems, Pages: 98-101

Force-feedback controllers have been considered as a solution to the lack of sonically coupled physical feedback in digital-music interfaces, with researchers focusing on instrument-like models of interaction. However, little research has applied force-feedback interfaces to the control of real-time generative-music systems. This paper proposes that haptic interfaces could enable performers to have a more fully embodied engagement with such systems, increasing expressive control and enabling new compositional and performance potentials. A proof-of-concept project is described, which entailed development of a core software toolkit and implementation of a series of test cases.

CONFERENCE PAPER

Eastgate R, Picinali L, Patel H, D'Cruz M, et al., 2015, 3D Games for Tuning and Learning about Hearing Aids, Hearing Journal, Vol: 69, Pages: 30-32, ISSN: 0745-7472

JOURNAL ARTICLE

Gerino A, Picinali L, Bernareggi C, Alabastro N, Mascetti S, et al., 2015, Towards large scale evaluation of novel sonification techniques for non visual shape exploration, Pages: 13-21

There are several situations in which a person with visual impairment or blindness needs to extract information from an image. Examples include everyday activities, like reading a map, as well as educational activities, like exercises to develop visuospatial skills. In this contribution we propose a set of six sonification techniques to recognize simple shapes on touchscreen devices. The effectiveness of these sonification techniques is evaluated through Invisible Puzzle, a mobile application that makes it possible to conduct non-supervised evaluation sessions. Invisible Puzzle adopts a gamification approach and is a preliminary step in the development of a complete game that will make it possible to conduct a large scale evaluation with hundreds or thousands of blind users. With Invisible Puzzle we conducted 131 tests with sighted subjects and 18 tests with subjects with blindness. All subjects involved in the process successfully completed the evaluation session, with high engagement, hence showing the effectiveness of the evaluation procedure. Results give interesting insights into the differences among the sonification techniques and, most importantly, show that, after a short training, subjects are able to identify many different shapes.

CONFERENCE PAPER

Gerino A, Picinali L, Mascetti S, Bernareggi C, et al., 2015, Eyes-free exploration of shapes with invisible puzzle, Pages: 425-426

Recent contributions have proposed sonification techniques to allow people with visual impairment or blindness to extract information from images on touchscreen devices. In this contribution we introduce Invisible Puzzle Game, an application aimed at performing an instrumented remote evaluation of these sonification techniques. The aim is to reach a wide audience of both sighted and visually impaired users and to engage them, thanks to game elements, in playing over time, so that it is possible to evaluate how the performance of each sonification technique is affected by practice.

CONFERENCE PAPER

Iliya S, Menzies D, Neri F, Cornelius P, Picinali L, et al., 2015, Robust impaired speech segmentation using neural network mixture model

This paper presents a signal processing technique for segmenting short speech utterances into unvoiced and voiced sections and identifying points where the spectrum becomes steady. The segmentation process is part of a system for deriving musculoskeletal articulation data from disordered utterances, in order to provide training feedback for people with speech articulation problems. The approach implements a novel segmentation scheme using an artificial neural network mixture model (ANNMM) to identify and capture the various sections of disordered (impaired) speech signals. The paper also identifies some salient features that distinguish normal speech from impaired speech of the same utterances. This research aims at developing an artificial speech therapist capable of providing reliable text and audiovisual feedback progress reports to the patient.

CONFERENCE PAPER
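
The ANNMM itself is not described in the abstract; as a conventional baseline for the same voiced/unvoiced task (a stand-in, not the authors' method), segmentation is often approximated with short-time energy and zero-crossing rate, since voiced frames tend to show high energy and a low zero-crossing rate:

    # Classic energy + zero-crossing-rate voiced/unvoiced frame classifier.
    import numpy as np

    def voiced_mask(x, sr, frame_ms=25, energy_thresh=0.01, zcr_thresh=0.15):
        """Boolean mask over frames: True where the frame looks voiced."""
        n = int(sr * frame_ms / 1000)
        frames = x[: len(x) // n * n].reshape(-1, n)
        energy = np.mean(frames ** 2, axis=1)
        zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
        return (energy > energy_thresh) & (zcr < zcr_thresh)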

Iliya S, Neri F, Menzies D, Cornelius P, Picinali L, et al., 2015, Differential evolution schemes for speech segmentation: A comparative study

This paper presents a signal processing technique for segmenting short speech utterances into unvoiced and voiced sections and identifying points where the spectrum becomes steady. The segmentation process is part of a system for deriving musculoskeletal articulation data from disordered utterances, in order to provide training feedback. The functioning of the signal processing technique has been optimized by selecting the parameters of the model. The optimization has been carried out by testing and comparing multiple Differential Evolution implementations, including a standard one, a memetic one, and a controlled randomized one. Numerical results have also been compared with a well-known and efficient swarm intelligence algorithm. For the given problem, Differential Evolution schemes display very good performance, as they can quickly reach a high-quality solution. The binomial crossover appears, for the given problem, beneficial with respect to the exponential one, and the controlled randomization appears to be the best choice in this case. The overall optimized system proved to segment the speech utterances well and to detect their uninteresting parts efficiently.

CONFERENCE PAPER
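
For readers unfamiliar with the operator the study singles out, below is a compact sketch of the canonical DE/rand/1 scheme with the binomial crossover found beneficial above (parameter values are illustrative defaults, not the paper's tuned configuration):

    # DE/rand/1/bin: rand/1 mutation followed by binomial (uniform) crossover.
    import numpy as np

    def de_rand_1_bin(f, lo, hi, pop_size=30, F=0.7, CR=0.9, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        dim = len(lo)
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fit = np.array([f(x) for x in pop])
        for _ in range(iters):
            for i in range(pop_size):
                others = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(others, size=3, replace=False)]
                mutant = a + F * (b - c)
                # Binomial crossover: take each gene from the mutant with
                # probability CR; force one gene so the trial differs.
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True
                trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                f_trial = f(trial)
                if f_trial <= fit[i]:
                    pop[i], fit[i] = trial, f_trial
        return pop[np.argmin(fit)], fit.min()

    best, val = de_rand_1_bin(lambda x: float(np.sum(x ** 2)),
                              np.full(5, -5.0), np.full(5, 5.0))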

O'Sullivan L, Picinali L, Gerino A, Cawthorne D, et al., 2015, A prototype audio-tactile map system with an advanced auditory display, International Journal of Mobile Human Computer Interaction, Vol: 7, Pages: 53-75, ISSN: 1942-390X

Tactile surfaces can display information in a variety of applications for all users, but can be of particular benefit to blind and visually impaired individuals. One example is the use of paper-based tactile maps as navigational aids for interior and exterior spaces; visually impaired individuals may use these to practice and learn a route prior to journeying. The addition of an interactive auditory display can enhance such interfaces by providing additional information. This article presents a prototype system which tracks the actions of a user's hands over a tactile surface and responds with sonic feedback. The initial application is an Audio-Tactile Map (ATM); the auditory display provides verbalised information as well as environmental sounds useful for navigation. Two versions of the interface are presented: a desktop version intended as a large-format information point, and a mobile version which uses a tablet computer overlain with tactile paper. Details of these implementations are provided, including observations drawn from the participation of a partially-sighted individual in the design process. A usability test with five visually impaired subjects also gives a favourable assessment of the mobile version.

JOURNAL ARTICLE

Ulanicki B, Picinali L, Janus T, 2015, Measurements and analysis of cavitation in a pressure reducing valve during operation: a case study, Pages: 270-279

This paper proposes a methodology, and presents its practical application, for evaluating whether a pressure reducing valve (PRV) experiences cavitation during its operation in a water distribution system. The approach is based on collecting measurements over a 24-hour period such that high-demand and low-demand times are included. The collected measurements allow evaluation of four indicators related to cavitation, namely the hydraulic cavitation index, the noise generated by the valve, the acoustic cavitation index and the spectra of the noise. Together these four indicators provide sufficient information for diagnosing cavitation with high certainty.

CONFERENCE PAPER
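
The abstract names a hydraulic cavitation index without defining it. One common textbook formulation (an assumption here, which may differ in detail from the paper's) is sigma = (P_downstream - P_vapour) / (P_upstream - P_downstream), with lower values of sigma indicating a higher risk of cavitation:

    # Hydraulic cavitation index for a valve, one common definition.
    def cavitation_index(p_up_kpa, p_down_kpa, p_vapour_kpa=2.3):
        """Pressures in kPa absolute; 2.3 kPa is water vapour pressure at 20 C."""
        return (p_down_kpa - p_vapour_kpa) / (p_up_kpa - p_down_kpa)

    # A large pressure drop across the PRV yields a low index, flagging
    # operating points worth checking against the acoustic indicators.
    print(cavitation_index(600.0, 150.0))  # ~0.33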

Caraffini F, Neri F, Picinali L, 2014, An analysis on separability for Memetic Computing automatic design, Information Sciences, Vol: 265, Pages: 1-22, ISSN: 0020-0255

This paper proposes a computational prototype for automatic design of optimization algorithms. The proposed scheme performs an analysis of the problem that estimates the degree of separability of the optimization problem. The separability is estimated by computing the Pearson correlation indices between pairs of variables. These indices are then combined into a single index that estimates the separability of the entire problem. The separability analysis is then used to design an optimization algorithm that addresses the needs of the problem. This prototype makes use of two operators arranged in a Parallel Memetic Structure. The first operator performs moves along the axes, while the second simultaneously perturbs all the variables to follow the gradient of the fitness landscape. The resulting algorithmic implementation, namely the Separability Prototype for Automatic Memes (SPAM), has been tested on multiple testbeds and various dimensionality levels. The proposed computational prototype proved to be a flexible and intelligent framework, capable of learning from a problem and, thanks to this learning, of outperforming modern meta-heuristics representing the state of the art in optimization.

JOURNAL ARTICLE
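
A minimal sketch of the separability estimate described above (the sampling scheme and the aggregation into a single index are simplified assumptions; SPAM's exact procedure is in the paper): sample the search space, keep the better solutions, compute pairwise Pearson correlations between decision variables, and collapse the off-diagonal magnitudes into one number:

    # Pearson-correlation-based separability estimate for a fitness function.
    import numpy as np

    def separability_index(f, lo, hi, n_samples=500, top_frac=0.2, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        fitness = np.array([f(x) for x in X])
        elite = X[np.argsort(fitness)[: int(top_frac * n_samples)]]
        corr = np.corrcoef(elite, rowvar=False)  # variables are columns
        off_diag = np.abs(corr[~np.eye(len(lo), dtype=bool)])
        return off_diag.mean()  # near 0: separable; near 1: strongly coupled

    # A separable sphere function should score low on this index.
    idx = separability_index(lambda x: float(np.sum(x ** 2)),
                             np.full(4, -5.0), np.full(4, 5.0))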

Menelas BAJ, Picinali L, Bourdot P, Katz BFG, et al., 2014, Non-visual identification, localization, and selection of entities of interest in a 3D environment, Journal on Multimodal User Interfaces, Vol: 8, Pages: 243-256, ISSN: 1783-7677

This paper addresses the use of audio and haptics as a means of reducing the load on the visual channel in interaction tasks within virtual environments. An examination is made of the exploitation of audio and/or haptic interactions for the acquisition of a target of interest in an environment containing multiple and obscured distractors. A first study compares means for identifying and locating a specified target among others employing either the audio, the haptic, or both sensori-motor channels activated simultaneously. Following an analysis of the results and subject comments, an improved multimodal approach is proposed and evaluated in a second study, combining the advantages offered by each sensory channel. Results confirm the efficiency and effectiveness of the proposed multimodal approach.

JOURNAL ARTICLE

O'Sullivan L, Picinali L, Feakes C, Cawthorne D, et al., 2014, Audio tactile maps (ATM) system for the exploration of digital heritage buildings by visually-impaired individuals - First prototype and preliminary evaluation, ISSN: 2221-3767

Navigation within historic spaces requires analysis of a variety of acoustic, proprioceptive and tactile cues; a task that is well-developed in many visually-impaired individuals but for which sighted individuals rely almost entirely on vision. For the visually-impaired, the creation of a cognitive map of a space can be a long process for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what degree this task can be performed off-site using a virtual simulator. We propose a tactile map navigation system with interactive auditory display. The system is based on a paper tactile map upon which the user's hands are tracked. Audio feedback provides: (i) information on user-selected map features, (ii) dynamic navigation information as the hand is moved, (iii) guidance on how to reach the location of one hand (arrival point) from the location of the other hand (departure point) and (iv) additional interactive 3D-audio cues useful for navigation. This paper presents an overview of the initial technical development stage, reporting observations from preliminary evaluations with a blind individual. The system will be beneficial to visually-impaired visitors to heritage sites; we describe one such site which is being used to further assess our prototype.

CONFERENCE PAPER

Picinali L, Afonso A, Denis M, Katz BFG, et al., 2014, Exploration of architectural spaces by blind people using auditory virtual reality for the construction of spatial knowledge, International Journal of Human Computer Studies, Vol: 72, Pages: 393-407, ISSN: 1071-5819

Navigation within a closed environment requires analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. For blind people, the act of creating cognitive maps for spaces, such as home or office buildings, can be a long process, for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what extent this task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial for navigation preparation in new and unknown environments. The main goal of the present research can therefore be summarized as investigating the possibilities of assisting blind individuals in learning a spatial environment configuration through listening to audio events, and through their interactions with these events, within a virtual reality experience. A comparison of two types of learning through auditory exploration has been performed: in situ real displacement and active navigation in a virtual architecture. The virtual navigation rendered only acoustic information. Results for two groups of five participants showed that interactive exploration of virtual acoustic room simulations can provide sufficient information for the construction of coherent spatial mental maps, although some variations were found between the two environments tested in the experiments. Furthermore, the mental representation of the virtually navigated environments preserved topological and metric properties, as was found through actual navigation.

JOURNAL ARTICLE

Caraffini F, Iacca G, Neri F, Picinali L, Mininno E, et al., 2013, A CMA-ES super-fit scheme for the re-sampled inheritance search, Pages: 1123-1130

The super-fit scheme, consisting of injecting an individual with high fitness into the initial population of an algorithm, has been shown to be a simple and effective way to enhance the performance of population-based algorithms. Whether the super-fit individual is based on some prior knowledge of the optimization problem or is derived from an initial pre-processing step, e.g. a local search, this mechanism has been applied successfully in various evolutionary and swarm intelligence algorithms. This paper presents an unconventional application of this super-fit scheme, where the super-fit individual is obtained by means of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and fed to a single-solution local search which iteratively perturbs each variable. Thus, compared to other super-fit schemes, the roles of super-fit individual generator and global optimizer are switched. To prevent premature convergence, the local search employs a re-sampling mechanism which inherits parts of the best individual while randomly sampling the remaining variables. We refer to this local search as Re-sampled Inheritance Search (RIS). Tested on the CEC 2013 optimization benchmark, the proposed algorithm, named CMA-ES-RIS, displays a respectable performance and a good balance between exploration and exploitation, resulting in a versatile and robust optimization tool.

CONFERENCE PAPER
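
The re-sampling step that gives RIS its name is easy to state in code. The sketch below (a simplified reading of the mechanism described above, with an invented inheritance probability) builds a restart point that inherits a random subset of the best solution's variables and re-samples the rest:

    # RIS-style restart: inherit part of the best individual, re-sample the rest.
    import numpy as np

    def ris_restart(best, lo, hi, inherit_prob=0.5, rng=None):
        rng = rng or np.random.default_rng()
        keep = rng.random(best.size) < inherit_prob
        fresh = rng.uniform(lo, hi, size=best.size)
        return np.where(keep, best, fresh)

    rng = np.random.default_rng(0)
    start = ris_restart(np.zeros(10), np.full(10, -5.0), np.full(10, 5.0), rng=rng)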

Caraffini F, Neri F, Cheng J, Zhang G, Picinali L, Iacca G, Mininno E, et al., 2013, Super-fit multicriteria adaptive differential evolution, Pages: 1678-1685

This paper proposes an algorithm to solve the CEC2013 benchmark. The algorithm, namely Super-fit Multicriteria Adaptive Differential Evolution (SMADE), is a Memetic Computing approach based on the hybridization of two algorithmic schemes according to a super-fit memetic logic. More specifically, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), run at the beginning of the optimization process, is used to generate a high-quality solution. This solution is then injected into the population of a modified Differential Evolution, namely Multicriteria Adaptive Differential Evolution (MADE). The injected solution is super-fit in the sense that it exhibits performance well above that of the other population individuals, and it leads the search of the MADE scheme towards the optimum. Unimodal or mildly multi-modal problems, even when non-separable and ill-conditioned, tend to be solved during the early stages of the optimization by the CMA-ES. Highly multi-modal optimization problems are efficiently tackled by SMADE, since the MADE algorithm (like other Differential Evolution schemes) appears to work very well when the search is led by a super-fit individual.

CONFERENCE PAPER

Picinali L, Chrysostomou C, Seker H, 2012, The sound of proteins, Pages: 612-615

Transforming proteins into signals and analyzing their spectra using signal processing techniques (e.g., the Discrete Fourier Transform) has proven to be a suitable method for extracting information about a protein's biological functions. Along with imaging, sound has long been a helpful tool for characterizing and distinguishing objects, particularly in medicine and biology. The aim of the current study is therefore to sonify (read: "render with sound") these signals, and to verify whether sounds generated from proteins with similar characteristics and functions are perceived as similar, and hence whether the information gathered through sound can be considered biologically and medically meaningful, and perceived as such. The approach was applied to distinguishing influenza-related proteins, namely the H1N1 and H3N2 protein sets. The study reveals, for the first time, that sonification allows the protein sets to be clearly separated. This promising approach could be further utilized to open new research fields in biology and medicine.

CONFERENCE PAPER
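
A hedged sketch of the protein-to-signal front end described above (the residue-to-number mapping is an assumption: the EIIP scale commonly used in resonant recognition work, shown here only for a few residues; the sonification stage itself is not reproduced):

    # Map a protein sequence to a numerical signal and take its DFT spectrum.
    import numpy as np

    # Electron-ion interaction potential (EIIP) values, subset for illustration.
    EIIP = {"A": 0.0373, "G": 0.0050, "L": 0.0000, "V": 0.0057,
            "K": 0.0371, "E": 0.0058, "S": 0.0829, "T": 0.0941}

    def protein_spectrum(sequence):
        signal = np.array([EIIP[res] for res in sequence])
        signal = signal - signal.mean()      # remove the DC component
        return np.abs(np.fft.rfft(signal))   # magnitude spectrum

    spectrum = protein_spectrum("AGLVKESTAGLV")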

Picinali L, Feakes C, Mauro D, Katz BFG, et al., 2012, Tone-2 tones discrimination task comparing audio and haptics, Pages: 19-24

To investigate the capability of human beings to differentiate between vibrotactile stimuli with the same fundamental frequency but different spectral content, this study concerns discrimination tasks comparing audio and haptic performance. Using an up-down 1 dB step adaptive procedure, the experimental protocol consists of measuring the discrimination threshold between a pure tone signal and a stimulus composed of two concurrent pure tones, changing the amplitude and frequency of the second tone. The task is performed employing exactly the same experimental apparatus (computer, AD-DA converters, amplifiers and drivers) for both the audio and tactile modalities. The results show that it is indeed possible to discriminate between signals having the same fundamental frequency but different spectral content for both haptic and audio modalities, the latter being notably more sensitive. Furthermore, particular correlations have been found between the frequency of the second tone and the discrimination threshold values, for both audio and tactile modalities.

CONFERENCE PAPER
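
The up-down adaptive procedure named above is a standard staircase; the sketch below (with a simulated listener standing in for real responses) lowers the level of the varying component by 1 dB after each correct answer, raises it by 1 dB after each error, and estimates the threshold from the later reversal points:

    # Simple 1-up/1-down staircase with a 1 dB step, as in the protocol above.
    import numpy as np

    def staircase(respond, start_db=0.0, step_db=1.0, n_reversals=8):
        level, direction, reversals = start_db, -1, []
        while len(reversals) < n_reversals:
            new_dir = -1 if respond(level) else +1
            if new_dir != direction:       # direction change = reversal
                reversals.append(level)
                direction = new_dir
            level += new_dir * step_db
        return float(np.mean(reversals[2:]))  # discard early reversals

    # Toy listener whose true threshold is -10 dB, with some response noise.
    rng = np.random.default_rng(1)
    threshold = staircase(lambda lvl: lvl > -10 + rng.normal(0, 1))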

Picinali L, Feakes C, Mauro DA, Katz BFG, et al., 2012, Spectral discrimination thresholds comparing audio and haptics for complex stimuli, Pages: 131-140, ISSN: 0302-9743

Individuals with normal hearing are generally able to discriminate auditory stimuli that have the same fundamental frequency but different spectral content. This study concerns to what extent it is possible to perform the same differentiation with vibratory tactile stimuli. Three perceptual experiments have been carried out to compare discrimination thresholds, in terms of spectral differences, between auditory and vibratory tactile stimulations. The first test assesses the subject's ability to discriminate between three signals with distinct spectral content. The second test focuses on the measurement of the discrimination threshold between a pure tone signal and a signal composed of two pure tones, varying the amplitude and frequency of the second tone. Finally, in the third test the discrimination threshold is measured between a tone with even harmonic components and a tone with odd ones. The results show that it is indeed possible to discriminate between haptic signals having the same fundamental frequency but different spectral content, although the threshold of sensitivity for detection is markedly less than for audio stimuli.

CONFERENCE PAPER

Picinali L, Ferey N, et al., 2012, Advances in Human-Protein Interaction - Interactive and Immersive Molecular Simulations, Protein-Protein Interactions - Computational and Experimental Tools, Editors: Cai, Hong

BOOK CHAPTER

Picinali L, 2011, Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind, Pages: 1311-1316, ISSN: 2221-3767

Navigation within a closed environment requires analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. For the blind, the creation of cognitive maps for spaces such as home or office buildings can be a long process, for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what extent this task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial for preparing for navigation in new and unknown environments. A comparison of three learning scenarios has been performed: in-situ real displacement, passive playback of recorded navigation (binaural and Ambisonic), and active navigation in a virtual auditory architectural environment. For all conditions, only acoustic cues are employed. This research is the result of a collaboration between researchers in psychology and acoustics on the issue of interior spatial cognition.

CONFERENCE PAPER

Picinali L, Etherington A, Feakes C, Lloyd T, et al., 2011, VR interactive environments for the blind: Preliminary comparative studies, Pages: 113-115, ISSN: 0357-9387

People living with impaired vision rely upon other sensory inputs in order to learn the configuration of a new space. This research project asks two questions: what types of acoustic cues are used to mentally represent a given environment without the visual channel? And is it possible to accurately model these cues computationally [in a Virtual Reality (VR) space] to provide an easy mechanism for someone with visual impairment to learn the configuration of a new environment in advance of being introduced to it? In this poster, three preliminary comparative studies are presented which focus on the ability of blind and sighted individuals to detect walls and obstacles within an environment, relying only on the auditory sense.

CONFERENCE PAPER

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
