Imperial College London

Dr Lorenzo Picinali

Faculty of Engineering, Dyson School of Design Engineering

Senior Lecturer
 
 
 

Contact

 

l.picinali · Website · CV

 
 

Location

 

Studio 2, 10-12 Prince's Gardens, South Kensington Campus


Summary

 

Areas of interest

 

Binaural auralisation - HRTF measurement and synthesis; HRTF selection, individualisation and adaptation; distance perception and simulation; psychophysiology of spatial hearing perception.

Download the latest version of the 3D Tune-In Toolkit for real-time binaural auralisation, and see here the latest developments of the LIMSI Spatialisation Engine (LSE).

3D audio virtual reality applications for the blind

Room acoustics - creation and auralisation of room models; real-time 3D audio rendering

Audio-haptics interaction design and multimodal interfaces

Audiology and hearing aid technology - audio-prosthetic techniques and technologies; subjective and objective evaluation of audio quality and speech intelligibility with hearing aids; auditory spatial perception with hearing aids

Other - data sonification; sound recording; mixing and mastering techniques

Current Research Projects


3D Tune-In - Lorenzo Picinali is the Project Coordinator for this EU-H2020 ICT project on the calibration and evaluation of hearing aid devices using acoustic VR and videogames.

PLUGGY - Pluggable Social Platform for Heritage Awareness and Participation - Lorenzo Picinali is the Imperial College PI on this EU-H2020 project aimed at promoting citizens' active involvement in Europe's rich cultural heritage. Imperial's work will focus on the development of an application for creating sonic narratives from non-audio materials; Imperial will also lead the user evaluation.

HRTF selection and adaptation - in collaboration with Dr. Brian FG Katz (LIMSI-CNRS, France). The cues that human listeners use for sound localisation depend on many physical factors and are therefore highly individualised. The size of the head or the shape of the ears, for example, change the way that sounds are filtered when emanating from a given direction. This is problematic for applications using virtual audio where economic or temporal constraints make the measurement of an individual's direction-dependent filter, or Head-Related Transfer Function (HRTF), infeasible. In these situations, HRTFs must be taken from an existing database measured from other individuals or dummy heads. Fortunately, it has been shown that the brain is able to adapt to modified cues for sound localisation, but this often takes too long to be practical for consumer, scientific or medical applications. Recent studies, however, suggest that this time can be substantially reduced using active perceptual learning methods. A virtual reality application is being developed to investigate the efficacy of gamification techniques to further expedite the process and increase the effectiveness of training.
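
As a minimal sketch of the rendering side of this problem (not the project's actual implementation), the snippet below convolves a mono signal with a left/right Head-Related Impulse Response (HRIR) pair to produce a binaural signal. The HRIRs here are synthetic placeholders encoding only a crude interaural time and level difference, standing in for filters measured on another individual or a dummy head.

```python
# A minimal sketch of non-individualised binaural rendering: a mono source
# is convolved with a left/right HRIR pair. The HRIRs are synthetic
# placeholders (a crude interaural time and level difference), not
# measured filters.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100                                   # sample rate (Hz)
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440 * t)         # 1 s mono test tone

def placeholder_hrir(delay_samples, gain, length=256):
    """Toy HRIR: a delayed, attenuated impulse (real HRIRs are measured)."""
    h = np.zeros(length)
    h[delay_samples] = gain
    return h

# Crude cues for a source to the listener's left: the left ear receives
# the sound earlier and louder than the right ear.
hrir_left = placeholder_hrir(delay_samples=0, gain=1.0)
hrir_right = placeholder_hrir(delay_samples=30, gain=0.5)   # ~0.7 ms ITD

# One convolution per ear, stacked into a stereo (binaural) signal.
binaural = np.stack([fftconvolve(source, hrir_left),
                     fftconvolve(source, hrir_right)], axis=1)
```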

Acoustic Augmented Reality - in collaboration with Isaac Engel (currently a PhD student within the Dyson School of Design Engineering). Augmented Reality (AR) consists of adding an audio-visual digital layer on top of the world, changing the way we perceive it through our senses. The acoustic component of AR plays an essential role in creating an immersive experience for the user; in theory, with the help of 3D audio techniques we can blend real sounds with computer-generated ones so that the latter are indistinguishable from the former. Current work focuses on developing audio systems for Acoustic AR and studying their impact on spatial perception and user experience.
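
The snippet below is a simplified sketch of the blending step only, assuming both the real scene and the virtual source are already available as binaural signals; all signals here are synthetic placeholders, and a real acoustic AR system would additionally need to match reverberation, timbre and latency.

```python
# A simplified sketch of blending in acoustic AR: a binaurally rendered
# virtual source is mixed into a binaural capture of the real scene at
# a chosen level relative to the ambience. All signals are placeholders.
import numpy as np

fs = 44100
n = 2 * fs                                           # 2 s of audio
rng = np.random.default_rng(0)
real_scene = 0.05 * rng.standard_normal((n, 2))      # stand-in for mic passthrough
virtual_source = 0.5 * rng.standard_normal((n, 2))   # stand-in for a rendered source

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(x ** 2))

# Scale the virtual source to sit a chosen level (in dB) above the ambience.
target_offset_db = 6.0
gain = rms(real_scene) / rms(virtual_source) * 10 ** (target_offset_db / 20)
augmented = real_scene + gain * virtual_source       # the AR output signal
```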


Continuous monitoring of rainforest biodiversity via acoustic signal processing - in collaboration with Sarab Sethi, Rob Ewers and Nick Jones (Imperial College London). Sound carries substantial information about local biodiversity, being used for navigation and communication by a wide range of taxa. Acoustic information is now commonly used to assist in point-surveys of many of these species, aiding the identification of bats, grasshoppers, birds and amphibians, and even of individual animals. Furthermore, acoustic data have been used to create metrics of α and β diversity that correspond to changes in species diversity and turnover. Studies published to date, however, share the limitations that they (1) tend to record acoustics for short periods only; and (2) analyse the data at slower than real-time speed. Our goal is to develop and implement continuous, automated monitoring of rainforest biodiversity, and to use those data to investigate the effects of logging on tropical biodiversity.
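
As a toy illustration of how an acoustic index can be derived from a recording, the snippet below computes a generic normalised spectral entropy, not the project's actual monitoring pipeline; synthetic noise stands in for a rainforest soundscape.

```python
# A simplified acoustic index in the spirit of the alpha-diversity
# metrics mentioned above: the entropy of an average power spectrum.
# An even spread of energy across frequencies (many concurrent sound
# sources) yields a higher value than energy concentrated in one band.
import numpy as np

fs = 16000
rng = np.random.default_rng(1)
recording = rng.standard_normal(fs * 60)    # 1 minute stand-in soundscape

# Average power spectrum over short windowed frames.
frame = 1024
frames = recording[: len(recording) // frame * frame].reshape(-1, frame)
spectrum = np.mean(np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)) ** 2,
                   axis=0)

# Normalise to a probability distribution and compute its entropy,
# scaled to [0, 1] by the maximum possible entropy.
p = spectrum / spectrum.sum()
spectral_entropy = -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))
print(f"Normalised spectral entropy: {spectral_entropy:.3f}")
```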

Invisible Puzzle and ZebraX - two interactive audio-based iOS apps for visually impaired and blind individuals: Invisible Puzzle helps users understand shapes through sound, while ZebraX detects pedestrian crossings with the mobile phone's camera and provides audio guidance for aligning with and crossing them. Download the Invisible Puzzle app from the App Store!

VR Interactive Applications for the Blind - in collaboration with Brian FG Katz (Institut Jean Le Rond d'Alembert, Paris) and Tony Stockman (Queen Mary University of London) - see also the video interview

Audio Tactile Maps - see also previous versions of the ATM app.

Loss and Gain - Assisting hearing, expanding listening - in collaboration with Dr. Pete Batchelor (MTI, De Montfort University) and Dr. Ximena Alarcón

The sound of proteins - in collaboration with Dr. Hüseyin Seker and Charalambos Chrysostomou (CCI, De Montfort University)

CV and Complete Publications List


Various links and downloads


 

Click on the following links to listen to binaural recordings and simulations (obviously, listen with a pair of headphones!):
Reverberant Room · Train Station · Where am I?

Click here to download the latest version of the 3D Tune-In TestApp for real-time binaural and loudspeaker spatialisation, hearing aid and hearing loss simulations.

Click here to download the book chapter Spatial Audio Applied to Research with the Blind, written with Brian FG Katz (LIMSI-CNRS).

Video interview on Virtual reality for the blind

Video interview on Sound of proteins

Video interview with Lorenzo in the Sounds of the Invisible World documentary.

Article on Forbes about the ZebraX project.

Organ and Augmented Reality - a sound installation