Imperial College London

Dr Lorenzo Picinali

Faculty of Engineering, Dyson School of Design Engineering

Reader in Audio Experience Design

Location

 

Level 1 staff office, Dyson Building, South Kensington Campus


Publications


112 results found

Caraffini F, Neri F, Picinali L, 2014, An analysis on separability for Memetic Computing automatic design, Information Sciences, Vol: 265, Pages: 1-22, ISSN: 0020-0255

This paper proposes a computational prototype for automatic design of optimization algorithms. The proposed scheme analyses the optimization problem to estimate its degree of separability. The separability is estimated by computing the Pearson correlation indices between pairs of variables; these indices are then combined into a single index that estimates the separability of the entire problem. The separability analysis is thus used to design an optimization algorithm that addresses the needs of the problem. This prototype makes use of two operators arranged in a Parallel Memetic Structure: the first operator performs moves along the axes, while the second simultaneously perturbs all the variables to follow the gradient of the fitness landscape. The resulting algorithmic implementation, namely the Separability Prototype for Automatic Memes (SPAM), has been tested on multiple testbeds and various dimensionality levels. The proposed computational prototype proved to be a flexible and intelligent framework capable of learning from a problem and, thanks to this learning, of outperforming modern meta-heuristics representing the state of the art in optimization.

Journal article
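A minimal sketch of the separability estimate described in the abstract. This is an illustrative reconstruction, not the SPAM implementation: it assumes the per-pair Pearson indices are computed over a set of sampled candidate solutions and aggregated into the single problem-level index by taking their mean absolute value (the paper's exact aggregation may differ).

```python
import numpy as np

def separability_index(samples):
    """Estimate problem separability from sampled candidate solutions.

    samples: array of shape (n_points, n_vars), e.g. a population of
    promising solutions.  Returns the mean absolute off-diagonal Pearson
    correlation: values near 0 suggest a separable problem, values near
    1 a highly non-separable one.
    """
    corr = np.corrcoef(samples, rowvar=False)   # (n_vars, n_vars) Pearson matrix
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]     # drop the 1.0 diagonal entries
    return float(np.mean(np.abs(off_diag)))

rng = np.random.default_rng(0)
print(separability_index(rng.normal(size=(200, 5))))  # near 0 for independent vars
```

The index can then gate which of the two operators (axis-aligned moves vs. simultaneous perturbation of all variables) dominates the memetic structure.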

Picinali L, Afonso A, Denis M, Katz BFG et al., 2014, Exploration of architectural spaces by blind people using auditory virtual reality for the construction of spatial knowledge, International Journal of Human-Computer Studies, Vol: 72, Pages: 393-407, ISSN: 1071-5819

Navigation within a closed environment requires analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. For blind people, the act of creating cognitive maps for spaces, such as home or office buildings, can be a long process, for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what extent this task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial for navigation preparation in new and unknown environments. The main goal of the present research can therefore be summarized as investigating the possibilities of assisting blind individuals in learning a spatial environment configuration by listening to audio events and interacting with these events within a virtual reality experience. A comparison of two types of learning through auditory exploration has been performed: in situ real displacement and active navigation in a virtual architecture. The virtual navigation rendered only acoustic information. Results for two groups of five participants showed that interactive exploration of virtual acoustic room simulations can provide sufficient information for the construction of coherent spatial mental maps, although some variations were found between the two environments tested in the experiments. Furthermore, the mental representation of the virtually navigated environments preserved topological and metric properties, as was found through actual navigation.

Journal article

Picinali L, O’Sullivan L, Cawthorne D, 2014, Audio Tactile Maps (ATM) System for Environmental Exploration by Visually-impaired Individuals, Pages: 149-150

Navigation within open and closed spaces requires analysis of a variety of acoustic, proprioceptive and tactile cues; a task that is well-developed in many visually-impaired individuals but for which sighted individuals rely almost entirely on vision. For the visually-impaired, the creation of a cognitive map of a space can be a long process for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what degree this task can be performed off-site using a virtual simulator. We propose a tactile map navigation system with interactive auditory display. The system is based on a paper tactile map upon which the user’s hands are tracked. Audio feedback provides: (i) information on user-selected map features, (ii) dynamic navigation information as the hand is moved, (iii) guidance on how to reach the location of one hand (arrival point) from the location of the other hand (departure point) and (iv) additional interactive 3D-audio cues useful for navigation. This demo paper presents an overview of the initial technical development stage.

Conference paper

Caraffini F, Iacca G, Neri F, Picinali L, Mininno E et al., 2013, A CMA-ES Super-fit Scheme for the Re-sampled Inheritance Search, IEEE Congress on Evolutionary Computation, Publisher: IEEE, Pages: 1123-1130

Conference paper

Caraffini F, Neri F, Cheng J, Zhang G, Picinali L, Iacca G, Mininno E et al., 2013, Super-fit Multicriteria Adaptive Differential Evolution, IEEE Congress on Evolutionary Computation, Publisher: IEEE, Pages: 1678-1685

Conference paper

Picinali L, Feakes C, Mauro D, Katz BFG et al., 2012, Tone-2 tones discrimination task comparing audio and haptics, Pages: 19-24

To investigate the capabilities of human beings to differentiate between tactile-vibratory stimuli with the same fundamental frequency but different spectral content, this study concerns discrimination tasks comparing audio and haptic performance. Using an up-down 1 dB step adaptive procedure, the experimental protocol consists of measuring the discrimination threshold between a pure tone signal and a stimulus composed of two concurrent pure tones, changing the amplitude and frequency of the second tone. The task is performed employing exactly the same experimental apparatus (computer, AD-DA converters, amplifiers and drivers) for both audio and tactile modalities. The results show that it is indeed possible to discriminate between signals having the same fundamental frequency but different spectral content for both haptic and audio modalities, the latter being notably more sensitive. Furthermore, particular correlations have been found between the frequency of the second tone and the discrimination threshold values, for both audio and tactile modalities. © 2012 IEEE.

Conference paper
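The up-down 1 dB adaptive procedure mentioned above is a standard psychophysical staircase. A minimal sketch, assuming a 1-up/1-down rule with the threshold estimated as the mean level at the reversal points (the `respond` callback and the reversal-averaging rule are illustrative assumptions, not the paper's exact protocol):

```python
def staircase(respond, start_db=20.0, step_db=1.0, max_reversals=8):
    """Minimal 1-up/1-down adaptive staircase with a fixed dB step.

    respond(level_db) -> True if the listener told the stimuli apart at
    this level.  The level goes down after a correct response and up
    after an incorrect one; the threshold estimate is the mean level at
    the reversal points.
    """
    level, reversals, last_dir = start_db, [], None
    while len(reversals) < max_reversals:
        direction = -1 if respond(level) else +1   # down on correct, up on incorrect
        if last_dir is not None and direction != last_dir:
            reversals.append(level)                # the track changed direction here
        last_dir = direction
        level += direction * step_db
    return sum(reversals) / len(reversals)

# Deterministic simulated listener whose true threshold is 7 dB
print(staircase(lambda lvl: lvl >= 7.0))   # → 6.5
```

With a real participant, `respond` would present the tone vs. two-tone pair at the given level difference and record the answer; the same driver code can serve both the audio and the tactile modality.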

Picinali L, Chrysostomou C, Seker H, 2012, The sound of proteins, Pages: 612-615

Transforming proteins into signals, and analyzing their spectra using signal processing techniques (e.g., the Discrete Fourier Transform), has proven to be a suitable method for extracting information about a protein's biological functions. Along with imaging, sound has long been a helpful tool for characterizing and distinguishing objects, particularly in medicine and biology. The aim of the current study is therefore to sonify (read "render with sound") these signals, and to verify whether the sounds produced with this operation are perceived as similar when generated from proteins with similar characteristics and functions, and therefore whether the information gathered through sound can be considered biologically and medically meaningful, and perceived as such. The approach was applied to distinguishing influenza-related proteins, namely the H1N1 and H3N2 protein sets. The study reveals, for the first time, that sonification of the proteins makes it possible to clearly separate the protein sets. This promising approach could be further utilized to open new research fields in biology and medicine. © 2012 IEEE.

Conference paper
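The protein-to-signal step can be sketched as follows. The per-residue `ENCODING` values here are placeholders (the abstract says proteins are transformed into signals but does not state the numeric scale used), and only a subset of the 20 amino acids is included:

```python
import numpy as np

# Hypothetical per-residue numeric values; the actual encoding scale
# used in the paper is not given in the abstract.
ENCODING = {"A": 0.37, "R": 0.96, "N": 0.00, "D": 1.26, "H": 0.36,
            "G": 0.01, "K": 0.37, "L": 0.00, "S": 0.83, "T": 0.94}

def protein_spectrum(sequence):
    """Map an amino-acid sequence to a numeric signal and return the
    magnitude of its one-sided DFT (the representation one would sonify)."""
    signal = np.array([ENCODING[aa] for aa in sequence], dtype=float)
    signal -= signal.mean()             # remove the DC component
    return np.abs(np.fft.rfft(signal))  # one-sided magnitude spectrum

spec = protein_spectrum("ARNDHGKLST" * 4)
print(spec.shape)   # (21,) bins for a length-40 sequence
```

Sonification would then map these spectral magnitudes to audible parameters (e.g. partial amplitudes), so that proteins with similar spectra produce similar sounds.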

Picinali L, Feakes C, Mauro DA, Katz BFG et al., 2012, Spectral discrimination thresholds comparing audio and haptics for complex stimuli, Pages: 131-140, ISSN: 0302-9743

Individuals with normal hearing are generally able to discriminate auditory stimuli that have the same fundamental frequency but different spectral content. This study concerns to what extent it is possible to perform the same differentiation considering vibratory tactile stimuli. Three perceptual experiments have been carried out in an attempt to compare discrimination thresholds in terms of spectral differences between auditory and vibratory tactile stimulations. The first test consists of assessing the subject’s ability in discriminating between three signals with distinct spectral content. The second test focuses on the measurement of the discrimination threshold between a pure tone signal and a signal composed of two pure tones, varying the amplitude and frequency of the second tone. Finally, in the third test the discrimination threshold is measured between a tone with even harmonic components and a tone with odd ones. The results show that it is indeed possible to discriminate between haptic signals having the same fundamental frequency but different spectral content, although the sensitivity of detection is markedly lower than for audio stimuli.

Conference paper

Picinali L, Ferey N, et al., 2012, Advances in Human-Protein Interaction - Interactive and Immersive Molecular Simulations, Protein-Protein Interactions - Computational and Experimental Tools, Editors: Cai, Hong

Book chapter

Picinali L, 2011, Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind, Pages: 1311-1316, ISSN: 2221-3767

Navigation within a closed environment requires analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. For the blind, the creation of cognitive maps for spaces such as home or office buildings can be a long process, for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what extent this task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial for preparing for navigation in new and unknown environments. A comparison of three learning scenarios has been performed: in-situ real displacement, passive playback of recorded navigation (binaural and Ambisonic), and active navigation in a virtual auditory environment. For all conditions, only acoustic cues were employed. This research is the result of a collaboration between researchers in psychology and acoustics on the issue of interior spatial cognition.

Conference paper

Picinali L, Katz BFG, 2011, Spatial Audio Applied to Research with the Blind, Advances in Sound Localization, Editors: Strumillo, ISBN: 9789533072241

Book chapter

Picinali L, Etherington A, Feakes C, Lloyd T et al., 2011, VR Interactive Environments for the Blind: Preliminary Comparative Studies, Joint Virtual Reality Conference (JVRC 2011) of euroVR and EGVE, Publisher: TECHNICAL RESEARCH CENTRE FINLAND, Pages: 113-115, ISSN: 0357-9387

Conference paper

Picinali L, Menelas B, Katz BFG, Bourdot P et al., 2010, Evaluation of a haptic/audio system for 3D targeting tasks, Pages: 1710-1720

While common user interface designs tend to focus on visual feedback, other sensory channels may be used in order to reduce the cognitive load on the visual one. In this paper, non-visual environments are presented in order to investigate how users exploit information delivered through haptic and audio channels. A first experiment is designed to explore the effectiveness of a haptic-audio system evaluated in a single-target localization task: a virtual magnet metaphor is exploited for the haptic rendering, while a parameter-mapping sonification of the distance to the source, combined with 3D audio spatialisation, is used for the audio rendering. An evaluation is carried out in terms of the effectiveness of separate haptic and auditory feedback versus the combined multimodal feedback.

Conference paper
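A parameter-mapping sonification of distance, of the kind described above, can be illustrated with a hypothetical distance-to-pitch mapping (the actual mapping curve used in the study is not specified in the abstract):

```python
import math

def distance_to_pitch(distance, d_max=2.0, f_near=880.0, f_far=220.0):
    """Map target distance to beep frequency: closer means higher pitch.

    A hypothetical parameter mapping: the frequency interpolates
    logarithmically between f_far at d_max and f_near at distance 0,
    so equal ratio steps sound perceptually evenly spaced.
    """
    t = min(max(distance / d_max, 0.0), 1.0)     # normalised distance in [0, 1]
    return f_near * math.exp(t * math.log(f_far / f_near))

print(round(distance_to_pitch(0.0)))   # 880 at the target
print(round(distance_to_pitch(2.0)))   # 220 at the far edge
```

In the study this kind of cue was combined with 3D audio spatialisation, so the listener hears both the direction of the source and, through the mapped parameter, the remaining distance.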

Ménélas B, Picinali L, Katz BFG, Bourdot P et al., 2010, Audio haptic feedbacks for an acquisition task in a multi-target context, Pages: 51-54

This paper presents the use of audio and haptic feedback to reduce the load of the visual channel in interaction tasks within virtual environments. An examination is made regarding the exploitation of audio and/or haptic cues for the acquisition of a desired target in an environment containing multiple and obscured distractors. This study compares different ways of identifying and locating a specified target among others by means of audio, haptic, or both feedbacks rendered simultaneously. The analysis of results and subjective user comments indicates that active haptic and combined audio/haptic conditions offer better results than the audio-only condition, and that the association of haptic and audio feedback presents real potential for the completion of the task. © 2010 IEEE.

Conference paper

Picinali L, Prosser S, 2010, Monolateral and bilateral fitting with different hearing aids directional configurations, Pages: 726-734

Bilateral fitting of hearing aids in hearing-impaired subjects raises some problems concerning the interaction of binaural perceptual properties with the directional characteristics of the device. This experiment aims to establish whether, and to what extent, in a sample of 20 normally hearing subjects, binaural changes in the speech-to-noise level ratio (s/n), caused by different symmetrical and asymmetrical microphone configurations and different positions of the speech signal (frontal or lateral), could alter speech recognition performance in noise. The Speech Reception Threshold (SRT) in noise (simulated through an Ambisonic virtual sound field) was measured monolaterally and bilaterally in order to properly investigate the role of binaural interaction in the perception of reproduced signals.

Conference paper

Picinali L, 2009, 3D sound simulation over headphones, Handbook of Research on Computational Arts and Creative Informatics, Pages: 113-131, ISBN: 9781605663524

What is the real potential of computer science when applied to music? It is possible to synthesize a "real" guitar using physical modelling software, yet it is also possible to virtually create a guitar with 40 strings, each 100 metres long. The potential can thus be seen both in the simulation of that which already exists in nature, and in the creation of that which cannot exist in nature. After a brief introduction to spatial hearing and the binaural spatialization technique, passing from principles of psychoacoustics to digital signal processing, the reader is taken on a voyage through multi-dimensional auditory worlds: first simulating what already exists in nature, starting from zero and arriving at three "soundscape dimensions", then trying to advance the idea of a fourth "auditory dimension" by creating a four-dimensional soundscape synthetically. © 2009, IGI Global.

Book chapter

Vezien JM, Menelas B, Nelson J, Picinali L, Bourdot P, Ammi M, Katz BFG, Burkhardt JM, Pastur L, Lusseyran F et al., 2009, Multisensory VR exploration for computer fluid dynamics in the CoRSAIRe project, Virtual Reality, Vol: 13, Pages: 257-271, ISSN: 1359-4338

Journal article

Ferey N, Nelson J, Martin C, Picinali L, Bouyer G, Tek A, Bourdot P, Burkhardt JM, Katz BFG, Ammi M, Etchebest C, Autin L et al., 2009, Multisensory VR interaction for protein-docking in the CoRSAIRe project, Virtual Reality, Vol: 13, Pages: 273-293, ISSN: 1359-4338

Journal article

Prosser S, Pulga M, Mancuso A, Picinali Let al., 2009, Speech perception with hearing aids: Effects of noise reduction and directional microphone systems on amplified signals, Audiological Medicine, Vol: 7, Pages: 106-111, ISSN: 1651-386X

Our objective was to measure the variations in speech reception threshold (SRT) in noise induced by hearing aids with or without noise reduction (NR) and directional microphone (DM) systems. Data were collected from 10 normal-hearing volunteers wearing bilateral hearing aids and tested in a sound field of speech and noise. SRT was measured as a function of: 1) speech source azimuth (0°, 90°, 180°); 2) background noise (monophonic vs. quadraphonic); 3) amplification (unaided vs. linearly aided); 4) amplification mode (linear, NR, DM, NR + DM). Compared with the linear hearing aid setting, NR does not improve the SRT in monophonic noise, while it improves the SRT by 2-3 dB in quadraphonic noise with frontal and lateral speech. DM in monophonic noise improves frontal SRT (1 dB) and worsens lateral and posterior SRT, while in quadraphonic noise frontal SRT is further advantaged and the negative DM effect disappears. With both devices activated, a stronger positive effect is evident for frontal SRT in both noise fields (2-4 dB) and for lateral SRT in quadraphonic noise. These results confirm that NR and DM can facilitate SRT in adverse noise conditions for normal-hearing persons, and are useful as a reference for hearing-impaired persons. © 2009 Informa UK Ltd.

Journal article

d'Alessandro C, Noisternig M, Le Beux S, Picinali L, Katz BFG, Jacquemin C, Ajaj R, Planes B, Sturmel N, Delprat N et al., 2009, The ORA project: Audio-visual live electronics and the pipe organ, Pages: 477-480

This paper presents musical and technological aspects of real-time digital audio processing and visual rendering applied to a grand nineteenth-century pipe organ. The organ is "augmented" in both its musical range and its visual dimension, thus increasing its potential for expression. The project was presented to a public audience in the form of concerts. First, a brief project description is given, followed by in-depth discussions of the signal processing strategies and general musical considerations. Digital audio effects allow new electronic registers to be added to the organ stops. The "direct" sound is captured inside the organ case, close to the pipes, in order to provide "dry" audio signals for further processing. The room acoustics strongly affect the pipe organ sound perceived by the listener; hence, to combine the processed sound with the organ sound, both room simulation and spatial audio rendering are applied. Consequently, the transformed sound is played back via a multitude of loudspeakers surrounding the audience. Finally, musical aspects are discussed, comprising reflections on virtuosity and technique in musical performance and on how the new possibilities could affect compositional practice and the use of the organ in contemporary music. © July 2009 - All copyright remains with the individual authors.

Conference paper

Picinali L, Prosser S, Mancuso A, Vercellesi G et al., 2008, Speech intelligibility in virtual environments simulating an asymmetric directional microphone configuration, Pages: 2245-2249, ISSN: 2226-5147

In hearing aid applications, the benefit of directional processing and bilateral listening in terms of speech intelligibility from frontal sound sources has been well documented in recent and past studies. Nevertheless, few real-life situations present a speaker located exactly in a frontal position, and this seems to constitute a limitation for the directional microphones mounted on hearing aids. Although several attempts have been made to optimize the directional pattern of the hearing aid through self-adapting or manually controlled settings, practical results tend to remain quite unsatisfactory. The purpose of this study was to explore the advantage expected from a bilateral hearing aid with an asymmetric directional microphone configuration: responses in terms of speech intelligibility in noise were evaluated in normally hearing subjects for frontal and lateral sound sources. Through a 3D Ambisonic virtual environment, the presence of two microphones (the two hearing aids) was simulated in a noisy environment with a speech sound source. Listeners were presented with the signals synthesized from the two simulated microphones, calibrated with symmetrical and asymmetrical directional patterns, and played through a pair of headphones. Speech intelligibility was measured for all the directional microphone configurations and for reference speech sources located in frontal and lateral positions.

Conference paper
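The symmetric and asymmetric microphone configurations above rest on first-order directional patterns. A sketch of the standard first-order directivity formula, which gives the gain a simulated hearing-aid microphone would apply to a source at a given azimuth (the study's Ambisonic sound-field rendering is considerably more involved than this):

```python
import math

def directional_gain(theta_deg, alpha=0.5):
    """First-order microphone directivity: g(theta) = alpha + (1 - alpha) * cos(theta).

    alpha=1 gives an omnidirectional pattern, alpha=0.5 a cardioid.
    An asymmetric fitting can be simulated by giving the two ears
    different alpha values.
    """
    theta = math.radians(theta_deg)
    return alpha + (1 - alpha) * math.cos(theta)

print(directional_gain(0))     # 1.0: full sensitivity to frontal sources
print(directional_gain(180))   # 0.0: cardioid null behind the listener
```

Attenuating each simulated source by this gain before binaural playback reproduces the intelligibility trade-off the study measures: frontal speech is preserved while lateral and rear sources are suppressed according to the chosen pattern.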

Young JF, Picinali L, Moraitis D, 2007, A practice-based approach to using acoustics and technology in musicianship training, Pages: 61-64

Digital audio tools can be used to facilitate many aspects of traditional note-based music making, but they also present a challenge: their potential to open up new opportunities for the shaping and deconstruction of sound in ways that are difficult to assimilate with traditional Western notation-based models of musical materials. Developing an understanding of the musical use of these new materials may require an expanded view of the nature of musicianship. This paper presents reflections on an attempt to address this by teaching musicianship via principles of acoustics and psychoacoustics in the context of a music technology undergraduate degree.

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
