D'Cruz M, Patel H, Hallewell M, et al., 2017, Novel 3D games for people with and without hearing loss, 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017, Pages: 175-176
© 2017 IEEE. Over 90 million people in Europe currently suffer from hearing loss, and with an ageing population this number is expected to rise significantly. Digital hearing aids (HAs) offer real opportunities to enhance hearing capability in different acoustic contexts; however, understanding their functionalities and calibration can seem overly complex. The 3D Tune-In project has developed a 3D toolkit, including a sound spatialisation algorithm and hearing/hearing loss simulators, as the basis of five novel digital games addressing the challenges of hearing loss and hearing education for children and older adults. Early evaluations have demonstrated the opportunities for hearing-impaired groups, as well as for the digital games community.
Isaac Engel J, Picinali L, 2017, Long-term user adaptation to an audio augmented reality system
Audio Augmented Reality (AAR) consists of extending a real auditory environment with virtual sound sources. This can be achieved using binaural earphones/microphones: the microphones, placed in the outer part of each earphone, record sounds from the user's environment, which are then mixed with virtual binaural audio, and the resulting signal is played back through the earphones. However, previous studies show that, with a system of this type, audio coming from the microphones (or hear-through audio) does not sound natural to the user. The goal of this study is to explore long-term user adaptation to an AAR system built with off-the-shelf components (a pair of binaural microphones/earphones and a smartphone), aiming to achieve perceived realism for the hear-through audio. To compensate for the acoustic effects of ear canal occlusion, the recorded signal is equalised on the smartphone. In-out latency was minimised to avoid the distortion caused by the comb filtering effect. To evaluate the users' adaptation to the headset, two case studies were performed: the subjects wore an AAR headset for several days while performing daily tests to track the progress of their adaptation. Both quantitative and qualitative evaluations (i.e., localising real and virtual sound sources and analysing the perception of pre-recorded auditory scenes) were carried out, finding slight signs of adaptation, especially in the subjective tests. A demo will be available for conference visitors, also including the integration of visual Augmented Reality functionalities.
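The comb-filtering issue mentioned in this abstract arises whenever the hear-through signal is mixed with a slightly delayed copy of itself (e.g., direct sound leaking past the earphone). A minimal sketch, assuming illustrative sample-rate and delay values not taken from the paper, shows why in-out latency must be minimised:

```python
import numpy as np

# Mixing a signal with its delayed copy, y[n] = x[n] + x[n - d], yields
# the magnitude response |1 + e^{-j w d}|, with deep notches wherever
# w * d is an odd multiple of pi -- the characteristic "comb".
fs = 44100   # sample rate in Hz (illustrative)
delay = 20   # in-out latency in samples, ~0.45 ms (illustrative)

freqs = np.linspace(0, fs / 2, 2048)
w = 2 * np.pi * freqs / fs
magnitude = np.abs(1 + np.exp(-1j * w * delay))

# The first notch lies at fs / (2 * delay) Hz; larger latency pushes
# the notches down into the audible range, making them more noticeable.
first_notch = fs / (2 * delay)
notch_idx = np.argmin(np.abs(freqs - first_notch))
print(magnitude.max())        # near 2 (constructive interference)
print(magnitude[notch_idx])   # near 0 (destructive interference)
```

Halving the delay doubles the spacing of the notches, which is why the authors report minimising latency rather than trying to equalise the comb away.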
Mascetti S, Gerino A, Bernareggi C, et al., 2017, On the evaluation of novel sonification techniques for non-visual shape exploration, ACM Transactions on Accessible Computing, Vol: 9, ISSN: 1936-7228
© 2017 ACM. There are several situations in which a person with visual impairment or blindness needs to extract information from an image. For example, graphical representations are often used in education, in particular, in STEM (science, technology, engineering, and mathematics) subjects. In this contribution, we propose a set of six sonification techniques to support individuals with visual impairment or blindness in recognizing shapes on touchscreen devices. These techniques are compared among themselves and with two other sonification techniques already proposed in the literature. Using Invisible Puzzle, a mobile application which allows one to conduct non-supervised evaluation sessions, we conducted tests with 49 subjects with visual impairment and blindness, and 178 sighted subjects. All subjects involved in the process successfully completed the evaluation session, showing a high level of engagement, demonstrating, therefore, the effectiveness of the evaluation procedure. Results give interesting insights into the differences among the sonification techniques and, most importantly, show that after a short training, subjects are able to successfully identify several different shapes.
Picinali L, Wallin A, Levtov Y, et al., 2017, Comparative perceptual evaluation between different methods for implementing Reverberation in a binaural context
Reverberation has always been considered of primary importance for improving the realism, externalisation and immersiveness of binaurally spatialised sounds. Different techniques exist for implementing reverberation in a binaural context, each with a different level of computational complexity and spatial accuracy. A perceptual study has been performed to compare the realism and localisation accuracy achieved using five different binaural reverberation techniques, including multichannel Ambisonic-based, stereo and mono reverberation methods. A custom web-based application has been developed to implement the testing procedures and allow participants to take the test remotely. Initial results with 54 participants show no major difference in perceived realism and spatialisation accuracy between four of the five proposed reverberation methods, suggesting that a high level of complexity in the reverberation process does not always correspond to improved perceptual attributes.
Mascetti S, Picinali L, Gerino A, et al., 2016, Sonification of guidance data during road crossing for people with visual impairments or blindness, INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, Vol: 85, Pages: 16-26, ISSN: 1071-5819
Mascetti S, Rossetti C, Gerino A, et al., 2016, Towards a Natural User Interface to Support People with Visual Impairments in Detecting Colors, 15th International Conference on Computers Helping People with Special Needs (ICCHP), Publisher: SPRINGER INT PUBLISHING AG, Pages: 171-178, ISSN: 0302-9743
Patel H, Cobb S, Hallewell M, et al., 2016, User involvement in design and application of virtual reality gamification to facilitate the use of hearing aids, Pages: 77-81
© 2016 IEEE. The 3D Tune-In project aims to create an innovative toolkit based on 3D sound, visuals and gamification techniques to help different target audiences understand and use the varied settings of their hearing aid, so as to attain optimum performance in different social contexts. In the early stages of the project, hearing aid (HA) users participated in activities to identify user requirements regarding the difficulties and issues they face in everyday situations due to their hearing loss. The findings from questionnaire and interview studies, together with the identification of current personas and scenarios of use, indicate that the project can clearly and distinctly address the requirements of people with hearing loss, as well as improve the general public's understanding of hearing loss. Five Future Scenarios of use have been derived to describe how the technologies and games to be developed by the 3D Tune-In project will address these requirements.
Battey B, Giannoukakis M, Picinali L, 2015, Haptic control of multistate generative music systems, Pages: 98-101
© 2015 Battey et al. Force-feedback controllers have been considered as a solution to the lack of sonically coupled physical feedback in digital-music interfaces, with researchers focusing on instrument-like models of interaction. However, little research has applied force-feedback interfaces to the control of real-time generative-music systems. This paper proposes that haptic interfaces could enable performers to have a more fully embodied engagement with such systems, increasing expressive control and enabling new compositional and performance potentials. A proof-of-concept project is described, which entailed the development of a core software toolkit and the implementation of a series of test cases.
Eastgate R, Picinali L, Patel H, et al., 2015, 3D Games for Tuning and Learning about Hearing Aids, Hearing Journal, Vol: 69, Pages: 30-32, ISSN: 0745-7472
Gerino A, Picinali L, Bernareggi C, et al., 2015, Towards Large Scale Evaluation of Novel Sonification Techniques for Non Visual Shape Exploration, 17th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2015), Publisher: ASSOC COMPUTING MACHINERY, Pages: 13-21
Gerino A, Picinali L, Bernareggi C, et al., 2015, Eyes-free Exploration of Shapes with Invisible Puzzle, 17th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2015), Publisher: ASSOC COMPUTING MACHINERY, Pages: 425-426
Iliya S, Menzies D, Neri F, et al., 2015, Robust impaired speech segmentation using neural network mixture model
© 2014 IEEE. This paper presents a signal processing technique for segmenting short speech utterances into unvoiced and voiced sections and identifying points where the spectrum becomes steady. The segmentation process is part of a system for deriving musculoskeletal articulation data from disordered utterances, in order to provide training feedback for people with speech articulation problems. The approach implements a novel segmentation scheme using an artificial neural network mixture model (ANNMM) to identify and capture the various sections of disordered (impaired) speech signals. The paper also identifies some salient features that distinguish normal speech from impaired speech of the same utterances. This research aims at developing an artificial speech therapist capable of providing reliable text and audiovisual feedback progress reports to the patient.
Iliya S, Neri F, Menzies D, et al., 2015, Differential evolution schemes for speech segmentation: A comparative study
© 2014 IEEE. This paper presents a signal processing technique for segmenting short speech utterances into unvoiced and voiced sections and identifying points where the spectrum becomes steady. The segmentation process is part of a system for deriving musculoskeletal articulation data from disordered utterances, in order to provide training feedback. The functioning of the signal processing technique has been optimized by selecting the parameters of the model. The optimization has been carried out by testing and comparing multiple Differential Evolution implementations, including a standard one, a memetic one, and a controlled randomized one. Numerical results have also been compared with a well-known and efficient swarm intelligence algorithm. For the given problem, Differential Evolution schemes display very good performance, as they can quickly reach a high-quality solution. For this problem, the binomial crossover appears beneficial with respect to the exponential one, and the controlled randomization appears to be the best choice. The overall optimized system proved to segment the speech utterances well and to efficiently detect their uninteresting parts.
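As an illustration of the scheme family compared in this abstract, the following is a minimal DE/rand/1 sketch with binomial crossover. The function name, parameter values and toy objective are assumptions for illustration, not the authors' actual configuration:

```python
import random

def de_rand_1_bin(objective, bounds, pop_size=20, F=0.5, CR=0.9,
                  generations=200, seed=42):
    """Minimise `objective` with DE/rand/1 and binomial crossover."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                      for d in range(dim)]
            # Binomial crossover: each gene is taken from the mutant with
            # probability CR; one randomly chosen gene always is.
            j_rand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == j_rand)
                     else pop[i][d] for d in range(dim)]
            # Clamp the trial vector back into the search bounds
            trial = [min(max(v, lo), hi)
                     for v, (lo, hi) in zip(trial, bounds)]
            f_trial = objective(trial)
            if f_trial <= fitness[i]:  # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Toy usage: minimise the 3-D sphere function
sol, val = de_rand_1_bin(lambda x: sum(v * v for v in x),
                         bounds=[(-5.0, 5.0)] * 3)
print(val)  # converges to a very small value
```

In the exponential crossover variant, by contrast, a contiguous run of genes is copied from the mutant; the abstract reports the binomial form performing better on this segmentation-tuning problem.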
O'Sullivan L, Picinali L, Gerino A, et al., 2015, A Prototype Audio-Tactile Map System with an Advanced Auditory Display, INTERNATIONAL JOURNAL OF MOBILE HUMAN COMPUTER INTERACTION, Vol: 7, Pages: 53-75, ISSN: 1942-390X
Ulanicki B, Picinali L, Janus T, 2015, Measurements and analysis of cavitation in a pressure reducing valve during operation - a case study, Computing and Control for the Water Industry (CCWI2015)- Sharing the Best Practice in Water Management, Publisher: ELSEVIER SCIENCE BV, Pages: 270-279, ISSN: 1877-7058
Caraffini F, Neri F, Picinali L, 2014, An analysis on separability for Memetic Computing automatic design, INFORMATION SCIENCES, Vol: 265, Pages: 1-22, ISSN: 0020-0255
Menelas B-AJ, Picinali L, Bourdot P, et al., 2014, Non-visual identification, localization, and selection of entities of interest in a 3D environment, JOURNAL ON MULTIMODAL USER INTERFACES, Vol: 8, Pages: 243-256, ISSN: 1783-7677
O'Sullivan L, Picinali L, Feakes C, et al., 2014, Audio tactile maps (ATM) system for the exploration of digital heritage buildings by visually-impaired individuals - First prototype and preliminary evaluation, ISSN: 2221-3767
Navigation within historic spaces requires the analysis of a variety of acoustic, proprioceptive and tactile cues, a task that is well developed in many visually-impaired individuals but for which sighted individuals rely almost entirely on vision. For the visually-impaired, the creation of a cognitive map of a space can be a long process, during which the individual may repeat various paths numerous times. While this is typically done on-site, it is of some interest to investigate to what degree the task can be performed off-site using a virtual simulator. We propose a tactile map navigation system with an interactive auditory display. The system is based on a paper tactile map upon which the user's hands are tracked. Audio feedback provides: (i) information on user-selected map features; (ii) dynamic navigation information as the hand is moved; (iii) guidance on how to reach the location of one hand (arrival point) from the location of the other hand (departure point); and (iv) additional interactive 3D-audio cues useful for navigation. This paper presents an overview of the initial technical development stage, reporting observations from preliminary evaluations with a blind individual. The system will be beneficial to visually-impaired visitors to heritage sites; we describe one such site which is being used to further assess our prototype.
Picinali L, Afonso A, Denis M, et al., 2014, Exploration of architectural spaces by blind people using auditory virtual reality for the construction of spatial knowledge (vol 72, pg 393, 2014), INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, Vol: 72, Pages: 875-875, ISSN: 1071-5819
Picinali L, Afonso A, Denis M, et al., 2014, Exploration of architectural spaces by blind people using auditory virtual reality for the construction of spatial knowledge, INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, Vol: 72, Pages: 393-407, ISSN: 1071-5819
Caraffini F, Iacca G, Neri F, et al., 2013, A CMA-ES Super-fit Scheme for the Re-sampled Inheritance Search, IEEE Congress on Evolutionary Computation, Publisher: IEEE, Pages: 1123-1130
Caraffini F, Neri F, Cheng J, et al., 2013, Super-fit Multicriteria Adaptive Differential Evolution, IEEE Congress on Evolutionary Computation, Publisher: IEEE, Pages: 1678-1685
Picinali L, Chrysostomou C, Seker H, 2012, The sound of proteins, Pages: 612-615
Transforming proteins into signals and analyzing their spectra using signal processing techniques (e.g., the Discrete Fourier Transform) has proven to be a suitable method for extracting information about a protein's biological functions. Along with imaging, sound has long been a helpful tool for characterizing and distinguishing objects, particularly in medicine and biology. The aim of the current study is therefore to sonify (i.e., render with sound) these signals, and to verify whether the sounds produced are perceived as similar when generated from proteins with similar characteristics and functions, and hence whether the information gathered through sound can be considered biologically and medically meaningful, and is perceived as such. The approach was applied to distinguishing influenza-related proteins, namely the H1N1 and H3N2 protein sets. The study reveals, for the first time, that sonification of the proteins allows the protein sets to be clearly separated. This promising approach could be further utilized to open new research fields in biology and medicine. © 2012 IEEE.
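The pipeline this abstract describes, mapping residues to numbers and analyzing the spectrum with a Discrete Fourier Transform, can be sketched as follows. The per-residue values and the toy sequence are purely illustrative assumptions, since the abstract does not specify the numerical encoding used:

```python
import cmath

# Hypothetical per-residue numerical mapping (illustrative values only;
# not the encoding used in the paper).
RESIDUE_VALUE = {"A": 0.37, "C": 0.83, "D": 1.00, "E": 0.95, "F": 0.05,
                 "G": 0.48, "H": 0.70, "I": 0.00, "K": 0.99, "L": 0.02,
                 "M": 0.10, "N": 0.78, "P": 0.61, "Q": 0.81, "R": 1.00,
                 "S": 0.55, "T": 0.45, "V": 0.03, "W": 0.12, "Y": 0.25}

def protein_spectrum(sequence):
    """Map a protein sequence to a numeric signal and return the
    magnitudes of its DFT up to the Nyquist bin."""
    signal = [RESIDUE_VALUE[aa] for aa in sequence]
    n = len(signal)
    # Remove the mean so the zero-frequency bin does not dominate
    mean = sum(signal) / n
    signal = [v - mean for v in signal]
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# Toy sequence (not a real H1N1/H3N2 protein)
spec = protein_spectrum("MKVLAAGIVLLLS")
print(len(spec))
```

For sonification, each spectral magnitude could then drive the amplitude of an oscillator, so that proteins with similar spectra yield similar timbres, which is the perceptual similarity the study evaluates.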
Picinali L, Feakes C, Mauro D, et al., 2012, Tone-2 tones discrimination task comparing audio and haptics, Pages: 19-24
To investigate the capability of human beings to differentiate between tactile-vibratory stimuli with the same fundamental frequency but different spectral content, this study concerns discrimination tasks comparing audio and haptic performance. Using an up-down, 1 dB step adaptive procedure, the experimental protocol consists of measuring the discrimination threshold between a pure tone signal and a stimulus composed of two concurrent pure tones, changing the amplitude and frequency of the second tone. The task is performed using exactly the same experimental apparatus (computer, AD-DA converters, amplifiers and drivers) for both the audio and tactile modalities. The results show that it is indeed possible to discriminate between signals having the same fundamental frequency but different spectral content in both the haptic and audio modalities, the latter being notably more sensitive. Furthermore, correlations have been found between the frequency of the second tone and the discrimination threshold values, for both the audio and tactile modalities. © 2012 IEEE.
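The up-down, 1 dB step adaptive procedure mentioned in this abstract can be sketched with a simulated 1-up/1-down staircase. The psychometric function, parameter values and function name below are illustrative assumptions, not the experiment's actual settings:

```python
import math
import random

def staircase_threshold(detect_prob, start_db=10.0, step_db=1.0,
                        max_reversals=20, seed=1):
    """Simulated 1-up/1-down adaptive staircase with a fixed 1 dB step.

    detect_prob(level_db) returns the probability of a correct response
    at a given level. The threshold estimate is the mean level across
    the recorded reversals; a 1-up/1-down rule tracks the 50%-correct
    point of the psychometric function.
    """
    rng = random.Random(seed)
    level = start_db
    last_correct = None
    reversals = []
    while len(reversals) < max_reversals:
        correct = rng.random() < detect_prob(level)
        if last_correct is not None and correct != last_correct:
            reversals.append(level)  # direction change: record a reversal
        # Make the task harder after a correct response, easier after a miss
        level += -step_db if correct else step_db
        last_correct = correct
    return sum(reversals) / len(reversals)

# Toy logistic psychometric function whose 50% point sits at 4 dB
threshold = staircase_threshold(lambda db: 1 / (1 + math.exp(-(db - 4.0))))
print(round(threshold, 2))  # estimate near the 50% point of the toy function
```

In the actual experiment the "level" would be the amplitude of the second tone, and the listener's (or toucher's) responses would replace the simulated ones.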
Picinali L, Feakes C, Mauro DA, et al., 2012, Spectral discrimination thresholds comparing audio and haptics for complex stimuli, Pages: 131-140, ISSN: 0302-9743
© Springer-Verlag Berlin Heidelberg 2012. Individuals with normal hearing are generally able to discriminate auditory stimuli that have the same fundamental frequency but different spectral content. This study concerns to what extent the same differentiation is possible with vibratory tactile stimuli. Three perceptual experiments have been carried out to compare discrimination thresholds, in terms of spectral differences, between auditory and vibratory tactile stimulation. The first test assesses the subject's ability to discriminate between three signals with distinct spectral content. The second test focuses on the measurement of the discrimination threshold between a pure tone signal and a signal composed of two pure tones, varying the amplitude and frequency of the second tone. Finally, in the third test the discrimination threshold is measured between a tone with even harmonic components and a tone with odd ones. The results show that it is indeed possible to discriminate between haptic signals having the same fundamental frequency but different spectral content, although the threshold of sensitivity for detection is markedly lower than for audio stimuli.
Picinali L, Ferey N, et al., 2012, Advances in Human-Protein Interaction - Interactive and Immersive Molecular Simulations, Protein-Protein Interactions - Computational and Experimental Tools, Editors: Cai, Hong
Picinali L, 2011, Acquisition of spatial knowledge of architectural spaces via active and passive aural explorations by the blind, Pages: 1311-1316, ISSN: 2221-3767
Navigation within a closed environment requires the analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. Focusing on the needs of the blind, the creation of cognitive maps for spaces such as home or office buildings can be a long process, during which the individual may repeat various paths numerous times. While this is typically performed on-site, it is of some interest to investigate to what extent the task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial in preparing for navigation in new and unknown environments. A comparison of three learning scenarios has been performed: in-situ real displacement, passive playback of a recorded navigation (binaural and Ambisonic), and active navigation in a virtual auditory rendering of the architecture. In all conditions, only acoustic cues are employed. This research is the result of a collaboration between researchers in psychology and acoustics on the issue of interior spatial cognition.
Picinali L, Etherington A, Feakes C, et al., 2011, VR Interactive Environments for the Blind: Preliminary Comparative Studies, Joint Virtual Reality Conference (JVRC 2011) of euroVR and EGVE, Publisher: TECHNICAL RESEARCH CENTRE FINLAND, Pages: 113-115, ISSN: 0357-9387
Picinali L, Katz FGB, 2011, Spatial Audio Applied to Research with the Blind, Advances in Sound Localization, Editors: Sturmillo, ISBN: 9789533072241
Ménélas B, Picinali L, Katz BFG, et al., 2010, Audio haptic feedbacks for an acquisition task in a multi-target context, Pages: 51-54
This paper presents the use of audio and haptic feedback to reduce the load on the visual channel in interaction tasks within virtual environments. An examination is made of the use of audio and/or haptic cues for the acquisition of a desired target in an environment containing multiple, obscured distractors. The study compares different ways of identifying and locating a specified target among others by means of audio feedback, haptic feedback, or both rendered simultaneously. The analysis of the results and subjective user comments indicates that the active haptic and combined audio/haptic conditions offer better results than the audio-only condition. Moreover, the combination of haptic and audio feedback presents real potential for the completion of the task. © 2010 IEEE.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.