PhD in VR and Room Acoustics

IJLRA (Institut Jean le Rond d’Alembert, UMR 7190 CNRS – Sorbonne Université) and IRCAM (Institut de Recherche et Coordination Acoustique/Musique, UMR 9912 STMS IRCAM – CNRS – Sorbonne Université) are seeking candidates for the 3-year PhD thesis “Navigation aid for the visually impaired: Virtual Reality acoustic simulations for interior navigation preparation”.

The PhD student will participate in the creation and evaluation of a training system for visually impaired individuals.

Duration: 3 years

Closing date: 1 October 2018

Degree Level: Master’s in Computer Science, Acoustics, Architectural Acoustics, Multimodal Interfaces, or Audio Signal Processing

Major(s): Virtual reality, 3D audio, spatial sound, spatial cognition, room acoustics, visual impairments, navigation aid


Job Summary


Doctoral thesis

Navigation aid for the visually impaired: Virtual Reality acoustic simulations for interior navigation preparation


Laboratories: IJLRA (Institut Jean le Rond d’Alembert, UMR 7190 CNRS – Sorbonne Université) and IRCAM (Institut de Recherche et Coordination Acoustique/Musique, UMR 9912 STMS IRCAM – CNRS – Sorbonne Université)

Doctoral school: École Doctorale Sciences Mécaniques, Acoustique, Électronique et Robotique (SMAER), ED 391

Discipline: Acoustics (Virtual Reality, Audio, Interaction, Assistive Technology)

Co-supervision: Brian KATZ (DR-CNRS, IJLRA) and Markus NOISTERNIG (CR, IRCAM)

Keywords: Virtual reality, 3D audio, spatial sound, spatial cognition, room acoustics, visual impairments, navigation aid

Research context: This thesis project is situated within the ANR 2018-2021 project RASPUTIN (Room Acoustic Simulations for Perceptually Realistic Uses in Real-Time Immersive and Navigation Experiences). In the domains of sound synthesis and virtual reality (VR), much effort has been placed on the quality and realism of sound source renderings, from text-to-speech to musical instruments to engine noise for use in driving and flight simulators. The same degree of effort has not been applied to the spatial aspects of sound synthesis and virtual reality, particularly with respect to the acoustics of the surrounding environment. Room acoustic simulation algorithms have for decades been improving in their ability to predict acoustic measurement metrics, such as reverberation time, from geometrical acoustic models, at the cost of ever-higher computational requirements. However, it is only recently that the perceptual quality of these simulations has been explored beyond musical applications. In real-time systems, where sound source, listener, and room architecture can vary in unpredictable ways, investigation of perceptual quality or realism has been hindered by the simplifications such algorithms require. This project aims to improve real-time simulation quality towards perceptual realism.
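To make the geometry-to-metric link concrete, the classic Sabine equation estimates reverberation time (T60) from a room’s volume and its total surface absorption. The sketch below is purely illustrative and not part of the project’s rendering engine; the room dimensions and absorption coefficients are assumed values.

    # Minimal sketch: Sabine's formula T60 = 0.161 * V / A, where V is the
    # room volume in m^3 and A the total absorption area in m^2 (sabins).
    # All dimensions and coefficients below are assumed example values.

    def sabine_rt60(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
        """Estimate T60 (seconds) from volume and (area, absorption coeff) pairs."""
        absorption = sum(area * alpha for area, alpha in surfaces)
        return 0.161 * volume_m3 / absorption

    # Example: a 10 m x 8 m x 3 m room with moderately absorptive surfaces.
    room_volume = 10 * 8 * 3  # 240 m^3
    surfaces = [
        (10 * 8, 0.10),                 # floor
        (10 * 8, 0.30),                 # ceiling (assumed acoustic tiles)
        (2 * (10 * 3 + 8 * 3), 0.05),   # walls
    ]
    print(f"Estimated T60: {sabine_rt60(room_volume, surfaces):.2f} s")

Geometrical acoustics simulations go well beyond such statistical estimates, tracing sound propagation paths through the modelled geometry (e.g., image-source and ray-tracing methods), which is the source of the computational cost noted above.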

The focus of the project is the capability of a real-time acoustic simulation to provide meaningful information to visually impaired users exploring a space in virtual reality. As a preparatory tool used prior to visiting a public building or museum, the virtual exploration should improve users’ knowledge of the space and their navigation confidence during the on-site visit, as compared to traditional methods such as tactile maps.

The thesis work entails participating in the creation and evaluation of a training system for visually impaired individuals. Tasks include the development of an experimental prototype, in collaboration with project partners, with a simplified user interface for constructing the virtual environments to be explored. Working in conjunction with a selected user panel, who will remain engaged for the duration of the project, the student will identify several test cases of interest for integration into the prototype and subsequent evaluations. The prototype will be developed by the PhD student in collaboration with Novelab (audio gaming) and IRCAM/STMS-CNRS (developers of the audio rendering engine). Design and evaluation will be carried out in collaboration with the Centre de Psychiatrie et Neurosciences and StreetLab/Institut de la Vision. The ability to communicate in French would be beneficial, but is not mandatory at the start of the project.

Evaluations will involve different experimental protocols to assess the accuracy of participants’ mental representations of the learned environments. To examine how well metric relations are preserved, participants will carry out spatial memory tests as well as on-site navigation tasks.
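The exact protocols are to be defined during the project; as one illustrative possibility, a common analysis in spatial cognition compares pairwise inter-landmark distances in a participant’s reconstruction of a space against the true floor plan. The sketch below, using hypothetical landmark coordinates, computes such a distance-preservation score.

    # Hedged sketch (the posting does not specify the protocol): correlate
    # pairwise inter-landmark distances in a participant's reconstruction
    # with the distances in the true layout. Coordinates are hypothetical.
    import math

    def pairwise_distances(points: list[tuple[float, float]]) -> list[float]:
        return [math.dist(points[i], points[j])
                for i in range(len(points)) for j in range(i + 1, len(points))]

    def pearson(xs: list[float], ys: list[float]) -> float:
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # True landmark positions vs. a participant's estimated positions.
    true_layout = [(0, 0), (5, 0), (5, 8), (0, 8)]
    estimated   = [(0, 0), (4, 1), (6, 7), (-1, 9)]
    score = pearson(pairwise_distances(true_layout), pairwise_distances(estimated))
    print(f"Distance-preservation correlation: {score:.2f}")

A correlation near 1 would indicate that the relative metric structure of the environment is well preserved in the participant’s mental map, even if absolute positions drift.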

Candidate profile: We are looking for dynamic, creative, and motivated candidates with scientific curiosity, strong problem-solving skills, the ability to work both independently and in a team environment, and the desire to push their knowledge and comfort zones into new domains. The candidate should have a Master’s degree in Computer Science, Acoustics, Architectural Acoustics, Multimodal Interfaces, or Audio Signal Processing. A strong interest in spatial audio, room acoustics, and working with the visually impaired is necessary. Candidates are not expected to already have all the skills necessary for this multidisciplinary subject, so a willingness and ability to step rapidly into new domains, including spatial cognition and psychoacoustics, will be appreciated.

Domain: Virtual reality, Audio, Interaction, Assistive Technology

Dates: Preferred starting date between 1-Nov-2018 and 20-Dec-2018, and no later than March 2019.

Application: Interested candidates should send a CV, a transcript of Master’s degree courses, a cover letter (maximum 2 pages) detailing their motivations for pursuing a PhD in general and this project in particular, and contact information for two references whom the selection committee may contact. Incomplete applications will not be processed.

Application deadline: Complete application files should be submitted to brian.katz@sorbonne-universite.fr and markus.noisternig@ircam.fr before 1-Oct-2018.