SIVE 2015

at IEEE Virtual Reality 2015


Sonic interaction design is defined as the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. This field lies at the intersection of interaction design and sound and music computing.

In the virtual reality community, research on auditory feedback has received comparatively little attention, for example when compared with the focus placed on visual or even haptic feedback. However, in communities such as film and product sound design it is well known that sound is a powerful way to add meaning and emotion to a scene or a product.

The main goal of this workshop is to raise awareness in the virtual reality community of the importance of sonic elements when designing virtual environments. We will also discuss how research in related fields such as film sound theory, product sound design, sound and music computing, game sound design, and computer music can inform designers of virtual reality environments. Moreover, the workshop will feature state-of-the-art research in the field of sound for virtual environments.





Call for Papers


We expect participants to submit a research paper (4 to 6 pages, using the IEEE template) outlining their current research in the field of interactive sound for virtual environments.

Topics can include, but are not limited to:

1) Sound synthesis and design for virtual environments

2) Sound modelling and rendering for virtual environments

3) Sound spatialisation 

4) Headphone and loudspeaker reproduction

5) Binaural sound and head-related transfer functions

6) Gestural control of sound in virtual reality

7) Multisensory (audio-visual, audio-haptic) interactions

8) Personalisation and customisation of virtual auditory displays 

9) Navigation and way-finding through sonification

10) Evaluation of user experience and sound quality

The submission website is:

Papers should be 4-6 pages in length and prepared using the IEEE Computer Society conference style format described at:

Posters should be 2 pages in length and prepared according to the same template.

For accepted oral presentations, authors must prepare a 15-20 minute oral presentation to be delivered during the workshop.

For accepted poster presentations, authors must prepare a short 2 minute oral presentation to be delivered during the workshop, plus a poster.

Important dates


Abstract submission: January 26th

Paper submission: February 2nd

Notification of acceptance: February 18th (early bird registration: Feb. 28th)

Submission of final paper: March 10th

Workshop: March 24, 9:00-15:00

Review Committee


Federico Avanzini, University of Padova

Braxton Boren, Princeton University

Stefano Delle Monache, University IUAV of Venice

Cumhur Erkut, Aalborg University Copenhagen

Michele Geronazzo, University of Padova

Amalia de Götzen, Aalborg University Copenhagen

Davide Andrea Mauro, University IUAV of Venice

Stefania Serafin, Aalborg University Copenhagen



Program


 09:00-10:30: Welcome + oral presentations (O1)

 10:30-11:00: Break

 11:00-12:30: Oral presentations (O2) + Poster/Demo craze (PDC)

 12:30-14:00: Lunch

 14:00-15:00: Demos (D) & Posters (P)

 15:30-... : 3DUI competition



Organizers


Stefania Serafin is currently full professor in sound for multimodal environments at Aalborg University Copenhagen. She received a PhD in computer-based music theory and acoustics from Stanford University in 2004, and a Master's in acoustics, computer science and signal processing applied to music from Ircam (Paris) in 1997. She has been a visiting professor at the University of Virginia (2003) and a visiting scholar at Stanford University (1999), Cambridge University (2002), and KTH Stockholm (2003). She was principal investigator of the EU-funded project Natural Interactive Walking and Danish delegate for the EU COST Action on Sonic Interaction Design. Her main research interests include sound models for interactive systems, multimodal interfaces, and sonic interaction design.

Rolf Nordahl is currently associate professor in Medialogy at Aalborg University Copenhagen. He is principal investigator of the EU-funded project Natural Interactive Walking and earlier did seminal work on the EU project BENOGO, which focused on HMD-based photo-realistic VR. He is a recognized member of the expert panel of the Danish Evaluation Institute and serves on various steering committees. He publishes frequently in journals and at conferences and gives invited talks internationally; recently he was invited to deliver a special series of lectures at, among other places, Yale University (Connecticut). His research interests lie within VR, (tele)presence, sonic interaction design, new methods and evaluation techniques for VR, and presence and games.

Amalia de Götzen is currently assistant professor at Aalborg University Copenhagen. She graduated in Electronic Engineering from the University of Padova in 2002 and received a PhD in Computer Science from the University of Verona in 2007. She also pursued musical studies, obtaining a diploma in pianoforte in 1996 and a diploma in Electronic Music in 2003 at the Conservatorio C. Pollini of Padova.

Since 2002 she has worked in the field of Sound and Music Computing. She has been the coordinator of the Sound and Music Processing Lab (SAMPL) of the Conservatorio of Padova, in collaboration with the Department of Information Engineering of the University of Padova.

Cumhur Erkut received his Dr.Sc.(Tech.) degree in Acoustics and Audio DSP (EE) from the Helsinki University of Technology (TKK), Espoo, Finland, in 2002. Between 1998 and 2002 he worked as a researcher, and between 2002 and 2007 as a postdoctoral researcher, at the Laboratory of Acoustics and Audio Signal Processing of TKK, where he contributed to various national and international research projects. Between 2007 and 2012, as an Academy Research Fellow, he led his research project and team Schema-SID [Academy of Finland, 120583] and contributed to the COST IC0601 Action on Sonic Interaction Design (SID). In 2013 he joined the Institute of Inclusive Science and Solutions at the University of Eastern Finland, contributing to research on interactive technologies for special-needs children and the elderly. Since July 2013 he has been assistant professor in Medialogy at Aalborg University Copenhagen.

Federico Avanzini is currently assistant professor at the University of Padova. He received his PhD in computer science from the University of Padova in 2002. In 2001 he was a visiting researcher at the Helsinki University of Technology. Since 2002 he has worked at the University of Padova, first as a post-doc researcher and then (since 2005) as assistant professor. His main research interests concern algorithms for sound synthesis and processing, non-speech sound in human-computer interfaces, and multimodal interaction.

Dr. Avanzini has been a key researcher and principal investigator in several national and international research projects. He has authored more than 100 publications in peer-reviewed international journals and conference proceedings and has served on several program and editorial committees. He was General Chair of the 2011 International Conference on Sound and Music Computing and is currently Associate Editor of the international journal Acta Acustica united with Acustica.

Michele Geronazzo received his M.S. degree in Computer Engineering and his Ph.D. degree in Information & Communication Technology from the University of Padova in 2009 and 2014, respectively. He is currently a postdoctoral research assistant at the Dept. of Information Engineering of the University of Padova, where he works with the "CSC - Sound and Music Computing Group" and is involved in the PADVA project no. CPDA135702 (main topics: headphone-based 3D audio systems and customization of HRTFs). His main research interests involve binaural spatial audio modeling and synthesis, multimodal virtual/augmented reality, and sound design for HCI in mobile devices.



For more information, contact sts at



With the support of: 


Sound and Music Computing Network 

Personal Auditory Displays for Virtual Acoustics

Facebook page
