Affective Audio Synthesis for Sound Experience Enhancement
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
Title of host publication: Experimental Multimedia Systems for Interactivity and Strategic Innovation
Editors: Ioannis Deliyannis, Petros Kostagiolas, Christina Banou
Publication status: Published - Aug 2015
Publication type: A3 Part of a book or another research book
With the advances of technology, multimedia has become a recurring and prominent component in almost all forms of communication. Although its content spans various categories, two principal channels are used for information conveyance: audio and visual. The former can convey a wide range of content, from low-level characteristics (e.g. the spatial location of a source and the type of sound-producing mechanism) to high-level, contextual ones (e.g. emotion). Additionally, recently published works demonstrate the feasibility of automated synthesis of sounds, e.g. music and sound events. Based on the above, in this chapter the authors propose the integration of emotion recognition from sound with automated synthesis techniques. Such a task will enhance, on the one hand, the process of computer-driven creation of sound content by adding an anthropocentric factor (i.e. emotion) and, on the other, the experience of the multimedia user by offering an extra constituent that intensifies immersion and the overall level of user experience.
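To make the proposed integration concrete, the following is a minimal sketch of how a recognized emotion could steer synthesis parameters. It is purely illustrative and not the authors' method: the valence/arousal representation, the parameter names (`tempo_bpm`, `pitch_hz`, `brightness`), and the mapping coefficients are all assumptions, and the "synthesizer" is a placeholder sine-tone generator.

```python
import math

def emotion_to_synth_params(valence, arousal):
    """Map a valence/arousal estimate in [-1, 1] to synthesis parameters.

    Hypothetical mapping: higher arousal speeds up the tempo and brightens
    the timbre; positive valence raises the pitch. The coefficients are
    illustrative choices, not values from the chapter.
    """
    base_tempo = 100.0                            # BPM at neutral arousal
    tempo = base_tempo * (1.0 + 0.5 * arousal)    # higher arousal -> faster
    base_pitch = 220.0                            # Hz at neutral valence
    pitch = base_pitch * 2 ** (valence * 0.5)     # positive valence -> higher
    brightness = (arousal + 1.0) / 2.0            # normalized to [0, 1]
    return {"tempo_bpm": tempo, "pitch_hz": pitch, "brightness": brightness}

def synthesize_tone(pitch_hz, duration_s=0.5, sample_rate=8000):
    """Render a sine tone as a list of samples (placeholder synthesizer)."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * pitch_hz * i / sample_rate)
            for i in range(n)]

# Example: an emotion recognizer (not shown) reports a happy/excited state.
params = emotion_to_synth_params(valence=0.6, arousal=0.8)
samples = synthesize_tone(params["pitch_hz"])
```

In a full system, `emotion_to_synth_params` would be fed by an emotion-recognition front end analyzing incoming audio, and the parameter dictionary would drive a real synthesis engine rather than a single tone.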
Keywords: Emotion, Emotion recognition, Audio synthesis