Let’s reconnect – AES Fall online sessions

We are right in the middle of AES Fall 2021, and while we are sorry not to meet in person, we are happy to contribute to the extensive online program.

The Audio Engineering Society (AES) did it again: to compensate for the missed opportunity to meet in person at the AES Convention in Las Vegas, they filled a whole month with online events for the professional audio community. Experts from Fraunhofer and the AudioLabs will also contribute to the program, and we invite you to join the live sessions or watch the on-demand videos:

Live: Perceptual Evaluation of Interior Panning Algorithms Using Static Auditory Events
Award: Best peer-reviewed paper at AES
Wednesday, October 20 • 1:30pm – 2:00pm
Speakers: Thomas Robotham • Andreas Silzle • Anamaria Nastasa • Alan Pawlak • Jürgen Herre

Interior panning algorithms enable content authors to position auditory events not only at the periphery of the loudspeaker configuration, but also within the internal space between the listeners and the loudspeakers. In this study, such algorithms are rigorously evaluated by comparing rendered static auditory events at various locations against true physical loudspeaker references. Various algorithmic approaches are subjectively assessed in terms of Overall, Timbral, and Spatial Quality for three different stimuli, at five different positions and three radii. The results show that, for static positions, standard Vector Base Amplitude Panning performs as well as, or better than, all other interior panning algorithms tested here. Timbral Quality is maintained across all distances. Ratings for Spatial Quality vary, with some algorithms performing significantly worse at closer distances. Ratings for Overall Quality decrease moderately with reduced reproduction radius and are predominantly influenced by Timbral Quality.
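For readers unfamiliar with the baseline method that the study uses as its reference, here is a minimal sketch of classical two-dimensional Vector Base Amplitude Panning in Python (following Pulkki's well-known formulation, not code from the paper): the source direction is expressed in the basis of the two loudspeaker unit vectors, and the resulting gains are power-normalized.

```python
import numpy as np

def vbap_2d_gains(source_azimuth_deg, spk1_deg, spk2_deg):
    """2D VBAP: express the source direction in the basis of two
    loudspeaker unit vectors, then power-normalize the gains."""
    def unit(az_deg):
        az = np.radians(az_deg)
        return np.array([np.cos(az), np.sin(az)])

    L = np.column_stack([unit(spk1_deg), unit(spk2_deg)])  # speaker basis
    g = np.linalg.solve(L, unit(source_azimuth_deg))       # raw gains
    return g / np.linalg.norm(g)                           # power normalization

# A source at 10° between loudspeakers at ±30° (a standard stereo setup):
print(vbap_2d_gains(10.0, 30.0, -30.0))
```

Note that plain VBAP only pans on the loudspeaker periphery; the interior panning algorithms evaluated in the paper extend positioning into the space inside the array.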

MPEG-H Audio production workflows for a Next Generation Audio experience in broadcast, streaming and music
Available from Tuesday, October 19 • 9:00am until Thursday, December 30 • 6:00pm
Speakers: Yannik Grewe • Philipp Eibl • Christian Simon • Matteo Torcoli • Daniela Rieger • Ulli Scuda

MPEG-H Audio is a Next Generation Audio (NGA) system offering a new audio experience for the audience in various applications: Object-based immersive sound delivers a new degree of realism and artistic freedom for immersive music applications, such as the 360 Reality Audio music service. Advanced interactivity options enable improved personalization and accessibility, including solutions to create object-based features out of legacy material, e.g. deep-learning-based dialogue enhancement. ‘Universal delivery’ allows for optimal rendering of one production on all kinds of devices and over various distribution channels, e.g. broadcast or streaming. All of these new features are achieved by adding metadata to the audio, which is defined during production and offers content providers flexible control of interaction and rendering options. This introduces new possibilities, as well as new requirements, into the production process. In this paper, examples of state-of-the-art NGA production workflows are detailed and discussed, with special focus on immersive music, broadcast, and accessibility.
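To make the role of this metadata concrete, here is a hypothetical Python sketch of the kind of per-object information an NGA production attaches to the audio. The class and field names are purely illustrative; the actual MPEG-H Audio metadata syntax is defined in ISO/IEC 23008-3 and is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """Illustrative per-object production metadata (not the MPEG-H schema)."""
    name: str
    azimuth_deg: float = 0.0            # horizontal position for rendering
    elevation_deg: float = 0.0          # vertical position (immersive sound)
    gain_db: float = 0.0                # default level authored for the mix
    user_interactive: bool = False      # may the listener adjust this object?
    gain_range_db: tuple = (-6.0, 6.0)  # authored limits for user adjustment

# e.g. a dialogue object the listener may boost, and a fixed height ambience:
dialogue = AudioObject("Dialogue", user_interactive=True)
ambience = AudioObject("Ambience", elevation_deg=30.0)
```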

Production Tools for the MPEG-H Audio System
Available from Tuesday, October 19 • 9:00am until Thursday, December 30 • 6:00pm
Speakers: Yannik Grewe • Philipp Eibl • Daniela Rieger • Ulli Scuda

Next Generation Audio Systems, such as MPEG-H Audio, rely on metadata to enable a wide variety of features. Information such as channel layouts, the position and properties of audio objects or user interactivity options are only some of the data that can be used to improve consumer experience.

Creating these metadata requires suitable tools, which are used in a process known as “authoring”, where interactive features and the options for 3D immersive sound rendering are defined by the content creator.

Different types of productions each impose their own requirements on these authoring tools, which has led to a number of solutions appearing on the market. Using the example of MPEG-H Audio, this paper details some of the latest developments and authoring solutions designed to enable immersive and interactive live and post productions.
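Authoring does not only place objects in the scene; it also sets the bounds within which the listener may later personalize the mix. Reusing the hypothetical `AudioObject` sketch from above, a playback device could honor those authored bounds along these lines (again an illustrative assumption, not the MPEG-H decoder API):

```python
def apply_user_gain(obj, requested_gain_db):
    """Honor a listener's gain request only within the authored limits."""
    if not obj.user_interactive:
        return obj.gain_db                      # object is locked by the author
    lo, hi = obj.gain_range_db
    return min(max(requested_gain_db, lo), hi)  # clamp to the authored range

# A +9 dB dialogue boost request is clamped to the authored +6 dB maximum:
print(apply_user_gain(dialogue, 9.0))  # -> 6.0
```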

Informed postprocessing for auditory roughness removal for low-bitrate audio coders
Available from Wednesday, October 20 • 9:00am until Thursday, December 30 • 6:00pm
Speakers: Steven van de Par • Sascha Disch • Andreas Niedermeier • Bernd Edler

In perceptual audio coding at very low bitrates, modulation artifacts can be introduced onto tonal signal components; these are often perceived as auditory roughness. Such artifacts may occur, for instance, due to quantization errors, or may be added by audio bandwidth extension, which sometimes causes an irregular harmonic structure at the borders of replicated bands. In particular, the roughness artifacts due to quantization errors are difficult to mitigate without investing considerably more bits in the encoding of tonal components. We propose a novel technique that removes these roughness artifacts at the decoder side, controlled by a small amount of guidance information transmitted by the encoder.
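As a quick illustration of the artifact class itself (not of the proposed removal technique), amplitude-modulating a pure tone at a rate of around 70 Hz, where perceived roughness on a mid-frequency carrier is near its maximum, produces exactly this kind of rough-sounding tonal component:

```python
import numpy as np

fs = 48_000                              # sample rate in Hz
t = np.arange(fs) / fs                   # one second of time samples
carrier = np.sin(2 * np.pi * 1000 * t)   # clean 1 kHz tonal component

# Quantization-error-like modulation: 70 Hz amplitude modulation is
# perceived as strong auditory roughness on this carrier.
depth = 0.5
rough = (1 + depth * np.sin(2 * np.pi * 70 * t)) * carrier

# To listen to the difference, write both signals to disk, e.g. with
# scipy: from scipy.io import wavfile;
#        wavfile.write("rough.wav", fs, (0.5 * rough).astype(np.float32))
```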

Check out all Fraunhofer presentations, panels and product news on our website.

Header image © AES.org
