Immersive Sound

“I would like to congratulate you on the apparent naturalness, at least from a first hearing on modest loudspeakers. The source movement was easily detectable and fairly easy to locate.”

Francis Rumsey

Chair, Technical Council at Audio Engineering Society


“It was great to see how the project has evolved and I'm really impressed by the result. By way of comparison we checked out an ambisonic system later in the day. Having had your demo in the morning the lack of location precision in the ambisonic system was very apparent.”

James Hall

Jawbone Inc.


Imagine a group of fans cheering for their team at the Olympics from a local pub, or aficionados of classical music following a broadcast from the Royal Opera at home: they want to feel transported to the venue by experiencing a faithful and convincing auditory perspective of the scene they see on the screen. Imagine having a surround sound system with room simulators that actually sound like the spaces they are supposed to emulate, or watching a 3D nature film in a home theatre where the sound closely follows the movements one sees on the screen. Imagine a video game capable of providing a convincing dynamic auditory perspective that tracks a moving player and responds to their actions, as game characters and virtual objects move and acoustic environments change. Place all this in the context of visual technology that is moving firmly in the direction of 3D capture and rendering, where enhanced realism, spatial accuracy and detail are the key features.

Our technology for perceptual sound field recording, reconstruction and synthesis enables all these spatial sound applications. Commercial surround sound has so far been characterised mainly by 5.1-channel systems, which aim to deliver front-biased spatial sound with rear effects; they do not allow for 360-degree spatial accuracy. Advanced solutions such as wave field synthesis (WFS), on the other hand, employ hundreds of channels, making them impractical for mainstream consumer applications and too cumbersome for sound engineers to use. The solution we provide is a major advance in the area of spatial sound. It employs a pragmatic compromise in complexity while delivering accurate spatial cues using multichannel stereophony. This makes it an excellent candidate for commercial hardware and software development, improving the spatial realism of reproduction while remaining backward-compatible with existing 5.1-channel systems.


Patent Portfolio Status


  • Audio Signal Processing Method and System - US granted (US 8,184,814); EU pending 
  • Microphone Array - US granted (US 8,976,977)
  • Electronic Device with Digital Reverberator & Method - US granted (US 8,908,875)


Key Advantages

  • Spatial accuracy of Ambisonics while overcoming its stability limitations, using a conceptually simple framework
  • Scalable and reconfigurable to any number of channels and a diverse set of channel layouts
  • High timbral quality of sound
  • Very low computational complexity, allowing super-real-time synthesis of dynamic scenes in virtual and augmented reality applications


Applications

  • Broadcasting
  • Sound production
  • Gaming
  • Virtual Reality & Augmented Reality
  • Architectural Design
  • Acoustic Performances
  • Sound Installations




In many conventional multichannel systems, the desired spatial features are attained through manual mixing and artificial manipulation of audio material. This requires high-end equipment, the intervention of a sound engineer, and long production processes, making such systems infeasible for scenarios such as live broadcast. Moreover, spatial cues like localization and envelopment are achieved through artificial panning and reverberation, impairing the consistency between the actual and the reproduced sound field. First-order Ambisonics provides an elegant solution to this problem; however, reproduction accuracy can be maintained only in a narrow optimal listening area. Higher-order Ambisonics (HOA) overcomes this limitation by providing increased flexibility and an enlarged listening area, but it requires careful calibration, and recording and reproducing audio material for HOA is not straightforward. Similarly, wave field synthesis (WFS) accurately reproduces the desired wave front over a wide listening area, but the number of loudspeaker channels it requires is too high for mainstream adoption.
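To illustrate the first-order Ambisonics approach discussed above, the sketch below encodes a mono source into horizontal B-format (W, X, Y) and decodes it to a square loudspeaker layout. The function names, the square layout and the basic decode are our own illustrative assumptions for this example; they are not part of the KCL technology or any specific product.

```python
import math

def encode_bformat(sample, azimuth):
    """Encode a mono sample into horizontal first-order B-format (W, X, Y).

    `azimuth` is the source direction in radians (0 = front,
    counter-clockwise positive).
    """
    w = sample / math.sqrt(2)       # omnidirectional component
    x = sample * math.cos(azimuth)  # front-back figure-of-eight
    y = sample * math.sin(azimuth)  # left-right figure-of-eight
    return w, x, y

def decode_square(w, x, y):
    """Basic decode to a square layout at 45, 135, 225 and 315 degrees."""
    speakers = [math.radians(a) for a in (45, 135, 225, 315)]
    return [0.5 * (w * math.sqrt(2) + x * math.cos(phi) + y * math.sin(phi))
            for phi in speakers]

# A source at 45 degrees drives the front-left (45-degree) speaker hardest,
# but every loudspeaker except the diagonally opposite one contributes,
# which is one reason the optimal listening area is narrow.
gains = decode_square(*encode_bformat(1.0, math.radians(45)))
```

Because all loudspeakers radiate correlated signal for most source directions, the reconstruction is only correct at the array centre, matching the narrow sweet spot noted above.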


The KCL technology captures inter-channel time and level differences. These yield a perceptually veridical rendering of the direction of sound sources as well as all corresponding reflections and reverberation which characterise the venue. This is achieved by means of higher order microphone directivity patterns that are tuned to underlying psychoacoustic laws.
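As a rough illustration of how directional microphone patterns translate a source direction into inter-channel level differences, the sketch below evaluates a higher-order cardioid-family pattern aimed at each loudspeaker of a 5-channel layout. The specific pattern family, coefficients, order and layout angles here are illustrative assumptions only; they are not the patented KCL design or its psychoacoustically tuned patterns.

```python
import math

def virtual_mic_gain(theta, alpha=0.5, order=2):
    """Gain of an order-`order` cardioid-family pattern for a source
    at angle `theta` (radians) off the pattern's look direction.

    alpha=0.5, order=1 is an ordinary cardioid; raising `order`
    narrows the main lobe, sharpening inter-channel level differences.
    """
    return (alpha + (1.0 - alpha) * math.cos(theta)) ** order

def channel_levels(source_az_deg, layout_deg=(0, 30, -30, 110, -110)):
    """Per-channel gains for a source at `source_az_deg` degrees, using
    one virtual microphone aimed at each loudspeaker direction
    (angles follow the common 5-channel layout: C, L, R, Ls, Rs)."""
    return [virtual_mic_gain(math.radians(source_az_deg - look))
            for look in layout_deg]

# A source at 30 degrees is loudest in the channel aimed at 30 degrees,
# with level falling off smoothly in the neighbouring channels.
levels = channel_levels(30)
```

Higher-order patterns concentrate the source energy in the loudspeakers nearest its true direction, which is how level differences between channels can carry stable directional cues without explicit panning.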


The KCL design can be implemented as a physical microphone array for recording and broadcasting applications, or in virtual form for the synthesis of sound fields capable of rendering a convincing illusion of presence in a desired virtual space. A super-real-time software implementation geared towards virtual and augmented reality applications has also been developed, along with a new class of low-cost underlying microphones.


Patent Information:
Physical Sciences
For Information, Contact:
Pushkar Wadke
King's College London
Zoran Cvetkovic
Enzo De Sena
Huseyin Hacihabiboglu