Immersion in extended reality (XR) environments is no longer limited to the visual aspect alone. To enrich the user experience and make it more natural, we are now exploring several sensory modalities: vision, audio, haptics, and even olfaction. While the ambition is clear, implementation remains fraught with pitfalls.

As part of the HEAT research project, Immersion has carried out a pre-pilot of a new multisensory device.

A multimodal immersive experience

In a multimodal experience, each sensory modality must be not only individually convincing, but also perfectly coherent with the experience as a whole. A haptic rendering that is poorly synchronized with the visuals, a mismatch between a spatialized sound source and its actual position, or a smell that is out of sync with the narrative context risks shattering the illusion and provoking cognitive dissonance in the user.

This synchronization requirement calls for rigorous real-time management, which is all the more complex when combining modalities with very different dynamics. While image and sound can be refreshed at high rates, haptic and olfactory rendering are subject to slower physical constraints, making their dynamic modulation more difficult.
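One minimal way to picture this multi-rate problem is sketched below, assuming purely illustrative refresh rates and hypothetical renderer callbacks (none of this reflects the HEAT implementation): a single master clock drives each modality at its own cadence, so slower channels stay on the shared timeline without holding back the faster ones.

import time

# Illustrative update rates (Hz); real values depend entirely on the hardware.
UPDATE_RATES_HZ = {"visual": 90, "audio": 100, "haptic": 50, "wind": 5, "scent": 0.2}

def run_multirate_loop(renderers, duration_s):
    """Drive each modality at its own rate from one master clock so that
    all stimuli are expressed on a shared timeline."""
    periods = {m: 1.0 / hz for m, hz in UPDATE_RATES_HZ.items()}
    next_due = dict.fromkeys(UPDATE_RATES_HZ, 0.0)
    start = time.perf_counter()
    while (now := time.perf_counter() - start) < duration_s:
        for modality, due in next_due.items():
            if now >= due:
                renderers[modality](now)              # update with the shared timestamp
                next_due[modality] = due + periods[modality]
        time.sleep(0.0005)                            # yield briefly between polls

# Stub renderers that simply log when each modality fires.
renderers = {m: (lambda t, m=m: print(f"{t:6.3f}s  {m}")) for m in UPDATE_RATES_HZ}
run_multirate_loop(renderers, duration_s=1.0)

In a real XR engine the same idea usually takes the form of fixed-timestep sub-loops or per-device threads, but the principle, one shared clock with per-modality cadences, is what keeps the stimuli coherent.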

Each modality relies on technologies with varying levels of maturity and standardization. While graphics and audio rendering engines benefit from decades of optimization, haptic and olfactory rendering remain experimental, often limited to fragmented, proprietary solutions.

This heterogeneity raises major interoperability challenges. How can we guarantee minimal latency and smooth rendering when it comes to combining 3D engines, audio spatialization systems, complex haptic feedback and olfactory diffusers? How can we design robust software architectures capable of orchestrating these multiple flows without compromising the performance and stability of XR applications?
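One possible answer, sketched below under the assumption of hypothetical driver callbacks and latency budgets (this is not the HEAT project's actual architecture), is to decouple the 3D engine from the actuators with a per-modality queue: the engine publishes timestamped sensory events, each modality consumes them at its own pace, and events that arrive past their latency budget are dropped rather than rendered out of sync.

import queue
import threading
import time
from dataclasses import dataclass

@dataclass
class SensoryEvent:
    timestamp: float   # scene time (seconds) at which the stimulus should occur
    modality: str      # "visual", "audio", "haptic", "wind", "scent"
    payload: dict      # modality-specific parameters (intensity, position, ...)

class Orchestrator:
    """Routes timestamped events from the 3D engine to per-modality workers."""

    def __init__(self, drivers, max_lag_s):
        # drivers: modality -> callable(payload); max_lag_s: modality -> latency budget
        self.queues = {m: queue.Queue() for m in drivers}
        self.workers = [
            threading.Thread(target=self._consume, args=(m, drivers[m], max_lag_s[m]), daemon=True)
            for m in drivers
        ]
        self.start_time = time.perf_counter()

    def start(self):
        for worker in self.workers:
            worker.start()

    def publish(self, event: SensoryEvent):
        self.queues[event.modality].put(event)

    def _consume(self, modality, driver, max_lag_s):
        while True:
            event = self.queues[modality].get()
            lag = (time.perf_counter() - self.start_time) - event.timestamp
            if lag > max_lag_s:
                continue   # too late to stay coherent: skip rather than render out of sync
            driver(event.payload)

The per-modality latency budget makes the trade-off explicit: a visual frame a few milliseconds late is still worth showing, whereas a scent pulse arriving seconds after its narrative cue is better suppressed.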

Why this connection to the Sensorama machine?

In 1962, long before the rise of virtual reality, inventor Morton Heilig developed the Sensorama, a revolutionary immersive machine designed to stimulate several senses simultaneously.

Conceived as one of the first extended reality devices, the Sensorama offered a multisensory experience, combining stereoscopic 3D images, stereo sound, vibrations, scent diffusion and airflow to immerse the user in a realistic simulation.

One of the most striking scenarios involved a motorcycle ride through the streets of New York, where you could feel the wind on your face, road vibrations and even urban smells.

Despite being ahead of its time, the Sensorama was never a commercial success, owing to its high cost and bulk and to a market that was not yet receptive to this type of immersive experience. Today, however, it is recognized as a forerunner of virtual reality (VR) and extended reality (XR) technologies, laying the foundations for modern immersive experiences.

For this first experiment, the HEAT project’s multimodal chair uses three sensory dimensions (sound, wind and vibration) to assess their impact on immersion.

From prototype to adoption: what’s at stake for tomorrow?

Beyond technical constraints, multimodality in XR requires genuine sensory design work. It is not just a question of adding stimuli, but of thinking about their complementarity and perceptual impact: some combinations enhance immersion, while others can be counterproductive. As part of the HEAT project, Immersion aims to explore these challenges through three use cases: e-learning, theater and opera. For this first pilot, the teams have developed a test device centered on an element potentially common to all three domains: the chair.

A demonstration of this prototype will be organized as part of the TEI scientific conference, of which Immersion is a partner.

Would you like to work with Immersion on a research project? Contact us.