
Haptics | An expert view on multi-modal interfaces

Multi-modal interfaces are increasingly understood to be key to intuitive user experiences. Jenny Grinsted, Content Writer at Ultrahaptics, recently spoke to Vincent Hayward, Professor of Tactile Perception and Technology at the University of London, about the science behind multi-modal interfaces and the role haptics (i.e. the science of touch) can play in them.

Vincent is one of the world’s leading haptics academics and a fellow of the Institute of Electrical and Electronics Engineers. Over the past decade he has developed a computational theory of tactile perception that is grounded in the physics of mechanical interactions. 

Vincent is also the CSO of Paris-based start-up Actronika SAS, a fellow haptics company. Actronika brings HD haptics to the market using a highly versatile platform composed of high-bandwidth proprietary actuators controlled by a physical and perceptual engine. It is active in industries such as automotive, mobile, home automation, and medical devices. Actronika and Ultrahaptics are both partners in the H-Reality H2020 project, together with Birmingham University, TU Delft and CNRS.

Read on for a deep dive into the nature of human perception, the special role of touch and the promises and challenges of multi-modal interfaces.  

 

Thanks for talking to us. I’d like to start by asking how you would define the term “multi-modal”. The term is used frequently in discussions around the future of interface design, but I’m interested in hearing how you define it from a scientific perspective.

In this context, the term “modal” refers to “sensory modalities”. Today, the sensory modalities are defined as sight, hearing, touch, olfaction, taste, thermal perception, vestibular perception (or the sense of balance) and proprioception (or sense of your configuration in space). Each of these may then include sub-modalities such as net load, vibration, pain or itching.


Your brain puts different streams of information together to form your perception of the world.

Multi-modal means something that applies to more than one sensory modality.

It’s important to understand that sensory modalities are not the same thing as “perception”, which is usually itself multi-modal. For example, when you write on paper with a pencil you feel the paper, not the vibrations of the pencil or the load you apply. If you pay detailed attention to the vibrations of the pencil or to the applied load, then you actually lose the perception of the paper.

Your perception here is mediated by several sub-modalities, and your brain puts these streams of information together to form your perception of a sheet of paper.

 

Can you talk a little bit about the history of multi-modal interfaces?

Any interface that is engineered to stimulate more than one modality is actually a “multi-modal” interface. So, technically, a 1950s television set is a multi-modal interface, since you experience a programme using both vision and hearing.


Technically, an old-fashioned television set is a multi-modal interface.

In the early 1990s, computer HMI researchers became interested in multi-modal interfaces in a broader sense, that is, interfaces that enable interaction beyond a keyboard and mouse. Today’s multi-modal interfaces not only engage several sensory modalities, but also integrate new interaction modalities such as voice control, eye-movement detection, gestures, haptics, and so on.

Interactive multi-modal technologies have been around for some time. For instance, in the field of teleoperation, technologies have been available since the 1950s that enable a human operator to see, hear, and feel what a remote robot manipulator would experience. However, computer scientists became interested in these ideas only rather recently.

 

Why are multi-modal interfaces important?

It seems to be almost self-evident that multi-modal interfaces are better than mono-modal interfaces (such as a standard telephone), and that several modes of interaction are better than one.

From a sensory perspective, the mammalian brain, of which the human brain is the most evolved version, has adapted to integrate information from many modalities to an extraordinary degree.

Take, for example, the simple sensation of heaviness. At first sight, you would think that it boils down to estimating the load of gravity when lifting an object. It has been known for more than a century, however, that the visual aspect of an object has a huge influence on the sensation of heaviness. In the so-called “size/weight illusion”, the sensation of heaviness can vary by more than 20% according to the visual appearance of objects.


The sensation of heaviness can vary by more than 20% according to the visual appearance of objects.

Another familiar example is television. You perceive the voice of the people you see on the screen to come from their mouths. In reality, however, the speakers are often located on the sides of the TV set. If you hide the screen, you will immediately realise that the sounds do not in fact come from it. This is the so-called “ventriloquist” effect.

There are many examples like these that demonstrate the highly-integrated nature of our perceptual experience.

But, as we all know, multi-modal interfaces are not necessarily better. When badly designed, they can be disorienting and confusing. It depends on the task being performed, the quality of the integration of the modalities, and a slew of other factors.

 

All of our senses have their own unique features. At Ultrahaptics, we’re interested in the particular role that touch plays. Can you explain what you see as some of the unique features of touch?

The sense of touch, like other senses, integrates sub-modalities. Your sense of touch collects information from any soft tissue in your body: muscles, tendons, sub-cutaneous tissues, skin and so on. This includes information about pressure, vibration, pain, temperature, movement and position, among others.

Touch is inextricably intertwined with the manipulative and the locomotive functions in mammals. To be able to walk effectively, it is absolutely essential to be able to assess the mechanical properties of the ground you are on. Think of walking on a trampoline, on a carpet, or on an oil slick.

Similarly, holding and manipulating objects requires you to “know” what the object is, its inertial tensor, the material it is made of, its mass, and so on. Without touch, manipulation becomes nearly impossible or excruciatingly clumsy. Think for example of trying to put a key into a keyhole when your fingers are cold. Your muscles are intact, but because the low temperature numbs your skin it becomes difficult to perform the action.


Trying to put a key in a keyhole when your fingers are cold is difficult because the low temperature numbs your skin.

From a strictly sensory viewpoint, touch is very good at detecting very small things, very quickly. A physical feature smaller than a micrometre is easily detected when passing your finger over it.

Cognitively, touch is special in many ways. For example, in a recent study we showed that in cases of high sensory ambiguity (which is frequently the case for messages transmitted by displays), touch provides a higher sense of confidence than vision, although its performance may be lower. In other words, humans are confident in the reality of the world more through touch than through any other sense.

 

Finally, what are some of the benefits of adding haptic technology to HMIs and incorporating touch into user experiences?

It’s been reported in behavioural studies that people may feel anxious when interfaces have realistic graphical features but lack the corresponding tactile experience. It is as if they felt they were missing some key information about the “reality” that was given to them.

In other words, it is as if the sensory feedback that was given did not match their expectation, causing a disconnect. An important role of providing high-quality haptics in HMI is to satisfy the sensory expectations of users.

It is simplistic to say that adding haptics to HMI directly translates to better raw performance. Most of the movements we execute are pre-planned; the human brain is often called a “prediction machine”, especially when it comes to interaction with people and machines.

The brain collects sensory information far quicker than it can make changes to pre-planned actions, so just adding tactile sensations cannot be said to improve raw performance on an instant-by-instant basis. However, sensory information of all sorts informs the brain’s predictions. For example, we are used to receiving a rich stream of tactile information when interacting with real-world objects. If tactile information is missing from an HMI, this confuses the brain’s predictions and can make HMI harder to use.

We are used to receiving a rich stream of tactile information when interacting with real-world objects.

Haptics also performs functions such as delivering silent alerts and providing information that vision and hearing are poor at collecting, such as the position of objects relative to the body. In many use-cases for haptics, vision and hearing are also already busy with important tasks such as steering a vehicle. In examples such as these, touch provides an alternative channel for communication and interaction.

A final, and far from insignificant, reason is simply that well-designed haptics make HMI more enjoyable to use.

 

Thanks again for taking the time to speak to us. I think we’re all looking forward to seeing what role haptic technology will play in the next generation of multi-modal interfaces.

Yes, I agree. The future of multi-modal interfaces looks very exciting, and it seems clear that haptic technology is going to play a big role in it.  

 

Vincent Hayward is a professor (on leave) at Sorbonne Université in Paris. He is a Fellow of the IEEE, interested in haptic device design, human perception, and robotics. Since January 2017, he has been a Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship. He is also the CSO of Actronika SAS, a Paris-based startup company specializing in haptics. Actronika is active in industries such as automotive, mobile telephony, home automation, and medical devices by providing a highly versatile platform that enables clients to incorporate haptics within their products and UIs.

About Ultrahaptics

Ultrahaptics’ breakthrough technology uses patented algorithms to control ultrasound waves, enabling the creation of tactile sensations in mid-air. No controllers or wearables are needed: the “virtual touch” technology uses ultrasonic speakers to project shapes, textures and effects directly onto the user’s hands. Ultrahaptics’ technology is used across a range of sectors, including automotive, digital signage and immersive experiences.
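To give a flavour of the general principle behind mid-air haptics, the sketch below shows how a phased array of ultrasonic transducers can be focused at a point in space: each transducer is driven with a phase offset that compensates for its distance to the focal point, so all the wavefronts arrive in phase and concentrate acoustic pressure there. This is a minimal illustration of textbook phased-array focusing, not Ultrahaptics’ patented algorithms; the 40 kHz frequency, the 4 × 4 array geometry, and the helper name focus_phases are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C
FREQUENCY = 40e3        # Hz, a common frequency for airborne ultrasound (assumed here)

def focus_phases(transducer_positions, focal_point,
                 frequency=FREQUENCY, c=SPEED_OF_SOUND):
    """Return the phase offset (radians) for each transducer so that
    all emissions arrive in phase at the focal point."""
    positions = np.asarray(transducer_positions, dtype=float)
    distances = np.linalg.norm(positions - np.asarray(focal_point, dtype=float), axis=1)
    wavelength = c / frequency
    # Advance the phase of transducers that are farther from the focus,
    # compensating their longer path so the wavefronts coincide at the focal point.
    return (2 * np.pi * distances / wavelength) % (2 * np.pi)

# Example (hypothetical geometry): a 4 x 4 grid of transducers spaced 10 mm
# apart in the z = 0 plane, focused 20 cm above the centre of the array.
xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
array_positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
phases = focus_phases(array_positions, focal_point=[0.015, 0.015, 0.20])
print(np.round(phases, 2))
```

Modulating such a focal point (in position or intensity) at frequencies the skin is sensitive to is, broadly, how mid-air tactile sensations can be produced; the production systems involve considerably more than this sketch.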
