
Emotive UI

Posted by Tady Walsh on March 27, 2017


I was having lunch recently with Martin, our MD, and our conversation turned to fitness, or, if you like, our concern about our age and the need to stay active. We discussed the various apps and trackers we use to monitor “how we’re doing”. Martin showed me an app that reads your pulse when you place your finger over the camera of the phone: the flash LED shines into your finger and the camera picks up the subtle colour changes caused by your blood flow. Of course, being the nerds we are, our thoughts turned to how this could be used in other ways in the future. That leads us to the question of how Emotive UI will form a part of our lives in the very near future.
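
As an aside, and purely as a sketch of the idea rather than of how that particular app actually works, the trick can be boiled down to surprisingly little code. Assume the camera gives us the average brightness of each frame while the finger covers the lens; counting the periodic swings in that signal gives a rough beats-per-minute figure. Everything here (the function name, the sample rate) is invented for illustration.

    // Rough sketch: estimate pulse from per-frame brightness samples taken
    // while a fingertip covers the camera and the flash LED shines through it.
    function estimateBpm(samples: number[], sampleRateHz: number): number {
      // Centre the signal around zero by removing its average level.
      const mean = samples.reduce((sum, v) => sum + v, 0) / samples.length;
      const centred = samples.map(v => v - mean);

      // Each upward zero-crossing roughly corresponds to one heartbeat.
      let beats = 0;
      for (let i = 1; i < centred.length; i++) {
        if (centred[i - 1] <= 0 && centred[i] > 0) beats++;
      }

      const minutes = samples.length / sampleRateHz / 60;
      return beats / minutes;
    }

    // e.g. ten seconds of samples captured at 30 frames per second:
    // estimateBpm(brightnessSamples, 30);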

When we think of User Interaction (UI), most of us think of screens, keyboards and mice. This is a nice starting point, but people with assistive technology needs will also think of speakers, microphones, pedals, joysticks, eye-trackers and many, many more. These are all physically interactive devices that take input from users and translate it into machine tasks: make the machine do something based on this action. But what about when we give the machines autonomy? What about when we allow the machines to sense what is going on in our minds?

After a talk I gave on web accessibility a number of years ago, I had an interesting discussion with Josh Holmes from Microsoft about some of the work being done with the Kinect. At the time, Kinect had been out for a couple of years as a gaming interface, watching users’ movements in relation to tasks to be performed in the game. Even today, it’s still marketed as an add-on to the Xbox One and mostly seen as a gaming accessory. However, tech blogs and writers were starting to explore the further possibilities of Kinect and how it could become part of our daily lives. Josh had his Kinect rigged to a machine in his home that could recognise the face of whoever was in the room. Based on that facial scan, it would also recognise what mood that person was in and adjust the house to suit. If Josh was scowling, it would turn on some loud and trashy metal, dim the lights and (someday in the future) pour him a beer; if Josh was in a good mood, it would turn on something lighter and more upbeat and (again, someday in the future) pour him a whiskey or a wine. This way his house "recognises him" and reacts to him based on his moods and perceived needs. Voice recognition* could do the same “person detection”, but I think mood detection is more difficult through vocal analysis. These adaptations based on mood are a new wave of Emotive UI: interactions that we make without even thinking about them. We have a lot of this technology already with Amazon Alexa and Google Home, but the emotive aspect is the missing piece of the puzzle.
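
To make that a little more concrete, here is a hypothetical sketch (in TypeScript) of the kind of mood-to-action mapping Josh’s rig might use once the camera has decided who is in the room and what mood they are in. The Mood type, the HomeActions shape and chooseActions are my own inventions for illustration; they are not part of any Kinect SDK.

    // Hypothetical mood-to-home mapping, invented for illustration.
    type Mood = "scowling" | "neutral" | "good";

    interface HomeActions {
      playlist: string;
      lightLevel: number; // 0 (off) to 1 (full brightness)
      drink?: string;     // "someday in the future"
    }

    function chooseActions(person: string, mood: Mood): HomeActions {
      // The mapping is personal: someone else's profile would differ.
      if (person !== "Josh") return { playlist: "ambient", lightLevel: 0.6 };

      switch (mood) {
        case "scowling":
          return { playlist: "loud, trashy metal", lightLevel: 0.3, drink: "beer" };
        case "good":
          return { playlist: "light and upbeat", lightLevel: 0.8, drink: "whiskey" };
        default:
          return { playlist: "ambient", lightLevel: 0.6 };
      }
    }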

During our meal, Martin and I discussed how taking that blood flow sensor and applying it to different situations could improve certain aspects of quality of life. Imagine, in the future, you are driving, stressed by traffic and work and wanting to get home. Your steering wheel will detect your blood flow, read your heart rate and prevent you from suddenly slamming the accelerator, or maybe regulate your speed to prevent you from driving faster than the speed limit (by the way, I searched, and yes, there is a patent application for this already). A head-up display (HUD) built into your future windscreen might suggest turning into a filling station to take a break and get a coffee. It might even check your calendar, or the direction you are headed based on the time of day, and send a text to your loved ones telling them "I'm running late due to terrible traffic". A Kinect-type camera might change your music choice to better calm your mood. A voice tempo and timbre recognition system, listening to you shout at the "idiots, how do you have a licence", would detect your anxiety and drop the AC to cool off the physical environment. These are extreme examples (personally, my car telling me what to do would drive me even more insane) but they illustrate that, in time, adding emotive contexts to UI could help save lives.
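
If you wanted to sketch how graded those interventions might be, it could look something like the snippet below. The thresholds, the DrivingAssist shape and the messages are all made up for illustration; a real system (patent application or not) would be far more sophisticated.

    // Hypothetical graded response to a rising heart rate at the wheel.
    interface DrivingAssist {
      capThrottle: boolean;        // soften sudden accelerator input
      enforceSpeedLimit: boolean;  // keep speed at or below the posted limit
      hudSuggestion?: string;      // message shown on the windscreen HUD
      textToLovedOnes?: string;    // message sent on your behalf
    }

    function assistFor(heartRateBpm: number, restingBpm: number): DrivingAssist {
      const elevation = heartRateBpm / restingBpm;

      if (elevation > 1.5) {
        return {
          capThrottle: true,
          enforceSpeedLimit: true,
          hudSuggestion: "There's a filling station ahead. Take a break?",
          textToLovedOnes: "I'm running late due to terrible traffic",
        };
      }
      if (elevation > 1.2) {
        return { capThrottle: true, enforceSpeedLimit: true };
      }
      return { capThrottle: false, enforceSpeedLimit: false };
    }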

But let’s get back to the web. How will this affect our use of the computer in future? Well, it’s only fair to assume that, as emotive signals become available, they will eventually become available to the browser. In less than 10 years, web browsers will be able to detect your frustration at not being able to find the next step in a process. This could be used to provide an assistive cue that says “This is what you should be doing next” or asks "What can I help you with?". They could detect that you are angry, so when you access your gas supplier’s website, it will know you want to make a complaint and immediately present you with contact numbers or an email form. YouTube could use it on comments, so when someone irately pounds their comment into the text box, a confirmation might appear suggesting that they sleep on it: if they really want to say this to this person, maybe it could wait till morning? Again, these are extreme examples, but they serve to illustrate the point.
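
In fact, you can approximate a crude version of this in the browser today without any emotive sensor at all. The sketch below treats a burst of rapid clicks on a page ("rage clicks") as a frustration signal and offers an assistive cue; the counts, timings and the showAssistiveCue helper are placeholders of my own.

    // Crude frustration signal: several clicks in quick succession.
    const RAGE_CLICK_COUNT = 4;        // this many clicks...
    const RAGE_CLICK_WINDOW_MS = 2000; // ...within this many milliseconds

    let recentClicks: number[] = [];

    function showAssistiveCue(message: string): void {
      // Placeholder: a real site would surface its own help UI here.
      console.log(message);
    }

    document.addEventListener("click", () => {
      const now = Date.now();
      recentClicks = recentClicks.filter(t => now - t < RAGE_CLICK_WINDOW_MS);
      recentClicks.push(now);

      if (recentClicks.length >= RAGE_CLICK_COUNT) {
        recentClicks = []; // reset so the cue isn't shown on every click
        showAssistiveCue("What can I help you with?");
      }
    });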

User Interaction is a constantly evolving field. It’s one where we are constantly thinking about how the user will interface with our product. The more tools available to us in the future to help our users achieve their goals, the better that User Experience will be. Should we be making these decisions for our users? I can’t answer that yet; only time will tell. There are greater ethical questions around this type of interaction, and you can absolutely bet they will be abused. But they’re coming, so we should prepare for their implementation in the most assistive way we can.

* Just prior to releasing this article, Amazon announced they are working to get Alexa to distinguish between different voices.

About the Author

Tady Walsh

Tady is a Frontend UX Developer at Arekibo. He has worked at Arekibo for over 13 years and has a keen interest in our customers’ experience. He has spoken at events on the subjects of UX and Accessibility and is recognised for his expertise in this domain.