Rapid technological development in areas such as consumer electronics, automotive, robotics, home automation and wearable devices has greatly changed the way we interact with machines and computers. The human-machine interface began as a "computer interface", as early computers were not interactive.
Gradually, the "human-machine interface" became "human-machine interaction"; adding interactivity to the human-machine interface was the first revolution of HMI. Over the last century, the human-machine interface has transitioned from the earliest text user interface (TUI) / command-line interface, to the graphical user interface (GUI), and is now on the way to the natural user interface (NUI), with natural, intuitive, immersive and intelligent beyond-touch techniques. The natural user interface is the second revolution of HMI, and the one we are experiencing at the moment.
Artificial intelligence (AI) plays an important role in the development of human-machine interactions, especially via machine learning and deep learning. AI is evolving from handcrafted-knowledge-based reasoning to statistical learning on big data, and in the near future it will also gain the capability of contextual adaptation. This masterclass will also introduce how machine learning works and what role AI is playing in human-machine interactions.
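To give a flavour of what "statistical learning" means in miniature, the sketch below trains a logistic-regression classifier by gradient descent on synthetic data. The dataset, parameters and "touch" framing are purely illustrative assumptions, not part of the masterclass material; real HMI systems use far larger models and data.

```python
# A minimal sketch of statistical learning: logistic regression trained by
# gradient descent on a toy dataset (all values are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two feature clusters standing in for "no touch" vs "touch" sensor readings.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```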
All the technologies in the human-machine interaction category are undergoing revolutions. These include peripherals such as keyboards and game console controllers, touch screens, and beyond-touch modalities such as voice, speech and conversational interactions, gesture, vision, force sensing, haptics, perceptive computing and many others. Competitive markets are pushing device and software players to innovate rapidly. Next-generation user interfaces are a core feature that enables them to differentiate themselves from traditional players and to offer customers amazing user experiences.
The first "touch display" was reported by E.A. Johnson in 1965, using a cathode-ray-tube (CRT) monitor with a capacitive touch overlay. Over the past decade, touch-screen technologies have been widely deployed to offer an intuitive user experience. There are many different technologies that can detect touch inputs on a display, including capacitive, resistive, acoustic and optical techniques. The working principles of these touch sensing technologies will be explained, and their advantages, disadvantages, system architectures and integration approaches will be compared.
In addition, on-cell and in-cell (embedded) touch integration, and the corresponding changes to the value chain, will be analysed.
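As a simplified illustration of one of those working principles, the sketch below models projected-capacitive touch detection under assumed conditions (a hypothetical grid of mutual-capacitance readings and an arbitrary noise threshold): a finger near the screen lowers the measured capacitance, and a touch is registered wherever the drop from the calibrated baseline exceeds the threshold.

```python
# A minimal sketch of projected-capacitive touch detection on a hypothetical
# 2D grid of mutual-capacitance readings (values and threshold are assumptions).
import numpy as np

def detect_touches(readings, baseline, threshold=0.2):
    """Return (row, col) grid coordinates where a touch is detected."""
    delta = baseline - readings            # capacitance drop caused by a finger
    touched = delta > threshold            # boolean touch map above the noise floor
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(touched))]

baseline = np.full((4, 4), 1.0)            # calibrated no-touch capacitance
readings = baseline.copy()
readings[2, 1] -= 0.5                      # simulated finger at row 2, col 1
print(detect_touches(readings, baseline))  # -> [(2, 1)]
```

Real controllers add filtering, baseline tracking and centroid interpolation on top of this, but the baseline-minus-reading comparison is the core idea.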
A speech-based interface provides seamless interactions between people and devices. For instance, "Give me directions to LAX" will display the map and driving directions to Los Angeles International Airport, whereas with traditional interfaces users have to switch between multiple windows and type text inputs to get the same results. (Automatic) speech recognition, an important technology in voice interfaces, has been developed for decades, but only recently did recognition accuracy improve to an applicable stage. Natural language understanding and speech synthesis further expand the usage of speech-based HMI.
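As a rough sketch of such a pipeline, the example below uses the open-source Python SpeechRecognition library (one possible ASR engine among many, chosen here as an assumption) to transcribe an utterance and route a toy "directions" intent; production assistants layer natural language understanding and speech synthesis on top of this.

```python
# A minimal sketch of a speech-based interface using the SpeechRecognition
# library; the keyword-based intent matching below is a toy illustration.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:                 # capture one utterance (needs PyAudio)
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)   # cloud ASR; needs connectivity
    print("Heard:", text)
    if "direction" in text.lower():             # toy intent detection
        destination = text.lower().split("to")[-1].strip()
        print(f"Showing map and driving directions to {destination}...")
except sr.UnknownValueError:
    print("Sorry, I did not catch that.")
```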
Vision plays a dominant role in human interactions with the world. Displays are already primary interfaces in our daily lives, especially with the integration of touch screens, which turns them into two-way interaction devices. 3D vision and gesture interactions are making displays even more powerful.
This session will focus mainly on emerging haptic technologies through case studies. Corresponding applications and players will be introduced as well.
It will be difficult to find another single dominant human-machine interaction technology after touch. Natural and intuitive human-machine interactions need to be multimodal, combining gesture, speech, eye gaze, facial expressions, emotions and touch. Besides the above-mentioned input modalities, emerging brain-computer interface technologies and new keyboard technologies will also be briefly discussed.
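One common way to combine such modalities is late fusion: each recognizer reports candidate intents with confidence scores, and the system picks the intent with the highest combined weighted score. The sketch below is a simplified illustration under assumed modality weights and confidences, not a description of any particular product.

```python
# A minimal sketch of late fusion for multimodal input (all names, weights and
# confidences below are hypothetical): each modality emits (intent, confidence)
# pairs, and the fused decision is the intent with the highest weighted score.
from collections import defaultdict

def fuse(modality_outputs, weights):
    """modality_outputs: {modality: [(intent, confidence), ...]}"""
    scores = defaultdict(float)
    for modality, candidates in modality_outputs.items():
        for intent, confidence in candidates:
            scores[intent] += weights.get(modality, 1.0) * confidence
    return max(scores, key=scores.get)

outputs = {
    "speech":  [("open_map", 0.7), ("play_music", 0.2)],
    "gesture": [("open_map", 0.6)],
    "gaze":    [("play_music", 0.4)],
}
print(fuse(outputs, weights={"speech": 1.0, "gesture": 0.8, "gaze": 0.5}))
# -> open_map
```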
This masterclass will guide you through the next big opportunities in human-machine interactions that can be applied in consumer electronics, automotive, robotics, home automation, wearables, etc. It will cover explanations and assessments of disruptive technologies, as well as case studies, vertical application market status and analysis, value chains, opportunities and challenges. It is designed for those who want to gain an in-depth understanding of disruptive technologies in user interfaces, who need to expand their existing business areas, and who would like to see the big picture in order to assess the markets, challenges and opportunities.
Please join IDTechEx Analysts for lunch and networking from 1:00pm to 2:00pm.
Compiled by Dr Xiaoxi He, Senior Technology Analyst, IDTechEx.
Timings and the agenda are subject to change
Since 1999 IDTechEx has provided independent market research, business intelligence and events on emerging technologies to clients in over 80 countries. Our clients use our insights to help make strategic business decisions and grow their organizations. IDTechEx is headquartered in Cambridge, UK with additional offices in Boston, USA; Berlin, Germany; Tokyo, Japan; and Seoul, Korea.