Design Patterns in Voice Interfaces
Do VUI patterns exist? Are they useful? Where can we find them? In graphical interfaces we are used to talking about design and interaction patterns, and we use them all the time to address user needs. These patterns are simply well-known solutions that users are already familiar with. The boom of voice interfaces as a “new” interaction channel makes us wonder whether there is a similar artifact in voice design that we can use in our VUIs (voice user interfaces). The short answer to that question is a big YES. In this post I will give the long answer, explaining what a VUI design pattern is, the different types of patterns, and some examples.

But let’s start from the beginning. According to Wikipedia, “a pattern is a regularity in the world, in human-made design, or in abstract ideas. As such, the elements of a pattern repeat in a predictable manner”. We have countless examples everywhere, from the tiles in our kitchens to most of the interactions we have on webpages every day. Sources like ui-patterns document most of these patterns and can be consulted to make better use of each one.

If we dig into the definition, a pattern is a structure that we can replicate to get a predictable result. That predictability helps creators avoid reinventing the wheel each time, and it also helps users understand how things work when they face a new interface.

In GUIs (graphical user interfaces), patterns are conventions we have accepted over time and through experience. Since graphical interfaces are a human invention with no direct counterpart in nature, all of their interactions are artificially made up. However, the patterns we use are not pure inventions but adaptations that take advantage of how the brain works and how it interprets the world around us. In a way, patterns hack users’ brains so they can better understand how to proceed when interacting with a flat screen.
Because of that, we have plenty of studies explaining how the brain handles perception, such as the Gestalt laws, later complemented by principles like Hick’s, Fitts’s, and Zeigarnik’s laws. All of these studies help creators design interactions with predictable results.

In this artificial construction, patterns gain and lose popularity over time. These ups and downs are driven by technical evolution, social trends, and empirical results showing how well or poorly a pattern performs.

Just as in GUI, in the voice interaction world we need to understand how the human brain works so we can create optimized interactions for whatever goal we have. Unlike graphical interfaces, however, conversational interaction is not a technological artifact that humans invented in recent years. Conversation is a core social behavior that we learn from our earliest days. Because of that, interaction patterns occur all the time in oral communication, and they are replicated in voice technology (for example, the common patterns used by leaders). Our work as designers is to identify the different elements that take part in a voice interface, understand how they work outside of technology, and adapt them to our interactions.

Given the specifics of the channel, I have dealt so far with three different types of interaction patterns in VUI: Narrative patterns.
This article first appeared on uxdesign.cc.