Designing Personalities for Intelligent User Interfaces


Curo (http://kimtaery.com/works/work-a)
by Alice Fang, Elizabeth Han, and Taery Kim

Kai & Curai (http://tilokrueger.com/work/kai/)
by Anukriti Kedia, Tilo Krueger, and Josh LeFevre

Abby (https://maddycha.com/abbi)
by Maddy Cha, Alissa Chan, and Sharon Lee

Chef (https://designawards.core77.com/consumer-technology/96498/Blue-Apron-Conversational-AI-for-Culinary-Experiences)
by Anna Boyle, Deepika Dixit, Yiwei Huang, and Amrita Khoshoo

Duration: 6 weeks

Project Brief

Recent advancements in speech recognition have yielded intelligent (AI-powered) products like Amazon Alexa and Google Home, and have made many existing products conversational. Beyond chatbots, we can now speak naturally to our laptops, smartphones, watches, kitchen appliances, and vehicles. Interfaces, however, are far from solely dependent on voice input. Speech and visuals together create more nuanced and rich user interactions, and it is the designer's responsibility and challenge to determine the most appropriate balance between them. A new design space is forming around shaping the personality of intelligent interfaces. The distinctive visuals, movements, and voices of virtual assistants define personalities that shape user experiences as well as brand experiences.

In groups of three or four, select a brand/company that is currently without any voice user interface (VUI). You will design the personality of your chosen brand/company’s intelligent and multimodal interface (touch and voice input; and visual and voice feedback) to provide compelling and novel experiences.

 

Questions to Consider:

  • VUI yields a new mode of interaction, yet it requires us to consider whether every interaction should be conversational. What are plausible situations or appropriate tasks for VUI?
  • Unlike graphical user interfaces (GUI), VUI itself is invisible. It is therefore key to harness graphics to inform users about the current state of the system. How might designers effectively communicate the dynamic contexts of VUI using sound and motion graphics? How might designers handle variations of form while maintaining the integrity of the visual identity?
  • VUIs require designers to reconsider screen real estate and user flow. How might adding a VUI affect the design of an existing product?
  • The interface can be an assistant that carries out given tasks or an intelligent agent that autonomously intervenes when deemed necessary. What might be the primary role of your interface and the appropriate scope of its autonomy? 
  • Designers can build systems that mediate interactions among people (not solely those between an individual user and system). What does your interface do, if anything, for human-to-human interaction? 

 

Warm-up Exercises:

  • Choose one existing GUI and one VUI, and create user experience flow diagrams for each. Identify areas of opportunities and limitations.
  • Choose an existing virtual assistant and analyze the variations of visual indicators to comprehend successful time-based visual communication principles.

Learning Objectives

  • Examine existing interfaces to identify the opportunities and pain points
  • Integrate auditory and visual feedback to design the personality of an intelligent interface
  • Build a system that can effectively communicate the states of an interface through motion
  • Devise a dynamic visual system that continuously maintains its integrity over time
  • Articulate the interactions between a human and computer and between humans

Deliverables

  • Customer journey map / UX flow diagram
  • Video showing the motion states on a quadrant chart 
  • High-fidelity concept video showing scenarios (MP4, no longer than 3 minutes) OR an interactive prototype (built with p5.js and p5.speech.js, Adobe XD, Dialogflow, or Voiceflow)
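For groups choosing the p5.js route, a prototype might pair a small state machine (driving the visual indicator) with p5.speech.js events (driving the state transitions). The sketch below is one minimal, illustrative way to structure this; the state names and the wake/result/done events are assumptions, not part of the brief, and the p5.speech.js hookup is kept to the core calls (`p5.SpeechRec`, `p5.Speech`).

```javascript
// Minimal multimodal-assistant sketch: a pure state machine plus a
// browser-only p5.js / p5.speech.js hookup. State names are illustrative.

const STATES = { IDLE: 'idle', LISTENING: 'listening', SPEAKING: 'speaking' };

// Pure transition function: current state + event -> next state.
function nextState(state, event) {
  switch (event) {
    case 'wake':   return STATES.LISTENING; // user invoked the assistant
    case 'result': return state === STATES.LISTENING ? STATES.SPEAKING : state;
    case 'done':   return STATES.IDLE;      // speech output finished
    default:       return state;
  }
}

// Browser-only hookup (skipped when p5 is not loaded, e.g. in tests).
if (typeof window !== 'undefined' && typeof p5 !== 'undefined') {
  let state = STATES.IDLE;
  const rec = new p5.SpeechRec('en-US');
  const voice = new p5.Speech();

  rec.onResult = () => {
    state = nextState(state, 'result');
    voice.speak('You said: ' + rec.resultString);
  };
  voice.onEnd = () => { state = nextState(state, 'done'); };

  window.setup = () => {
    createCanvas(200, 200);
    state = nextState(state, 'wake');
    rec.start();
  };
  window.draw = () => {
    background(240);
    // Map each state to a distinct visual indicator, e.g. a pulsing circle.
    fill(state === STATES.LISTENING ? 'teal'
       : state === STATES.SPEAKING  ? 'orange' : 'gray');
    circle(width / 2, height / 2, 60 + 10 * Math.sin(frameCount * 0.1));
  };
}
```

Separating the transition logic from the p5 callbacks keeps the interaction model legible and lets groups revise the state diagram without touching the rendering code.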

Readings/Resources

  • Arx, P. von. (1983). Film + design: Explaining, designing with and applying the elementary phenomena and dimensions of film in design education at the AGS Basel School of Design. Chapman & Hall. 
  • Freyer, C., Noel, S., & Rucki, E. (2011). Digital by design: Crafting technology for products and environments. Thames & Hudson. 
  • Kolko, J., & Connors, C. (2011). Thoughts on interaction design. Morgan Kaufmann.
  • Pearl, C. (2017). Designing voice user interfaces: Principles of conversational experiences. O'Reilly. 
  • Platz, C. (2020). Design beyond devices: Creating multimodal, cross-device experiences. Rosenfeld.

Reflections

Between 2018 and 2020, I assigned this project, with slight modifications, across several courses in CMU's School of Design, including Computational Design Thinking, Graduate Interaction Design Studio I, and Junior Communications Studio IV. The project focus and final outcomes varied to suit each cohort's level and skills. (See the variation under Deliverables.)

The main objective of the project was to encourage students to take a research-based approach to interface design and to discover the value of multimodal interfaces, rather than simply harnessing new technology for technology's sake. The project was an effective way to address how integrating verbal and visual communication affects user experience. By diagramming the UX flow, students defined how their systems would perform in diverse situations while fulfilling users' needs and goals. This encouraged them to continuously and critically view the high-level interactions of their systems while designing the micro-interactions in prototyping.

Each group conceived a bespoke personality for its system that communicated with users through audio and visual output. They applied processes for designing flexible visual identities to give the systems form, while prioritizing clarity of communication over beauty, since clarity directly affects usability. Graphics and movement signified the systems' varying states (e.g., listening, notifying, speaking). Students were asked to produce a matrix chart on two axes (e.g., urgency of the situation, tone of the message) to map these states. This proved a useful method for organizing the kinetic variations of the visual indicators, such as the visual representations of their assistants/agents, with respect to form and context.
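The two-axis matrix can be made concrete as a function that maps a state's position on the axes to motion parameters for the indicator. The function below is a hypothetical sketch, assuming normalized urgency and tone axes (0 to 1); the parameter names, ranges, and easing labels are all illustrative, not drawn from any student work.

```javascript
// Hypothetical mapping from the two matrix axes to motion parameters.
// urgency: 0 (calm) .. 1 (urgent); tone: 0 (playful) .. 1 (serious).
// All names and numeric ranges here are illustrative assumptions.
function motionFor(urgency, tone) {
  return {
    // Higher urgency -> faster pulse (cycles per second).
    pulseRate: 0.5 + 2.5 * urgency,
    // More playful tone -> larger, bouncier amplitude (pixels).
    amplitude: 4 + 16 * (1 - tone),
    // Urgent messages snap; calm ones ease gently.
    easing: urgency > 0.5 ? 'ease-in-out-quart' : 'ease-in-out-sine',
  };
}

// Example: a gentle reminder vs. an urgent, serious alert.
const reminder = motionFor(0.2, 0.3); // slow pulse, playful bounce
const alert = motionFor(0.9, 0.8);    // fast pulse, restrained amplitude
```

Encoding the matrix as a continuous mapping, rather than a fixed set of cells, lets a single indicator interpolate smoothly between states while keeping the identity's kinetic logic consistent.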

The project provided interesting topics for classroom discussion, including determining the desirable levels and contexts for AI, shaping the specific role of an intelligent system, and designing systems that enable people to collaborate/co-create.

 
