Designing AI Interfaces: A Practical Guide

August 29, 2024

AI has moved from futuristic concept to familiar part of our daily lives. As an agency, we find ourselves at the forefront of this new wave of AI, and as designers, we face the unique challenge of defining how users make sense of the technology. Through our project work, we've developed insights into aligning AI initiatives with business objectives while ensuring they resonate with users responsibly and truthfully. Below is a shortlist of lessons from previous projects: practical, responsible ways to craft AI at various points in a project timeline, and ways to design effective AI user interfaces now and in the future.

Not everything has to be AI

Let's start at the top. Not everything benefits from having AI. When we're approached with an idea to incorporate AI without a clear understanding of its underlying “why,” it quickly becomes a nonstarter. As with any new technology, product creators should apply AI with intention and strategy over novelty.

We navigate this question by understanding the problem space before jumping into development: we uncover users' problems through user research, including participatory design, co-design, and close collaboration with stakeholders. Investing in a dedicated research phase to surface core user needs saves cost and time later in the development pipeline.


AI can quickly feel like a panacea for every user problem, but it is often overkill. Sometimes the solution lies in the human touch of an actual expert, or in a simple usability improvement to the existing system. A clear case for using AI should be made first, and only then brought to life, not vice versa.

Plan for AI operationalization early

With a rationale for AI in place, how will it come to life? Is there a viable roadmap to develop and train a new AI system? We've found it crucial to involve stakeholders early to assess the feasibility of the AI and to differentiate it from more straightforward machine learning or rule-based systems. We recommend working closely, and early, with an AI engineering advisor or team to map out the AI's unique contributions and define its specific inputs and outputs. Ultimately, the team should be able to confidently answer the "why" that fuels the "how" behind these innovation efforts.

Consider the end-user perspective on AI expectations, then work backward to define the scope. Commonly overlooked details include the data types you will use to train your model, the required training volume, and the testing needed to craft a smooth, predictable experience.

For example, to create a hypothetical AI therapist app, a product founder must first understand the end user's desire for empathy, personalization, and confidentiality in their therapeutic experience. Users will expect the AI to understand their emotions, provide relevant advice, and adapt its responses to their unique circumstances over time, mimicking a human therapist's ability to offer support and guidance. To achieve this, the founder must scope the app around a natural language processing model that can recognize and respond to a wide range of emotional cues, conversational contexts, and psychological states. Training the model will require diverse datasets, including anonymized therapy-session transcripts, emotional speech patterns, mental health forums, and user feedback on therapeutic effectiveness. A substantial volume of data, potentially millions of conversation snippets, is needed for the model to learn nuanced human emotion and conversational dynamics. Once implemented, rigorous testing with varied user demographics and real-world scenarios will be essential to refine the AI's conversational flow, ethical boundaries, and adaptability, ensuring a supportive, non-judgmental, and responsive experience that feels both natural and therapeutic.

As this example shows, the requirements run far deeper than meets the eye, and working through these details from the outset yields the most realistic plan with the highest chance of success.
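To make scoping concrete, the inputs, outputs, and data requirements can be captured in a lightweight spec that the whole team reviews before any model work begins. This is a hypothetical sketch; the fields and the therapist-app values are illustrative, not from a real project:

```python
from dataclasses import dataclass, field

@dataclass
class ModelScope:
    """Lightweight spec for an AI feature's inputs, outputs, and data needs."""
    inputs: list[str]
    outputs: list[str]
    data_sources: list[str]
    min_training_examples: int
    open_questions: list[str] = field(default_factory=list)

# Hypothetical scope for the AI-therapist example above.
therapist_scope = ModelScope(
    inputs=["user message text", "conversation history", "self-reported mood"],
    outputs=["empathetic reply", "suggested coping exercise"],
    data_sources=[
        "anonymized therapy-session transcripts",
        "mental-health forum threads (licensed)",
        "user feedback on therapeutic effectiveness",
    ],
    min_training_examples=1_000_000,  # "millions of conversation snippets"
    open_questions=[
        "How is confidentiality audited?",
        "What triggers a handoff to a human therapist?",
    ],
)

print(f"Data sources to procure: {len(therapist_scope.data_sources)}")
```

Even a spec this small forces the "why" conversation: every field that can't be filled in is a gap in the plan.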

From a business standpoint, it's essential to think ahead about how to measure the AI's effectiveness. What key metrics will the AI optimize? Examples might include faster form-completion rates through dynamic questions, or higher user engagement driven by more contextual information. Whatever they are, defining these attributes forces the product team to clarify AI expectations in advance. Combating over-confidence in a tool still in its infancy helps ensure that the end solution delivers on its promises.
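One way to combat that over-confidence is to pin each metric to a formula and a pre-launch baseline before the AI ships. A minimal sketch, with hypothetical event names and numbers:

```python
# Defining an AI success metric up front: a concrete formula plus a
# baseline measured before launch that the AI feature must beat.
# Event names and figures here are hypothetical.

def form_completion_rate(events: list[dict]) -> float:
    """Share of started forms that were completed."""
    started = sum(1 for e in events if e["type"] == "form_started")
    completed = sum(1 for e in events if e["type"] == "form_completed")
    return completed / started if started else 0.0

BASELINE = 0.62  # completion rate measured before dynamic questions shipped

# Simulated post-launch event log: 10 starts, 8 completions.
events = [{"type": "form_started"}] * 10 + [{"type": "form_completed"}] * 8
assert form_completion_rate(events) > BASELINE  # the AI must beat the baseline
```

Writing the check down this way turns "higher engagement" from a hope into a testable claim.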

Designing around interface ambiguity

Neural network-based AI systems, particularly those using machine learning models, often improve and become more refined with increased user engagement. As users interact with the AI product, the system collects more data, which can be used to train and fine-tune the models, resulting in more personalized and informative outputs. This creates a dynamic where the content and experience evolve based on user input, making the AI experience more engaging and tailored.

However, this flexibility also presents challenges for designers. With neural networks that can generate diverse and unpredictable outputs, designers face a vast array of possibilities and potential outcomes. This can feel like "opening Pandora's box" because it introduces complexity and uncertainty into the design process—designers may find it challenging to anticipate every possible interaction or outcome and must design systems that can adapt to and manage this variability effectively.

Approaching AI interfaces as building a scaffold can be the first step to unblocking design ideation. Apps that serve dynamic, AI-generated content must handle variable outputs gracefully. We must consider how flexibly a UI populates with content and create a framework that users can easily interpret and interact with. The same principles apply to interfaces featuring generative text elements. How can we create dynamic interfaces with generated text that doesn't break the UI? What limitations must be imposed on the AI to maintain design consistency? Designers and product managers must weigh these questions when designing AI-enabled experiences.
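One concrete way to keep generated text from breaking the UI is a guard between the model and the interface: the scaffold defines hard constraints, and every output is fitted to them or replaced by a designed fallback. A minimal sketch; the function name, limits, and fallback copy are all illustrative:

```python
# "Scaffold" guard: the UI owns the constraints, and any generated text is
# fitted to them (or replaced by designed fallback copy) before rendering.

def fit_to_slot(generated: str, max_chars: int, fallback: str) -> str:
    """Return text guaranteed to fit a fixed UI slot."""
    text = " ".join(generated.split())  # collapse stray whitespace/newlines
    if not text:
        return fallback                 # model returned nothing usable
    if len(text) <= max_chars:
        return text
    # Truncate on a word boundary and add an ellipsis.
    clipped = text[: max_chars - 1].rsplit(" ", 1)[0]
    return clipped + "…"

headline = fit_to_slot(
    "An unexpectedly   long model response that would overflow the card",
    max_chars=24,
    fallback="Your daily summary",
)
```

The design payoff is that the visual system stays predictable no matter what the model emits; the "limitations imposed on the AI" live in one reviewable place.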


Adopting the scaffold approach can be liberating from a design standpoint, as it allows us to separate the UI design from the AI engine supplying the content in a deployed state. During the software testing phase, it is critical to verify that all design intentions hold once the AI supplies real content.

We recently tackled a project involving an AI agent that provided dynamic content based on direct in-app interactions and indirect data collected through a wearable device. We envisioned the product as an ever-changing, ever-optimizing relationship between the end user and the AI. We explored evolving AI personalities, where the system learns and adapts its tone and style based on direct and indirect user input. If a user's behavior improves with one tone, the system keeps developing down that path; if it doesn't, the system shifts to find something that resonates. Aggregating what works and what doesn't across all users creates a network effect: the larger the system grows, the better the AI becomes at influencing user behavior.
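The adapt-or-shift behavior described above is, at its core, a multi-armed bandit problem. A minimal epsilon-greedy sketch, with hypothetical tone names and an engagement signal standing in for "improved behavior" (this is an illustration, not the project's actual system):

```python
import random

# Epsilon-greedy selection over candidate AI "tones", rewarded by an
# engagement signal (e.g., the user completing a suggested action).

class ToneSelector:
    def __init__(self, tones: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.stats = {t: {"reward": 0.0, "trials": 0} for t in tones}

    def choose(self) -> str:
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(list(self.stats))
        # exploit: highest average reward so far (untried tones count as 0)
        return max(self.stats, key=self._avg)

    def record(self, tone: str, reward: float) -> None:
        s = self.stats[tone]
        s["reward"] += reward
        s["trials"] += 1

    def _avg(self, tone: str) -> float:
        s = self.stats[tone]
        return s["reward"] / s["trials"] if s["trials"] else 0.0

selector = ToneSelector(["encouraging", "direct", "playful"])
tone = selector.choose()           # pick a tone for the next message
selector.record(tone, reward=1.0)  # user responded well, so reinforce it
```

Aggregating these stats across users, rather than per user, is what produces the network effect described above.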

Prioritizing user agency and privacy

It can be easy to get swept up in AI's capabilities, especially when those capabilities scale with the amount of data you feed the model. Now more than ever, it's crucial to continually reevaluate how to protect user agency and privacy when working with AI.

Recently, we worked with a Fortune 100 company to envision an AI product that helps parents predict future health issues in their children and suggests actions to prevent those issues from developing. In this scenario, the AI pulls data from interactions with the app and from exams with the child's primary care provider. The challenge for us as designers was setting the appropriate stage for human-AI collaboration that produces accurate results while maintaining user agency. The AI health assistant also posed an interesting problem: representing a health expert without undermining an actual trip to the doctor's office.

We examined this human/AI relationship in a separate user study that looked specifically at how users prefer to receive diagnostic test results: via an AI agent or a real doctor. (Spoiler: participants consistently chose the actual doctor.) This preference held among both millennial and Gen Z participants. While AI was considered acceptable as a background tool supporting human interaction, it was not favored as the sole means of communication; it was, however, seen as a helpful touchpoint between visits with healthcare providers. Embracing these findings, we advocate for positioning AI as an extension of a human expert, providing insights while preserving the crucial role of human professionals. For our Fortune 100 project, this vision shaped not only the product concept but also the visual design, content writing, and brand strategy, all of which presented AI as a trustworthy and reliable tool. Our goal was to empower patients and families with AI tools and to enhance doctors' capabilities, enabling them to better interact with patients.


Throughout our design process, we carefully consider how the AI gathers information about users, emphasizing transparency and respecting privacy concerns and international privacy regulations in every design decision. We achieve this by establishing internal design principles and by discussing the consequences of monitoring sensitive information with business stakeholders and potential users through testing. In our designs, we include moments that verify user consent and deliberate speed bumps that introduce friction into decisions about AI data collection. For the children's experience, we focused on an interface that conveyed the AI's capabilities while remaining friendly and approachable; the aim was to evoke a healthcare professional's depth and breadth of knowledge without being intrusive, intimidating, or overly analytical. We drew parallels with other health services and interviewed doctors to calibrate our expectations.
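A consent "speed bump" can be as simple as refusing to enable sensitive data collection without an explicit, recorded confirmation. A minimal sketch; the category names and two-step flow are illustrative assumptions, not the project's actual implementation:

```python
from datetime import datetime, timezone

# Data collection is off by default; enabling a sensitive category
# requires an explicit user confirmation, recorded with a timestamp.

class ConsentGate:
    SENSITIVE = {"health_records", "wearable_biometrics"}

    def __init__(self):
        self.granted: dict[str, str] = {}  # category -> ISO timestamp

    def request(self, category: str, user_confirmed: bool) -> bool:
        # Sensitive categories are never enabled silently; the UI must
        # surface a confirmation step and pass the user's explicit answer.
        if category in self.SENSITIVE and not user_confirmed:
            return False
        self.granted[category] = datetime.now(timezone.utc).isoformat()
        return True

    def allowed(self, category: str) -> bool:
        return category in self.granted    # default is always "off"

gate = ConsentGate()
gate.request("health_records", user_confirmed=False)  # blocked: no confirmation
gate.request("health_records", user_confirmed=True)   # enabled, with audit trail
```

The friction is deliberate: the extra confirmation step is the design speed bump, and the timestamped record supports transparency later.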


Looking ahead

New opportunities in AI make it an exciting time to be a designer. We engage with fresh ideas that challenge our assumptions and revisit prior design techniques to complement the abilities of this new technology. As AI continues to advance quickly, we need to be intentional about why we make these decisions and mindful of their downstream effects, not only on engineering and business development but also on our own humanity and personal agency.
