AI and Data Science

How AI is Reshaping UX and HCI for the Next Generation of Designers

7 February 2025
Professor Per Ola Kristensson

Today we share the questions that Per Ola Kristensson, Professor of Interactive Systems Engineering in the Department of Engineering at the University of Cambridge, answered for attendees during the recent live webinar on Human-Computer Interaction.

It is an insightful Q&A on how AI is reshaping UX and HCI for the next generation of designers, in which he explains how the course he leads, Human–Computer Interaction (HCI) for AI Systems Design, approaches this new landscape.

The interview explores AI’s role in Human-Computer Interaction (HCI), showcasing how AI-driven tools, including agentic AI, streamline design processes. It highlights ethical concerns like bias and the need for systematic risk assessment. The discussion extends to AI-driven HCI applications in various industries, emphasising accessibility for non-technical professionals. Function modelling is introduced as a structured approach to AI system design. The Q&A also addresses career opportunities in HCI, noting that AI expertise is not essential, but that interdisciplinary knowledge and systematic frameworks are key to effective human-AI system design.

How do you envision the role of AI in augmenting HCI design processes?

That’s a great question, and there’s a lot of active research on this topic. We’ve actually developed a system that accelerates design thinking. Instead of spending four weeks on a traditional design process, we can now use agentic AI—multiple large language model (LLM) agents trained as personas representing different stakeholders—to generate a rich set of documents and decision-making materials in just a couple of hours.
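To make the agentic pattern concrete, here is a minimal Python sketch of several LLM agents prompted as stakeholder personas, each commenting on a design brief so their outputs can be collected into decision-making material. The call_llm helper, the persona names, and the brief are our own placeholder assumptions, not part of the system Professor Kristensson describes.

```python
# Minimal sketch of the agentic-AI idea described above: several LLM "agents",
# each prompted as a stakeholder persona, review a design brief and their
# outputs are gathered for decision-making. call_llm is a hypothetical stub;
# plug in whichever LLM client you actually use.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role_prompt: str  # system-style prompt describing the stakeholder's perspective

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion request)."""
    return f"[{system_prompt[:40]}...] response to: {user_prompt[:40]}..."

def run_design_round(brief: str, personas: list[Persona]) -> dict[str, str]:
    """Ask each persona agent to critique the brief; return their responses."""
    return {
        p.name: call_llm(p.role_prompt, f"Review this design brief and list concerns:\n{brief}")
        for p in personas
    }

if __name__ == "__main__":
    personas = [
        Persona("End user", "You are a non-technical end user of the product."),
        Persona("Accessibility advocate", "You assess designs for accessibility barriers."),
        Persona("Regulator", "You check the design against governance and risk requirements."),
    ]
    for name, feedback in run_design_round("A voice-controlled booking kiosk", personas).items():
        print(f"--- {name} ---\n{feedback}\n")
```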

However, there are challenges to consider. As we discuss in the course Human–Computer Interaction (HCI) for AI Systems Design, bias is a significant issue in design. These models are often trained on readily available data, which might not represent all perspectives. Overcoming this bias is critical for producing effective and inclusive AI-driven design solutions.

How should ethical considerations be integrated into HCI for AI systems?

We explore this from multiple angles in the course. Early on, we introduce systematic approaches to designing for automation, which inherently include ethical considerations. Later, we dedicate an entire module to governance, covering fairness, bias, and traditional risk assessment principles.

A key part of this involves systematically mapping out human-AI systems. By defining clear system boundaries and applying structured risk assessment techniques, we can better understand the ethical implications and mitigate risks before deployment.
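As a rough, invented illustration rather than course material, a structured risk assessment can start as a simple register of hazards tied to an explicitly declared system boundary. Every hazard, score, and scale in the sketch below is a placeholder assumption.

```python
# Illustrative-only sketch: a tiny risk register for a human-AI system, kept
# inside an explicitly declared system boundary. Hazards, scores and the
# scoring scales are invented placeholders.

from dataclasses import dataclass

@dataclass
class Risk:
    hazard: str
    affected_party: str
    likelihood: int   # assumed scale: 1 (rare) .. 5 (almost certain)
    severity: int     # assumed scale: 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

system_boundary = "recommendation model + user-facing interface (excludes payment backend)"

register = [
    Risk("Biased recommendations for under-represented users", "end users", 3, 4),
    Risk("Over-reliance on automated suggestions", "operators", 4, 3),
    Risk("Unclear accountability when the model errs", "organisation", 2, 4),
]

print(f"System boundary: {system_boundary}")
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"  [{r.score:>2}] {r.hazard} (affects {r.affected_party})")
```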

Can the skills learned in this course be applied to non-technical industries like hospitality or education?

Absolutely. In fact, I frequently use these industries as examples in my live sessions. AI-driven HCI techniques are broadly applicable, whether you're optimising customer experiences in hospitality or enhancing digital learning tools in education.

How suitable is this course for someone with little design experience?

I don’t assume prior expertise in design or AI. We start by defining design principles and gradually build up from there. Throughout the Human–Computer Interaction (HCI) for AI Systems Design course, we revisit these foundational concepts, ensuring that everyone—regardless of their background—can engage with and benefit from the material.

What is function modelling, and how does it apply to human-AI systems?

Function modelling is a systematic approach to designing human-AI systems. Instead of jumping straight to a solution, we first identify the core functions the system needs to perform. For example, if a system must supply energy, we can consider different solutions like solar power, batteries, or AC mains.

By mapping out functional architectures, we can apply automation frameworks, conduct risk assessments, and ensure that the system is both adaptable and effective. This method helps avoid common pitfalls in system design, leading to more robust AI solutions.
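A lightweight way to picture this is to record each function in a solution-neutral form together with candidate means of realising it. The Python sketch below does this; the energy function mirrors the example above, while the other functions, means, and risk notes are our own illustrative assumptions rather than course content.

```python
# Small sketch of function modelling as described above: functions are named
# independently of any particular solution, and each is mapped to candidate
# "means" that could realise it, plus notes for later risk assessment.

from dataclasses import dataclass, field

@dataclass
class Function:
    name: str                                          # what the system must do, solution-neutral
    means: list[str] = field(default_factory=list)     # candidate ways to realise it
    risks: list[str] = field(default_factory=list)     # notes carried into risk assessment

functional_architecture = [
    Function("Supply energy", means=["solar power", "battery", "AC mains"]),
    Function("Sense user intent",
             means=["touch input", "speech recognition", "gaze tracking"],
             risks=["misrecognition", "accessibility barriers"]),
    Function("Decide on action",
             means=["rule-based logic", "ML classifier", "human-in-the-loop review"],
             risks=["model bias", "unanticipated behaviour"]),
]

# Later stages (automation frameworks, risk assessment) can iterate over this
# structure instead of over a premature, solution-specific design.
for fn in functional_architecture:
    print(f"{fn.name}: means={fn.means} risks={fn.risks}")
```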

Can this course support research in responsible AI, particularly in finance?

I believe so. I can confidently say that the approach used in the course—drawing on methodologies from design, engineering, and risk management—is unique and valuable. Many professionals haven’t been exposed to systematic human-AI system design, and this course provides a strong foundation for applying these principles across various fields, including finance.

In fact, I’m currently writing a book for Cambridge University Press on this very topic, highlighting its growing importance in industry and academia.

I’d like to build and test my AI-related product idea within the course. Would this be relevant for me?

This course focuses on developing comprehensive design documentation rather than building software. While we won’t be coding or conducting mathematical analyses of deep neural networks, you will gain the tools to systematically design and plan AI-driven systems. If your goal is to refine your product concept and create a structured implementation roadmap, this course will be valuable.

How can designers anticipate and manage unexpected outcomes in AI-driven systems?

Great question. Unanticipated outcomes are a major concern in AI design, which is why systematic approaches are essential. By mapping out system functions and applying structured frameworks, designers can better predict potential risks.

We also discuss critical design—the practice of questioning whether certain technologies should be developed at all. Additionally, we explore appropriation, where users adapt systems in ways designers didn’t foresee. Designing with flexibility in mind allows users to meet their needs while maintaining system integrity.

How does AI contribute to making digital products more accessible?

AI-driven design methods are highly applicable to accessibility challenges. Although accessibility isn’t a specific focus of this course, it’s an area I’ve worked in extensively. I invented dwell-free eye typing, a gaze-based communication system for non-speaking individuals with motor disabilities, which has been integrated into Tobii Dynavox’s assistive technology products.

Additionally, my CHI 2020 Best Paper Award-winning research demonstrated how systematic design techniques can help envision more inclusive technologies. These methods, including function modelling and parameterisation, can be used to design accessible AI-driven systems.

How does this course differ from standard UX or interaction design courses?

We don’t teach conventional interaction design. Instead, we introduce systematic processes like function modelling, mixed-initiative interfaces, and human-machine teaming—concepts that many interaction designers aren’t typically exposed to.

The course’s project-based approach also allows learners to apply these methods to their own professional contexts, making the learning experience highly relevant regardless of industry or background.

What is Human-Computer Interaction (HCI), and what’s the next step after this course?

HCI is fundamental to how we interact with technology. Some say control theory is a philosophy of life—I’d argue that HCI is the philosophy of life for technology. It’s about designing systems that help people achieve their goals efficiently and effectively.

Historically, designers focused on crafting interfaces in software like Photoshop. But today, AI-driven systems are much more complex, involving unpredictable user behaviour and emergent properties of machine learning models. That’s why a systematic approach is crucial.

For those looking to continue learning, options include pursuing a master’s degree, delving into deep learning architectures, or focusing on product management strategies for AI-driven design teams.

What career opportunities exist for HCI professionals without an AI background?

Many HCI professionals start without an AI background and upskill over time. The exciting thing about AI is that it’s an interdisciplinary field—contributions aren’t limited to those with deep coding or mathematical expertise.

What’s important is understanding the emergent properties of AI systems, their architecture, and the risks involved. Designers, strategists, and product managers all play critical roles in shaping AI-enhanced user experiences.

This Q&A session provided valuable insights into the evolving landscape of HCI and AI. You can watch the full session in the video below:

https://youtu.be/RepyCO41EoU?si=5arf-qVLqs0HhSlM

If you're interested in joining the next webinar, where Professor Kristensson will engage with attendees again, sign up for our newsletter to stay informed about the dates of upcoming HCI events.

Professor Per Ola Kristensson

Professor of Interactive Systems Engineering, Department of Engineering, University of Cambridge
Per Ola leads the Intelligent Interactive Systems Group at the Cambridge Engineering Design Centre. He is also a co-founder and co-director of the Centre for Human-Inspired Artificial Intelligence at the University of Cambridge.