Overfitting and the Influence of UX Designers on Business Decisions

The phenomenon of overfitting in the context of UX design: my interpretation of Griffiths and Christian's book.


[The article below is an AI-translated version of the original Polish blog post]

One of the areas that significantly supports businesses in achieving their goals is user experience, and today no one denies that. After reading the chapter "Overfitting" from Tom Griffiths and Brian Christian's book "Algorithms to Live By: The Computer Science of Human Decisions", I understand that UX designers are exposed to the risk of overfitting, a phenomenon primarily known from machine learning. This can lead to incorrect business decisions.


Short definition

Overfitting in machine learning is a situation in which a model or learning algorithm becomes overly tailored to the training data, to the extent that it loses the ability to generalize to new, unknown data outside of the training set.

Wikipedia says: "In mathematical modeling, overfitting is the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".

Overfitting is a problem because the goal in machine learning is to create a model that generalizes and performs well on data it hasn't seen before.
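This can be seen in a minimal numpy sketch (my illustration, not an example from the book): both a straight line and a high-degree polynomial are fitted to a handful of noisy samples of a linear relationship. The flexible model matches the training points almost perfectly, yet predicts held-out points worse than it predicts its own training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training samples of a simple linear relationship y = 2x.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)

# A dense, noise-free test set drawn from the same relationship.
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test

errors = {}
for degree in (1, 9):
    # Fit a polynomial of the given degree by least squares.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    errors[degree] = (train_mse, test_mse)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 polynomial passes through all ten training points (near-zero training error), but its wiggles between and beyond those points make it a worse description of the underlying relationship than the simple line.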

In my opinion, being aware of this phenomenon allows us to consider it in the context of other fields as well, such as UX design, and, through analogy, to critically assess and guard against certain risks.

Creation of complex and detailed prototypes in early phases


The first example of a practice susceptible to overfitting is the creation of complex and detailed interface designs in early iterations. Based on my experience, this happens particularly when the designer's starting point is a chaos of data and information that hasn't undergone proper analysis or prioritization, or hasn't been trimmed with "Occam's razor" (for instance, the designer analyzes the data for so long, uncovering ever newer information and facts, that they continuously incorporate them into the design). This chaos is then reflected in the interface as more and more elements are added in response to each newly discovered piece of information.

When working on a solution concept, especially in the initial iterations, it's crucial to focus on the most important features and options, test them as quickly as possible, and only then iterate and add details. In the era of advanced Design Systems with ready-made components, the design process allows precise and detailed prototypes to be created from the very start. That is exactly why the described risk is significant: it can inflate the time spent on prototype development and subsequently delay implementation of the solution by dev teams.

Extremely Wide Information Availability


This is also related to the phenomenon of extremely wide information availability. Current solutions enable the collection of virtually unlimited amounts of information about users and their behavior. In my opinion, a designer is exposed to the same risk as someone training machine learning models: overfitting the design to the collected data by overinterpreting it and introducing additional variables or dimensions, rather than focusing on the overall problem that the data should address.

Easy access to users and the ability to conduct (too) many iterations of testing designed prototypes can lead to contradictory results. The designer should conclude prototype testing and introduce the solution to the market as soon as the prototype is functionally ready and addresses the defined goals, and then iterate and improve the solution based on user feedback. In principle, tests allow crucial usability issues to be identified, but too many iterations can lead to a focus on project details, yield conflicting conclusions, and ultimately leave the designer confused.

Ignoring User Diversity


The last of the risks is disregard for user diversity, a problem often raised in the context of training ML models that, in my opinion, is mirrored in interface design. A designer has their own characteristics, opinions, and way of being, and may unconsciously favor specific user groups while neglecting the needs of others. This often leads to accessibility issues in the designed solutions or to overlooked "edge cases," which can result in real business losses. The phenomenon amounts to excessive alignment with a particular user group, at the expense of those underrepresented in the process.

Critical thinking is one of the key competencies that designers should continuously cultivate. Together with the ability to make decisions (even under uncertainty), it allows assumptions to be challenged and verified as quickly as possible.

Mistakes made by designers due to overfitting can result in real financial losses for the business: prolonging the ideation phase, delaying solution development, or exposing the company to the cost of developing unnecessary or inadequate solutions. Therefore, it's essential to be aware of the existence of such a mechanism.

It seems to me that reading Griffiths and Christian's book should be helpful not only for those working with models or algorithms but for anyone who has any direct impact on the business outcomes of their organizations.