
AI is designed by people – Why cognitive and gender bias matter more than we think

Aila Kronqvist, Enterprise Designer, Solita

Published 02 Feb 2026

Reading time 3 min

AI systems are often described as objective, data-driven, and neutral. When problems arise, we tend to blame biased data, flawed models, or insufficient testing. 

But long before an AI model is trained, people make design decisions:

  • What problems are worth solving?
  • Whose needs matter?
  • What counts as “good enough”?

My doctoral research on the Finnish IT sector shows that cognitive and gender-based biases already shape these decisions, before any data is collected or algorithms are built. For companies working at the intersection of software, design, and AI, this matters more than ever.

Bias in AI rarely starts with algorithms

In AI development, bias is often framed as a technical issue:

  • Skewed datasets
  • Insufficient representation
  • Unintended correlations

These are real problems. But they are usually symptoms, not root causes.

Design choices made earlier in the process—problem framing, user assumptions, prioritisation—are shaped by the people in the room. When development teams are homogeneous, certain perspectives become “default” without being recognised as such.

AI systems don’t amplify bias by accident. They amplify what was already there.
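To make that amplification concrete, here is a deliberately minimal sketch in Python (toy data and a toy "model" invented for illustration, not drawn from the research): if the training data over-represents one user group because of earlier design choices, even a flawless learning procedure faithfully reproduces that skew in every prediction it makes.

    # Minimal sketch with invented toy data: a trivial "model" that
    # simply predicts the most common label in its training set.
    from collections import Counter

    # 90 examples from the "default" user group, 10 from everyone else.
    # The skew comes from earlier design choices, not from the algorithm.
    training_labels = ["default_user"] * 90 + ["other_user"] * 10

    def train_majority_model(labels):
        # The simplest possible learner: memorise the majority label.
        return Counter(labels).most_common(1)[0][0]

    model = train_majority_model(training_labels)
    print(model)  # prints "default_user", returned for every future input

The algorithm here works exactly as designed; the imbalance was settled before a single line of model code ran.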

What my research reveals about AI-related design work

My research combines narratives from IT professionals with quantitative survey data. While the study isn’t limited to AI projects, the findings are highly relevant for AI development environments.

1. Cognitive load shapes who gets to influence AI design

Many women in technical roles described spending significant mental energy on navigating credibility, justification, and visibility in design discussions. This creates a persistent cognitive load unrelated to the actual technical challenge.

In AI projects where uncertainty, abstraction, and ethical ambiguity are already high, this extra load reduces participation exactly where diverse perspectives would be most valuable:

  • Defining risks
  • Questioning assumptions
  • Anticipating unintended consequences

2. Human-centred design doesn’t neutralise bias on its own

Design thinking and human-centred design are often positioned as safeguards against biased technology. My findings challenge this assumption.

While many professionals are familiar with design thinking principles, awareness doesn’t automatically translate into inclusive practice. Without diversity and reflective discussion, even human-centred methods can reinforce existing viewpoints.

In AI development, this can mean:

  • Focusing on “average users” who resemble the team
  • Over-trusting intuition instead of structured user research
  • Mistaking familiarity for understanding

3. AI magnifies early design assumptions

AI systems scale decisions. What begins as a small design shortcut or unexamined assumption can later affect thousands or millions of users.

When certain voices are marginalised in early design phases, AI doesn’t correct this. It institutionalises it.

From an ethical and business perspective, this isn’t just a fairness issue. It is a risk management and quality issue.

Why this matters for software and design companies

For organisations working with AI, software, and design, technical excellence alone is no longer sufficient. Responsible AI requires:

  • Inclusive problem framing
  • Cognitively sustainable design environments
  • Teams that can challenge their own assumptions

My research suggests that improving AI outcomes isn’t only about better data or models but about how people participate in design work.

Moving from awareness to practice

One of the strongest lessons from my research is this: Bias mitigation in AI cannot be outsourced to tools or checklists. It must be built into everyday design practice.

Practical steps include:

  • Treating team diversity as a design asset, not an HR metric
  • Creating explicit spaces for questioning assumptions
  • Recognising cognitive load as a design constraint
  • Combining human-centred design with structured reflection

Closing

AI isn’t neutral, because its design isn’t neutral. If we want AI systems that are trustworthy, inclusive, and resilient, we must start by examining who gets to shape them and under what conditions.

In the next posts in this series, I will focus on clients and impact: how these design choices affect customers, users, and trust in technology.

Interested in more? Read my research or check out what Tivi wrote about it. And see how we do modern software development.