Design for Humans, Not Machines: Why Your AI Strategy Must Start with User Needs
Padma, 14 October 2025
AI doesn’t know your customer.
It doesn’t know your clients, your users, or your staff.
It doesn’t understand your values, your tone, or your mission.
AI isn’t sentient — it predicts. It guesses. It gets things wrong. And it’s not designed to design for humans.
So if you want to build an AI strategy that works, start with the people you serve.
Begin with user needs
Before you feed prompts into a model, take a step back.
Ask: what do humans need?
Think about where and how technology can genuinely help people do the things they already need to do – not the things you assume they want. That mindset creates real impact because it’s grounded in how people actually behave, not how we imagine they behave.
Bias and error can enter at any stage of an AI system’s life cycle — from data collection and annotation to model evaluation and monitoring. As researchers have noted, “biases are introduced through human decisions at every stage of development”, making inclusive, human-centred oversight essential.
(Source: National Library of Medicine, 2023)
When your starting point is real people and their real contexts, you reduce the risk of designing solutions for the wrong problems.
Test real tasks, not ideas
Don’t ask people what they want. Watch what they do.
Run task-based usability tests. Ask participants to think out loud as they work through a task. Observe where they hesitate, get confused, or find success. Then ask how they felt about their experience.
This matters because what people say and what they do often don’t align.
In one large study of 19,576 task observations, researchers found that 29% of task attempts ended in failure, yet 14% of those failed attempts still received the maximum satisfaction rating.
(Source: MeasuringU, 2020)
That mismatch shows why self-reported satisfaction data alone can mislead you. You need to observe real behaviour, collect qualitative feedback, and triangulate everything with other evidence about user experience — both with your organisation and similar journeys elsewhere.
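To make the scale of that mismatch concrete, here is the rough arithmetic behind the quoted figures. The three input numbers come from the MeasuringU article above; the derived counts are back-of-envelope illustrations, not the study's own published breakdown.

```python
# Back-of-envelope arithmetic on the MeasuringU figures quoted above.
observations = 19_576                    # task observations in the study
failure_rate = 0.29                      # share of attempts that failed
max_satisfaction_among_failures = 0.14   # failed attempts rated top satisfaction

failures = round(observations * failure_rate)
happy_failures = round(failures * max_satisfaction_among_failures)

print(failures)        # 5677 failed task attempts
print(happy_failures)  # 795 failures still rated "fully satisfied"
```

Roughly 800 sessions in which someone failed outright but would have looked like a success in a satisfaction survey: that is the blind spot self-report alone leaves you with.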
Map journeys, gaps and goals
Once you’ve watched people in action, map out the steps they take to complete key tasks.
Identify the gaps – where they drop off, get stuck, or look for a workaround.
Then set data-informed goals that map directly to what people need, when they need it, and how they move through a journey to complete a task online.
This approach turns your insights into measurable change: improving user satisfaction, efficiency, and trust. It also creates the foundation for responsible AI integration – because you know what problems are worth solving.
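The gap-spotting step above can be sketched with even very simple event data. This is a minimal illustration, assuming hypothetical journey steps and session logs; real analytics pipelines are richer, but the principle (find where unfinished journeys stop) is the same.

```python
# Minimal sketch: find where people drop out of a task journey.
# Step names and session data below are hypothetical examples.
from collections import Counter

journey = ["start", "search", "form", "review", "submit"]

sessions = [
    ["start", "search", "form"],                      # abandoned at "form"
    ["start", "search"],                              # abandoned at "search"
    ["start", "search", "form", "review", "submit"],  # completed
    ["start", "search", "form"],                      # abandoned at "form"
]

# The last step reached in each unfinished session marks a gap to investigate.
drop_offs = Counter(s[-1] for s in sessions if s[-1] != journey[-1])
print(drop_offs.most_common())  # [('form', 2), ('search', 1)]
```

Here the form step loses the most people, so that is where observation, qualitative feedback, and a data-informed goal should focus first.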
Build and adapt your AI strategy
When you base your AI strategy on real human insight, you can shape:
– AI prompts that reflect real-world needs
– AI governance that protects your values and your users
– AI workflows that make sense for your teams
– Outcomes that benefit real people, not just algorithms
Even in well-intentioned AI systems, humans adapt their behaviour in unexpected ways. In a real-world study of professional tennis umpires using AI-assisted line-calling, for example, researchers found that “AI oversight reduced total mistakes but shifted the types of errors umpires made.”
(Source: arXiv, 2024)
This echoes findings from cognitive science: people are more likely to over-trust AI when it looks confident, even when it’s wrong. Design interventions known as “cognitive forcing functions” – prompts that make people pause and reflect – can reduce blind acceptance and improve decision quality.
(Source: arXiv, 2021)
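One common forcing function is simply ordering: ask people to commit to their own judgement before the AI's suggestion is revealed. The toy sketch below shows that pattern; the function and prompts are illustrative, not taken from the cited paper.

```python
# Toy sketch of a "cognitive forcing function": the reviewer must record
# their own judgement before the AI suggestion is shown, so the suggestion
# can't be rubber-stamped. Names and prompts here are illustrative only.

def review_with_forcing_function(ai_suggestion, ask_user):
    # Step 1: the reviewer commits to an answer with no AI input visible.
    own_answer = ask_user("Your call, before seeing the AI suggestion?")
    # Step 2: the suggestion is revealed only if it disagrees, forcing a
    # deliberate reconciliation rather than passive acceptance.
    if own_answer != ai_suggestion:
        return ask_user(
            f"AI suggests {ai_suggestion!r}; you said {own_answer!r}. Final answer?"
        )
    return own_answer

# Simulated reviewer who answers "in" and then stands by that call.
answers = iter(["in", "in"])
print(review_with_forcing_function("out", lambda prompt: next(answers)))  # in
```

The design choice is the sequencing, not the code: any interface that makes people pause and reason before accepting a confident-looking suggestion is applying the same principle.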
These insights highlight why AI governance and workflow design matter. Technology alone won’t deliver fairness or accuracy – your processes, principles, and people will.
Design for the future. Design for humans.
The future of digital isn’t just automated. It’s intentional.
Designing for humans means using content strategy and content design to connect what people need with what your organisation delivers — responsibly, accessibly, and with purpose.
That’s where technology becomes transformative.
Because real impact starts with understanding real people.
References
- Nielsen Norman Group. AI-Powered Tools for UX Research: Issues and Limitations.
- MeasuringU. When People Fail But Still Say They’re Satisfied.
- National Library of Medicine (PMC). Human-Centered Design to Address Biases in Artificial Intelligence.
- arXiv. Human Oversight of AI Systems: Lessons from Tennis Umpires.
- arXiv. Cognitive Forcing Functions Reduce Overreliance on AI Suggestions.
