Data Vs. Findings Vs. Insights In UX

Vitaly Friedman
Published: 2025-05-27T13:00:00+00:00
Updated: 2025-05-30T15:03:14+00:00
In many companies, data, findings, and insights are all used interchangeably. Slack conversations circle around convincing data points, statistically significant findings, reliable insights, and emerging trends. Unsurprisingly, conversations often mistake sporadic observations for consistent patterns.

But how much weight does each of them actually carry? And how do we turn raw data into meaningful insights to make better decisions? Well, let's find out.
[Video: Data ≠ Findings ≠ Insights. Short video by NN/g explains the differences between them. (Large preview)]

Why It All Matters
At first, it may seem that the differences are very nuanced and merely technical. But when we review inputs and communicate the outcomes of our UX work, we need to be careful not to conflate terminology — to avoid wrong assumptions, wrong conclusions, and early dismissals.
[Figure: Raw data points are random and inconclusive. For them to be valuable, they must be turned into insights. Cartoon by Hugh MacLeod. (Large preview)]

When strong recommendations and bold statements emerge in a big meeting, inevitably, there will be people questioning the decision-making process. More often than not, they will be the loudest voices in the room, often with their own agendas and priorities that they are trying to protect.

As UX designers, we need to be prepared for it. The last thing we want is a weak line of thinking, easily dismantled under the premise of "weak research", "unreliable findings", or a "poor choice of users" — and hence dismissed straight away.

Data ≠ Findings ≠ Insights
People with different roles — analysts, data scientists, researchers, strategists — often rely on fine distinctions to make their decisions. The general difference is easy to put together:
- Data is raw observations (logs, notes, survey answers): what was recorded.
- Findings describe emerging patterns in data but aren't actionable: what happened.
- Insights are business opportunities: what happened + why + so what.
- Hindsights are reflections on past actions and outcomes: what we learned in previous work.
- Foresights are informed projections, insights with extrapolation: what could happen next.
[Figure: Business value emerges from turning hindsights into insights, then insights into foresights. (Image source: The Hidden Truth of Business) (Large preview)]

Here's what it then looks like in real life:
- Data ↓
  Six users were looking for "Money transfer" in "Payments", and four users discovered the feature in their personal dashboard.
- Finding ↓
  60% of users struggled to find the "Money transfer" feature on the dashboard, often confusing it with the "Payments" section.
- Insight ↓
  Navigation doesn't match users' mental models for money transfers, causing confusion and delays. We recommend renaming sections or reorganizing the dashboard to prioritize "Transfer Money". It could make task completion more intuitive and efficient.
- Hindsight ↓
  After renaming the section to "Transfer Money" and moving it to the main dashboard, task success increased by 12%. User confusion dropped in follow-up tests. It proved to be an effective solution.
- Foresight ↓
  As our financial products become more complex, users will expect simpler, task-oriented navigation (e.g., "Send Money", "Pay Bills") instead of categories like "Payments". We should evolve the dashboard towards an action-driven IA to meet user expectations.

Only insights create understanding and drive strategy. Foresights shape strategy, too, but are always shaped by bets and assumptions. So, unsurprisingly, stakeholders are interested in insights, not findings. They rarely need to dive into raw data points. But often, they do want to make sure that findings are reliable.

That's when, eventually, the big question about statistical significance comes along. And that's when ideas and recommendations often get dismissed without a chance to be explored or explained.

But Is It Statistically Significant?
Now, for UX designers, that's an incredibly difficult question to answer. As Nikki Anderson pointed out, statistical significance was never designed for qualitative research. And with UX work, we're not trying to publish academic research or prove universal truths.

What we are trying to do is reach theoretical saturation, the point where additional research doesn't give us new insights. Research isn't about proving something is true. It's about preventing costly mistakes before they happen.
[Figure: Consequence cheat sheet by Nikki Anderson for turning findings into insights. (Large preview)]

Here are some useful talking points to handle the question:

- Five users per segment often surface major issues, and 10–15 users per segment usually reach saturation. If we're still getting new insights after that, our scope is too broad.
- "If five people hit the same pothole and wreck their car, how many more do you need before fixing the road?"
- "If three enterprise customers say onboarding is confusing, that's a churn risk."
- "If two usability tests expose a checkout issue, that's abandoned revenue."
- "If one customer interview reveals a security concern, that's a crisis waiting to happen."
- "How many user complaints exactly do we need to take this seriously?"
- "How much revenue exactly are we willing to lose before fixing this issue?"
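Theoretical saturation, mentioned above, can even be tracked mechanically: once extra sessions stop surfacing new issues, further testing adds little. A minimal sketch, using entirely hypothetical session notes (the issue labels and counts are illustrative assumptions, not data from any real study):

```python
# Hypothetical usability sessions: the set of issues each participant hit.
# Labels and values are made up for illustration only.
sessions = [
    {"nav-confusion", "label-mismatch"},
    {"nav-confusion"},
    {"nav-confusion", "slow-load"},
    {"label-mismatch", "slow-load"},
    {"nav-confusion"},
    {"slow-load"},
]

seen = set()            # all issues observed so far
new_per_session = []    # how many previously unseen issues each session adds
for issues in sessions:
    fresh = issues - seen
    new_per_session.append(len(fresh))
    seen |= fresh

# Trailing zeros suggest saturation: later sessions confirm, not discover.
print(new_per_session)
```

With this toy data the counts settle to zero after the third session, which is the signal that the scope has been covered rather than proof of anything statistical.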
And it might not be necessary to focus on the number of participants at all. Instead, argue from users consistently struggling with a feature, a mismatch of expectations, and a clear pattern emerging around a particular pain point.

How To Turn Findings Into Insights
Once we notice patterns emerging, we need to turn them into actionable recommendations. Surprisingly, this isn't always easy — we need to avoid easy guesses and assumptions as far as possible, as they will invite wrong conclusions.

To do that, you can rely on a very simple but effective framework to turn findings into insights: What Happened + Why + So What:

- "What happened" covers observed behavior and patterns.
- "Why" includes beliefs, expectations, or triggers.
- "So what" addresses impact, risk, and business opportunity.
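The three parts above can be treated as a fill-in template. A minimal sketch, assuming a structure of my own devising (the field names and the sample wording are illustrative, not part of the framework itself):

```python
from dataclasses import dataclass


@dataclass
class Insight:
    """What Happened + Why + So What, as a fill-in template.

    The framework comes from the article; this class and its field
    names are an assumed illustration of it.
    """
    what_happened: str  # observed behavior and patterns
    why: str            # beliefs, expectations, or triggers
    so_what: str        # impact, risk, business opportunity

    def summary(self) -> str:
        # One sentence per part keeps the insight readable in a deck.
        return f"{self.what_happened}, because {self.why}. {self.so_what}"


insight = Insight(
    what_happened="60% of users looked for 'Money transfer' under 'Payments'",
    why="the navigation doesn't match their mental model of the task",
    so_what="Renaming the section to 'Transfer Money' should reduce drop-off.",
)
print(insight.summary())
```

Forcing each finding through all three fields makes it obvious when the "why" or the "so what" is still a guess rather than something observed.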
To better assess the "so what" part, we should pay close attention to the impact of what we have noticed on desired business outcomes. It can be anything from high-impact blockers and confusion to hesitation and inaction.

I can wholeheartedly recommend exploring the Findings → Insights Cheatsheet in Nikki Anderson's wonderful slide deck, which has examples and prompts to turn findings into insights.

Stop Sharing Findings — Deliver Insights
When presenting the outcomes of your UX work, focus on actionable recommendations and business opportunities rather than patterns that emerged during testing.

To me, it's all about telling a damn good story. Memorable, impactful, feasible, and convincing. Paint the picture of what the future could look like and the difference it would make. That's where the biggest impact of UX work emerges.

How To Measure UX And Design Impact
Meet Measure UX & Design Impact (8h), a practical guide for designers and UX leads to shape, measure, and explain your incredible UX impact on business. Recorded and updated by Vitaly Friedman. Use the friendly code 🎟 IMPACT to save 20% off today. Jump to the details.