There’s a class of product decisions that you simply cannot validate with static mockups. Anything where the value depends on the data itself - personalisation, analytics, recommendations, reporting - needs to be tested with real data to get a real reaction. Live data prototyping is building just enough of a feature to connect to real data, without building everything around it 📊

Why static fails here

Imagine you’re designing a new analytics dashboard. You build a beautiful Figma prototype with placeholder numbers. You test it with users. They say it looks great. Then you ship it with real data - and it’s immediately obvious that most metrics show zeroes for new users, the date ranges don’t make sense for their usage patterns, and the “top performing” chart shows data they don’t recognise. The prototype passed. The product failed. The gap was the data 😬

What it involves

Live data prototyping means connecting a rough, often unstyled implementation to real production or staging data so that users see their actual information - not placeholders. It doesn’t have to be polished. Unstyled HTML with real data is more useful for many testing scenarios than a pixel-perfect mockup with fake numbers. The question you’re answering is: when users see their actual data in this structure, does it make sense? Does it change anything for them?
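The "unstyled HTML with real data" idea can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: `fetch_rows` is a hypothetical stand-in for whatever staging or production query your prototype would actually run.

```python
import html

def fetch_rows():
    # Hypothetical stand-in: a real prototype would query staging/production data.
    # Hardcoded rows here just illustrate the shape - note the zero, the kind of
    # empty-state value real data tends to surface.
    return [
        {"metric": "Weekly active users", "value": 0},
        {"metric": "Avg session length", "value": "4m 12s"},
    ]

def render_table(rows):
    # Deliberately unstyled: the test is whether the data makes sense in this
    # structure, not whether it looks good.
    cells = "".join(
        f"<tr><td>{html.escape(str(r['metric']))}</td>"
        f"<td>{html.escape(str(r['value']))}</td></tr>"
        for r in rows
    )
    return f"<table><tr><th>Metric</th><th>Value</th></tr>{cells}</table>"

print(render_table(fetch_rows()))
```

A page this rough is enough to put in front of a user and ask: "is this your data, and does it tell you anything?"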

When to use it

  • Analytics and reporting features - The value is entirely in what the data reveals. Static mockups can’t replicate that.
  • Personalisation and recommendations - “Here are your top items” only makes sense with your actual items.
  • Data-heavy workflows - Anything where users need to recognise their own content to evaluate the design.
  • Edge cases at scale - Real data exposes edge cases (empty states, outliers, unusual formats) that fake data hides.

The practical approach

You don’t need to build the full feature. A common pattern is to build a rough internal tool or script that renders real data in the proposed format - enough to put in front of five users and watch their reactions. Feature flags are useful here: build the prototype behind a flag, show it to a small cohort, gather reactions, and iterate before wider release. This is close to a live experiment, but with explicit observation rather than pure metric tracking. Pair it with feasibility testing - if the data pipeline needed to support the feature is uncertain, test that assumption in parallel before investing in the prototype 🔧

Lesson learned: the first time you test a data feature with real data and watch a user say “that number is completely wrong” - then realise the data is correct and their mental model just doesn’t match yours - is humbling. And very, very useful 👀
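The flag-gated cohort pattern can be sketched as below. All the names here are illustrative assumptions - `is_flag_enabled`, `PROTOTYPE_COHORT`, and the flag key are hypothetical; a real system would delegate the lookup to a feature-flag service rather than a hardcoded set.

```python
# Hypothetical cohort: the handful of users you'll actually sit with and observe.
PROTOTYPE_COHORT = {"user_17", "user_42"}

def is_flag_enabled(flag: str, user_id: str) -> bool:
    # A real implementation would call your feature-flag service here;
    # a set lookup is enough to show the gating logic.
    return flag == "analytics_prototype" and user_id in PROTOTYPE_COHORT

def dashboard_view(user_id: str) -> str:
    if is_flag_enabled("analytics_prototype", user_id):
        return "live-data prototype"   # rough version wired to real data
    return "current dashboard"         # everyone else sees the existing UI
```

The point of the gate is that the prototype ships to production data without shipping to your whole user base, so you can watch reactions before committing to the build.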