You’ve done the research. You’ve defined the problem. You’ve come up with a solution you believe in. Now comes the uncomfortable question: does anyone actually want it? 😬 Value testing is how you find out. It’s the practice of putting a solution in front of real users - before you’ve fully built it - to test whether it delivers the value you think it does. Not whether it works technically. Not whether it looks good. Whether it solves the problem.

What you’re testing

Value testing asks one core question: does this solution address the customer’s need in a way they find valuable? That sounds obvious, but it’s surprisingly easy to skip. Teams get attached to their solutions, stakeholders want to see progress, and “let’s ship it and see” feels faster than another round of testing. The problem is that finding out post-launch that users don’t value your solution can easily cost ten times more than finding out during discovery. Teresa Torres is clear on this in Continuous Discovery Habits: value testing is not about validating that you were right. It’s about learning fast enough to course-correct before it’s expensive.

How to run a value test

The method depends on what you’re testing and how much you’ve built, but the principle is consistent: expose users to your solution (or a simulation of it) and observe whether it helps them accomplish the goal.
  • Prototype testing - Show users a prototype and give them a realistic task. Don’t explain how it works. Don’t help them. Just watch. Where do they get stuck? Do they complete the task? Do they feel like the problem is solved?
  • Fake door / smoke test - Before building, test whether users will take an action that implies they value the solution: a button that doesn’t exist yet, a landing page with a sign-up. This measures intent, not just opinion.
  • Concierge test - Deliver the value manually, without the product. If users don’t find it valuable when a human does it for them, they won’t find it valuable when software does it either. A cheap way to test before building anything.
  • Live experiment - Ship a minimal version to a subset of users and measure whether it changes their behaviour. The most reliable signal, but also the most expensive to set up.
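For the live-experiment approach, you need a way to show the minimal version to only a subset of users, and to show the same user the same thing every time. One common way to do that is deterministic hash-based bucketing. Below is a minimal sketch of that idea; the function name, experiment name, and rollout percentage are all hypothetical, not from any particular library:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, rollout_pct: int = 10) -> str:
    """Deterministically bucket a user into 'test' or 'control'.

    Hashing user_id together with the experiment name gives a stable
    assignment: the same user always sees the same variant, and different
    experiments bucket independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number in 0-99
    return "test" if bucket < rollout_pct else "control"

# Stable assignment: repeated calls for the same user agree.
assert assign_variant("user-42", "new-onboarding") == assign_variant("user-42", "new-onboarding")
```

The point of the determinism is that you can measure behaviour change over time: a user who flips between variants on every visit would contaminate both groups.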

What good signal looks like

You’re not looking for users to say “I like it” - that’s just politeness. You’re looking for:
  • They complete the task without help or confusion
  • They reach for it again unprompted
  • It changes something about how they work (even in a small test)
  • They’re willing to pay for it, refer it, or give up something to get it
The absence of these is also signal. If users complete the task but shrug, or say “that’s nice” and move on, you may have a usability win but a value miss.

The most common mistake

Testing the solution instead of the value. Teams often run value tests that measure whether users can use the product (usability) rather than whether using it makes their life better (value). Those are different questions. Lesson learned: a user who completes every task in your prototype but wouldn’t pay €1 for it has told you something important. Listen 👂