Using “Fake-Door” testing to make better product decisions
What is fake-door testing?
It is essentially a variant of A/B or multivariate testing, and it is becoming increasingly popular among startups and Internet companies (TripAdvisor claims to use fake-door testing extensively, and other smart companies likely do as well).
Fake-door testing is a technique companies and product managers use to avoid spending significant time (and money) building a new product or feature before knowing whether users will actually want it. It is a quick, quantitative way to tell whether your next great product idea is worth investing in, based on how your current user base responds to it.
One caveat: fake-door testing only works if you have an existing user base large enough that you can collect sufficient data for the results to be statistically significant.
The way it works is as follows: a very small percentage of your end users (say, 1%) are directed to or shown a new button, page, or user interface, while the remaining 99% of users don't see it. That 1% of users do not know that the feature they are attempting to access doesn't exist yet; they are the proverbial guinea pigs in a "fake-door" test that leads nowhere.
What's happening in the background is that every action taken by this 1% cohort (clicks, form submits, mouse-overs, etc.) is being tracked.
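The bucketing and tracking described above can be sketched in a few lines. This is a minimal illustration, not a production analytics pipeline: the 1% split, the `track` function, and the in-memory event list are all assumptions for the example. The key idea is that bucketing is deterministic (hashing the user ID), so the same user always sees, or never sees, the fake door.

```python
import hashlib

FAKE_DOOR_PERCENT = 1  # show the fake door to roughly 1% of users

def in_fake_door_cohort(user_id: str) -> bool:
    """Deterministically bucket a user: the same user always gets the same answer."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # uniform-ish bucket in 0..99
    return bucket < FAKE_DOOR_PERCENT

events = []  # stand-in for a real analytics/event pipeline

def track(user_id: str, action: str) -> None:
    """Record an interaction (click, form submit, mouse-over) for the test cohort."""
    if in_fake_door_cohort(user_id):
        events.append({"user": user_id, "action": action})
```

In practice the cohort check would gate what the UI renders, and `track` would send events to whatever analytics system you already use; hashing the user ID (rather than randomizing per request) keeps each user's experience consistent across sessions.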
Once enough data has accumulated, it's easy to tell whether the feature is desirable and will actually be used, before investing several months of engineering effort in building it out.
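One simple way to turn that accumulated data into a build/don't-build call is to put a confidence interval around the fake door's click-through rate and only build if even the low end clears a minimum-interest bar. The 5% threshold and the Wald interval here are illustrative assumptions, not a prescribed methodology:

```python
import math

def clickthrough_ci(clicks: int, impressions: int, z: float = 1.96):
    """95% Wald confidence interval for the observed click-through rate."""
    p = clicks / impressions
    half = z * math.sqrt(p * (1 - p) / impressions)
    return max(0.0, p - half), min(1.0, p + half)

def worth_building(clicks: int, impressions: int, threshold: float = 0.05) -> bool:
    """Build only if the lower bound of the CI clears the (assumed) 5% interest bar."""
    low, _ = clickthrough_ci(clicks, impressions)
    return low >= threshold
```

For example, 120 clicks out of 1,000 impressions (12% CTR, lower bound near 10%) would clear a 5% bar, while 30 clicks out of 1,000 would not; with small samples the interval widens and the test correctly refuses to conclude anything.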
Do you or your company use fake-door testing? How and what are you using it for? I’d love to hear from you in the comments.