You can’t really call that a “successful” product launch!
There’s a famous proverb that goes, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
Similarly, if you launch a new product or feature but nobody hears about it, can you really call it a successful launch?
Sure, you must have put a lot of effort into sizing the market, preparing the go-to-market (GTM) strategy, doing competitive research, and so on. But until you get at least a few people using that product, you haven’t really achieved anything.
Many people assume a launch is successful when all of the planned initiatives are executed without a hitch. But that’s simply checking items off a list. (I can’t emphasize enough how widespread this practice is.)
It’s similar to writing a blog post: you do the preliminary keyword research, write the post, prepare image briefs, and finally publish it. While it might look like the job is done, you can’t call it a successful initiative if that post isn’t driving new visitors to your website, even if it’s appearing in the Google SERPs.
So what really constitutes a successful launch?
While there are multiple moving parts to consider on your way to a successful launch, such as the target audience, the TAM (total addressable market), positioning, and your competitors,
the only way to tell whether it was successful is whether enough people (a pre-decided percentage) start using it. This is referred to as Product Adoption!
Today, bringing products to market, and being able to successfully launch product after product, is seen as a key skill for Product Managers and Marketers alike, and especially as a core part of the larger Product Marketing role.
But the problem is that a lot of people simply don’t know how to tell whether a launch was successful. And lately, I’ve seen this question pop up in multiple communities, with people scrambling to define “successful launches”.
The way I look at launches is via a two-stage process:
- Product Adoption — Where x% of people start using the product post-launch, within a certain time period.
- Product Usage — Where y% of x continue to use the product after the adoption time period.
(I’ve used the term “product” here to represent both larger products and smaller features.)
The reason most launches fail is that the launch plans don’t outline an adoption goal tied to the one “critical step” (inside your app or software). And even when they do, the respective team(s) often don’t take proactive steps to meet that goal.
Say you’re Netflix, and your latest feature lets viewers rate what they watch. When they do, your platform’s algorithm shows (or hides) similar content. Your launch objective is to have “an increasing number of users start rating what they watch”.
So, you set a goal for it, say, “20% of users start rating what they watch (doing the critical step) within the first 30 days of the feature launch.” And if this goal is met, you increase it to, say, “30% of total users start rating what they watch within the next 30 days.” (I usually take a relatively easy initial target and then increase it progressively.)
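To make the goal check concrete, here’s a minimal Python sketch. All the numbers and user IDs are hypothetical stand-ins; in practice the adopter set would come from your product analytics (for example, users who fired a “rated a title” event).

```python
def adoption_rate(adopters: set, total_users: int) -> float:
    """Share of all users who performed the critical step at least once."""
    if total_users == 0:
        return 0.0
    return len(adopters) / total_users

# Hypothetical numbers: 50,000 total users, of whom 11,500
# rated a title within the first 30 days of the launch.
adopters = set(range(11_500))   # stand-in for real user IDs
rate = adoption_rate(adopters, 50_000)
goal = 0.20                     # the 20% adoption goal

print(f"adoption: {rate:.0%}, goal met: {rate >= goal}")
# prints "adoption: 23%, goal met: True"
```

The same check, pointed at next month’s cohort with `goal = 0.30`, covers the progressively raised target.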
The number depends on a lot of factors, such as the importance of the feature to your product’s core functionality, the demand patterns for the feature (which should have come up during your initial research), whether it’s a paid add-on, the space your product operates in, the complexity of the overall product, etc.
This way, when you set a tangible goal, you can very easily tell whether your launch has been a success.
But the real problem starts when the goal isn’t met. Ideally, there should already be an action plan for when this happens. In the above scenario, a few ways to get viewers rating are push notifications and in-app messages that highlight how the feature benefits them, or building some kind of social proof around it.
But there’s a catch!
People starting to use something doesn’t mean they will continue to do so. And there’s no better way to understand whether your feature or product is driving value for your users than whether they continue to use it after first adopting it.
Which brings me to stage 2 of the process.
After getting people to adopt the product, the next biggest challenge is to have them continuously use it, and as a result, derive value from it. This is again monitored through a “second critical step”.
The reason: it’s commonly reported that ~70% of apps are uninstalled within the first 7 days. So even if your app shows a respectable adoption rate, the bottom line of generating revenue from it won’t materialize if people stop using it.
A more practical way to look at this: say you’re Zoom and your latest product is “Zoom for Edtech”, where you’ve added a whole bunch of features such as a whiteboard, breakout rooms, polling, hybrid classes, etc.
Now, you set an adoption goal of, say, 10% in the first 30 days (smaller because it’s a full product as opposed to a feature), and you achieve that goal within the stipulated time. Next, as stated above, you start making plans to increase that adoption level by X% over the next period.
But then, what if that initial 10% didn’t like the product? Say, after 60 days, you find that fewer than 2% of this group are still using it. That’s why you keep an eye on the usage pattern.
If you keep adding users to the initial adopters list, but only a few of them still use the feature after a certain time period, the product becomes a leaky bucket with no value addition for the users.
The way to go about product usage: out of the 10% of people who started using the product in the first 30 days, you set a target that at least 5% of them will continue to do so by the end of the next 30 days. Then, with every new adoption target, you increase the usage target accordingly.
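As a sketch of this second-stage check (again with made-up cohorts and IDs), usage can be tracked as the share of the original adopter cohort that is still active in the later window:

```python
def usage_rate(adopters: set, active_in_window: set) -> float:
    """Share of the original adopter cohort still active in a later window."""
    if not adopters:
        return 0.0
    return len(adopters & active_in_window) / len(adopters)

# Hypothetical cohorts: 1,000 users adopted in the first 30 days;
# 420 of them (plus some non-adopter users) were active in days 31-60.
adopters = set(range(1_000))
active_days_31_60 = set(range(420)) | set(range(5_000, 5_300))

rate = usage_rate(adopters, active_days_31_60)
print(f"usage among adopters: {rate:.0%}")
# prints "usage among adopters: 42%"
```

Intersecting the two sets before dividing keeps non-adopters who happen to be active out of the numerator, so the metric stays a true cohort retention number.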
In case of new feature launches, you can also go a step further to understand how well it’s received by your users.
Bringing the same Netflix rating feature back into the picture, you can also set a target that, out of the 20% of users who started using the rating feature, say, 10% would rate at least 50% of the shows they watch (the second critical step). If this group displays similar behaviour across a few more feature launches, it can be treated as a pool of potential loyalists, or even as beta-testers for future launches. This gives you a further breakdown of how useful the feature is turning out to be for specific cohorts of your users.
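One hypothetical way to carve out that cohort, assuming you can pull per-user watch and rating counts from your analytics (the names and counts below are invented for illustration):

```python
def heavy_raters(watched: dict, rated: dict, threshold: float = 0.5) -> set:
    """Users who rated at least `threshold` of the shows they watched
    (the second critical step)."""
    return {user for user, n_watched in watched.items()
            if n_watched > 0
            and rated.get(user, 0) / n_watched >= threshold}

# Hypothetical per-user counts of shows watched vs. shows rated.
watched = {"ana": 10, "ben": 8, "cai": 12}
rated   = {"ana": 6,  "ben": 2, "cai": 7}

print(sorted(heavy_raters(watched, rated)))
# prints "['ana', 'cai']"  (ben rated only 2 of 8, below the 50% bar)
```

Re-running this cohort query after each subsequent launch is what lets you spot the users who consistently clear the bar, i.e. the potential loyalists.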
In the case of product launches specifically, adoption can also be measured through revenue goals; and if there are paid add-ons, even usage can be measured the same way.
If you’re a Product Marketer, measuring your launches is just as important as devising the launch plans. And not just for your company, but also to evaluate your own efficacy as someone with the expertise to direct and drive successful product launches. That said, not every new feature needs to be measured this way; differentiating the ones that directly affect the user experience from those that don’t should be helpful in this regard!