Does reducing fees increase revenue? The Takeaway [7/7]
Posts in the series: Intro, 1/7: Context and TAM, 2/7: A conceptual model, 3/7: Externalities and breaking even, 4/7: Dynamics over time, 5/7: Testing approach, inferring results, next steps if test fails, 6/7: Metrics, 7/7: Takeaway, Google Sheets with data
The Takeaway
The above exercise established, among other things, that we may test only one allotment every 3 months, and that our ability to infer the incremental revenue at a given allotment from adjacent allotments may also be limited.
Selecting which allotment to test first is therefore quite consequential. If our initial allotment test results in positive incremental revenue, then subsequent tests may not be necessary.
To pick the allotment we want to test, we may use a combination of heuristics, beliefs, our conceptual model and our budget.
To begin with, we may want to keep a clear separation between the Consumer seller and Store tiers. For that, we may want to keep the free allotment for Consumer sellers below half that of Basic stores, i.e., below 75 free allotments. Our conceptual model tells us that about 91% of sellers may not be able to scale beyond 70 listings anyway; so, per Chart 5 from Section 2, we should be able to get between 89% and 95% of the maximum possible listings at a 70 allotment.
Within the bound of 75 free allotments, a reasonable start may be to test 40 free allotments. This tells Consumer sellers a simple story: their free listings are doubling. And given the distribution from Chart 4, half the listers who would have paid in a counterfactual 20-allotment world would now be fully subsidized.
Per our conceptual model, 40 free allotments may result in a 19% to 24% increase in Consumer seller listings. In absolute terms, this translates to 66M to 93M incremental listings, a sizable lift.
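The lift arithmetic above can be sketched as a small helper. The 19%-24% lift range comes from the conceptual model in the post; the baseline listings figure below is purely illustrative, not a number from the model.

```python
def incremental_listings(baseline_m, lift_low=0.19, lift_high=0.24):
    """Return the (low, high) range of incremental listings, in millions,
    for a given baseline of Consumer seller listings and a modeled lift range."""
    return baseline_m * lift_low, baseline_m * lift_high

# With a hypothetical baseline of 350M Consumer seller listings:
low, high = incremental_listings(350)
```

Plugging in the model's actual baseline would reproduce the 66M-93M range quoted above.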
An additional benefit of using a lower allotment such as 40 is that it can help us learn how conservative or relaxed sellers are with their listings budget. If we offer too high an allotment, sellers may already be maxed out on their scaling capacity, leaving no need or scope to pay for more listings. The listing-budget signal gained at a lower allotment can then help us pick the next allotment, as explained in Section 8.
The allotment increase is also low enough to not trigger Basic Store downgrades even per our aggressive model from Table 3.
While Chart 15 indicates that this allotment may require ASP and conversion to hold fairly steady in order to break even under a conservative seller-scaling scenario, it is worth noting that, per our model, the risk in terms of lost listing fees is between -$19M and -$27M, well below the maximum possible loss of -$42M.
If the test does not meet the success criteria, then, as mentioned in Section 8, we look for indicators of whether sellers are being conservative or relaxed with their listings budget. Counterintuitively, per Chart 12 and as explained in Section 8, we test a higher allotment if sellers are conservative and a lower allotment if they are relaxed with their spend.
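The counterintuitive follow-up rule can be encoded as a tiny decision function. The function name and the doubling/halving step sizes are hypothetical illustrations; Section 8 covers how the actual next allotment would be chosen.

```python
def next_allotment(current, seller_behavior):
    """Pick the next allotment to test if the current test fails, per the
    Section 8 rule: go HIGHER for conservative sellers, LOWER for relaxed ones.
    The x2 / /2 step sizes here are illustrative, not from the model."""
    if seller_behavior == "conservative":
        return current * 2   # e.g. 40 -> 80
    elif seller_behavior == "relaxed":
        return current // 2  # e.g. 40 -> 20
    raise ValueError("seller_behavior must be 'conservative' or 'relaxed'")
```

For example, if the 40-allotment test fails and sellers appear conservative, the rule points to testing a larger allotment next rather than retreating to a smaller one.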
Finally, one to-do item before the test launch is to decide on the “Kill Criteria” for the test. The Metrics section noted that the numbers alone will not tell us whether something is good or bad without prior experience or a philosophy to ground us. Before running the test, it is therefore imperative to agree on thresholds for specific metrics, such as the Risk Metrics identified in Section 9, beyond which the test would be stopped.
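A minimal sketch of what such a kill-criteria check could look like in monitoring code. The metric names and threshold values below are hypothetical stand-ins; the real thresholds would be set against the Risk Metrics from Section 9 before launch.

```python
# Hypothetical pre-agreed thresholds; a breach of any one stops the test.
KILL_THRESHOLDS = {
    "listing_fee_loss_musd": -27.0,  # stop if fee loss goes below -$27M
    "basic_store_downgrade_rate": 0.02,  # stop if >2% of Basic stores downgrade
}

def should_kill(observed):
    """Return the list of breached metrics; a non-empty list stops the test."""
    breached = []
    for metric, threshold in KILL_THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            continue  # metric not yet observed
        # Loss metrics are negative, so a breach means falling BELOW threshold.
        if metric.endswith("_musd"):
            if value < threshold:
                breached.append(metric)
        elif value > threshold:
            breached.append(metric)
    return breached
```

The point of agreeing on this before launch is that the decision to stop is mechanical, not a judgment call made mid-test when the numbers look uncomfortable.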