The Impact of Zero-Price Promotions on Sales: A case study of Amazon Appstore

Harshal Chaudhari

December 19, 2015

1 Abstract
The mobile app economy is growing at an unprecedented pace, with over half a million independent developers fighting for the attention of the same potential customers. This highly competitive environment, for developers and appstores alike, has led to new, innovative marketing strategies like freemium apps, pay-per-minute apps, and in-app purchase options, alongside traditional ones like paid advertising. Amazon, after making a late entry into the app economy, has captured considerable market share by offering attractive, deeply discounted promotions like 'Free App of the Day'. Even though such promotions offer the participating developers highly sought-after 'visibility', the developers only benefit if the incremental post-promotion sales offset the cost of the free 'giveaways' on the day of the promotion. In this paper, I perform a rigorous statistical analysis to show that the exposure received during such promotions does not always lead to incremental future sales. Moreover, the utility of the promotions depends strongly on the characteristics of the apps themselves.
2 Introduction
Today, there are over 1.6 million apps in the Google Play Store, 1.5 million on the Apple Appstore, and over 400,000 on the Amazon Appstore. The number of mobile apps is only increasing, with the Play Store in particular doubling its app count since 2014. Apple announced in July 2015 that over 100 billion apps had been downloaded from its Appstore, and total app downloads across all marketplaces are expected to exceed 268 billion by 2017. Amid these tremendously high download volumes, it is not only the app publishers fighting to gain a foothold, but also the marketplaces themselves competing to acquire and retain the market share of new customers. The Amazon Appstore has done particularly well, developing a reputation for providing publishers with the highest average revenue per app, outperforming the Google Play Store in this regard (http://www.developereconomics.com/which-app-stores-should-you-use/).
The growth of app marketplaces provides a great opportunity for researchers to study innovation, pricing and promotion strategies, and the impact of user ratings and reviews on app sales. However, our understanding of these aspects is severely limited by the unavailability of download data. In fact, even the app publishers themselves only receive aggregated data from these marketplaces, and they are unwilling to divulge it in order to maintain their competitive advantage. As a result, researchers have to rely on the publicly available data on marketplace websites for their studies.

[Figure 1: App market places. (a) Popular Appstores; (b) Amazon Appstore: Growth.]
In this paper, I use the publicly available data from the Amazon Appstore for Android (http://www.amazon.com/mobile-apps/b?node=2350149011) to test the hypothesis that zero-price promotions have different impacts on the downloads and ratings of apps, based on their categories and their ranking at the time of promotion. I use the Amazon 'Sales Rank' data to explore the relationship between app downloads and promotional offers. In particular, I use the data of all apps participating in the 'Free App of the Day' promotion on the Amazon Appstore to quantify the impact of this promotion on their future downloads.
To identify the causal impact of a zero-price promotion on an app's downloads, I use the difference-in-differences (DD) strategy. The DD strategy identifies the impact of the promotion on the treated apps by comparing them against a baseline of apps that were not actively promoted by Amazon during the same time period. Note that I have no access to data on the publishers' marketing budgets, and I assume that ratings and being featured on the Amazon 'Top Selling' list are the major driving forces behind adoption by new users of the platform. The DD specification allows me to control for app-specific characteristics as well as time-varying effects that affect all apps on the Appstore. To test the validity of the various assumptions and the accuracy of the estimate, I perform a series of robustness checks.
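As a sketch, a standard two-way fixed-effects DD specification of the kind described here (the notation is illustrative; the exact model estimated in this paper is given in equation (2) of Section 7) takes the form:

    Y_{kt} = \alpha_k + \tau_t + \beta \, (Treated_k \times Post_t) + \epsilon_{kt}

where Y_{kt} is the outcome for app k at time t, \alpha_k and \tau_t are the app and time fixed effects, Treated_k indicates apps that were promoted, Post_t indicates the post-promotion period, and \beta is the DD estimate of the promotion's impact.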
Following the app-centric analysis, I use the reviews data for all the promoted apps to study the dataset from a user-centric view. There have been claims from publishers (http://blog.shiftyjelly.com/2011/08/02/amazon-app-store-rotten-to-the-core/) that users who download the free app on the promotional day tend to be more critical in their ratings, thereby hurting the app's future downloads through irrecoverable rating slumps following the promotion. With an extensive reviews dataset of over 6,000 paid apps, I compare the reviews left by 'free' users against those of users who paid to download the same app. Finally, using the results of the above analysis, I aim to propose sound marketing strategies for app publishers.
3 Related work
Appstore ratings and reviews are the lifeblood of any mobile app; they are the first impression a potential customer has of an app in the often-blind app discovery process. There is an extensive literature on the effect of review ratings on the sales of products in related economies. Luca [1] applies a regression discontinuity technique to conclude that a 1-star increase in Yelp rating leads to a 5-9% increase in revenue. Similarly, Anderson & Magruder [2] apply similar methods to show a 19% decrease in reservation availability corresponding to a half-star increase in Yelp rating. Engstrom & Forsell [3] show that a 10% increase in displayed average rating leads to an improvement in app rank on Google Play, but increases downloads by a mere 3%. However, it should be noted that the dynamics of the app reviewing process are radically different from those of physical goods or services like those reviewed on Yelp. While reviewers on Yelp tend to review a service immediately after the experience, there is a lag in the app reviewing process, during which the user evaluates the app. Furthermore, the rating of the same user might change favorably or unfavorably based on updates to the app. A classic example of this change is the case of the dating app 'Tinder', whose overall ranking on the Appstore fell from 55th to 105th over 6 days following the introduction of a paid service in an update (http://www.businessinsider.com.au/tinder-plus-being-tested-in-europe-2015-2).
Some other works investigate the profitability of offering discounts. Notably, Edelman et al. [4] show that discounts are profitable only under narrow conditions, wherein the valuations of customers with Groupons are significantly lower than those of normal customers. Furthermore, an exploratory analysis carried out by Spriensma [5] in the Distimo report shows a 22% increase in revenue when apps already featured on the Apple App Store offer a sale. Surprisingly, 30 to 50% of apps also experience a decrease in revenue! But perhaps the most similar work to the one proposed here is by Askalidis [6] (unpublished), which investigates the impact of large-scale promotions on sales and ratings. However, the major difference between his work and the one proposed here is that Askalidis uses mentions of key phrases like 'app of the day' in text reviews to estimate sales, while I propose to use a more robust indicator of sales: the 'Sales Rank' provided by Amazon. Secondly, I believe that fixed effects like the lag between availing of the promotion and reviewing, reviewing trends after app updates, etc. need to be taken into consideration when estimating the aforementioned impacts. Albeit with similar objectives, I expect my work to be more robust in terms of proving a causal relationship.
The work of Byers et al. [7] follows another direction of investigation, accounting for the disturbance in Yelp review ratings due to Groupon offers. I wish to investigate, along similar lines, the effect of promotions on app reviews. However, I will have to account for a major difference between the two platforms. While the review rating distributions of Yelp businesses tend to be unimodal, reviews on the Appstore tend to be much more extreme, with a J-shaped distribution; hence, I need to appropriately account for this fundamental difference between the two review platforms.
4 Data
For the purpose of this study, I combined three datasets, viz. app metadata and reviews from the Amazon Appstore, app pricing and sales rank history from keepa.com, and Amazon's 'Free App of the Day' promotion data from Twitter.
4.1 Amazon Appstore
The Amazon Appstore for Android is a third-party appstore for the Android operating system, operated by Amazon.com. It was launched in March 2011 and is available in over 200 countries. For the purpose of this study, I limit myself to the data on paid apps available on the US version of the appstore. At its launch in March 2011, the appstore held 3,800 apps; that number has rapidly grown to over 400,000 apps by March 2015, as evident in Figure 1.
On the Amazon Appstore, people can download apps and read and write app reviews. In order to download an app, users must first install the 'Amazon Appstore' app on their Android phones and register for a free Amazon.com account with a valid email address. Only registered users can then rate an app (from 1 to 5 stars) and enter a text review. Further, an Amazon 'Verified' badge appears next to some reviews, indicating that the reviewer purchased the app from the Amazon Appstore.
Variable                     Obs.    Mean     Std      Min   Max
Star Rating                  6040    3.57     0.78     1     5
In App Purchase (boolean)    6040    0.14     0.35     0     1
Price (cents)                6040    247.36   303.92   0     8400
Reviews                      6040    86.61    218.53   5     5271

Table 1: App statistics
The percentage of verified reviews, the age and helpfulness of a review, and the raw star rating are some of the factors Amazon uses to calculate the star rating displayed for every app. Once a review is submitted, anyone (with or without an Amazon account) can access the website and read it. Along with the review data, a variety of attributes like app size, version, permissions, average customer rating, current sales rank, etc. are also prominently displayed for every app. Users choose to review apps for a variety of reasons; Amazon provides direct incentives, in the form of deep discounts across categories of the Amazon website, to users with a high percentage of helpful reviews.
4.2 Keepa.com
Keepa is a popular Amazon price tracking tool with functionality to notify its users of price changes for a particular product, helping them purchase the product 'at the right time'. It also lets a user check the price history of any product across all categories of Amazon.

In order to retrieve the historical data, a user has to enter the 'Amazon Standard Identification Number (ASIN)', an alphanumeric unique identifier for every product sold on Amazon. While price history is available over an app's entire lifetime, Keepa only started tracking the sales ranks of products on Feb. 1, 2015. This limits our observational study to promotions after Feb. 1, 2015. Secondly, Keepa tracks the sales rank only of products with a non-zero price (i.e., those not currently offered for free). This makes the sales rank unavailable on the exact day of a promotion. However, since this study primarily compares the pre- and post-promotion performance of apps, this data limitation has no implications for the results.
4.3 Twitter
Since its launch in March 2011, the Amazon Appstore's growth has been widely attributed to its effective app marketing strategies, making it an attractive marketplace for app developers. Amazon's 'Free App of the Day' increases the visibility of the promoted apps and appeals to bargain-conscious customers, thereby helping Amazon acquire a larger market share over time.

As part of this promotion, one paid app is offered for free every day. Amazon also gives a highly visible and valuable spot on the Amazon Appstore's webpage, free of charge, to any app participating in the promotion.
Variable             Obs.      Unique    Mean   Std     Min   Max
ASIN                 522984    6040      -      -       -     -
Reviewer             522965    351106    -      -       -     -
Star Rating          522984    -         3.89   1.44    1     5
Verified (boolean)   522984    -         0.93   0.24    0     1
Comments             522984    -         0.05   0.45    0     73
Upvotes              223762    -         7.30   38.18   0     5643
Downvotes            223762    -         3.03   10.06   0     793

Table 2: Review statistics
Along with this, Amazon uses its considerable marketing reach to promote the free app. For example, the Amazon Appstore's official Twitter account (https://twitter.com/amazonappstore) tweets daily about the promotion under the hashtag #FreeAppoftheDay. I used the Twitter API to collect the URLs of all apps promoted since March 2011, and then extracted the ASIN of every promoted app from these URLs.
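As a rough sketch of this extraction step, the ASIN can be pulled out of a tweeted Amazon product URL with a regular expression, since such URLs embed the 10-character alphanumeric ASIN after a '/dp/' or '/gp/product/' path segment. The tweet texts are assumed to have been fetched already via the Twitter API; the function below is illustrative, not the exact script used.

    import re

    # Amazon product URLs carry the ASIN after /dp/ or /gp/product/,
    # e.g. http://www.amazon.com/gp/product/B004Q1NH4L
    ASIN_PATTERN = re.compile(r"/(?:dp|gp/product)/([A-Z0-9]{10})")

    def extract_asins(tweet_texts):
        """Return the set of ASINs found in a list of tweet texts."""
        asins = set()
        for text in tweet_texts:
            for match in ASIN_PATTERN.finditer(text):
                asins.add(match.group(1))
        return asins

    tweets = ["Today's #FreeAppoftheDay: http://www.amazon.com/gp/product/B004Q1NH4L"]
    print(extract_asins(tweets))  # {'B004Q1NH4L'}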
4.4 Aggregating Data
I merged the aforementioned datasets using the ASIN of every app. Table 1 provides the summary statistics for the paid-app data that I collected. Due to space constraints, I limited myself to collecting only apps with 5 or more reviews. Only 14% of apps have an in-app purchase option, wherein developers generate additional revenue by providing extra services to customers after they have already paid the upfront price of the app. While exact statistics on in-app purchases are difficult to come by, the upfront cost paid when downloading an app still constitutes the major chunk of developer revenue.
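A minimal sketch of this merge using pandas, assuming each dataset has been loaded into a data frame keyed by ASIN (the file and column names here are hypothetical):

    import pandas as pd

    apps = pd.read_csv("appstore_metadata.csv")   # one row per app, keyed by 'asin'
    keepa = pd.read_csv("keepa_price_rank.csv")   # daily price/rank observations per 'asin'
    promos = pd.read_csv("faotd_asins.csv")       # ASINs collected from Twitter

    # Inner-join metadata with price/rank history, then flag promoted apps.
    panel = keepa.merge(apps, on="asin", how="inner")
    panel["faotd"] = panel["asin"].isin(promos["asin"])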
Table 2 provides the summary statistics for the reviews dataset. The 6,040 apps above have over 0.5 million reviews. Amazon's 'Verified Purchase' badge is present on over 93% of these reviews, which can therefore be treated as genuine reviews by users. Close to 33% of the reviews are written by repeat reviewers. The average star rating of reviews is 3.89 stars. However, it is worth noting that the distribution of star ratings is not normal but J-shaped, as evident in Figure 2. Nan et al. [9] attribute this asymmetric, positively skewed distribution of Amazon review ratings to purchasing and under-reporting biases. Only 43% of the reviews have upvotes or downvotes associated with them, with the average number of upvotes per review being over twice the average number of downvotes, indicating a positive voting bias.

[Figure 2: Star Rating distribution.]
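The headline review statistics quoted above can be reproduced along these lines, assuming a reviews frame with columns matching Table 2 (the file and column names are hypothetical):

    import pandas as pd

    reviews = pd.read_csv("appstore_reviews.csv")

    verified_share = reviews["verified"].mean()                     # ~0.93 in our data
    counts = reviews["reviewer"].value_counts()
    repeat_share = (reviews["reviewer"].map(counts) > 1).mean()     # ~0.33
    mean_rating = reviews["star_rating"].mean()                     # ~3.89
    # The J-shaped histogram of Figure 2:
    rating_hist = reviews["star_rating"].value_counts().sort_index()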
Using the price history data gathered from Keepa.com, I found a total of 521 observations where the price of an app was zero cents for a finite duration of time. Interestingly, in our observational period from Feb. 1, 2015 to Nov. 14, 2015, only 210 apps participated in the 'Free App of the Day' promotion. On further investigation, I found an additional 171 apps that ran zero-price promotions independently. In such independent zero-price promotions, the apps do not get the prime visibility on the Amazon Appstore's webpage and must rely on their own marketing to drive downloads. Some apps were zero-priced more than once in our observational period. Since multiple promotions can have confounding effects on sales, for the purpose of this study I consider only the first promotion for every app, leading to a total of 381 observations of interest.
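To identify these zero-price windows, one can scan each app's price series for days priced at zero cents and keep only the first such episode per app; a sketch, under the assumption that the Keepa history has been tidied into a frame of daily prices (column names hypothetical):

    import pandas as pd

    prices = pd.read_csv("keepa_price_rank.csv", parse_dates=["date"])

    # Restrict to the window in which Keepa tracks sales ranks.
    window = prices[(prices["date"] >= "2015-02-01") & (prices["date"] <= "2015-11-14")]

    # Days on which an app was offered for zero cents.
    free_days = window[window["price_cents"] == 0]

    # Keep only each app's first zero-price promotion, to avoid
    # confounding from repeated promotions.
    first_promo = (free_days.sort_values("date")
                            .groupby("asin", as_index=False)
                            .first()[["asin", "date"]])
    print(len(first_promo))  # the study finds 381 such apps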
5 Preliminary Investigations

5.1 Self-selection
In the Free App of the Day promotion, Amazon selects an app based on its proprietary performance metrics and offers the publisher the option to participate in the promotion. This introduces two types of bias. Amazon is likely to select apps that provide the maximum incentive for users of rival marketplaces to switch over to Amazon; offering highly popular, good-quality, top-ranked apps helps Amazon achieve this objective. However, the final choice regarding participation in the promotion is left to the developers, and this choice introduces another selection bias. While developers who want to increase the exposure of their apps are more likely to participate, it is not a straightforward decision, and it can be influenced by a variety of factors, such as the app's recent performance or whether the developers plan to push an update to their app in the immediate future.
Figure 3 shows strong evidence for the presence of these biases. Since it is not possible for us to know when developers are given the choice to participate in the campaign, or what percentage of developers agree to participate, our view of the data is restricted to the apps that actually participated in the promotion. In the absence of external factors like a change of marketing strategy or an update to the app, the daily downloads of an app do not vary greatly over short periods like weeks, which in turn means low volatility in sales rank. In Figure 3, I present a histogram of the number of promoted apps, binned according to their sales rank on the day before the promotion. There is a general decreasing trend in the number of promoted apps per bin as the sales rank increases. This could be the result of Amazon making promotion offers only to top-ranked apps, or of only top developers being able to afford giving their apps away for free, or a combination of these two factors.

[Figure 3: Self-selection bias in promoted apps.]
5.2 Normalization of sales rank
In this study, ranks are an ordinal series of numbers, with low ranks representing good apps and high ranks representing bad apps. The Amazon sales rank is calculated by a proprietary algorithm, based on the recent download history of an app. Brynjolfsson et al. [8] show that sales rank follows a power law with respect to downloads on Amazon. As a result, a few downloads for a high-ranked app can bump its rank up several places, while the same number of downloads for a low-ranked app would make no noticeable change in its ranking. Hence, it is important to normalize the sales ranks of apps in order to compare the effects of the promotion. Modeling the relationship between sales rank and daily downloads as a power law, we have:
    d_{i,t} = b \cdot r_{i,t}^{-a}
    d_{i,t'} = b \cdot r_{i,t'}^{-a}

    \frac{d_{i,t'} - d_{i,t}}{d_{i,t}} = \frac{r_{i,t'}^{-a} - r_{i,t}^{-a}}{r_{i,t}^{-a}} = \left( \frac{r_{i,t}}{r_{i,t'}} \right)^{a} - 1    (1)
where d_{i,t} denotes the downloads of app i at time t and r_{i,t} denotes its rank at the same time; a and b are the power law parameters. The left-hand side of equation (1) gives the percentage change in downloads from time t to t', provided we know the parameter a. On careful observation, I discovered that the Amazon Appstore itself uses this normalization with a = 1 to decide which apps feature on its 'Movers and Shakers' list (http://www.amazon.com/gp/movers-and-shakers). Garg et al. [10] have compiled the results of various studies estimating the power law shape parameter on different app marketplaces, all of which estimate a in the range 0.9 to 1.2. While the estimated magnitude of the promotion's impact is sensitive to the chosen value of the shape parameter a, the observed trends are independent of it. Hence, in my analysis, I use a = 1.
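In code, the normalization of equation (1) reduces to a simple ratio of ranks, since the scale parameter b cancels out; a minimal sketch (the function name is mine):

    def pct_change_downloads(rank_before, rank_after, a=1.0):
        """Percentage change in downloads implied by a rank change,
        under the power law d = b * r**(-a) of equation (1)."""
        return 100.0 * ((rank_before / rank_after) ** a - 1.0)

    # An app improving from rank 200 to rank 80 corresponds roughly
    # to a 150% increase in daily downloads (with a = 1).
    print(pct_change_downloads(200, 80))  # 150.0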
To visualize the impact of the promotion on downloads, I use equation (1) above, taking as baseline the downloads of the app on the day before the promotion. In Figure 4, the day of promotion is indicated by timestamp '0', and the values on the horizontal axis indicate the offset in days from the day of promotion. The vertical axis indicates the percentage change in downloads relative to this baseline. In particular, the plot traces the percentage change in downloads over a duration of 10 weeks, centered on the day of promotion. We can observe an increase of over 100% in downloads following the zero-price promotion. Note that there is a discontinuity in the plot on the day of promotion, due to the data limitation described in Section 4.2. There also appears to be a general falling trend in downloads both before and after the promotion. This indicates the strong presence of an 'Ashenfelter dip', further confirming the self-selection bias by developers described above: falling downloads over time may influence a developer's decision to participate in a zero-price promotion.

[Figure 4: Impact of promotion.]
6 Heterogeneous impacts of Promotion
After observing strong evidence of the promotion's impact on app downloads, I investigate two questions regarding the heterogeneous impacts of zero-price promotions. First, I test the hypothesis that promotions have a differential impact based on the ranking of the app. Next, I check whether the utility of the promotion differs based on inherent app characteristics, viz. its type: game or non-game.
6.1 Differential impact based on rank categories
When an Amazon Appstore user clicks on the name of the app shown as the promoted app, he is presented with a webpage displaying the app summary and reviews. The Amazon sales rank of the app is prominently displayed on this webpage. Hence, it is plausible that the conversion rate on the day of promotion differs based on the sales rank of the app, which in turn affects post-promotion performance.
To study this effect, I categorized apps into the 5 categories shown in Table 3, based on their rank on the day before the promotion.
Category    Ranks
1           0-10
2           10-100
3           100-1,000
4           1,000-10,000
5           10,000+

Table 3: Rank categories
The rationale behind creating the categories on a log scale is twofold. First, the relationship between the downloads and sales rank of an app is a power law, as mentioned earlier. Second, various psychological studies have shown a strong 'anchoring bias' in decision making based on the number of digits in a number. For example, users are likely to perceive a vast difference in quality between two apps ranked 99 and 101, which might affect the conversion rate.
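These log-scale bins can be assigned with pandas; a sketch, continuing with the hypothetical panel frame from Section 4.4 (the column names are mine):

    import pandas as pd

    bins = [0, 10, 100, 1_000, 10_000, float("inf")]
    labels = [1, 2, 3, 4, 5]  # rank categories of Table 3

    # The rank on the day before the promotion determines the category.
    panel["rank_category"] = pd.cut(panel["rank_before_promo"],
                                    bins=bins, labels=labels)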
Following the approach described in the previous section, Figure 5 plots the average percentage change in downloads across the different categories over a period of 10 weeks centered on the day of promotion. It is surprising to see that, on average, downloads of low-ranked (top) apps decrease post-promotion, while apps with numerically higher ranks tend to experience an increase of over 150% in downloads post-promotion.

[Figure 5: Impact based on rank categories.]
6.2 Differential impact: Games vs Non-games
Amazon categorizes apps broadly into 21 categories based on app content, with over 60% of apps categorized as games. This asymmetrical distribution of apps across categories means that the games category is highly competitive compared to the others. Hence, being featured as a top-ranked app among games should have a greater impact than in other categories. Figure 6 plots the average percentage change in downloads for the broad categories of games and non-games, over a period of 10 weeks centered on the day of promotion. We can observe that games participating in a zero-price promotion experience an increase of over 150% in downloads immediately after the promotion, while non-games experience only half of that.

[Figure 6: Impact: Games vs Non-games.]
7 Regression Analysis
In the previous section, we saw strong evidence in support of our hypotheses. To quantify the impact of the promotion on future downloads, I frame the problem as an OLS regression. Since the impact of the promotion could depend on the apps themselves, we need to account for app fixed effects. Further, the downloads of an app, and in turn its rank, can have seasonal effects, so I include time fixed effects as well. Hence, our linear model to estimate the impact of the promotion on downloads takes the following form:
    Percent.Change.Downloads_{kt} = \beta_0 + \beta_1 \, Promoted_{kt} + App_k + \tau_t + \epsilon_{kt}    (2)
The dependent variable is the percent change in downloads relative to the downloads on the day before the promotion, calculated using equation (1). The coefficient of interest is \beta_1, which indicates the change in the dependent variable when the indicator variable Promoted takes the value 1; in other words, the impact of the promotion on downloads. App_k and \tau_t are included to control for the fixed effects. The results of this regression are documented in column 1 of Table 4.
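Equation (2) can be estimated as an OLS regression with app and day dummies; a sketch using statsmodels' formula interface (the variable names are illustrative, with the panel frame assumed from the earlier sketches):

    import statsmodels.formula.api as smf

    # C(asin) absorbs app fixed effects, C(day_offset) absorbs time
    # fixed effects; 'promoted' is the 0/1 indicator of equation (2).
    model = smf.ols("pct_change_downloads ~ promoted + C(asin) + C(day_offset)",
                    data=panel)
    result = model.fit()
    print(result.params["promoted"])  # the estimate of beta_1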
To study the heterogeneous impacts of the promotion described in the previous section, I specify two separate models as follows:
    Percent.Change.Downloads_{kt} = \beta_0 + \beta_1 \, Promoted_{kt} + \beta_2 \, Rank.Category_k
            + \beta_3 \, (Promoted_{kt} \times Rank.Category_k) + App_k + \tau_t + \epsilon_{kt}    (3)

    Percent.Change.Downloads_{kt} = \beta_0 + \beta_1 \, Promoted_{kt} + \beta_2 \, App.Category_k
            + \beta_3 \, (Promoted_{kt} \times App.Category_k) + App_k + \tau_t + \epsilon_{kt}    (4)
The coefficients of interest in both of the above cases are the interaction coefficients \beta_3. The results of these regressions are documented in Table 4. Lastly, our dependent variable is the percent change in downloads relative to the day before the promotion, while our coefficients of interest are on categorical variables. This mismatch leads to understated standard errors; hence, we cluster the standard errors at the app level.
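The interaction models (3) and (4), with app-level clustered standard errors, can be sketched in the same style (again with illustrative names):

    # Equation (3): promotion x rank-category interactions, with
    # standard errors clustered at the app level.
    rank_model = smf.ols(
        "pct_change_downloads ~ promoted * C(rank_category)"
        " + C(asin) + C(day_offset)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["asin"]})

    # Equation (4): games vs non-games.
    cat_model = smf.ols(
        "pct_change_downloads ~ promoted * non_game"
        " + C(asin) + C(day_offset)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["asin"]})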
8 Results
The results of the regressions on the models described in the previous section are shown in Table 4. The models were fit on panel data containing 70 days of observations for each of the 381 apps that participated in a zero-price promotion. Since we include fixed effects for apps and time, we have a separate coefficient for each app and for each day's offset from the day of promotion. For brevity, I do not present these coefficients in Table 4, as they do not affect our coefficients of interest. Since each of these models contains an intercept term, the coefficient \beta_0 represents the baseline population of non-promoted apps, combined with the base levels of the categorical variables included in the model.
Table 4: Regression results

                                 Dependent variable: Percent change in downloads
                                 (1)           (2)           (3)
Non-promoted                     3.815***      0.939***      4.610***
                                 (0.624)       (0.191)       (0.918)
Promoted                         0.856***      -0.978***     1.036***
                                 (0.113)       (0.146)       (0.116)
Ranks 10-100                                   -0.560***
                                               (0.211)
Ranks 100-1,000                                -0.560***
                                               (0.199)
Ranks 1,000-10,000                             3.651***
                                               (0.936)
Ranks 10,000+                                  2.466***
                                               (0.584)
Non-Game                                                     -0.667*
                                                             (1.118)
Promoted : Ranks 10-100                        0.433**
                                               (0.196)
Promoted : Ranks 100-1,000                     1.111***
                                               (0.099)
Promoted : Ranks 1,000-10,000                  2.043***
                                               (0.104)
Promoted : Ranks 10,000+                       2.649***
                                               (0.142)
Promoted : Non-Game                                          -0.430***
                                                             (0.057)
Observations                     25,478        25,478        25,478
R^2                              0.340         0.359         0.342

Note: *p<0.1; **p<0.05; ***p<0.01
8.1 Impact of promotion on downloads
To estimate the overall impact of the promotion, we refer to the first column of Table 4. The baseline population in this model is simply the non-promoted apps. Our coefficient of interest shows an average 85% increase in downloads post-promotion across all apps.
8.2 Impact based on rank categories
Referring to the second column of Table 4, we see evidence of a differential impact of the promotion based on the app's rank on the day before the promotion. The baseline population for this model is non-promoted apps ranked 1 to 10. Top apps, ranked between 1 and 10, have a negative coefficient on Promoted, reiterating the pattern in Figure 5. One explanation for this decrease in downloads post-promotion is market saturation: on the day of promotion, top-ranked apps experience extremely high volumes of downloads, saturating the market and leading to a decrease in daily downloads afterwards. Our coefficients of interest are the interaction coefficients. As the rank category increases, the magnitude of these interaction coefficients increases, indicating that apps with numerically higher ranks benefit from the exposure of the promotion much more than top-ranked apps.
8.3 Impact based on app categories
The third column of Table 4 provides evidence for our second hypothesis, viz. that games benefit more than non-games from the promotion. The baseline population of this model consists of non-promoted apps categorized as games on the Amazon Appstore. The negative coefficient on Non-Game indicates that, irrespective of promotion, non-games contribute far fewer downloads than games. Both categories experience an increase in downloads post-promotion; however, looking at the interaction coefficient, non-games on average benefit 43% less than games. The popularity of games is typically associated with word-of-mouth propagation, while users are much less likely to recommend a non-game, for example a weather utility app, to their friends. This could explain the significant difference in the post-promotion performance of these two app categories.
9 Discussion
The overall message of this study is simple. The sales rank history of apps on the Amazon Appstore offers insight into the performance of apps over time. While it is evident that top developers with low-ranked (top) apps have participated in zero-price promotions in larger numbers, they do not benefit from the exposure as much as apps further down the rankings. Moreover, the app category itself plays a significant role in the effect of such promotions on future sales. On the consumer side, these promotions are extremely beneficial for bargain-conscious users.
10 Future work
In its present state, this study merely shows correlational evidence for our hypotheses. To establish a causal relationship for the differential impact of zero-price promotions on apps, I plan to use the difference-in-differences (DD) strategy, using apps that have never been promoted in their lifetime as a control group against a treatment group of apps that have participated in zero-price promotions. While I have looked only at zero-price promotions, many developers offer discounts over extended periods to drive sales. It remains to be seen which of these two strategies helps developers most, and what effect they have on the competition.
Furthermore, it remains to be studied whether these promotions help the Amazon Appstore earn new 'loyal' customers who contribute to its future revenue. With our dataset containing the reviews of every paid app on the Appstore, we should be able to study this as well. Another area of interest is comparing the quality of reviews left by 'free' downloaders against those of users who paid for the app.
References
[1] M. Luca. Reviews, reputation, and revenue: The case of Yelp.com. Technical report, Harvard Business School, 2011.

[2] M. Anderson and J. Magruder. Learning from the crowd: Regression discontinuity estimates of the effects of an online review database. The Economic Journal, 122(563):957-989, 2012.

[3] P. Engstrom and E. Forsell. Demand effects of consumers' stated and revealed preferences. Available at SSRN 2253859, 2014.

[4] B. Edelman, S. Jaffe, and S. D. Kominers. To Groupon or not to Groupon: The profitability of deep discounts. Marketing Letters, pages 1-15, 2011.

[5] G. J. Spriensma. The impact of app discounts and the impact of being a featured app. Distimo Publication, 2012.

[6] G. Askalidis. The impact of large scale promotions on the sales and ratings of mobile apps: Evidence from Apple's AppStore. Unpublished, http://arxiv.org/pdf/1506.06857.pdf

[7] J. W. Byers, M. Mitzenmacher, and G. Zervas. The Groupon effect on Yelp ratings: A root cause analysis. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 248-265. ACM, 2012.

[8] E. Brynjolfsson, Y. Hu, and M. D. Smith. The longer tail: The changing shape of Amazon's sales distribution curve. Available at SSRN 1679991, 2010.

[9] H. Nan, J. Zhang, and P. Pavlou. Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10):144-147, 2009.

[10] R. Garg and R. Telang. Inferring app demand from publicly available data. MIS Quarterly, Forthcoming, 2012.