Submitting to IEEE Pervasive Computing Magazine
Shopping Time Monitoring at Physical Stores Using
Mobile Phones
Chuang-Wen You1, Chih-Chiang Wei2, Yi-Ling Chen3, Hao-Hua Chu2,4, Ming-Syan Chen1,3
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan, ROC1
Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC2
Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan, ROC 3
Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan, ROC4
cwyou@citi.sinica.edu.tw, b95032@csie.ntu.edu.tw, yiling@arbor.ee.ntu.edu.tw, hchu@csie.ntu.edu.tw,
mschen@citi.sinica.edu.tw
Abstract—This study proposes a phone-based shopping tracking system to monitor customers’ shopping time at
physical stores. The system first uses a place detection filter [1] to differentiate in-places (i.e., where a user stops for a
certain duration) from out-places (i.e., paths traveled between places) and then constructs in-place movement trajectory
from sensor signals recorded on mobile phones. The proposed system extracts spatial and temporal features embedded in
movement trajectory and classifies each in-place visit as shopping or non-shopping. To validate the accuracy of our
classifier, we analyzed 630 hours of real data collected from 84 participants; the results indicate that our system accurately
labeled motif groups with average F1 scores of 0.88 for shopping activities and 0.93 for non-shopping activities.
Index Terms—Shopping Time Monitoring, Mobile Phone Application.
I. INTRODUCTION
According to retail management theory, the shopping experience is typically aimed at maximizing either efficiency
or entertainment [5]. For many people, shopping is a form of entertainment. At the same time, an excessive amount
of time spent shopping can lead to overspending and personal financial crisis; this is especially true during hard
times such as the current economic downturn. Previous research [3] indicates that the amount of time spent shopping
at a store is proportional to the amount of money spent at that store. Therefore, tracking shopping time can help
make users aware of their shopping behaviors. Raising this awareness may also provide insights into how to cut
back on shopping activities, thus reducing opportunities for spending money. A recently published study [13] shows
that despite the growing popularity of online shopping, online retail sales accounted for only 5% of overall retail
sales in the U.S. in 2009. In other words, the majority of household purchases still come from shopping at brick-and-mortar (i.e., physical) stores. This study proposes a phone-based system to sense physical shopping activities
while tracking the amount of time spent shopping at physical stores. Given the ubiquity of mobile phones in our
everyday lives, this phone-based approach offers numerous practical benefits, including ease of deployment, low
cost, and continuous/mobile sensing.
Previous researchers [3][14] used human shadowing or video-based observation of in-situ shoppers to study
shopping behaviors at physical stores. Although these manual techniques are able to observe more subtle and/or
complex shopping behaviors, they often involve costly human labor or intrusive camera sensors, thereby making
data collection and analysis expensive and beyond the reach of most academic researchers. In contrast, our
phone-based approach uses mobile phones to sense shopping activities. The ubiquity of mobile phones makes this
phone-based approach cost-effective and makes collecting and analyzing shopping data feasible.
Most physical retail centers arrange merchandise shelves in a spatial grid layout to guide shoppers through the store.
In general, store designs adopt a grid, racetrack, or free-form layout [9], each of which facilitates a specific kind of
shopper movement pattern and trajectory. Our phone-based shopping tracker first reconstructs the shopper’s
movement trajectory by analyzing accelerometer and digital compass signals from a shopper’s mobile phone. The
shopping tracker then analyzes and extracts temporal-spatial features from the shopper’s movement trajectory. If the
temporal-spatial features of the shopper’s movement trajectory and those induced by a store layout match, this
movement trajectory likely corresponds to shopping. Given a sufficient quantity of shopping and non-shopping
movement trajectories, the proposed system trains a binary classifier to accurately differentiate between the two.
The contribution of this study is in the design, prototypes, and evaluation of a phone-based shopping tracker to
detect how much time a user spends shopping. Experiments with over 630 hours (or 220,000 steps) of user
movement trajectories show an accuracy (measured by F1 score) of 0.88 for shopping activities and 0.93 for non-shopping activities. To our knowledge, this study offers the first phone-based solution for detecting the amount of
time a user spends shopping.
II. RELATED WORK
In previous studies, researchers have discussed how to augment the shopping experience or promote the use of
pervasive computing technology at stores [7]. However, to the best of our knowledge, none have proposed a mobile
phone system to track shopping behavior by analyzing a shopper’s trajectory. Meschtscherjakov et al. [4] proposed a
prototype display that depicts a dynamic visualization of customer activity in a retail store on a conventional map. In
addition, empirical studies on real shopping environments show the usefulness of pervasive computing technology
in physical shopping experiences.
Other trajectory-based studies detect specific targets based on trajectory analysis. Li et al. [2] proposed a trajectory
outlier detection algorithm to detect an anomalous moving target. Their algorithm extracts common patterns, called
motifs, from trajectories. A set of motifs forms a feature space in which the trajectories are placed. After
transformation into a feature vector, the trajectories are fed into a classifier and classified as either “normal” or
“abnormal.” In summary, current trajectory-based methods associate trajectories with global outdoor positions (e.g.,
GPS data), whereas our study classifies imprecise movement trajectories associated with relative indoor positions (e.g.,
stores, offices, etc.).
To find semantically meaningful places, researchers have proposed different variations of WiFi beacon
fingerprinting [1], GPS location history clustering [6] etc. The proposed system adopts the state-of-the-art approach
described in [1] to implement place discovery. By detecting a stable WiFi beacon fingerprint or the disappearance of
representative beacons, we can estimate the entry and exit times of places, including stores.
III. PROBLEM STATEMENT
Prior to proposing our solution, we define the following terms used throughout this paper.
 Shopping time is the amount of time spent on shopping trips. Specifically, we define it as the time difference
between arrival at and departure from a store (i.e., in-store shopping time) [15].
 Non-shopping time is the amount of time a user spends performing non-shopping activities. Non-shopping time includes
time spent at a non-shopping place (such as an office or a school) or traveling between places. The working
hours of store employees are also considered non-shopping time.
 In-place refers to when a user stays at a fixed location for a certain time duration.
 Out-place refers to when a user moves between places.
We first transform shopping/non-shopping time determination into a trajectory labeling problem. We then design a
shopping tracker system to label each trajectory segment, thereby determining shopping and non-shopping time.
DEFINITION 1 (IN-PLACE TRAJECTORY STREAM) The in-place trajectory stream is the reconstructed movement trajectory
of a user inside a (shopping or non-shopping) place based on phone sensor data. It consists of time-stamped series of
movement vectors Vi = {ti, li, di}, where ti, li, and di are the elapsed time (i.e., duration), length, and direction of
the ith movement vector, respectively. An in-place trajectory stream is represented as a totally time-ordered infinite
sequence T = {V1, V2, …, Vn, …}, where Vn is a movement vector at a specific time tick n, and time tick n occurs
after time tick n-1.
We formulate the problem of measuring shopping time as a trajectory classification problem. An in-place trajectory
stream consists of a sequence of uniform-length trajectory segments called motifs, which are prototypical movement
patterns.
DEFINITION 2 (MOTIF). Given a relative trajectory stream T = {V1, V2, …, Vn, …}, the motif M = {Vj, Vj+1, …, Vj+k, …}
can be obtained, where time tick j is the starting index and k is the number of movement vectors in the motif. In this
work, non-overlapping motifs are obtained by restricting j to a set of discrete values incremented by k, i.e.,
j = {1, k+1, 2k+1, …}.
DEFINITION 3 (MOTIF GROUP). Given motifs in an in-place trajectory stream T, an essential motif group G = {Mt,
Mt+1, …, Mt+m, …} is obtained by grouping m consecutive motifs together, where t is the starting time tick index and
m is the motif group size. Similarly, non-overlapping motif groups are obtained by restricting t to a set of discrete
values incremented by m, i.e., t = {1, m+1, 2m+1, …}.
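Definitions 2 and 3 can be sketched as two list-slicing helpers; the movement-vector fields (t, l, d) and the helper names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Definitions 2-3: split a movement-vector stream into
# non-overlapping motifs of k vectors, then group m consecutive motifs.

def partition_motifs(stream, k):
    """Non-overlapping motifs of size k, starting at j = 0, k, 2k, ..."""
    return [stream[j:j + k] for j in range(0, len(stream) - k + 1, k)]

def group_motifs(motifs, m):
    """Non-overlapping motif groups of m consecutive motifs."""
    return [motifs[t:t + m] for t in range(0, len(motifs) - m + 1, m)]

# Example: 12 movement vectors -> 4 motifs of 3 vectors -> 2 groups of 2 motifs.
stream = [{"t": 0.5, "l": 0.7, "d": 90.0} for _ in range(12)]
motifs = partition_motifs(stream, k=3)
groups = group_motifs(motifs, m=2)
print(len(motifs), len(groups))  # 4 2
```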
Our system trains a trajectory classifier capable of classifying a given motif group as shopping or non-shopping. The
total shopping time is obtained by aggregating the elapsed time of each shopping-labeled motif. This approach
transforms the problem of measuring a person’s shopping time into the problem of classifying motifs in a motif
group as either shopping or non-shopping. After labeling each G (i.e., all of the motifs), we add up the durations of
all of the shopping motifs to determine the total shopping time.
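The aggregation step above reduces to a single sum over the shopping-labeled groups; the (label, group) pairing and the duration field "t" are illustrative assumptions.

```python
# Sketch of shopping-time aggregation: sum the step durations t of every
# motif in every group labeled "shopping".

def total_shopping_time(labeled_groups):
    """Sum step durations of all motifs in groups labeled 'shopping'."""
    return sum(
        step["t"]
        for label, group in labeled_groups if label == "shopping"
        for motif in group
        for step in motif
    )

# Two groups: one shopping (two motifs of two steps), one non-shopping.
groups = [
    ("shopping", [[{"t": 1.0}, {"t": 0.5}], [{"t": 2.0}, {"t": 0.5}]]),
    ("non-shopping", [[{"t": 3.0}]]),
]
print(total_shopping_time(groups))  # 4.0
```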
IV. PHONE-BASED SHOPPING TRACKER
The proposed phone-based shopping tracker uses a place detection filter [1] to identify in-places from out-places.
This place detection filter first recognizes in-place motifs and then feeds these in-place motifs to the shopping
classifier to further identify in-place shopping movement trajectories. To estimate the amount of time the shopper
has spent shopping, the system sums the shopping time intervals, each obtained by subtracting the starting time from
the ending time of an in-place shopping motif. The proposed phone-based shopping tracker includes the following four
steps: (1) place discovery, (2) movement trajectory construction, (3) in-place trajectory classification, and (4)
shopping/non-shopping time aggregation. These four steps are described as follows.
In the first step, place discovery recognizes each visit to a place using the PlaceSense algorithm developed by Kim
et al. [1]. The PlaceSense algorithm continuously scans WiFi beacons near the user’s mobile phone. A stable radio
environment in which the WiFi beacons change little suggests an entrance to a place, and a subsequent change in the
WiFi beacons indicates a departure from that place.
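The entry/exit logic described above can be illustrated with a simplified stand-in for PlaceSense: here, stability of consecutive beacon scans is measured with Jaccard similarity, and the threshold and window parameters are hypothetical; the actual PlaceSense algorithm [1] is more sophisticated.

```python
# Simplified place discovery: a run of near-identical WiFi beacon sets marks
# an entrance; a change in the beacon set marks a departure.

def jaccard(a, b):
    """Set similarity between two beacon scans (1.0 when both empty)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def detect_visits(scans, threshold=0.8, min_stable=3):
    """Return (entry_index, exit_index) pairs for stable-beacon periods."""
    visits, start, stable = [], None, 0
    for i in range(1, len(scans)):
        if jaccard(scans[i], scans[i - 1]) >= threshold:
            stable += 1
            if stable == min_stable and start is None:
                start = i - min_stable      # entrance: radio environment stabilized
        else:
            if start is not None:
                visits.append((start, i))   # departure: beacon set changed
            start, stable = None, 0
    if start is not None:
        visits.append((start, len(scans)))
    return visits

# One place ({a, b} beacons) followed by another ({x, y} beacons).
scans = [{"a", "b"}] * 5 + [{"x", "y"}] * 4
print(detect_visits(scans))  # [(0, 5), (5, 9)]
```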
The second step reconstructs users’ in-place movement trajectories for further analysis and recognition of shopping
behavior when a place is discovered. Accelerometer and digital compass readings from a mobile phone give
individual step movement vectors (i.e., relative distance and orientation) of a user. Connecting these movement
vectors sequentially yields the user’s trajectory and an in-place trajectory stream.
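Chaining the per-step movement vectors is a dead-reckoning computation; the following sketch assumes directions are compass degrees clockwise from north, which is an assumption rather than a detail stated in the paper.

```python
import math

def reconstruct_trajectory(vectors):
    """Chain per-step (length l, direction d) movement vectors into relative
    2-D positions by dead reckoning; d is degrees clockwise from north."""
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for v in vectors:
        rad = math.radians(v["d"])
        x += v["l"] * math.sin(rad)   # east component of the step
        y += v["l"] * math.cos(rad)   # north component of the step
        path.append((x, y))
    return path

# Two 1-m steps north, then one 1-m step east.
path = reconstruct_trajectory([{"l": 1, "d": 0}, {"l": 1, "d": 0}, {"l": 1, "d": 90}])
print(path[-1])  # approximately (1.0, 2.0)
```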
The third step classifies an in-place trajectory stream as shopping or non-shopping. An in-place trajectory stream is
first divided into motifs. Then, the classifier uses spatial (user’s orientation/direction) or temporal (step duration etc.)
features embedded in the segments of in-place trajectory streams to classify them as shopping or non-shopping.
Section V describes more details of the in-place trajectory classifier.
Finally, in shopping/non-shopping time aggregation, total shopping/non-shopping time is determined by adding up
time intervals of all of the shopping/non-shopping motifs after they have been labeled.
V. IN-PLACE TRAJECTORY CLASSIFIER
The goal of the in-place trajectory classifier is to label each motif group as either shopping or non-shopping. Given
motif groups (Gs) partitioned from an in-place trajectory stream, we first represent each motif as a bounding box.
Clustering all collected bounding boxes gives two categories of motif shapes: straight or curved motifs. Then,
different features are defined to capture spatial and temporal statistical characteristics embedded in the straight or
curved motifs of different Gs. A support vector machine (SVM) [11] is selected to classify basic Gs as either
shopping or non-shopping, based on the feature summarization performed for each motif. Finally, a majority voting
scheme rectifies the labels of misclassified motif groups with the label of the majority of nearby motif groups
within the same place.
Motif Shape Extraction
A motif is a prototypical movement pattern. To smooth out the noise introduced into a motif by digital compass
measurements and step length estimation, we characterize a motif by its pattern or shape alone. Collected motifs are
first clustered to identify different motif shapes in an offline process, and the shapes of new motifs are then tagged in
an online process. The motif shape extraction process involves two steps. The first step is bounding box construction.
Given a motif, we compute the representative vector from the motif’s first step to its last step. The width of the
vector is then expanded to accommodate all other points. This bounding box allows us to smooth over noise in the
trajectory. Then, we use the length and the width of the resulting bounding box to represent the spatial size of the
motif.
The second step is motif clustering and tagging. In the offline clustering process, all of the bounding boxes are
clustered using the K-means algorithm. The clusters are grouped to find the most representative patterns. From these
clusters we then extract motif shapes. Two clusters emerge: straight-trajectory motifs (SM) and curve-trajectory
motifs (CM).
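The two-step shape extraction above can be sketched as follows. The bounding box follows the paper's construction (oriented along the vector from the motif's first point to its last); the tagging step, however, replaces the paper's K-means clustering with a simple width-to-length ratio threshold purely for illustration, and the ratio value is a hypothetical parameter.

```python
import math

def bounding_box(points):
    """Length and width of a motif's bounding box, oriented along the
    representative vector from the motif's first point to its last point."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    ax, ay = x1 - x0, y1 - y0
    norm = math.hypot(ax, ay) or 1.0
    ux, uy = ax / norm, ay / norm                 # unit vector along the motif
    along = [(px - x0) * ux + (py - y0) * uy for px, py in points]
    across = [-(px - x0) * uy + (py - y0) * ux for px, py in points]
    return max(along) - min(along), max(across) - min(across)

def tag_shape(points, ratio=0.3):
    """Stand-in for the K-means step: a box much longer than wide is tagged
    straight (S); otherwise curved (C)."""
    length, width = bounding_box(points)
    return "S" if width <= ratio * max(length, 1e-9) else "C"

print(tag_shape([(0, 0), (1, 0), (2, 0), (3, 0)]))                # S
print(tag_shape([(0, 0), (1, 1), (2, 1.5), (2.5, 0.5), (2, 0)]))  # C
```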
Feature Extraction
We define features to capture spatial (i.e., user orientations/directions as opposed to absolute positions) and temporal
(step duration, etc.) statistics embedded in the SMs or CMs of different groups. A third feature group summarizes
sequence information from moving patterns using pattern dictionaries. Table 1 lists the features used.
Table 1. Features used in this work.

Spatial:
  - Step-direction features: mean and variance of changes in step direction in curved motifs.
  - Motif-direction features: mean and variance of changes in motif direction in curved motifs.
Temporal:
  - Straight-motif features: total, mean, variance, maximum, and minimum of all motif durations in straight motifs.
  - Curved-motif features: total, mean, variance, maximum, and minimum of all motif durations in curved motifs.
  - Nth-percentile features: mean, variance, maximum, and minimum of total step durations of the first or last Nth-percentile data contained in each four-motif window.
Pattern dictionary:
  - Pattern dictionary size, maximum and minimum occurrence counts, and variance and mean of all occurrence counts.
Spatial features: Store layout designs generally fall into one of three general patterns: grid, racetrack, or free-form
[9]. Each of these designs dictates the way customers move throughout the store. Thus, customer movement
trajectories usually occur as specific traffic patterns with respect to changes in direction. The step-direction and
motif-direction feature sets are calculated to statistically represent the microscopic (i.e., single-step) and
macroscopic (i.e., whole-motif) changes of direction evidenced by specific turning behaviors in the training CMs.
define the motif’s macroscopic direction as the direction of the last step in the motif. In this work, we extract four
features: the mean and variance of both micro- and macroscopic changes of direction in CMs.
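The four spatial features can be computed as below; the angle-wrapping convention and dictionary layout are illustrative assumptions.

```python
import statistics

def direction_change_features(curved_motifs):
    """Mean/variance of microscopic (step-to-step) and macroscopic
    (motif-to-motif) direction changes over a group's curved motifs.
    Motif direction = direction of the motif's last step; angles wrap at 360."""
    def delta(a, b):                              # signed smallest angle a -> b
        d = (b - a) % 360.0
        return d - 360.0 if d > 180.0 else d
    micro = [delta(m[i - 1]["d"], m[i]["d"])
             for m in curved_motifs for i in range(1, len(m))]
    macro_dirs = [m[-1]["d"] for m in curved_motifs]
    macro = [delta(a, b) for a, b in zip(macro_dirs, macro_dirs[1:])]
    feats = {}
    for name, vals in (("micro", micro), ("macro", macro)):
        feats[name + "_mean"] = statistics.mean(vals) if vals else 0.0
        feats[name + "_var"] = statistics.pvariance(vals) if len(vals) > 1 else 0.0
    return feats

# Two curved motifs, each turning 90 degrees per step.
feats = direction_change_features([[{"d": 0.0}, {"d": 90.0}],
                                   [{"d": 90.0}, {"d": 180.0}]])
print(feats["micro_mean"], feats["macro_mean"])  # 90.0 90.0
```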
Temporal features: Millonig et al. [8] analyzed customer behaviors in shopping centers by shadowing and
interviewing people. They found a group of customers who exhibit similar step-duration features due to specific
shopping behaviors: for instance, customers tend to stop to evaluate merchandise in stores. Similarly, representative
behaviors also exist in non-shopping places: people constantly move between different rooms or positions, and then
sit for a long time, presumably occupied with personal affairs. These observations lead us to believe that such
statistical step-duration features originate from behaviors such as walking straight (e.g., in aisles) or turning (e.g.,
between aisles). The macroscopic motif duration is defined as the total time spent in an individual motif, i.e., the
summation of ti for all steps i contained in the motif. Thus, we define three sets of temporal features that take into account
the statistical characteristics of different motifs: (1) the straight-motif, (2) the curved-motif, and (3) the Nth-percentile
feature sets. The five straight-motif features represent people walking straight in places, and include the total, mean,
variance, maximum, and minimum of all SM durations. The five curved-motif temporal features represent people
making turns in places, and include the total, mean, variance, maximum, and minimum of all CM durations. The 50
Nth-percentile features capture dynamic behaviors over a larger geographical extent: the mean, variance, maximum,
and minimum of total step durations of the first or last Nth-percentile data contained in each four-motif window
(FW), where N = 10, 20, 30, 40, and 50.
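The straight-motif and curved-motif duration statistics can be sketched as follows; the motif/shape data layout is an illustrative assumption.

```python
import statistics

def motif_duration_features(motifs, shapes):
    """Total/mean/variance/max/min of motif durations, split by shape tag
    ('S' straight, 'C' curved), mirroring the straight-/curved-motif sets."""
    feats = {}
    for tag in ("S", "C"):
        durs = [sum(step["t"] for step in m)        # motif duration = sum of ti
                for m, s in zip(motifs, shapes) if s == tag]
        if durs:
            feats[tag] = {
                "total": sum(durs), "mean": statistics.mean(durs),
                "var": statistics.pvariance(durs) if len(durs) > 1 else 0.0,
                "max": max(durs), "min": min(durs),
            }
    return feats

# Three motifs: two straight (durations 1.0 and 3.0 s), one curved (2.0 s).
motifs = [[{"t": 0.5}, {"t": 0.5}], [{"t": 1.5}, {"t": 1.5}], [{"t": 1.0}, {"t": 1.0}]]
feats = motif_duration_features(motifs, ["S", "S", "C"])
print(feats["S"]["total"], feats["S"]["mean"])  # 4.0 2.0
```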
Pattern dictionary: Due to space constraints in a physical store, shopper trajectories are characterized by smooth U-turns and nearly straight lines, which represent the random walk between straight aisles among shelves. To assess
these repetitive patterns of straight and curved motifs, we propose the following dictionary analysis illustrated in Fig.
1. First, the sequence of straight or curved motifs in a single motif group is encoded as a character stream in which
each character represents either a straight (S) or curved (C) motif shape. In the second step, a sliding window (with
window width w) scans the character stream for repetitive motif shape patterns. A pattern is defined as a sequence of
successive characters. For example, with a sliding window of size four, the pattern “SCCS” translates to straight (S),
curved (C), curved (C), straight (S). During the scanning process, each new pattern is added to the
corresponding dictionary Dl. For a pattern already in the dictionary, its occurrence count is simply incremented by
one. We analyzed motif shape patterns from three to six motifs long (w = 3 to 6). The algorithm thus yields several
dictionaries, each of which contains the occurrence counts for all patterns found for a particular pattern length. From
each of these four dictionaries, we extracted five features for a total of 20 features: dictionary size, maximum and
minimum occurrence counts, and the variance and mean of all of the occurrence counts.
Fig. 1. Example pattern dictionary encoding for motif shape patterns, given w = 4.
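The dictionary construction and the five per-dictionary features can be sketched with a `Counter` over sliding windows; the function names are illustrative.

```python
from collections import Counter

def pattern_dictionaries(shape_string, widths=(3, 4, 5, 6)):
    """Slide a window of width w over the S/C character stream and count each
    pattern's occurrences; one dictionary per width (w = 3 to 6)."""
    return {w: Counter(shape_string[i:i + w]
                       for i in range(len(shape_string) - w + 1))
            for w in widths}

def dictionary_features(counts):
    """Five features per dictionary: size, max/min counts, mean and variance."""
    vals = list(counts.values())
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return {"size": len(counts), "max": max(vals), "min": min(vals),
            "mean": mean, "var": var}

# Seven motifs encoded as a character stream; the w = 4 windows are
# SCCS, CCSC, CSCC, SCCS, so "SCCS" occurs twice.
dicts = pattern_dictionaries("SCCSCCS")
feats = dictionary_features(dicts[4])
print(dicts[4]["SCCS"], feats["size"])  # 2 3
```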
Feature-Based Classification
The goal of feature-based classification is to classify all motif groups as shopping or non-shopping. This
classification scheme consists of three steps: (1) data preprocessing, (2) feature selection, and (3) classification. In
the first step, data preprocessing, we quantize the original direction values (0 to 359 degrees) into several discrete
values (multiples of 45 degrees) to smooth out sensory noise from the digital compass. We also normalize each feature.
The second step applies a feature selection algorithm based on F-score plus SVM, as described by Chen et al.
[12]. Performing feature selection on the training set yields a feature subset of arbitrary size that best
characterizes the statistical properties of the target classes based on the ground-truth labels. Finally, the
classification step uses a support vector machine [11] to classify all of the motif groups as either shopping or non-shopping. Using the selected feature subset, the SVM is trained on the training set to learn to predict the class label
of each motif group in the test set.
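The preprocessing step can be sketched as below; the SVM training itself (the paper uses LIBSVM [11]) is omitted, and min-max normalization is an assumption since the paper does not name its normalization scheme.

```python
def quantize_direction(deg):
    """Quantize a compass reading (0-359 degrees) to the nearest multiple of
    45 degrees, smoothing digital-compass noise (preprocessing step)."""
    return int(round(deg / 45.0)) % 8 * 45

def normalize_features(rows):
    """Min-max normalize each feature column to [0, 1] before SVM training."""
    cols = list(zip(*rows))
    spans = [(min(c), (max(c) - min(c)) or 1.0) for c in cols]
    return [[(v - lo) / span for v, (lo, span) in zip(row, spans)]
            for row in rows]

print(quantize_direction(93))   # 90
print(quantize_direction(352))  # 0 (wraps past north)
print(normalize_features([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]]))
```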
Majority Voting
We used a label correction scheme to further improve classification accuracy. Since a place either is or is not a store,
the labels of adjacent motif groups in that place should be consistently shopping or non-shopping. Because this label
consensus should hold in all places, we added a majority vote that rectifies the labels of misclassified motif groups with
the label of the majority of nearby motif groups. In the event of a tie, the original label was kept.
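The voting scheme, applied to all motif-group labels within one place, reduces to the following; the function name and label strings are illustrative.

```python
from collections import Counter

def majority_vote(labels):
    """Rectify misclassified labels: every motif group in a place takes the
    place's majority label; on a tie the original labels are kept."""
    top = Counter(labels).most_common(2)
    if len(top) > 1 and top[0][1] == top[1][1]:
        return list(labels)                   # tie: keep the original labels
    return [top[0][0]] * len(labels)          # rectify to the majority label

print(majority_vote(["shopping", "non-shopping", "shopping"]))
# ['shopping', 'shopping', 'shopping']
```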
VI. EXPERIMENT
This section describes the experiments in this study to demonstrate the effectiveness of our in-place trajectory
classifier. We first describe the data collection procedure: how we gathered a representative set of daily shopping
and non-shopping data. Then, we provide performance metrics showing the accuracy of our classifier.
Data Collection
We recruited 84 participants (44 females) and collected their shopping and/or non-shopping data. The ages of these
participants ranged from 18 to 50 years (with a mean age of 24). Their occupations included housekeepers, insurance
salespeople, administrative assistants, students, etc.
Shopping data collection. To cover shopping movement across different types of stores, we selected representative
stores from each of four major types of shopping centers. These four types—supermarkets, outlet shops, retail
warehouses, and department stores/shopping malls—were categorized by the Department of Commerce in Taiwan
to represent the majority of overall retail sales [10]. Carrefour was chosen as the supermarket, Leeco Outlet as the
outlet shop, Costco and IKEA as the retail warehouses, and SOGO and Q Square as the department stores/shopping
malls. We recruited participants who were planning to go shopping at any of these six stores and collected their
shopping movement trajectories. Prior to entering the stores, we asked each participant to carry an HTC Magic
mobile phone running our data collection program in the background. While they shopped, the data collection
program logged WiFi signals (BSSID), accelerometer and digital compass readings, and the timestamps for this
information. When each participant exited the store, they returned the HTC Magic phone to us, and we extracted the
shopper movement trajectories for analysis. These collected data were labeled as both in-place and shopping data.
Table 2 summarizes the collected shopping data.
Non-shopping data collection. Non-shopping data was collected from 26 participants who performed their everyday
non-shopping activities from 2 p.m. to 9 p.m. Most non-shopping data was collected on weekdays, resulting in
movement trajectory records of participants’ afternoon office activities at their workplaces (2 p.m. ~ 6 p.m.),
commutes from office to home, and household activities at home (6 p.m. ~ 9 p.m.). Since non-shopping data
includes commuting activities, we used the place discovery algorithm to locate in-place trajectories and labeled all
of the in-place trajectories as non-shopping data. We collected 630 hours of shopping and non-shopping data,
including 86 hours of shopping activities and 545 hours of non-shopping activities. The overall step count for the
in-place non-shopping data was 223,343 steps, which is about twice the step count of the shopping data (122,193 steps).
Table 2. Collected shopping data

Store categories      Stores                   # of subjects   # of hours
Supermarkets          Carrefour                15              22.92
Outlet shops          Leeco Outlet¹            21              30.18
Retail warehouses     Costco, IKEA             10              18.53
Department stores     SOGO², Q Square³         12              13.96
Training and testing data. The collected data was divided into training and testing data. We used the training
data to train our classifier and the testing data to measure the classifier's performance in predicting whether a
given movement trajectory (from the testing data) is shopping or non-shopping. The entire dataset was split into ten
approximately equal-sized folds: nine folds for the training data and one fold for the testing data.
Evaluation Metrics
After training the classifier on the training set, we measured the classifier performance in labeling various
trajectories from the test set as shopping or non-shopping.
 Shopping classifier accuracy: We used the standard F1 metric to measure prediction accuracy. The F1 metric
is defined as the harmonic mean of precision and recall. Precision is the number of correctly classified
shopping (non-shopping) motif groups, Gs, over the total number of Gs classified as shopping (non-shopping).
Recall is the number of correctly classified shopping (non-shopping) Gs over the total number of actual
shopping (non-shopping) Gs.
 Shopping-time accuracy: Time accuracy measures the accuracy of the final aggregated shopping time for
shopping activities. The false positive rate measures the percentage of falsely counted shopping time arising
from misclassified non-shopping motif groups.
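The precision/recall/F1 computation described above follows the standard definitions and can be written directly; the function name is illustrative.

```python
def precision_recall_f1(true_labels, pred_labels, positive="shopping"):
    """Standard precision, recall, and F1 (their harmonic mean) for one class."""
    pairs = list(zip(true_labels, pred_labels))
    tp = sum(1 for t, p in pairs if t == p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

true = ["shopping", "shopping", "non-shopping", "non-shopping"]
pred = ["shopping", "non-shopping", "non-shopping", "shopping"]
print(precision_recall_f1(true, pred))  # (0.5, 0.5, 0.5)
```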
Results
We conducted experiments to test the classification accuracy of our classifier. We used the leave-one-set-out
procedure to evaluate the classifier, i.e., we randomly distributed all of the data into ten folds, with one fold serving
as testing data and the other nine folds serving as training data. Table 3 summarizes the average precision, recall and
corresponding average F1 score.
Table 3. Average classification accuracy for shopping/non-shopping activities

Activity        Precision   Recall   F1 score
Shopping        0.91        0.86     0.88
Non-shopping    0.92        0.94     0.93
¹ http://www.leecooutlet.com.tw/
² http://www.sogo.com.tw/
³ http://www.qsquare.com.tw/
The average F1 scores for shopping and non-shopping activities were 0.88 and 0.93, respectively. The average
precision, recall, and F1 score for non-shopping activities were slightly higher than those for shopping activities.
Since people spend a substantial amount of time on non-shopping activities at home or at the office in their daily
lives, the higher number of motif groups, and thus votes, generated from non-shopping activities improves
classification accuracy under majority voting. The shopping-time accuracy is 0.88 and the false positive rate for
non-shopping time is a small 0.06, demonstrating that the proposed classifier can indeed monitor shopping time
correctly.
A second set of experiments demonstrates how the number of steps recorded in a place affects performance. We
expect the accuracy of majority voting to be correlated with the number of votes, which in turn depends on the
number of motif groups and the number of steps in a place. Figure 2 shows the average F1 scores measured by
taking only the first x steps in each place to emulate the impact of the number of steps. The red (blue) dotted line
indicates the F1 score of 0.88 (0.93) obtained using all shopping (non-shopping) steps. As shown in Fig. 2, the
average F1 score improves with an increasing number of steps, while the marginal improvement decreases as the
number of steps increases.
Fig. 2. Average F1 score measured by taking the first x steps in each place. The two dashed lines are reference
results for shopping and non-shopping activities.
VII. CONCLUSION AND FUTURE WORK
This study designs, implements, and evaluates a phone-based shopping tracking system to monitor customer
shopping times at physical stores. The proposed method first uses a place detection filter [1] to differentiate
in-places from out-places and then recognizes in-place motifs constructed from sensor signals recorded on mobile
phones. The system transforms the problem of monitoring a person's shopping time into the classification problem
of labeling motif groups as shopping or non-shopping. It then feeds these in-place motif groups to the in-place
shopping classifier to identify in-place shopping movement trajectories based on spatial and temporal features
embedded in each motif. Since the motif groups in a place should exhibit a consensus label, a majority voting
scheme corrects misclassified motif groups whose labels differ from those of other motif groups in the same place.
To validate the accuracy of our classifier, we collected shopping/non-shopping data from participants recruited via
the Internet. Results from 630 hours of real data show that our classifier labeled motif groups with average F1
scores of 0.88 for shopping activities and 0.93 for non-shopping activities.
In future work, we will continue to improve the accuracy of the proposed system by adding new spatial and
temporal features to the shopping motif classification. We also plan to conduct a large-scale experiment with a large
number of users and traces to further validate the effectiveness of the proposed method in monitoring shopping
behavior in real life.
REFERENCES
[1] D. H. Kim, J. Hightower, R. Govindan, D. Estrin, “Discovering Semantically Meaningful Places from Pervasive RF-Beacons,” Proc. of Int’l Conf. on Ubiquitous Computing (Ubicomp 2009), pp. 21-30.
[2] X. Li, J. Han, S. Kim, H. Gonzalez, “ROAM: Rule- and Motif-based Anomaly Detection in Massive Moving Object Data Sets,” Proc. of SIAM Int’l Conf. on Data Mining (SDM 2007), SIAM, pp. 273-284.
[3] P. Underhill, “Why We Buy: The Science of Shopping,” Simon & Schuster.
[4] A. Meschtscherjakov, W. Reitberger, M. Lankes, “Enhanced Shopping: A Dynamic Map in a Retail Store,” Proc. of Int’l Conf. on Ubiquitous Computing (Ubicomp 2008), pp. 336-339.
[5] D. M. Lewison, “Retailing,” 6th Edition, Prentice Hall College Div.
[6] C. Zhou, D. Frankowski, P. Ludford, S. Shekhar, L. Terveen, “Discovering Personally Meaningful Places: An Interactive Clustering Approach,” ACM Trans. Info. Syst., vol. 25, no. 3, 2007.
[7] T. Yamabe, V. Lehdonvirta, H. Ito, H. Soma, H. Kimura, T. Nakajima, “Applying Pervasive Technologies to Create Economic Incentives that Alter Consumer Behavior,” Proc. of Int’l Conf. on Ubiquitous Computing (Ubicomp 2009), pp. 175-184.
[8] A. Millonig, G. Gartner, “Ways of Walking – Developing a Pedestrian Typology for Personalised Mobile Information Systems,” Proc. of Int’l Symposium on LBS & TeleCartography (LBS 2008), pp. 26-28.
[9] M. Levy, B. A. Weitz, “Retailing Management,” 7th Edition, McGraw-Hill Higher Education.
[10] Department of Commerce, Ministry of Economic Affairs, R.O.C., “Manual for Development and Operation Management of Big-Sized Shopping Centers.”
[11] C.-C. Chang, C.-J. Lin, “LIBSVM: A Library for Support Vector Machines,” http://www.csie.ntu.edu.tw/~cjlin/libsvm
[12] Y.-W. Chen, C.-J. Lin, “Combining SVMs with Various Feature Selection Strategies,” Feature Extraction, Studies in Fuzziness and Soft Computing, pp. 315-324.
[13] “Retail Holiday Sales Improve after Dismal 2008,” http://www.reuters.com/article/idUSTRE5BR0HP20091228/
[14] Shopping Behaviour Xplained Ltd., http://www.sbxl.com/
[15] C. W. Park, E. S. Iyer, D. C. Smith, “The Effects of Situational Factors on In-Store Grocery Shopping Behavior: The Role of Store Environment and Time Available for Shopping,” Journal of Consumer Research, vol. 15, no. 4, 1989, pp. 422-433.