Measuring downstream supply
chain performance
Horatiu Cirtita
Department of Management, Aleman Consulting, Bucharest, Romania, and
Daniel A. Glaser-Segura
Department of Management, Texas A&M University – San Antonio,
San Antonio, Texas, USA
Abstract
Purpose – Downstream supply chain (DSC) performance metrics provide a standard framework to
assess internal performance. DSC performance metrics can also help balance performance tradeoffs
among firms. The purpose of this paper is to develop a survey instrument to determine whether
observed performance metrics correspond to the literature and to determine if performance metric
systems are used to improve inter-firm performance.
Design/methodology/approach – The survey instrument used in this study was based on SCOR
performance attributes consisting of: delivery reliability, responsiveness, flexibility, costs, and asset
management efficiency. The survey was completed by 73 members of the Council of Supply Chain
Management Professionals (CSCMP) consisting of high-level managers representing US companies.
Findings – One factor explained the underlying one-dimensional structure of the surveyed supply-chain operations reference (SCOR) model as an internal metrics system, but the authors did not find convincing support for the notion that external performance metrics are used to coordinate external, DSC inter-firm activities.
Research limitations/implications – A larger sample size would have allowed more insight into
the inter-relationships of the performance attribute variables. Moreover, the sampling plan limited
generalization beyond US firms.
Practical implications – Firms used a standardized performance metric system and did not “pick” among metrics. In addition, firms used metrics independently of the decision to coordinate DSC activities. Perhaps firms first learn to coordinate internal performance and later extend metrics to DSC members.
Originality/value – The paper describes one of the few empirical studies of the SCOR model in US
industry.
Keywords United States of America, Supply chain management, Performance measures,
Performance metrics, Supply chain operations reference model, Surveys
Paper type Research paper
1. Introduction
In a downstream supply chain (DSC) consisting of manufacturers, transportation,
distribution, wholesale, retail, and end customers, members expect timely, reliable and
quality delivery of the right amount of products at low cost. A supply chain is broadly
defined here as all of the linked individual organizations that, by direct or indirect
means, lead to the delivery of a service or a good to a customer (Chopra and Meindl,
2004). We chose to focus on DSC activities as these involve supplier and end-customer
interaction as well as the customer’s decision to buy and stay loyal to a specific
company. Previous research has concentrated largely on upstream supply chain
performance with less attention on downstream activities. In short, we considered the
need to address this under-researched field.
Supply chain performance metrics provide organizations with a standard
framework to assess supply chain performance including internal and external firm
links (Huang et al., 2004; Harrison and New, 2002). The use of internal linkage
performance metrics leads to elimination of non-value added activities, decreased
variance of orders, swifter product flows, more efficient use of time, material and
human resources, and reduction of the bullwhip effect (Frohlich and Westbrook, 2001;
Yu et al., 2001). The use of external linkage performance metrics leads to the creation of
end-customer value through closer integration activities and communication with
other member firms along the supply chain (Bowersox et al., 2000; Croxton et al., 2001).
DSC performance must also balance tradeoffs among firms and requires a common
performance metric for all DSC members (McCrea, 2006). The use of a DSC
performance metric should be considered a top management priority by those who
wish to support the strategy of firms along the supply chain rather than acting in
relative isolation (Johnson and Pyke, 2000; McCrea, 2006). Performance metrics offer a
view of the DSC cost structure and allow opportunities for improvement. They also
keep track of service levels which allows for further development of supply efficiencies.
Finally, using metrics and communicating results allows members of a supply chain to compete at a higher level, and to attract more customers, than supply chains that coordinate inter-firm activity to a lesser degree.
Coordinating performance metrics along the supply chain has been enhanced by
newer technologies, particularly comprehensive information systems involving
enterprise resource planning systems coupled to the internet (Wisner et al., 2008).
Customers along the DSC have greater visibility of the order cycle and can use
performance metrics to improve inter-firm coordination. It is the intent of this
exploratory study to provide empirical insight into these positions through a survey of
the membership of the Council of Supply Chain Management Professionals (CSCMP).
We will analyze whether the metric systems used in DSC firms correspond to the
metric systems that are discussed in the practitioner and academic literature and
determine if the DSC performance metric systems are used to improve inter-firm
performance among supply chain members.
2. Correspondence of DSC performance metrics: practice and academic
literature
We purposely chose the term “performance metrics” as the standard of evaluation, as opposed to the terms “performance measurement” and “performance measure”. The term “performance measure” carries a connotational definition that is vague, historical, and diffuse (Neely, 1999). Schneiderman (1996a, b) stated that measures and metrics differ in the following way: measures consist of the broad, effectively infinite set of ways of evaluating a firm’s processes, whereas metrics are the small subset of measures actually useful for improving a company’s efforts.
We also adopted the supply-chain operations reference (SCOR) model as a common
system of key DSC performance metric activities. Stewart (1995) presented the first
framework about the processes encompassed in the SCOR model, which were plan,
source, make, and deliver. He stated that the following performance attributes assess
supply chain effectiveness:
. supply chain delivery performance;
. supply chain flexibility and responsiveness;
. supply chain logistics cost; and
. supply chain asset management.
These areas became key performance attributes in the SCOR model. The model is a
product of the Supply-Chain Council (SCC)[1]. Stephens (2001) presented an evolution of
the model that initially encompassed the processes presented by Stewart (1995) and
added the return process. The scope of the SCOR model includes all elements of demand
satisfaction starting with the initial demand signal (order or forecast) and finishing with
the signal of satisfying the demand (final invoice and payment).
The SCOR model, as used in our study, comprises five performance attributes, which the SCC defines as “characteristics of the supply chain that permit it to be analyzed and evaluated against other supply chains with competing strategies” (Supply-Chain Council, 2003). The model as used in this study is presented in Table I. The five attributes are:
(1) supply chain delivery reliability;
(2) supply chain responsiveness;
(3) supply chain flexibility;
(4) supply chain costs; and
(5) supply chain asset management efficiency.
Associated with each of the performance attributes are Level 1 metrics, which are objective measures by which an implementing organization can determine its success in achieving the corresponding performance attribute.
Table I. SCOR model performance attributes and associated Level 1 metrics (Source: Supply-Chain Council, 2003, p. 7)

Supply chain delivery reliability
Definition: the performance of the supply chain in delivering the correct product, to the correct place, at the correct time, in the correct condition and packaging, in the correct quantity, with the correct documentation, to the correct customer.
Level 1 metrics: delivery performance; perfect order fulfillment; line item fill rate.

Supply chain responsiveness
Definition: the velocity at which a supply chain provides products to the customer.
Level 1 metrics: order fulfillment lead time.

Supply chain flexibility
Definition: the agility of a supply chain in responding to marketplace changes to gain or maintain competitive advantage.
Level 1 metrics: supply chain response time; production flexibility.

Supply chain costs
Definition: the cost associated with operating the supply chain.
Level 1 metrics: cost of goods sold; total supply chain management costs; value-added productivity; warranty/returns processing costs.

Supply chain asset management efficiency
Definition: the effectiveness of an organization in managing assets to support demand satisfaction, including the management of all assets: fixed and working capital.
Level 1 metrics: cash-to-cash cycle time; inventory days of supply; asset turns.
The supply chain delivery reliability performance attribute is measured by three Level 1 SCOR metrics: delivery performance, line item fill rate, and perfect order fulfillment. The supply chain responsiveness attribute is measured by one Level 1 SCOR metric, order fulfillment lead time. The supply chain flexibility attribute is measured by two Level 1 SCOR metrics: supply chain response time and production flexibility. The supply chain cost attribute is measured by four Level 1 SCOR metrics: cost of goods sold, total supply chain management costs, value-added productivity, and warranty/returns processing costs. The supply chain asset management efficiency attribute is measured by three Level 1 SCOR metrics: cash-to-cash cycle time, inventory days of supply, and asset turns.
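For readers who want the attribute-to-metric structure at a glance, the following sketch transcribes Table I as a simple mapping. The Python representation and key names are ours, offered only as an illustrative transcription; they are not part of the SCOR specification.

```python
# Table I transcribed as a mapping from SCOR performance attributes to
# their Level 1 metrics. Strings follow Table I; the data structure
# itself is only an illustrative transcription, not SCOR-defined.
SCOR_LEVEL1_METRICS = {
    "supply chain delivery reliability": [
        "delivery performance", "perfect order fulfillment", "line item fill rate"],
    "supply chain responsiveness": ["order fulfillment lead time"],
    "supply chain flexibility": ["supply chain response time", "production flexibility"],
    "supply chain costs": [
        "cost of goods sold", "total supply chain management costs",
        "value-added productivity", "warranty/returns processing costs"],
    "supply chain asset management efficiency": [
        "cash-to-cash cycle time", "inventory days of supply", "asset turns"],
}

# The paragraph above walks through the same 3/1/2/4/3 metric counts:
for attribute, metrics in SCOR_LEVEL1_METRICS.items():
    print(f"{attribute}: {len(metrics)} Level 1 metric(s)")
```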
The SCOR model metric system is an innovation given its standardized approach to
assessment across organizations and industry types. The performance attributes, the
top tier of the SCOR metric system, evaluate the overall strategic organizational
activities in a supply chain context. These performance attributes follow the standard recommended by Schneiderman (1996a, b), who stated that a metric system should contain no more than five top-tier metrics, given that a larger number diffuses the focus of the strategic activities. Gunasekaran et al. (2001, p. 72) echoed this position and stated, “Quite
often, companies have a large number of performance measures to which they keep
adding based on suggestions from employees and consultants, and fail to realize that
performance measurement can be better addressed using a few good metrics”.
Much of the research on the SCOR model metric system is based on modeling and
simulation research designs. For example, Huang et al. (2005) and Huang and Keskar
(2007) proposed multiple criteria decision making to select suppliers and to optimize
the supply chain using performance metrics from the SCOR model. Rabelo et al. (2005)
used a simulation for the SCOR model and focused on three major units:
(1) strategic business unit one for manufacturing;
(2) strategic business unit two for service; and
(3) customer requests for proposals and customer acquisition, loss, and recovery for customer relations management.
Röder and Tibken (2006) created a model to evaluate different configurations of a
supply chain with different sets of parameters concentrating on production, inventory
and transportation material and information flows. Khoo and Yin (2003) created a
clustering design to analyze the business processes of the SCOR model from customer
orders to suppliers. Finally, Barnett and Miller (2000) developed a modeling tool,
e-SCOR, to be used for discrete event simulation for large-scale models with complex
performance parameters. In all of these, the various elements of the SCOR model metric
system worked systematically. They did not, however, provide an empirical measure
of a DSC performance metric system.
There are few empirical measures of a DSC performance metrics system, or more
specifically, of the SCOR model performance attributes. Burgess and Singh (2006)
employed a case study research design to see which factors determine supply chain
performance. Interviewing managers at 31 firms, they discovered social and political
factors including corporate governance, infrastructure, operations knowledge,
collaborative planning and architectural innovation that manifested an influence on
supply chain performance. Lockamy and McCormack (2004) in their survey of 90 firms
in 11 industry sectors measured the most used practices from plan, source, make, and
deliver decision areas related to perceived supply chain performance. They showed
that in the plan area, the demand planning process had the strongest relationship to
supply chain performance, followed by supply chain collaborative planning and
operations strategy planning team. Their survey instrument did not specify the five
SCOR model performance attributes or associated SCOR Level 1 metrics. As found by
Wang et al. (2010), few studies of the SCOR model exist in the academic literature:
H1. SCOR performance attributes, consisting of: (1) supply chain delivery
reliability, (2) supply chain responsiveness, (3) supply chain flexibility,
(4) supply chain costs, and (5) supply chain asset management efficiency, as
discussed in the literature, are consistent with empirical observations of DSC
performance metric practices.
3. Performance metrics and DSC coordination
According to Hausman (2003), the use of a DSC metric system leads to synergy of inter-firm performance among supply chain members, facilitating the measurement of total supply chain performance as opposed to isolated functional “silo” measures.
Schneiderman (1996a, b) further suggested that the top tier of a metric system should
measure both internal and external performance processes of DSC members. The
SCOR model addresses these internal and external schemas. For the SCOR model to
work among the DSC members, though, these metrics must be standardized (Lambert
and Pohlen, 2001).
Recent studies provide empirical understanding of SCOR-type performance metrics
and their relationship to DSC inter-firm performance. Gunasekaran et al. (2004)
assessed metrics based on elements of plan, source, make, and deliver as found in
Stewart (1995) and Gunasekaran et al. (2001). The results from their study provide general support for a link between the use of supply chain performance metrics and improved DSC inter-firm performance and market position. Similarly, Lockamy and McCormack
(2004) found specific relationships between the plan, source, make, and deliver
elements of SCOR and inter-firm performance.
The available research, based on broad observations, provides a mixed view of the
use of SCOR-type performance metrics and their relationship to DSC inter-firm
performance. A study by Lee and Billington (1992) observed that firms do not use
performance metrics to manage DSC inter-firm performance to a large extent and,
when they do, they assess and improve performance in a way that sub-optimizes the
supply chain as a whole. Lambert and Pohlen (2001) stated that their experience with
firms has shown no support for the view that performance metrics are used for
inter-firm coordination along the supply chain. They stated that the metrics used are
for internal purposes only. McCrea (2006) provided several cases to support the notion
that firms are using performance metrics to help manage DSC inter-firm performance.
The findings of these three works were presented with no population sample data to
back up the assertions of the respective authors.
Several simulation and modeling studies (Barnett and Miller, 2000; Khoo and Yin,
2003; Rabelo et al., 2005; Röder and Tibken, 2006; Tang et al., 2004) provide support for
the notion that DSC performance metrics systems do contribute to inter-firm
coordination. Our search of the literature did not, however, provide any rigorous
empirical findings to measure SCOR-model performance attributes and their use
among DSC members:
H2. DSC coordination is positively related to the use of SCOR performance
attributes, consisting of: (1) supply chain delivery reliability, (2) supply chain
responsiveness, (3) supply chain flexibility, (4) supply chain costs, and
(5) supply chain asset management efficiency.
4. Methodology
This section describes the survey instrument, the number and characteristics of the respondents, and the administration of the measurement tool.
4.1 Survey instrument
The survey instrument used in this study is the original work of the study’s authors.
The first part of the survey poses questions to determine the characteristics of the
respondents and the firms they represent. The second part of the instrument measures
performance attributes used in DSC environments. The instrument used here is based
on the literature with a particular focus on SCOR performance attributes consisting of:
. supply chain delivery reliability;
. supply chain responsiveness;
. supply chain flexibility;
. supply chain costs; and
. supply chain asset management efficiency.
Each of these performance attributes comprises four to seven items, for a combined total of 27 items. For example, for the supply chain delivery reliability attribute we used, “The ability to meet promised delivery date defined as on-time and in full shipments”. The scale items employed a seven-point Likert scale (1 = low importance and 7 = high importance). The scale items are listed in the Appendix.
To determine the relationship between a firm’s degree of DSC integration and the five performance attribute metrics listed above, we created a two-part question that asked: (1) where in the order cycle the firm initiated performance metrics and (2) the point at which the firm stopped using performance metrics. The choices for both parts ranged over (5) customer places order, (4) order receipt, (3) order processed, (2) order shipped, and (1) order received by customer. The degree of downstream supply integration was computed as the difference between the initiation and the end of gathering the metrics. The maximum possible score of 4 expressed a high degree of downstream supply integration, while the minimum score of 1 was interpreted as low integration. For example, a company initiating its DSC metrics when the customer places an order (5) and terminating at order shipment (2) would receive a degree of DSC integration score of 3, as illustrated in the sketch below. This item was treated as an independent variable in relation to the five performance metrics.
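The scoring logic can be made concrete with a short sketch. This is a minimal illustration of the computation described above, under our own hypothetical function and variable names; it is not the authors’ instrument code.

```python
# Hypothetical sketch of the degree-of-DSC-integration score described
# above. Stage codes follow the survey's order-cycle choices; names are ours.
ORDER_CYCLE_STAGES = {
    5: "customer places order",
    4: "order receipt",
    3: "order processed",
    2: "order shipped",
    1: "order received by customer",
}

def dsc_integration_score(initiation: int, termination: int) -> int:
    """Score = initiation stage minus termination stage of metric gathering."""
    if initiation not in ORDER_CYCLE_STAGES or termination not in ORDER_CYCLE_STAGES:
        raise ValueError("stage codes must be integers 1-5")
    if termination > initiation:
        raise ValueError("metric gathering cannot end before it begins")
    return initiation - termination

# Example from the text: initiation at (5) 'customer places order',
# termination at (2) 'order shipped' -> integration score of 3.
print(dsc_integration_score(5, 2))  # prints 3
```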
4.2 Subjects and application of the measurement tool
For this study we surveyed 73 members of the CSCMP located in the USA. A majority of the CSCMP membership is at the director level or above, and these individuals are responsible for formulating company strategy (www.cscmp.org). It is this type of respondent who is expected to have in-depth knowledge of organizational practices (Sackett and Larson, 1990). Respondents were asked to provide information on their understanding of their firm’s performance metric practices and other relevant organizational activities. The firms they represented are Fortune 500 firms and innovators of modern industrial practices.
The sample size for the original research design was based on a power analysis, as suggested by Cohen (1988). A power analysis helps the researcher avoid committing a Type II error, that is, failing to detect an effect when the alternative hypothesis is true (Mazen et al., 1987). The survey was administered via a common online survey service provider.
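As a hedged illustration of the kind of a priori power calculation Cohen (1988) describes, the sketch below computes the approximate sample size needed to detect a correlation, using Fisher’s z transformation. The effect size, alpha, and power values are conventional illustrations of ours, not the figures from the study’s original design.

```python
# Sketch of an a priori power calculation for detecting a correlation,
# via Fisher's z approximation. Inputs are illustrative conventions
# (medium effect r = 0.3, alpha = 0.05, power = 0.80), not the study's.
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n for a two-tailed test of H0: rho = 0 against rho = r."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the two-tailed test
    z_beta = norm.ppf(power)           # quantile giving the desired power
    fisher_z = atanh(r)                # 0.5 * ln((1 + r) / (1 - r))
    return ceil(((z_alpha + z_beta) / fisher_z) ** 2) + 3

print(n_for_correlation(0.3))  # ~85 respondents to detect a medium effect
```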
5. Analysis
The first data analysis procedure involved measures of the sample population’s
demographic attributes. The second procedure consisted of construct validation
analysis based on principal component analyses and internal reliabilities. The third
procedure provided measures of descriptive statistics including means and
correlations to test H1 and H2.
5.1 Measurement of demographic attributes
Demographic attributes of the sample population were measured and are exhibited in
Table II. The average respondent was a 47-year-old male (88 percent) with a master’s
degree (60.4 percent) who was largely responsible for high-level management of supply
chain activities. The firms they represented employed an average of 3,010 employees.
We also compared respondents’ demographic attributes with known values for the population. The comparison shows similarities to the CSCMP population, with some minor differences; the average number of employees at the corresponding locations could not be compared because these data were not available in CSCMP literature. The
respondents are represented to a larger degree in the manager and director ranks than
the CSCMP population. These respondents are more directly responsible for the
planning and implementation of operational-level metric initiatives. The CSCMP
population, however, is also comprised of international non-US-based members
(11 percent) who tend to be weighted toward senior executive (e.g. CEO) males, with
advanced graduate degrees (e.g. 15 percent of the international population possesses a
doctoral degree) (http://cscmp.org/downloads/public/press/demographics.pdf).
5.2 Hypotheses tests
A construct validity procedure based on factor analysis and internal reliability was
used to test H1. A construct is a scientific concept described in abstract terms that
cannot be measured directly. Factor analysis and internal reliability were used to
measure the unidimensionality and reliability of the variables in relation to the
construct described in the literature (Venkatraman and Grant, 1986).
The first step involved a principal component analysis of the 27 items used to measure the five DSC performance metrics. The initial solution produced eight factors with eigenvalues greater than or equal to one (Kaiser, 1960). Eigenvalues, in this context, are the latent roots for a group of survey questions.
Table II. Demographic attributes of the CSCMP total population and the study sample

Attribute                                   CSCMP population    Sample
Individual
Age (years, average)                        45.1 (a)            46.9
Gender (%)
  Male                                      82                  87.7
  Female                                    18                  12.3
Educational level (%)
  High school diploma                       1.0                 2.7
  Some university (community college)       9.7                 6.8
  Bachelor's degree                         36.2                31.5
  Some graduate work                        8.5                 N/A
  Master's degree                           34.1                49.3
  Advanced graduate degree/doctorate        6.6                 9.6
Position in organization (%)
  CEO                                       5.6                 0.0
  President                                 5.8                 1.3
  Corporate Officer                         5.5                 0.0
  Vice President                            21.5                9.6
  Director                                  29.9                35.6
  Manager                                   26.4                43.8
  Supervisor                                1.3                 4.1
  Staff Specialist                          3.7                 5.5
  Retired                                   0.3                 0.0
Organizational
Number of employees at location surveyed    (b)                 3,010

Notes: (a) source of CSCMP average age is Richey and Autry (2009); all other CSCMP data are from the CSCMP web site, http://cscmp.org/downloads/public/press/demographics.pdf (accessed 10 August 2010); (b) data for the average number of employees at each location were not available in the literature
Under the broad analytical technique of factor analysis, an attempt is made to group
variables according to their hypothesized factors. Eigenvalues greater than or equal to
one are used to determine whether the hypothetical factors exist (Hair et al., 1992).
The survey items, however, did not load on the five performance attributes of the SCOR model. Instead, except for two items, they all loaded on one large factor. To verify the unidimensionality of the data, we also used a Scree test. According to Cattell (1966), the Scree test is used to determine the number of factors in the data (scree is a geological term for the rubble at the bottom of a cliff). In this test, the point at which the degree of difference between factors “levels off” determines the number of valid factors. The Scree test, as shown in Figure 1, provided a graphic solution in which one factor explained the underlying one-dimensional structure of the survey items. No further factor analytical tests were conducted.

[Figure 1. Scree plot of the principal components analysis: eigenvalues by component number]
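The extraction-and-retention procedure used here, Kaiser’s eigenvalue-greater-than-one rule plus a scree inspection, can be sketched as follows. The 73 x 27 response matrix below is randomly generated stand-in data of ours, not the study’s dataset, so its eigenvalue pattern will not reproduce Figure 1.

```python
# Illustration of the Kaiser (1960) criterion and Cattell (1966) scree
# inspection. 'responses' is hypothetical stand-in data (73 respondents
# x 27 seven-point Likert items), not the study's survey data.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(73, 27)).astype(float)

# Principal components of standardized items = eigenstructure of the
# item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain components with eigenvalue >= 1.
print("components retained by Kaiser rule:", int((eigenvalues >= 1.0).sum()))

# Scree inspection: print the leading eigenvalues and look for the point
# where the drop between successive components levels off.
for i, ev in enumerate(eigenvalues[:8], start=1):
    print(f"component {i}: eigenvalue = {ev:.2f}")
```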
The second part of the procedure measured the internal reliabilities of the hypothesized scale items. These were evaluated using Cronbach’s alpha (α), which examines the extent to which a survey item correlates with the other items measuring the same construct and can be regarded as the average correlation of the survey items (Mentzer and Flint, 1997). The reliability analysis of the five DSC performance metrics yielded alpha measures equal to or above 0.80, except for the supply chain delivery reliability variable, which resulted in an alpha of 0.66 (Table III). A minimum alpha of 0.70 is used by convention (Nunnally and Bernstein, 1994), with a lower cutoff of 0.50 acceptable when dealing with new and exploratory research (Nunnally, 1967).
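For reference, Cronbach’s alpha can be computed from an item-response matrix as in the sketch below. The function is a standard textbook formulation written by us, and the 'items' argument is a hypothetical stand-in for one attribute’s survey columns.

```python
# Standard formulation of Cronbach's alpha for one scale. 'items' is a
# hypothetical (respondents x items) array of Likert responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Usage: alpha for a five-item scale answered by 73 respondents.
rng = np.random.default_rng(1)
scale = rng.integers(1, 8, size=(73, 5)).astype(float)
print(round(cronbach_alpha(scale), 2))
```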
A cursory reading of the means of the Likert-type scales (1 = not important, 7 = very important) indicated that respondents agreed on the importance of DSC performance metric practices, as seen in Table III. The respondents rated supply chain delivery reliability as the most important (mean = 6.44) of the five performance attributes, followed by supply chain flexibility, supply chain asset management efficiency, and supply chain costs, with supply chain responsiveness rated least important (mean = 5.54). The standard deviations ranged from the narrowest for supply chain delivery reliability (0.53) to the widest for supply chain responsiveness (1.11). The relationship of means to standard deviations shows that as a mean approached the scale maximum (very important = 7), the standard deviation narrowed; this is a common statistical occurrence since the variable values could not exceed the upper boundary. Standard deviation here refers to the dispersion of all observations for a given variable. In addition, we found statistically significant correlations among the five performance attribute variables.

Table III. Reliabilities, descriptive, and correlation statistics of variables

Variable                        Reliability (α)  Mean  SD    (1)     (2)     (3)     (4)     (5)
(1) Coordination of DSC         -                3.34  0.89
(2) SC delivery reliability     0.66             6.44  0.53  0.012
(3) SC flexibility              0.80             6.09  0.71  0.113   0.576*
(4) SC responsiveness           0.80             5.54  1.11  0.092   0.413*  0.451*
(5) SC cost                     0.81             5.60  0.94  0.134   0.503*  0.677*  0.659*
(6) SC asset management
    efficiency                  0.80             5.96  0.89  0.046   0.472*  0.520*  0.582*  0.709*

Note: *Significant at p < 0.000
A review of correlations deterred further analysis of H2. The correlations between
the coordination of DSC and the five DSC performance attributes displayed low effect
sizes and yielded statistically insignificant results. We did not proceed with more
complex statistical analysis.
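The bivariate screening implied by this step looks like the sketch below: Pearson correlations, with significance tests, between the coordination score and each attribute scale. The arrays are randomly generated placeholders of ours, not the study’s data, so the printed coefficients will not match Table III.

```python
# Sketch of the correlation screening between DSC coordination and the
# five attribute scales. All arrays are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
scores = {name: rng.normal(size=73) for name in [
    "coordination", "delivery_reliability", "responsiveness",
    "flexibility", "costs", "asset_mgmt_efficiency"]}

for name, values in scores.items():
    if name == "coordination":
        continue
    r, p = pearsonr(scores["coordination"], values)  # effect size and p-value
    print(f"coordination vs {name}: r = {r:+.3f}, p = {p:.3f}")
```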
6. Conclusions
The findings of this study provide an added degree of understanding of DSC performance metrics and build on the construct validity work of others (Gunasekaran et al., 2004). The findings are based on a sample of 73 CSCMP professionals, of whom approximately half hold senior management to vice president positions and over half have a graduate degree and/or an advanced graduate degree. It is they who would plan and be responsible for implementing SCOR-model-type performance metric practices in the companies they represent.
The test of H1 provided partial support for the notion that SCOR-model-type performance metrics, as discussed in the literature, agree with industrial practice. Each of the five performance attribute variables contained questions directly related to SCOR Level 1 metrics. The scale we developed did not load onto five factors; instead, the items grouped into one large factor. The reliability test for each of the five
scales, however, was above the threshold as defined by Nunnally and Bernstein (1994), except for supply chain delivery reliability (α = 0.66), which was sufficient according to Nunnally (1967) for exploratory purposes.
We found that the surveyed firms are using a standardized performance metric
system. We did not see a case of picking some and ignoring other performance
attribute metrics. Organizational managers seem to have accepted the SCOR model as
a comprehensive system. We also found that companies recognized the importance of
the performance metric systems within their organizations. That is, they are exercising
the internal activities of supply chain performance metrics. All five of the supply chain performance attributes scored above the midpoint of 4 on the seven-point Likert scale (1 = low importance and 7 = high importance).
We did not find convincing support for H2. The extent of DSC coordination was not positively related to the use of DSC performance metrics in a statistically significant manner. At the same time, in another survey question, slightly over half (54 percent) of the respondents self-reported coordinating the order cycle from the moment the customer places the order to the point at which the customer receives the order. The evidence was not clear enough to claim support for this position. Our interpretation is that firms use performance metrics independently of the decision to coordinate DSC activities. Perhaps firms first learn to coordinate internal performance metrics and later extend external metrics to DSC members. Our findings partially support Johnson and Pyke (2000), who stated that model metrics are used to coordinate strategy along the supply chain, but do not go so far as to back up the assertion of Lambert and Pohlen (2001) that firms do not use performance metrics for inter-firm coordination.
Our study is constrained by several limitations. A larger sample size might have
allowed more insight into the inter-relationships of the performance attribute variables.
The findings of this study rest on one large combined scale. Considering the
exploratory nature of the study and the paucity of empirical research, we considered
this study a baseline for further study. Due to the nature of our sampling frame, we find that the use of performance metrics does occur among large, well-known US companies, but we cannot generalize to smaller firms along the supply chain, across industries, or beyond the boundaries of the USA.
Considering the paucity of empirical research on performance metrics in the literature, we see many possible opportunities for future research. We discuss here the avenues that extend our research as well as those that raise relevant questions for academics and practitioners in the field. Future research should improve the research design to aim for greater discriminant validity among the five performance attribute scales; this could be accomplished with a larger number of responses. In addition, future research should compare the use of DSC performance metrics across industrial sectors, as there is no empirical understanding in the literature of how such use differs by sector. Future research should also look into what types of firms, based on their role in the supply chain, use performance metrics and the degree of their use (Akyuz and Erkan, 2009).
In this study the respondents were largely manufacturers, the gateway to the DSC. We do not know whether warehousing, distribution, logistics, and retail firms rate DSC performance metrics as highly as manufacturers do. A larger and more focused research design would enable a study along the entire supply chain.
With new models of organizational cooperation in e-commerce environments, we
suggest research on the use of metrics to analyze the performance of information
technologies in DSC (Akyuz and Erkan, 2009). Along the same lines of measuring
performance, we strongly suggest studies looking into the relationship of the use of
performance metrics and organizational quality (Lin and Li, 2010).
Note
1. The SCC is a non-profit organization founded in 1996 and is open to organizations
interested in supply-chain management systems and practices innovation (Supply-Chain
Council, 2003, p. 1).
References
Akyuz, G.A. and Erkan, T.E. (2009), “Supply chain performance measurement: a literature
review”, International Journal of Production Research, Vol. 48 No. 17, pp. 5137-55.
Barnett, M.W. and Miller, C.J. (2000), “Analysis of the virtual enterprise using distributed supply
chain modeling and simulation: an application of e-SCOR”, Proceedings of the 32nd
Conference on Winter Simulation, Orlando, FL, 10-13 December, pp. 352-5.
Bowersox, D., Closs, D.J. and Stank, T.P. (2000), “Ten mega-trends that will revolutionize supply
chain logistics”, Journal of Business Logistics, Vol. 21 No. 2, pp. 1-16.
Burgess, K. and Singh, P.J. (2006), “A proposed integrated framework for analysing supply
chains”, Supply Chain Management: An International Journal, Vol. 11 No. 4, pp. 337-44.
Cattell, R.B. (1966), “The scree test for the number of factors”, Multivariate Behavioral Research,
Vol. 1, pp. 245-76.
Chopra, S. and Meindl, P. (2004), Supply Chain Management: Strategy, Planning and Execution,
2nd ed., Pearson Education, Upper Saddle River, NJ.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd ed., Lawrence
Erlbaum Associates, Hillsdale, NJ.
Croxton, K., Garcia-Dastugue, S., Lambert, D. and Rogers, D. (2001), “The supply chain
management processes”, The International Journal of Logistics Management, Vol. 12 No. 2,
pp. 13-36.
Frohlich, M.T. and Westbrook, R. (2001), “Arcs of integration: an international study of supply
chain strategies”, Journal of Operations Management, Vol. 19 No. 2, pp. 185-200.
Gunasekaran, A., Patel, C. and McGaughey, R. (2004), “A framework for supply chain
performance measurement”, International Journal of Production Economics, Vol. 87 No. 3,
pp. 333-47.
Gunasekaran, A., Patel, C. and Tirtiroglu, E. (2001), “Performance measures and metrics in a
supply chain environment”, International Journal of Operations & Production
Management, Vol. 21 Nos 1-2, pp. 71-87.
Hair, J. Jr, Anderson, R., Tatham, R. and Black, W. (1992), Multivariate Data Analysis with
Readings, Macmillan, New York, NY.
Harrison, A. and New, C. (2002), “The role of coherent supply chain strategy and performance
management in achieving competitive advantage: an international survey”, Journal of the
Operational Research Society, Vol. 53 No. 3, pp. 263-71.
Hausman, W.H. (2003), “Supply chain performance metrics”, in Billington, C., Harrison, T., Lee, H.
and Neale, J. (Eds), The Practice of Supply Chain Management, Kluwer, Boston, MA,
pp. 61-73.
Huang, S.H. and Keskar, H. (2007), “Comprehensive and configurable metrics
for supplier selection”, International Journal of Production Economics, Vol. 10 No. 2,
pp. 510-23.
Huang, S.H., Sheoran, S.K. and Keskar, H. (2005), “Computer-assisted supply chain configuration
based on supply chain operations reference (SCOR) model”, Computers & Industrial
Engineering, Vol. 48 No. 2, pp. 377-94.
Huang, S.H., Sheoran, S.K. and Wang, G. (2004), “A review and analysis of supply chain
operations reference (SCOR) model”, Supply Chain Management: An International Journal,
Vol. 9 No. 1, pp. 23-9.
Johnson, E.M. and Pyke, D.F. (2000), “A framework for teaching supply chain management”,
Production and Operations Management, Vol. 9 No. 1, pp. 2-18.
Kaiser, H.F. (1960), “The application of electronic computers to factor analysis”, Educational and
Psychological Measurement, Vol. 20 No. 1, pp. 141-51.
Khoo, L.P. and Yin, X.F. (2003), “An extended graph-based virtual clustering-enhanced approach
to supply chain optimisation”, The International Journal of Advanced Manufacturing
Technology, Vol. 22 No. 11, pp. 836-47.
Lambert, D.M. and Pohlen, T.L. (2001), “Supply chain metrics”, The International Journal of
Logistics Management, Vol. 12 No. 1, pp. 1-19.
Lee, H. and Billington, C. (1992), “Managing supply chain inventory: pitfalls and opportunities”,
MIT Sloan Management Review, Vol. 33 No. 3, pp. 65-73.
Lin, L.C. and Li, T.S. (2010), “An integrated framework for supply chain performance
measurement using six-sigma metrics”, Software Quality Journal, Vol. 18, pp. 387-406.
Lockamy, A. III and McCormack, K. (2004), “Linking SCOR planning practices to supply chain
performance: an exploratory study”, International Journal of Operations & Production
Management, Vol. 24 Nos 11/12, pp. 1192-218.
McCrea, B. (2006), “Metrics take center stage”, Logistics Management, Vol. 45 No. 1, pp. 39-42.
Mazen, M.A.M., Hemmasi, M. and Lewis, M.F. (1987), “Assessment of statistical power in
contemporary strategy research”, Strategic Management Journal, Vol. 8 No. 4, pp. 403-10.
Mentzer, J.T. and Flint, D.J. (1997), “Validity in logistics research”, Journal of Business Logistics,
Vol. 18 No. 1, pp. 199-216.
Neely, A. (1999), “The performance measurement revolution: why now and what next?”,
International Journal of Operations & Production Management, Vol. 19 No. 2, pp. 205-28.
Nunnally, J. (1967), Psychometric Theory, McGraw-Hill, New York, NY.
Nunnally, J. and Bernstein, I. (1994), Psychometric Theory, 3rd ed., WCB/McGraw-Hill,
New York, NY.
Rabelo, L., Eskandari, H., Shalan, T. and Helal, M. (2005), “Supporting simulation-based decision
making with the use of AHP analysis”, Proceedings of the 37th Conference on Winter
Simulation, Orlando, FL, 4-7 December, pp. 2042-51.
Richey, G.R. Jr and Autry, C.W. (2009), “Assessing interfirm collaboration/technology investment tradeoffs: the effects of technological readiness and organizational learning”, The International Journal of Logistics Management, Vol. 20 No. 1, pp. 30-56.
Röder, A. and Tibken, B. (2006), “A methodology for modeling inter-company supply chains and
for evaluating a method of integrated product and process documentation”, European
Journal of Operational Research, Vol. 169 No. 3, pp. 1010-29.
Sackett, P.R. and Larson, J.R. Jr (1990), “Research strategies and tactics in industrial and
organizational psychology”, in Dunnette, M.D. and Hough, L.M. (Eds), Handbook of
Industrial and Organizational Psychology, Consulting Psychologists Press, Palo Alto, CA,
pp. 419-90.
Schneiderman, A.M. (1996a), “Metrics for the order fulfillment process (part 1)”, Journal of Cost
Management, Vol. 10 No. 2, pp. 30-42.
Schneiderman, A.M. (1996b), “Metrics for the order fulfillment process (part 2)”, Journal of Cost
Management, Vol. 10 No. 3, pp. 6-18.
Stephens, S. (2001), “Supply chain operations reference model version 5.0: a new tool to improve
supply chain efficiency and achieve best practice”, Information Systems Frontiers, Vol. 3
No. 4, pp. 471-6.
Stewart, G. (1995), “Supply chain performance benchmarking study reveals keys to supply chain
excellence”, Logistics Information Management, Vol. 8 No. 2, pp. 38-44.
Supply-Chain Council (2003), Supply-Chain Operations Reference-model Version 6.0,
Supply-Chain Council, Pittsburgh, PA, pp. 5-7.
Tang, N.K.H., Benton, H., Love, D., Albores, P., Ball, P. and MacBryde, J. (2004), “Developing an
enterprise simulator to support electronic supply-chain management for B2B electronic
business”, Production Planning & Control, Vol. 15 No. 6, pp. 572-83.
Venkatraman, N. and Grant, J.H. (1986), “Construct measurement in organizational strategy
research: a critique and proposal”, The Academy of Management Review, Vol. 11 No. 1,
pp. 71-87.
Wang, W.Y.C., Chan, H.K. and Pauleen, D.J. (2010), “Aligning business process reengineering in
implementing global supply chain systems by the SCOR model”, International Journal of
Production Research, Vol. 48 No. 19, pp. 5647-69.
Wisner, J., Tan, K.C. and Leong, K.G. (2008), Principles of Supply Chain Management, 2nd ed.,
South-Western College, Cincinnati, OH.
Yu, Z., Yan, H. and Cheng, T. (2001), “Benefits of information sharing with supply chain partnerships”, Industrial Management & Data Systems, Vol. 101 No. 3, pp. 114-19.
Appendix
Table AI. SCOR metrics survey

Supply chain delivery reliability
A1. The ability to meet promised delivery date defined as on-time and in full shipments
A2. The accuracy in filling orders
A3. Order cycle consistency such that there is a minimal variance in promised versus actual delivery
A4. Fill rate on base line/in stock items (percentage of order included in initial shipment)
A5. Completeness of order (percentage of line items eventually shipped complete)

Supply chain responsiveness
B1. Length of promised order cycle (lead) times (from order submission to delivery)
B2. Length of time to answer distribution partners'/customers' queries
B3. Length of time to process a received order
B4. Length of time to produce and ship a received order

Supply chain flexibility
C1. The ability to identify and supply high volumes in a "quick ship" mode
C2. The ability to automatically back order base line/in stock items under "quick ship" mode
C3. The ability to meet specific customer service needs
C4. The ability to plan, source, make and deliver unplanned orders with minimal cost penalties

Supply chain costs
D1. Cost of order management (such as purchase order, expediting, etc.)
D2. Cost of goods (such as direct cost of material and direct labor)
D3. Cost of sales, contract administration, engineering, and lab support of products
D4. Cost of carrying inventory (such as warehouse and retail inventory)
D5. Cost of transportation
D6. Cost of warranty/return processing
D7. Total supply chain management cost

Supply chain asset management efficiency
E1. Cash-to-cash cycle time
E2. Inventory days of supply
E3. Asset turns
E4. Gross margin
E5. Operating income
E6. Return on assets
E7. Earnings per share
About the authors
Dr Horatiu Cirtita is Senior Supply Chain Management Consultant at Aleman Consulting in
Bucharest, Romania. His fields of expertise are Supply Chain Management and Enterprise
Resource Planning. He holds a PhD in Economics and Management, awarded by Padua
University, Italy. Dr Horatiu Cirtita is the corresponding author and can be contacted at: horatiu.cirtita@aleman.ro
Dr Daniel A. Glaser-Segura is the Director of the International Education Office and Assistant
Professor of Management at the School of Business at Texas A&M University – San Antonio,
Texas. He has taught in the USA, Argentina, Romania and Mexico. He was awarded a traditional
Fulbright Scholarship to Romania in 2004-2005 and a Senior Fulbright Specialist Scholarship to
Romania in 2006 and 2011. Dr Glaser-Segura has conducted research on international supply
chain topics for the National Academies of Science COBASE program, International Research
Exchange Board, (IREX), the Institute for Supply Management (ISM), and the Mexican Trade
Commission. He holds a PhD in Management from the University of North Texas.