This resource is a portion taken from www.21stcenturyskills.org
[Cover]
Title: Assessment of Twenty First Century Skills: The Current Landscape.
Pre-Publication Draft.
June Two Thousand Five.
Partnership for Twenty First Century Skills.
www.21stcenturyskills.org
[End of Cover]
[Page Two]
[Note: Pages two through one hundred two have a header reading: Assessment of Twenty First Century Skills: The Current Landscape. Pre-Publication Draft - June Two Thousand Five.]
The Partnership for Twenty First Century Skills:
The Partnership for Twenty First Century Skills has emerged as the leading advocacy
organization focused on infusing twenty first century skills into education. The organization
brings together the business community, education leaders, and policymakers to define a
powerful vision for twenty first century education to ensure every child’s success as a citizen and worker in the twenty first century. The Partnership encourages schools, districts and states to
advocate for the infusion of twenty first century skills into education and provides tools and
resources to help facilitate and drive change.
Member Organizations:
* Agilent Technologies.
* American Association of School Librarians.
* American Federation of Teachers.
* Apple.
* Bell South Foundation.
* Cable in the Classroom.
* Cisco Systems, Inc.
* Corporation for Public Broadcasting.
* Dell Inc.
* E T S.
* Ford Motor Company Fund.
* Intel.
* J A Worldwide.
* Microsoft Corporation.
* National Education Association.
* Oracle Corporation.
* S A P.
* Texas Instruments Incorporated.
* Time Warner, Inc.
* Verizon.
Researched and Written By:
* Margaret Honey.
* Chad Fasca.
* Andrew Gersick.
* Ellen Mandinach.
* Suparna Sinha.
Education Development Center’s Center for Children and Technology.
96 Morton Street, 7th Floor,
New York, New York 10014.
Phone: 212-807-4200.
Website: www2.edc.org/cct
Acknowledgment:
Numerous individuals from many organizations made invaluable contributions to the completion
of this research, too many to list here. Though we do not have space to credit each by name, we
take this opportunity to extend our thanks to all of them.
[End of Page Two]
[Page Three]
Table of Contents.
Executive Summary. Page Four.
Key Findings. Page Six.
Key Principles. Page Ten.
Strategic Recommendations. Page Ten.
Introduction. Page Twelve.
Current Assessments Of Twenty First Century Content. Page Eighteen.
Assessment Of Global Awareness. Page Twenty.
Assessment Of Civic Literacy. Page Twenty Four.
Assessment Of Financial, Economic And Business Literacy. Page Twenty Six.
Assessment Of Learning Skills. Page Twenty Eight.
Assessment Of I C T Literacy. Page Thirty Two.
Implications. Page Thirty Six.
Four Key Principles For Assessing Twenty First Century Learning. Page Thirty Eight.
Assessing Twenty First Century Skills: Five Strategic Recommendations. Page Thirty Nine.
References. Page Forty One.
Appendix A: Current Assessments. Page Forty Five.
Assessment Of Global Awareness. Page Forty Five.
Assessment Of Civic Engagement. Page Forty Eight.
Assessment Of Financial, Economic, And Business Literacy. Page Fifty Two.
Assessment Of Learning Skills. Page Fifty Four.
Assessment Of I C T Literacy. Page Sixty Five.
Appendix B: Primer On Assessment. Page Seventy Five.
Appendix C: Key Organizations. Page Eighty Seven.
[End of Page Three]
[Page Four]
Executive Summary.
How can we best prepare students to succeed in the twenty first century? The Partnership for
Twenty First Century Skills, a coalition of leading education, business and technology
organizations, organized to address this question and create a powerful vision for twenty first
century education. The group’s members recognize the profound gap between the knowledge and
skills most students acquire in school and those required in today’s twenty first century
communities and technology-infused workplaces. In its two thousand three report, Learning for
the Twenty First Century, the Partnership synthesized the perspectives of business, education,
and government leaders to create a common language and strategic direction for efforts to infuse
twenty first century skills into K through Twelve education and make U S education more
globally competitive. [Footnote One. This report, as well as more detailed information on the
Partnership framework, is located at www.21stcenturyskills.org. End of
Footnote One] In its subsequent reports and tools, the Partnership has continued to provide
guidance to the education community, policymakers and other leaders as they work to embed
twenty first century skills into learning to better meet the demands of the global economy.
In order to bring its vision to fruition and successfully integrate twenty first century skills into our
educational system, the Partnership recognizes that another critical question must be asked: “How
do we measure twenty first century learning?” The Partnership believes that the movement to
embrace and foster widespread adoption of twenty first century skills hinges on identifying ways
to assess students’ acquisition and application of this knowledge. In light of this, the Partnership
has developed its current report, Assessment of Twenty First Century Skills: The Current
Landscape. In it, we have not reviewed assessments of traditional core content areas,
understanding that many studies and reports have already addressed these types of assessments.
Rather, we have surveyed the current landscape of assessments that measure key dimensions of
twenty first century learning: Twenty First Century Content (Global Awareness, Financial,
Economic and Business Literacy and Civic Literacy), Learning Skills (Information and
Communication Skills, Thinking and Problem-Solving Skills, and Interpersonal and Self-Directional Skills), and Information and Communication Technology (I C T) Literacy. These key
elements of twenty first century learning are critical for every child’s success as a worker and
citizen in the twenty first century. These are explained briefly here:
(list of four bulleted items)
[End of Page Four]
[Page Five]
* The concept of Global Awareness acknowledges that students need a deeper understanding of
the thinking, motivations, and actions of different cultures and countries in order to successfully
navigate and respond to communities and workplaces extending beyond their neighborhoods.
* The concept of Civic Engagement recognizes that students need to understand, analyze, and
participate in government and in community, both globally and locally, in order to shape the
circumstances that impact their daily lives.
* The concept of Financial, Economic and Business Literacy responds to the growing demand
on people to understand business processes, entrepreneurial spirit, and the economic forces that
drive today’s economy.
* The concept of Learning Skills acknowledges the need for students to think critically, analyze
information, comprehend new ideas, communicate, collaborate, solve problems, and make
decisions, while I C T Literacy recognizes that technology is essential to realizing these learning
skills in today’s knowledge economy.
Assessments of student achievement, from widely recognized standardized tests to classroom-based measures, have become an essential component of educational practice and a crucial
ingredient of educational reform. While the assessment landscape is replete with assessments that
measure knowledge of core content areas such as language arts, mathematics, science and social
studies, there is a comparative lack of assessments and analyses focused on elements of twenty
first century learning. Additionally, there is a growing consensus that current assessments are not
adequately measuring a student’s ability to engage in the kinds of complex thinking and problem-solving tasks required of a twenty first century learner. With spending on assessment development expected to grow into the billions of dollars this decade (three point nine billion dollars according to recent General Accounting Office estimates (G A O, two thousand three)), it is vital that our investment focus not merely on fulfilling federal requirements, but on
preparing today’s children to face the challenges of tomorrow’s complex communities and
workplaces. New assessment tools must be developed.
The Partnership is encouraged that a movement to foster twenty first century learning, and to develop the means to measure complex, higher-order thinking skills, is emerging. States are
meeting to discuss global awareness education and some have civic-skills assessments tied to
accountability measures.
[End of Page Five]
[Page Six]
Economics education has received federal recognition with the passage of the Excellence in
Economics Education Act. With regard to learning skills and I C T literacy, both private and
public sector organizations are demonstrating new approaches to assessing twenty first century
skills. And internationally, a broad consensus exists among education ministries that I C T
literacy must be treated as a core skill area in the new century. While these examples are
encouraging, they do not yet indicate a broad focus on assessing twenty first century skills. Most
K through Twelve assessments in widespread use today - whether they be of twenty first century
skills and content or of traditional core subject areas - have thus far measured only a student’s
knowledge of discrete facts, not a student’s ability to apply knowledge in complex situations.
As more of our economic competitors move to foster twenty first century skills within their
educational systems, the United States faces a critical challenge to keep pace in preparing our
students to meet the demands of today’s global community. While U S students have improved
their performance on international assessments of discrete knowledge - falling near the middle of
the pack on the Trends in International Mathematics and Science Study (T I M S S) - their
performance on the Program for International Student Assessment (P I S A), which measures how
fifteen-year-olds apply reading, mathematics and science content knowledge and skills to
analyzing and evaluating information and solving problems and issues in real-life contexts, places
the U S in the bottom third. Without a shift in focus in the U S, it seems likely that this twenty
first century learning gap will only widen as other nations continue to stress twenty first century
skills in their national education plans.
Clearly, there is much to be done to ensure that U S students emerge from our schools with the
skills needed to be effective workers, citizens and leaders in the twenty first century. The
Partnership envisions this report as a significant first step toward creating a comprehensive
agenda focused on assessing twenty first century skills and twenty first century content.
Key Findings.
Assessment of Global Awareness.
Global awareness, while not wholly absent from American public education, has historically not
been viewed as necessary to the lives of average American citizens.
[End of Page Six]
[Page Seven]
In recent years, however, interest has grown in promoting “international education” or
“international studies.” Though K through Twelve international education is now teeming with
activity, the creation of global awareness assessments remains a largely unexplored area. Existing
assessments are tied closely to geography education; no measures currently exist that address
students’ understanding of global and international issues. What activity has occurred has taken place largely at the state level, with many states convening meetings to explore global awareness
education. Notable examples include Connecticut, Delaware, Kansas and Massachusetts. The
current assessment landscape is limited primarily to geography education, specifically the
National Assessment of Educational Progress (N A E P) Geography Assessment and the Intermediate-Level Geography Test.
Assessment of Civic Engagement.
Civic Engagement is one of the oldest and most well established fields among those that the
Partnership has chosen to examine. A number of major organizations have long track records of
involvement in creating resources and educating the public about civics and civics-education
issues. Nineteen states have some civic-skills testing, and eleven of those states tie the tests to
accountability measures. Our survey identified a number of innovative studies and projects aimed
at creating new and better assessments of young people’s civic knowledge, attitudes, and
behavior. In the last four years, the Center for Information and Research on Civic Learning and
Engagement (C I R C L E), based at the University of Maryland’s School of Public Policy, has
partnered with an array of other organizations to develop indicators and assessments of students’
civics knowledge and civic-oriented practices, including the Civic Engagement Quiz and the
Databank of Civic Assessment Tools.
Assessment of Financial, Economic and Business Literacy.
The importance of economics education, in particular, has been recognized on the federal level
with the passage of the Excellence in Economics Education Act, a component of N C L B. A
twelfth grade N A E P (National Assessment of Educational Progress) examination is currently
under development. A number of organizations have advocated the importance of both economic
and financial literacy as a key component of K through Twelve education and are working on
raising awareness, developing programs, creating curricula, providing training, and developing
assessments that can be used to gauge student knowledge in these domains.
[End of Page Seven]
[Page Eight]
With respect to entrepreneurial skills, a number of assessments have been developed; however,
there are no large-scale, cross-program assessments in the area of entrepreneurship. With respect
to economic and financial literacy at the pre-college level, a number of assessments have been
developed, including the Junior Achievement Economics Test Bank and Jump$tart Assessment
of Financial Literacy.
Assessment of Learning Skills.
In psychological and educational literature, learning skills - information and communication
skills, thinking and problem solving skills, and interpersonal and self-directional skills - have a
long research and development history. We limited our examination of existing learning skills
assessments to instruments that have been published and have undergone the appropriate testing
or those that are under development that show promising tendencies for new and creative
assessment techniques. While a significant number of assessments are quite old, dating back more
than forty years, a new and growing crop of assessments that place greater scrutiny on the kind
and quality of skills being measured is beginning to appear, sometimes embedded within larger
content-area assessments, as in the P I S A (Program for International Student Assessment), an
internationally administered assessment developed by the Organization for Economic Co-operation
and Development (O E C D). A number of other learning skills assessments are currently in
development, including The Rainbow Project’s large-scale thinking-skills assessment, sponsored
by the College Board and led by Robert Sternberg of Yale University; the Metiri Group’s Self-Directed Learning Inventory; and the Council for Aid to Education’s Assessment of Analytic
Reasoning and Communication Skills. While some of these new assessments are targeted at the
college level, they may have future application in the high school arena.
Assessment of I C T Literacy.
A broad consensus exists among education ministries and major education-oriented
[End of Page Eight]
[Page Nine]
N G Os throughout Europe, Asia, Australia, and the Pacific Rim that I C T literacy must be
treated as a core skill area in the new century. In logical conjunction with this increased emphasis
on I C T literacy, interest has grown in developing assessments that (a) reveal the cognitive skills
that students employ in conjunction with their use of technology and (b) use technology as their
means of delivery. The process of creating such assessments is only beginning even among the
leading-edge education ministries, with the exception of the United Kingdom.
The U K’s emerging tools, particularly the new Key Stage Three (age twelve-thirteen) I C T
Literacy Assessment created by the British government’s Qualifications and Curriculum
Authority (Q C A), are attracting the attention of other ministries of education in Europe and
Asia. One of the most promising new assessments we found in any of our five content areas, the
Key Stage Three instrument assesses both content-area and thinking skills, and provides both national data on students’ capabilities and information on individual students pertinent to classroom-level instruction. The Q C A plans to roll out the assessment nationwide by mid-two
thousand six. In the United States, only the Educational Testing Service’s new I C T Literacy
Assessment for higher-education settings approaches the work being developed in Great Britain.
It is currently targeted at college age students, though it may well have implications for future
assessment at the K through Twelve level, particularly in high schools. The International Society
for Technology in Education (I S T E) and Microsoft are developing a performance-based
assessment aimed directly at eighth grade, the N E T S Online Technology Assessment.
[End of Page Nine]
[Page Ten]
Recommendations:
In keeping with its role as the advocate for twenty first century learning, the Partnership has
identified the following key principles and strategic recommendations to guide efforts at building
assessments of twenty first century skills.
Key Principles for Assessing Twenty First Century Learning.
(list of three bulleted items; item one has a dashed sub-list)
* We need assessment tools that will:
(list of four dashed items)
- Measure student mastery of twenty first century skills.
- Diagnose where students require intervention in terms of twenty first century skills.
- Measure the educational system’s effectiveness in teaching twenty first century skills.
- Permit students to demonstrate their proficiency in twenty first century skills to educational
institutions and prospective employers; high-stakes assessments alone do not generate evidence of
the skill sets that the business and education communities believe will ensure success in the
twenty first century.
* No single assessment tool will accomplish all these objectives. A diverse menu of assessment
tools must be available.
* Technology should be integrated into assessment tools to effectively measure twenty first
century skills.
Assessing Twenty First Century Skills: Five Strategic Recommendations.
(list of five bulleted items)
* Articulate and build national consensus around the assessment of twenty first century skills
through large-scale public education initiatives.
* Implement a policy that supports a broad vision for the adoption of assessments of twenty first
century skills.
[End of Page Ten]
[Page Eleven]
* Support an R and D infrastructure for building assessments that measure cognitively complex
and real-world-related tasks, led by the federal government and education research institutions in
higher education and the public sector.
* Develop support in the private and public sectors to create viable production and distribution
networks for assessment instruments and tools that measure twenty first century skills.
* Challenge every state to adopt a system that assesses the full range of twenty first century skills
and twenty first century content knowledge by two thousand ten.
[End of Page Eleven]
[Page Twelve]
Introduction.
As the leading advocacy organization for the importance of twenty first century skills in
education, the Partnership for Twenty First Century Skills’ core mission is to help schools fully
address the needs of students in the twenty first century. In its Learning for the Twenty First
Century report, the Partnership acknowledges that “accelerating technological change, rapidly
accumulating knowledge, increasing global competition and rising workforce capabilities around
the world make twenty first century skills essential” (two thousand three: two). To move forward
the agenda for twenty first century learning, the Partnership has chosen to work broadly across
the numerous sectors that influence and shape the quality of teaching and learning in the nation’s
schools. This work has included forming alliances with key educational organizations, developing
partnerships with education policy groups, and assembling resources that can help practitioners,
school administrators, and local stakeholders promote and implement twenty first century skills in
their communities. In its first two years the Partnership has taken real action - issuing two reports, a guide, an online tool and I C T Literacy Maps - in an effort to promote a powerful
vision for twenty first century education. The research summarized in this report represents the
next phase of the Partnership’s work - the assessment of twenty first century learning.
Over the past two decades assessment has become an essential component of educational
practice. In U S schools, large-scale, summative assessments are the norm. Policymakers view
such assessments as powerful levers for influencing what happens in schools and classrooms, and
large-scale assessment studies are routinely carried out to gauge the strengths and weaknesses of
American students. With the passage of the No Child Left Behind legislation in January two
thousand two, testing has become not only more routine, but also increasingly high-stakes and
focused on core content domains. Test results are regularly used to determine whether students
can advance to the next grade, and to judge the quality of schools and the educators who work in
them.
However, in recent years, educators, business leaders, and policymakers in the U S have
questioned whether the current design of assessment systems focuses too much on measuring
students’ ability to recall discrete facts using multiple choice tests, at the expense of adequately measuring a student’s ability to engage in and complete complex thinking and problem-solving
tasks.
[End of Page Twelve]
[Page Thirteen]
Outside observers of the U S school system have been quick to note potential shortcomings,
claiming that narrowly focused high-stakes assessment systems produce at best only illusory
student gains (Ridgeway, McCusker and Pead two thousand four). The end result is a widening
gap between the knowledge and skills students are acquiring in schools and the knowledge and
skills needed to succeed in the increasingly global, technology infused twenty first century
workplace.
In a speech delivered to the Education Writers Association, U S Education Secretary
Margaret Spellings underscored the international significance of this gap. “How can you make
your readers believe that the achievement gap affects them [author’s emphasis]? Our nation's
leadership position in the world is being challenged. For example, thirty eight percent of
bachelor's degrees in China were awarded in engineering as opposed to less than six percent in
the U S. And in the decade from nineteen ninety to two thousand, India increased its number of
students enrolled in college by ninety two percent” (Spellings two thousand five).
This gap is most pronounced when U S students are compared with their international
counterparts. Results from the two thousand three Trends in International Mathematics and
Science Study (T I M S S), which measures how much of curriculum content students have
learned, show that mathematics performance among U S eighth-grade students is lower than that
of fourteen other countries (Mullis, Martin, Gonzalez and Chrostowski two thousand four),
putting them near the middle of the pack, well behind the highest-performing countries.
Increasingly our economic competitors are embracing newer assessments that measure how
students apply what they have learned (Stage two thousand five). The Program for International
Student Assessment (P I S A), developed by the Organization for Economic Co-operation and
Development (O E C D), measures how students (fifteen-year-olds) apply reading, mathematics
and science content knowledge and skills to analyzing and evaluating information and solving
problems and issues in real-life contexts. Thirty countries, including the U S, participated in the
assessment’s first application in two thousand three. While U S students fall near the middle of
the pack on T I M S S, their performance on P I S A ranks below the average of their international
counterparts, placing U S students in the bottom third, and trailing their competitors in the area
that the Partnership has identified as most critical to twenty first century learning - applying
knowledge in real-world contexts.
[End of Page Thirteen]
[Page Fourteen]
The P I S A two thousand three results surfaced amid a growing consensus across business and higher education sectors that mastery of core content (e.g., language arts, mathematics, science,
social studies), while a necessary facet of education, is no longer sufficient preparation for
students. As the Partnership states: “To cope with the demands of the twenty first century, people
need to know more than core subjects. They need to know how to use their knowledge and skills - by thinking critically, applying knowledge to new situations, analyzing information,
comprehending new ideas, communicating, collaborating, solving problems, making decisions”
(two thousand three: nine).
The Partnership is not alone in making this claim. A report issued by the American
Diploma Project concludes: “The [high school] diploma has lost its value because what it takes to
earn one is disconnected from what it takes for graduates to compete successfully beyond high
school - either in the classroom or in the workplace. Re-establishing the value of the diploma will
require the creation of an inextricable link between high school exit expectations and the
intellectual challenges that graduates invariably will face in credit-bearing college courses or in
high-performance, high-growth jobs” (two thousand four: one).
Investigating assessments that can be used to measure twenty first century skills represents a key
component of the Partnership’s advocacy work. To gather a more complete picture of the current
state of such assessments, the Partnership commissioned E D C’s Center for Children and
Technology to undertake a review of educational assessments that support twenty first century
learning, both in the U S and abroad. This report describes the key findings and implications of
this review, defining key terms and concepts in the field of measurement, describing current
activities in particular content and skill areas, highlighting promising assessments in
development, and outlining key principles and strategic recommendations for current and future
efforts to develop twenty first century learning assessments.
Because the assessment landscape is continuously evolving - new tests are being created while
old ones are retired - the Partnership has developed Assess Twenty One, an online tool that serves
as a companion to this report and a growing repository for assessments of twenty first century
skills. Assess Twenty One can be found on the Partnership’s web site at: www.21stcenturyskills.org/assess21.
[End of Page Fourteen]
[Page Fifteen]
In defining twenty first century learning the Partnership has embraced five content and skill areas
that represent the essential knowledge for the twenty first century: global awareness; civic
engagement; financial, economic and business literacy; learning skills that encompass problem
solving, critical thinking, and self-directional skills; and I C T Literacy. Brief descriptions of
these content and skill areas, adapted from the Partnership’s Learning for the Twenty First
Century report, follow (two thousand three: nine, eleven, fifteen).
Global Awareness.
(list of three bulleted items)
* Using twenty first century skills to understand and address global issues.
* Learning from and working collaboratively with individuals representing diverse cultures,
religions and lifestyles in a spirit of mutual respect and open dialogue in personal, work and
community contexts.
* Promoting the study of non-English language as a tool for understanding other nations and
cultures.
Financial, Economic, and Business Literacy.
(list of four bulleted items)
* Knowing how to make appropriate personal economic choices.
* Understanding the role of the economy and the role of business in the economy.
* Applying appropriate twenty first century skills to function as a productive contributor within
an organizational setting.
* Integrating oneself within and adapting continually to our nation’s evolving economic and
business environment.
Civic Literacy.
(list of four bulleted items)
* Being an informed citizen to participate effectively in government.
* Exercising the rights and obligations of citizenship at local, state, national and global levels.
* Understanding the local and global implications of civic decisions.
* Applying twenty first century skills to make intelligent choices as a citizen.
Thinking, Problem-Solving, Interpersonal and Self-Directional Learning Skills.
(list of seven bulleted items)
* Critical Thinking And Systems Thinking. Exercising sound reasoning in understanding and
making complex choices, understanding the interconnections among systems.
* Problem Identification, Formulation And Solution. Ability to frame, analyze and solve
problems.
[End of Page Fifteen]
[Page Sixteen]
* Creativity And Intellectual Curiosity. Developing, implementing and communicating new
ideas to others, staying open and responsive to new and diverse perspectives.
* Interpersonal And Collaborative Skills. Demonstrating teamwork and leadership; adapting to
varied roles and responsibilities; working productively with others; exercising empathy;
respecting diverse perspectives.
* Self-Direction. Monitoring one’s own understanding and learning needs, locating appropriate
resources, transferring learning from one domain to another.
* Accountability And Adaptability. Exercising personal responsibility and flexibility in
personal, workplace and community contexts; setting and meeting high standards and goals for
one’s self and others; tolerating ambiguity.
* Social responsibility. Acting responsibly with the interests of the larger community in mind;
demonstrating ethical behavior in personal, workplace and community contexts.
Information and Communication Technology (I C T) Learning Skills.
(list of three bulleted items)
* Information And Media Literacy Skills. Analyzing, accessing, managing, integrating,
evaluating and creating information in a variety of forms and media. Understanding the role of
media in society.
* Communication Skills. Understanding, managing and creating effective oral, written and
multimedia communication in a variety of forms and contexts.
* Interpersonal And Self-Direction Skills. Becoming more productive in accomplishing tasks and developing interest in improving one’s own skills.
Criteria for Selecting Assessments.
While there is little doubt that assessments are a pervasive part of the education landscape, there
is widespread recognition that measuring what students know is a field that requires further study
and innovation (National Research Council two thousand one a). Standardized tests capture only
a portion of the skills and knowledge that students need to know, and they have only limited
value in helping teachers make decisions about how to improve learning in the context of daily
instruction. And, while classroom assessments can play important roles in advancing student
performance, they are typically not valid and reliable for making broad comparisons across
classrooms or schools. In an ideal world, assessments would offer teachers ongoing diagnostic
information that could be used to develop students’ learning over time and provide summative
outcomes that could determine students’ progress relative to peers in other contexts.
[End of Page Sixteen]
[Page Seventeen]
Advances in computer-based assessments are beginning to make such practices a reality, although
development and deployment costs remain high (National Research Council two thousand one
a).
In order to focus the scope of this review of assessments for twenty first century learning, we
sought to identify assessments that met the following criteria:
(list of three bulleted items)
* Measure student skills, knowledge, and learning at the K through Twelve level, and align with P Twenty One’s content and skill areas.
* Have applicability across a wide range of instructional programs.
* Have gone through or are going through a rigorous process of test validation.
We cast a wide net, contacting key organizations and individuals, nationally and internationally,
who could steer us toward relevant resources. Our criteria led us to focus on assessments that
have been created by policy organizations, educational research and advocacy groups, and
commercial test publishers, and not on assessments generated at the classroom level.
This report is written primarily for policy, education, and business leaders who are interested in
moving forward an agenda for twenty first century learning. It is intended to identify assessments
that correspond to the five content and skill areas identified by the Partnership. In domains where
there are particularly interesting assessment practices emerging, we have chosen to highlight
these as examples of the kinds of forward-looking assessment practices that are attempting to
measure the complexity of twenty first century learning. The appendices provide readers with
brief descriptions of all assessments found in this research, basic information on assessment
methodology, and a list of key organizations in each content and skill area.
In the remainder of this report we discuss what we have learned about assessments of twenty first
century skills related to the Partnership’s content and skill areas and the organizations involved in
their creation. Based on the current state of twenty first century skills assessments, we also
describe the implications of this research and outline key principles and strategic
recommendations to guide current and future work in this area.
[End of Page Seventeen]
[Page Eighteen]
Current Assessments of Twenty First Century Content and Skills.
In crafting its two thousand three report, Learning for the Twenty First Century, the Partnership
asked the question, “How can we best prepare students to succeed in the twenty first century?”
The report answered this question by providing a framework and common language describing
the key elements of twenty first century learning.
What the Partnership found was that giving students a solid foundation in core subjects and core
content was not enough. Americans live in increasingly diverse communities, often working for
businesses involved in global commerce. Both personally and professionally, they are responsible
for making sophisticated economic and business choices that will profoundly affect their futures
as educational and career decisions become more critical and financial management choices
become more prevalent in their daily lives. And as our nation navigates the new global
marketplace, our citizens now face greater demands to understand, analyze, and participate in
government and in community, both globally and locally. Recognizing this, the Partnership
identified three important but overlooked content areas - global awareness; financial, economic
and business literacy; and civic literacy - as critical to the success of today’s students in
tomorrow’s global marketplace.
But meeting the demands of the twenty first century requires more than content knowledge. In
order to provide both flexibility and security in an era characterized by constant change, twenty
first century students need “knowing how to learn” skills that enable them to acquire new
knowledge and skills, connect new information to existing knowledge, analyze, develop habits of
learning, and work with others to use information, which is why the Partnership has identified
learning skills (information and communication skills, thinking and problem-solving skills, and
interpersonal and self-directional skills) as part of its twenty first century education framework.
And as technology increasingly becomes the medium for communication and information
sharing, students need to be capable of harnessing technology to perform learning skills, such as
communicating effectively with presentation software or juggling personal responsibilities with a
personal digital assistant, which the Partnership has identified as Information and
Communications Technology (I C T) Literacy.
[End of Page Eighteen]
[Page Nineteen]
Identifying and developing a framework for twenty first century learning has given business,
education, and policy leaders a common language and a strategic direction for efforts to make U
S education more globally competitive. But putting this framework into practice requires an
equally important next step, one that addresses the question, “How do we measure twenty first
century learning?”
To begin to formulate an answer to this question, we sought to survey the current landscape of
twenty first century learning assessments in each of the five content and skill areas identified by
the Partnership. We talked to education leaders, government ministries, commercial assessment
developers, national and international nonprofits that are either developing assessments or are
devoted to particular content areas, and experts within the five skill and content area fields, as
well as conducted our own Internet and library searches, to identify what assessments were
available or in development. We then examined every available assessment, either in paper form
or online, except for a few cases where the assessments were either strictly proprietary or in early
development. At various stages throughout the research process, we submitted our work to
experts in the field for review and consultation, often using these conversations as opportunities
to track down additional assessments.
In this section, we describe the current landscape of assessments worldwide and highlight
particularly promising assessments, either in use or under development. This section focuses
primarily on a high-level discussion of the state of assessment in each learning area. Readers
seeking a more comprehensive list of assessments in each learning area should refer to the
Appendices, which include detail on identified assessments and a list of key organizations
working in related content and skill areas.
[End of Page Nineteen]
[Page Twenty]
Assessment of Global Awareness.
In the last five years, political and educational leaders have engaged in a vigorous public
discussion about global awareness in education. A wide array of commentators appears to agree
that the challenges of the twenty first century world will require citizens to have a global/
international perspective. There is also growing recognition that students in the United States are
not being educated in this perspective, nor are they being prepared to act as informed members of the global community. As former U S Secretary of State Colin Powell noted:
“We live in a truly global age ... To solve most of the major problems facing our country
today - from wiping out terrorism to minimizing global environmental problems to eliminating
the scourge of A I D S - will require every young person to learn more about other regions,
cultures and languages.” (Kagan and Stewart two thousand four a)
Global awareness issues have not been wholly absent from American public education.
Geography education is a fixture of the American K through Twelve curriculum and geography
standards are present in most states. However, it has often been noted that a nuanced
understanding of other nations and other cultures has not been viewed as necessary to the lives of
average American citizens. One organization seeking to broaden the perspectives of students
recently concluded, “Young Americans remain ... dangerously uninformed about international matters ...” (National Commission on Asia in the Schools two thousand one).
The response to this and similar observations has been a rising interest in promoting
“international education” or “international studies,” both areas of study that address the learning
goals that the Partnership has grouped under the term “global awareness” - a strong understanding
of the thinking, motivations and actions of different cultures, countries and regions, in service of
tolerance and successful communication around the ethnic, cultural, religious and personal
differences that play out in workplaces and communities. Three major elements are seen as
central to students’ understanding of global issues:
(list of three bulleted items)
* Knowledge of other world regions, cultures and global/international issues.
* Skills in communicating in languages other than English, working in global or cross-cultural
environments, and using information from different sources around the world.
[End of Page Twenty]
[Page Twenty One]
* Respect and concern for other cultures and peoples. (Kagan and Stewart two thousand four a)
Now teeming with activity, K through Twelve international education is currently at a stage that
mirrors activity in the Information and Communication Technology (I C T) Literacy field a
decade ago, or in the field of Economic and Financial Literacy twenty years ago: a number of
important groups have begun building a framework - capacity, curricula and teacher preparation - for international education. The creation of global awareness assessments, however, remains a
largely unexplored area.
In the next five to ten years, observers of global awareness education will be paying close
attention to activity in a number of states, where legislatures and state-level committees are
beginning to draft and implement educational agendas for the integration of global-awareness
issues into curricula. Some notable examples:
(list of five states)
Connecticut.
The state legislature has passed a bill urging the development of K through Twelve “international
education,” and established a task force to recommend strategies for implementing the bill. In the
fall of two thousand four, a statewide meeting of school superintendents convened to plan for the
integration of international education into the curriculum.
Delaware.
The state’s department of education is formally analyzing the state’s capacity for international
education in K through Twenty institutions and creating teacher professional-development clusters on Asia and on technology and international studies.
Kansas.
The state’s Committee on International Education in the Schools surveyed three thousand adult
professionals about their attitudes toward increasing international K through Twelve education,
finding strong support.
Massachusetts.
The state’s Initiative for International Studies has convened conferences on “Massachusetts and
the Global Economy” and on best practices in international education.
[End of Page Twenty One]
[Page Twenty Two]
Texas.
Beginning in two thousand four, the state introduced a mandatory year of geography education as
a prerequisite to high school graduation.
Similar efforts are taking place in a number of other states - including New Jersey,
New Mexico, New York, North Carolina, Oklahoma, Vermont, West Virginia, Wisconsin, and
Wyoming - where governors’ councils and state committees on international education are
surveying state curricula, convening conferences, and making policy recommendations.
Additionally, a number of organizations with multi-state or regional agendas are aligning
international-education agendas with regional economic growth and well-being. For example, the
Southern Growth Policies Board, on which fourteen governors serve, issued a report titled The
Globally Competitive South (Under Construction) (Clinton, Conway and Hoke two thousand
four) that stressed the need for member states to refit their education systems to provide high-quality international education, recommending that they “…internationalize P through Sixteen
and adult education to respond to evolving business and community challenges,” and saying,
“Global events used to be something that happened ‘over there,’ but today, globalization affects
everyone’s lives.”
Efforts and publications like those described above indicate that there is political will and
growing public awareness to support significant reform in the area of international education.
However, the current assessment landscape is very limited - the exception being the area of global
awareness covered by geography education. The National Assessment of Educational Progress
(N A E P) Geography Assessment is “the only nationally representative and continuing
assessment of what America’s students know and can do” (N A G B two thousand one) in the
field of geography. The N A E P is administered by the Department of Education’s National
Center for Education Statistics; a bipartisan board appointed by the Secretary of Education sets
policy for the program. The assessment is a set of paper-and-pencil, mixed-format tests designed to measure the geography knowledge and skills of fourth-, eighth-, and twelfth-graders in three
broad categories of cognitive skills (“Knowing,” “Understanding,” and “Applying”), and three
categories of content knowledge (“Space and Place,” “Environment and Society,” and “Spatial
Dynamics and Connections”). The N A E P was administered in nineteen ninety four and two
thousand one and will not be administered again until two thousand ten.
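Since the assessment’s design crosses three cognitive-skill categories with three content categories, its structure can be pictured as a three-by-three blueprint. The short Python sketch below is one illustrative way to represent that grid; only the category names come from the report, and treating the design as a full cross of the two dimensions is our assumption.

from itertools import product

# Category names as described in the report. Treating the design as a full
# 3 x 3 cross of cognitive skills and content categories is an illustrative
# assumption, not N A E P's published item map.
COGNITIVE_SKILLS = ["Knowing", "Understanding", "Applying"]
CONTENT_CATEGORIES = ["Space and Place", "Environment and Society",
                      "Spatial Dynamics and Connections"]

# Each cell of the blueprint pairs one cognitive skill with one content area.
for skill, content in product(COGNITIVE_SKILLS, CONTENT_CATEGORIES):
    print(f"{skill} / {content}")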
[End of Page Twenty Two]
[Page Twenty Three]
The Intermediate-Level Geography Test created by the National Council for Geography
Education (N C G E) - one of the leading organizations in the field of geography education - is a
paper-and-pencil, mixed-format assessment keyed to the nineteen ninety four “Geography for Life” National Geography Standards. The ninety-question test encompasses six essential elements
- The World in Spatial Terms, Places and Regions, Physical Systems, Human Systems,
Environment and Society, and Uses of Geography. The test was revised and reassessed (for
content validity and reliability) in two thousand, and the N C G E is currently in the process of
revising a similar test aimed at high school students.
One sign that the current interest in reform could extend to the creation of new assessments in
new areas of global-awareness education is the College Board’s recent move to expand its A P
offerings to include a new A P Human Geography course in two thousand one and the first new A
P language courses in forty years - in Chinese, Russian and Italian - in two thousand three.
[End of Page Twenty Three]
[Page Twenty Four]
Assessment of Civic Literacy.
An ability to understand, analyze and participate in government and the local and global
community, an understanding of the historic implications of current policies, the role of leaders
and a broad sense of political awareness, and the opportunities for participation and influence in a
democracy - these are the essential characteristics of the informed American citizen, as defined by the Partnership. A number of major organizations such as the National Center for Learning and
Citizenship (N C L C) and the Center for Civic Education have long track records of involvement
in creating resources and educating the public about civics and civics-education issues. Nineteen
states have some civic-skills testing, and eleven of those states tie the tests to accountability
measures.
At the same time, the issue of civic engagement, like global awareness, has gained new currency
with the turn of the century, stimulated by the election controversies of two thousand, the events
of September eleven, two thousand one, and the subsequent intensification of national and
international discourse around the meaning of democracy. Though the forces at work in the
revitalization of the civic engagement field are complex, it is perhaps sufficient to say that, as the
United States has made a public pledge to foster democracy throughout the world, it is no surprise
that many voices within the U S have called for a stronger effort to promote the health of
democratic participation at home.
Along with renewed interest in civics education has come a new interest in assessments to
measure and support quality civic instruction. Our survey identified a number of innovative
studies and projects aimed at creating new and better assessments of young people’s civic
knowledge, attitudes, and behavior. At the center of much of this innovative work is the Center
for Information and Research on Civic Learning and Engagement (C I R C L E), based at the
University of Maryland’s School of Public Policy - established in two thousand one to promote
research on the civic and political engagement of young Americans (ages fifteen through twenty
five, with a less consistent focus on elementary-school-aged children). In the past four years the
center has developed partnerships with an array of other organizations to develop indicators and
assessments of students’ civics knowledge and civic-oriented practices. C I R C L E’s Civic
Engagement Quiz is a tool for the evaluation of high school students’ engagement with civic
issues and activities.
[End of Page Twenty Four]
[Page Twenty Five]
The short, paper-and-pencil, multiple choice assessment focuses on respondents’ civics behavior,
as opposed to their knowledge of “civic issues” (a subject area covered by more traditional civics
assessments such as the National Assessment of Educational Progress (N A E P) Civics Test).
The quiz asks about the character and frequency of the test-taker’s participation in civic activities,
such as writing letters to the editor, voting, political activism or volunteering. Teachers or
program leaders administering the quiz may then use the results to categorize respondents’ civic
engagement within C I R C L E’s “Typology of Engagement” (civic activities, electoral activities,
political voice activities) and to classify each respondent as “Disengaged,” an “Electoral
Specialist,” a “Civic Specialist,” or a “Dual Activist.” C I R C L E is also involved in a number of
ongoing collaborative projects of interest, including the Databank of Civic Assessment Tools
(with the National Center for Learning and Citizenship (N C L C) and the Education Commission
of the States (E C S), and funded by the Carnegie Corporation of New York), an evolving
assemblage of released multiple choice and performance-based test items in two broad areas:
(list of two lettered items)
(a) Student learning, including students’ civics knowledge, skills and attitudes; and
(b) Institutional capacity for civics education, including a school’s civic climate and investment in
a civic mission.
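To make the typology concrete, the following minimal Python sketch shows one way a teacher or program leader might map a respondent’s reported activities onto the four categories named above. The two-activity threshold and the function itself are illustrative assumptions, not C I R C L E’s published scoring rules.

def classify_engagement(civic_count: int, electoral_count: int,
                        threshold: int = 2) -> str:
    """Map counts of civic and electoral activities to an engagement type.

    The cut-off of two activities per category is assumed for illustration.
    """
    civic = civic_count >= threshold
    electoral = electoral_count >= threshold
    if civic and electoral:
        return "Dual Activist"
    if civic:
        return "Civic Specialist"
    if electoral:
        return "Electoral Specialist"
    return "Disengaged"

# A respondent who volunteers and writes letters to the editor, but neither
# votes nor campaigns, would be classified as a Civic Specialist.
print(classify_engagement(civic_count=2, electoral_count=0))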
[End of Page Twenty Five]
[Page Twenty Six]
Assessment of Financial, Economic and Business Literacy.
The importance of economics education, in particular, has been recognized on the federal level
with the passage of the Excellence in Economics Education Act, a component of N C L B. The
intent of the legislation is to help drive economics education through program development,
teacher professional development, and research. The Partnership along with a number of
organizations has also advocated the importance of both economic and financial literacy as a key
component of K through Twelve education. The National Council on Economic Education (N C E E), the National Association of Economic Educators, the Foundation for Teaching Economics,
the Jump$tart Coalition, and Junior Achievement have been the principal leaders in this field,
taking on the challenge of identifying the range of skills and knowledge that are critical to
economics and financial education. They work on raising awareness, developing programs,
creating curricula, providing training, and developing assessments that can be used to gauge
student knowledge in these domains. For these organizations, economic and financial literacy
entails the ability to make appropriate economic decisions, understand the role of economics in
daily life (business and personal), and apply financial skills and knowledge in relevant contexts.
With respect to the assessment of entrepreneurial skills, many of these organizations have
invested in evaluating their individual programs. However, there are no large-scale, cross-program assessments that are specifically designed to measure student knowledge and skill in this area. With respect to economic and financial literacy at the pre-college level, a number of
assessments have been developed. The Junior Achievement Economics Test Bank (E T B) is a
paper-and-pencil multiple choice test that covers twenty concepts related to micro, macro, and
international economics (such as scarcity and choice, economic systems and markets and prices)
with questions grounded in real applications to students’ lives. The E T B is aligned with the
Voluntary National Content Standards in Economics. The paper-and-pencil Jump$tart
Assessment of Financial Literacy is administered to students in grade twelve (Mandell two
thousand four). It has been used since nineteen ninety seven. The multiple choice items measure knowledge of issues that Jump$tart Coalition members believe to be essential to financial literacy: income, money management, saving, and spending. A group of organizations,
including the National Assessment Governing Board, the American Institutes for Research, the N
C E E, and the Council of Chief State School Officers, is currently creating a twelfth grade
[End of Page Twenty Six]
[Page Twenty Seven]
N A E P examination that will be used to assess students’ economic knowledge.
[End of Page Twenty Seven]
[Page Twenty Eight]
Assessment of Learning Skills.
Our examination of existing assessments in the areas of global awareness, civic engagement and
literacy, and financial, economic and business literacy focuses on those assessments that combine
these content areas with the application of key learning skills: information and communication
skills; thinking and problem solving skills; and interpersonal and self-directional skills. At the
same time, there exists a significant body of research, curricula and assessments focused on the
thinking skills themselves, independent of any associated content area. Each of these categories
has a long research and development history in psychological and educational literature. They can
be found in traditional measurement studies and practice. Much attention has been given to the
definition of these skill sets and how best to measure them accurately. Yet it is also important to
note that overlap exists among these types of skills, and the distinctions are not clear. Take, for example, the construct of self-directional skills. A substantial body of literature exists on this skill
set, often defined under the rubric “self-regulated learning” (S R L). Depending upon the theory
and researcher, components of S R L may differ in subtle ways. Some components might be
classified as problem-solving skills, such as monitoring one’s performance. Some components
might be information skills, such as the ability to gather and integrate new information, while still others might be considered self-directional, such as self-efficacy. Thus, because of the overlap,
we have chosen to combine the three categories into an overarching one, entitled learning skills.
Many instruments for these categories of skills have been used purely for research purposes. These often have not been submitted to the typical psychometric tests (i.e., norming,
reliability, and validity studies). We limited our examination to instruments that have been
published and have undergone the appropriate testing or those that are under development that
show promising tendencies for new and creative assessment techniques. Many of the instruments
that have been published are quite old, dating back more than forty years. However, as there has
been increasing scrutiny on the kind and quality of skills being measured by the new and growing
crop of high-stakes assessments, new thinking-skills assessments are beginning to appear.
Sometimes these are embedded within larger content-area assessments, as in the two thousand
three and two thousand four administrations of the Program for International Student
Assessment (P I S A), an international mixed-format paper-and-pencil
[End of Page Twenty Eight]
[Page Twenty Nine]
assessment developed by the Organization for Economic Co-operation and Development
(O E C D), wherein the O E C D tagged and reported on questions throughout the exams that
addressed thinking skills and self-regulated learning.
Another large-scale thinking-skills assessment is currently being developed by The
Rainbow Project, a collaborative effort sponsored by the College Board and led by Robert
Sternberg of Yale University. The work is based on Sternberg’s triarchic theory of intelligence: analytic, creative, and practical abilities. The objective of the Project is to develop a college
admissions test that will better predict success in college than the two currently used tests, the S A
T and A C T, while also minimizing group differences (i e, gender, ethnic, and institutional). The
test measures creative and practical skills in addition to the traditional skills tested by the S A T,
also a College Board product. The intent is to create a test that is grounded in psychological
theory, measures creative and practical skills, and gives admissions officers a window onto a
broader range of skills that better predict college performance. The test will use a variety of
traditional and creative item formats, but even the traditional formats, such as multiple choice,
will have a new twist. Instead of presenting a typical vocabulary item, the test will require
students to determine a word’s meaning from context, necessitating the use of analytic skills. In
other tasks, it
will have the student plan and map out a route. It will present vignettes and require that students
determine the most appropriate course of action. These would be measures of practical skills.
Creative skills will be assessed using tasks such as providing captions for cartoons or writing a
short story based on a title or a picture. Beta testing has occurred both in paper-and-pencil and
Web-based media.
The Metiri Group is currently developing the Self-Directed Learning Inventory (S L I), a survey
instrument intended to provide an assessment of self-directed learning skills for students in grades
five through twelve. The paper-and-pencil instrument contains items in which students report
how they most likely feel or what they most often do. The items use a seven-point Likert scale,
ranging from completely false to completely true. The S L I has three scales and several
subscales (a minimal scoring sketch follows the list below):
(list of three bulleted items)
* Forethought, including goal setting, strategic planning, self-efficacy beliefs, goal orientation,
and intrinsic interest.
[End of Page Twenty Nine]
[Page Thirty]
* Performance slash volitional control, including attention focusing, self-instruction slash
imagery, self monitoring, and help seeking.
* Self reflection, including self evaluation, attributions, self reactions, and adaptivity.
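Scoring an instrument like this is mechanical: each subscale score is simply the mean of its items’ ratings. The Python sketch below is illustrative only - the item-to-subscale mapping is hypothetical, since the actual S L I item assignments are not given in this report.

    from statistics import mean

    # Hypothetical mapping of item numbers to subscales (illustrative only;
    # the real S L I item assignments are not published in this report).
    ITEM_MAP = {
        "goal_setting": [1, 8, 15],
        "self_efficacy_beliefs": [2, 9, 16],
        "self_monitoring": [3, 10, 17],
        "self_evaluation": [4, 11, 18],
    }

    def score_subscales(responses):
        # responses: item number -> rating from one (completely false)
        # to seven (completely true).
        return {name: mean(responses[i] for i in items)
                for name, items in ITEM_MAP.items()}

    # Example: one student's ratings for eighteen hypothetical items.
    ratings = {i: 1 + (i % 7) for i in range(1, 19)}
    print(score_subscales(ratings))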
The Partnership for the Assessment of Standards-Based Science (P A S S) testing program was
developed with a grant from the National Science Foundation and is currently administered by
WestEd. P A S S uses a variety of measures for
upper elementary, middle, and high school students to assess science content knowledge as well
as higher order thinking and communication skills. These measures are aligned with the National
Science Education Standards.
In some of the P A S S tasks, students conduct hands-on investigations with different kinds of
materials. Other tasks involve designing experiments, interpreting results, making inferences and
deductions, and explaining and communicating to others (in writing) the scientific understandings
acquired by working through the task. A given task contains several separately scored questions
that are set in a realistic and engaging context, and these questions are organized and sequenced
to help students work through the task. A typical response to a question contains a few words or
sentences.
For example, one task asks students to measure the period of several pendulums (i e, the time
required for the pendulum to move from its extreme right position to the left and then back again
to the extreme right). The students measure the periods of pendulums that vary in length and the
amount of weight at the end. They are then asked to use these measurements to explain their
conclusion as to whether the length of the pendulum or the amount of weight at the end has the
greater effect on its period. Finally, students are asked to use their findings to predict the period
of a pendulum they cannot test but whose length and weight are known.
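For reference only - the physics is not stated in the task itself, and this is simply the standard small-angle result rather than anything prescribed by P A S S - the period of a simple pendulum is

    T = 2\pi \sqrt{L / g}

which depends on the pendulum’s length L and the gravitational acceleration g, but not on the mass at the end. A pendulum one meter long, for instance, has a period of about two seconds regardless of its weight, which is exactly the pattern students’ measurements should reveal and the relation that lets them predict the period of an untested pendulum from its length alone.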
In P A S S's current form, students write their answers in "lab notebooks" that are scored by
science teachers. However, a technology-based pilot study has indicated that it is feasible to use
computers not only to deliver the tasks to students but also to machine-score their open-ended
answers. P A S S measures have been used successfully in statewide, district, and
[End of Page Thirty]
[Page Thirty One]
local assessment programs, and in evaluating the effectiveness of in-service teacher training
programs. Statistical analyses have found that P A S S scores are very reliable and appropriately
sensitive to the effects of instruction.
Finally, the Council on Aid to Education (C A E) has created the Assessment of Analytic
Reasoning and Communication Skills to assess analytic reasoning, critical thinking, problem-solving, and applied written communication skills at the college level. This computer-based
assessment may have future application in the high school arena. The assessment requires
students to address realistic problems across a range of contexts (e g arts and humanities,
business, science, and social science) without requiring specific content-area knowledge.
Responding to the problems requires test-takers to apply analytic skills, such as analyzing the
validity of an argument, and communicate their thinking effectively in writing.
Each of the assessment’s ninety-minute problems presents students with an assignment, such as
making a recommendation as to whether their community’s dam should be removed. Students
review relevant documents and present their recommendation by listing the major arguments on
both sides of the issue, describing the strengths and limitations of these arguments, and then
making and justifying a recommendation as to how to proceed.
At the conclusion of each assignment, test-takers’ responses are uploaded and assessed by live
graders (though C A E is experimenting with machine-grading). The assessment then provides
summary results to schools and students. Empirical data show the scores are reliable and
positively correlated with the students’ grades.
[End of Page Thirty One]
[Page Thirty Two]
Assessment of I C T Literacy.
A look at education ministries and major education-oriented N G Os throughout
Western and Northern Europe, East Asia, Australia, and the Pacific Rim reveals a broad
consensus that I C T literacy must be treated as one of the core skill areas to be addressed by
national education systems in the new century. The language of this consensus – the way that I C
T literacy is being described and addressed by major actors in countries as diverse as the U K,
Finland, Singapore and Israel - indicates that I C T literacy’s elevation to centrality in national
curricula has been fueled, in part, by a new understanding of I C T as a domain within which
students can develop and display the kinds of higher-order thinking skills that education
authorities are seeking to foster in students.
In setting the direction of policy for I C T education in the next decade, papers coming out of
these education ministries and allied organizations draw an explicit link between I C T literacy
and higher-order cognitive abilities. Examples of this growing attitude and approach to I C T
pedagogy abound, such as the European SchoolNet Chair’s “Assessing I C T and Learning” (de
Rijcke two thousand four) or the annual series of white papers on “Adapting Education to the
Information Age,” produced by the Korean Ministry of Education and Human Resources
Development (e g Korean Ministry two thousand three). This increasingly international definition
of I C T literacy dovetails with the Partnership’s own vision for I C T literacy as an ability to use
twenty first century tools in service of thinking and problem-solving skills, information and
communication skills, and interpersonal and self-directional skills.
In logical conjunction with this increased emphasis on I C T literacy as a key competency
comprised of twenty first century content and skills, there is a growing interest in developing
assessments that can capture students’ higher-order thinking with technology, both in the sense of
(a) creating assessments that reveal the cognitive skills students employ in conjunction with their
use of technology, and (b) using technology as a means of delivering such assessments. However,
even the most forward-looking education ministries, international N G Os and university-based
programs are only beginning the process of creating these assessments. A handful of active
projects - largely based in the United Kingdom - have managed to combine the educational
vision, funding, and appropriate political will necessary to begin developing broadly applicable
assessments that capture students’ application of higher-order thinking in problems requiring the
use of I C T skills.
[End of Page Thirty Two]
[Page Thirty Three]
The U K’s emerging tools have attracted the attention of other ministries of education in Europe
and Asia, many of which plan to adopt and adapt assessments and assessment strategies
developed there (Leong, Erstad, Law and Ripley, November two thousand four). Of these U K-based initiatives, the innovative new Key Stage three (age twelve through thirteen) I C T Literacy
Assessment created by the British government’s Qualifications and Curriculum Authority (Q C
A) is garnering the greatest amount of international attention.
The assessment, administered online, is an ambitious attempt to assess higher-order thinking in
conjunction with I C T use. It is also one of the most promising new assessments we found in any
of our five content areas, as it represents a sophisticated new approach to combining assessment
of content-area and thinking skills, and to building assessments that can provide both national
data on students’ capabilities and information on individual students pertinent to classroom-level
instruction. The test will not only assess students’ I C T skills, but also their ability to use those
skills to solve a set of complex problems involving research, communication, information
management, and presentation. For example, one task in the pilot test asks students to draft and
publish a journalistic article probing the ethnic diversity of a small town’s police force and
teaching pool. To publish their article, students must collect and analyze employment data, email
sources for permission to publish the information and graphic material they gather, and present
the data in graphic as well as written form. In the course of their work, students use search
engines, navigate web-based information sources, exchange emails with information sources, and
employ spreadsheets, word processors, and presentation software to analyze and present their
research. All of this activity takes place within a complex “virtual town” - an assessment
environment with a detailed landscape, a ‘walled garden’ of assets (e g, text, pictures, data and
“canned” websites) and a toolkit of generic software programs developed by the Q C A to provide
the same capabilities as familiar productivity software on the level playing field of a non-brand-specific platform.
Most importantly, the Q C A’s responsive assessment engine tracks and responds to the test-taker’s performance on both technical and problem-solving tasks throughout the course of the
assessment - as students work through the test session their actions are tracked by the
[End of Page Thirty Three]
[Page Thirty Four]
computer and mapped against the capabilities that they are expected to demonstrate at each level
of the national curriculum; this includes both technical skills and learning skills, such as “finding
things out,” “developing ideas” and “exchanging and sharing information.” Student scores are
based on their performance in these areas as well as their demonstrated level of technical skill,
and the test engine’s final output includes not only a numerical score (useful for national ranking
purposes), but a detailed profile of the test-taker’s performance and areas for potential
improvement - making the test useful not only for ranking students, but for providing them with
targeted instruction in the future.
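To make the tracking idea concrete, the following toy sketch in Python tallies logged test-taker actions against the capability areas named above. It is purely illustrative: the Q C A’s actual engine, action taxonomy, and scoring rules are not described in enough detail here to reproduce, so every action name and rule below is an assumption.

    from collections import Counter

    # Hypothetical mapping from logged action types to the capability
    # areas named in the national curriculum (mapping is illustrative).
    ACTION_TO_CAPABILITY = {
        "ran_search": "finding things out",
        "opened_source": "finding things out",
        "built_spreadsheet_model": "developing ideas",
        "edited_draft": "developing ideas",
        "sent_email": "exchanging and sharing information",
        "published_page": "exchanging and sharing information",
    }

    def profile_session(action_log):
        # action_log: list of action-type strings captured during the test.
        tally = Counter(ACTION_TO_CAPABILITY[a]
                        for a in action_log if a in ACTION_TO_CAPABILITY)
        return sum(tally.values()), dict(tally)

    score, profile = profile_session(
        ["ran_search", "opened_source", "sent_email", "edited_draft"])
    print(score)    # a single number, useful for ranking
    print(profile)  # a per-area profile, useful for targeted instruction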
The Q C A plans a full rollout of the Key Stage three I C T Literacy Assessment to all U K
schools by mid two thousand six. The effort represents an investment of twenty six million
pounds by the British government in the test material and testing infrastructure (Ripley
November two thousand four). As U K schools await full implementation of this pioneering
assessment, some of the nation’s schools are already employing a number of smaller scale but
ambitious performance-based online I C T assessments, such as the C C E A (Northern Ireland’s
Council for the Curriculum, Examinations and Assessment) A-Level Examination in the Moving
Image, which allows students to demonstrate their mastery of multimedia literacy and production
through creation of three-minute films and an online exam wherein they are asked to critically
analyze a series of film clips; The Ultralab Certificate of Digital Creativity, which requires
students to discuss and “defend” their work (digitally produced film, artwork and music are all
eligible for assessment) to a panel of peers and professionals; and the eViva “e-portfolio” facility,
which provides a structured environment for students to post examples of their digitally-produced
work online and engage in dialogue with teachers and peers about their work in relation to a
number of the same curriculum areas touched on by the I C T Key Stage three Assessment - research and communication skills, data analysis and presentation.
In the United States, only the Educational Testing Service’s new I C T Literacy Assessment for
higher-education settings approaches the work being developed in Great
Britain. The E T S assessment employs “scenario-based assignments” - tasks such as selecting the
best database for an information need, filling in a concept map, or reformatting a word processing
document - to evaluate test-takers’ abilities to use I C T to think critically and solve problems.
[End of Page Thirty Four]
[Page Thirty Five]
Though this evaluation is currently targeted only to college-age students, it may well have
implications for future assessment at the K through twelve level, particularly in high schools.
One performance-based assessment in the U S aimed directly at eighth grade is the
N E T S Online Technology Assessment developed jointly by the International Society for
Technology in Education (I S T E) and Microsoft. In accordance with I S T E’s standards
framework, which focuses on the use of I C T to demonstrate achievement in analytic, production
and communication skills, the assessment’s twelve thirty-minute activities require students to use
a variety of Microsoft’s most commonly used Office applications - Word, Excel, PowerPoint,
Internet Explorer, Outlook, Access and FrontPage - to complete real-world tasks such as writing a
business letter or constructing a slide presentation on “The Nine Planets.” The assessments offer
formative information about students’ skills and have been offered as an online tool for teachers
and administrators to gauge their students’ progress towards N C L B’s eighth grade technology-literacy requirement.
The assessments discussed above represent some of the interesting and innovative work being
done by organizations and government agencies aiming to align the assessments we use to
measure success in the twenty first century with the higher-order skills required to achieve that
success. However, the majority of assessment instruments currently in widespread use were not
designed to target twenty first century skills, and there is a great deal of work to be done if we are
to transform the assessment landscape. In the next section, we discuss some of the costs and
benefits of a transition to twenty first century assessment.
[End of Page Thirty Five]
[Page Thirty Six]
Implications.
With spending on assessment development expected to grow into the billions of dollars this
decade, it is vital that our investment not focus merely on fulfilling federal requirements, but
rather squarely on preparing today’s children to face the challenges of participation in tomorrow’s
complex workplace and interconnected global community. Toward this goal, the Partnership has
synthesized the perspectives of business, education and government leaders and mapped out the
key content and skill areas that are fast becoming the cornerstones of global economic
competitiveness. Making this blueprint of twenty first century skills a reality will require the
development of new assessments that measure students’ acquisition and application of higher-order thinking skills, and that take advantage of advances in information technology to provide
data in real-time to the education community’s decision-makers - policymakers, administrators,
and teachers - and its key constituents - parents and students. Meant as an aid to the education
community in meeting this challenge, this report has surveyed the landscape of twenty first
century skills assessments worldwide and (in the appendix) outlined key terms in the assessment
field. In this section, we reflect on the implications of our findings and outline key principles and
strategic recommendations for guiding the creation of assessments of twenty first century skills
and for making these assessments a more prominent component of the U S educational system.
The movement to foster twenty first century skills and twenty first century content learning, as
well as to develop means of measuring complex, higher-order thinking skills, is emerging. Twelve
states, including Connecticut, Delaware, Kansas and Massachusetts, are making initial forays into
global awareness assessments. Nineteen states have some civic-skills testing, and eleven
of those states tie the tests to accountability measures. The importance of economics education, in
particular, has been recognized on the federal level with the passage of the Excellence in
Economics Education Act, a component of N C L B. And internationally, a broad consensus
exists among education ministries and major education-oriented N G Os throughout Europe, Asia,
Australia, and the Pacific Rim that I C T literacy must be treated as a core skill area in the new
century, drawing explicit links between I C T literacy and higher-order cognitive abilities and
making initial investments in developing assessments that can capture students’ higher-order
thinking with technology. However, even these education ministries and organizations are only
beginning the process of creating such assessments.
[End of Page Thirty Six]
[Page Thirty Seven]
Most assessments of twenty first century skills and content areas have so far focused on closed-answer response questions that test students’ knowledge of discrete facts and not on performance-based measures that examine students’ application of knowledge in problem-solving situations.
With other ministries watching, the United Kingdom has managed to combine the educational
vision, funding, and political will to develop assessments that capture students’ application of
higher-order thinking in problems requiring the use of I C T skills. As more of our economic
competitors move to foster twenty first century skills development and assessment within their
educational systems, the United States faces a critical challenge to keep pace in preparing our
students to meet the demands of the global community and tomorrow’s workforce.
If we are to improve the status of assessments in the U S educational system and to ensure that we
are measuring the kinds of content and skills that are necessary to succeed in the twenty first
century, we must recognize the power of assessments not just as levers to effect change but also
as tools to foster the application of higher-order thinking skills and to provide critical feedback to
inform instruction and student learning. The creation of diagnostic, formative and summative
assessments that measure students’ content knowledge, skill development, and I C T literacy
presents an important opportunity to feed a variety of assessment data back into our nation’s
school system at the appropriate levels, from the trend and pattern data needed by policymakers,
state officials, and district administrators to the individual and class level student data on
particular tasks, skills, and topics needed by classroom teachers, students, and parents. While
creating more diagnostic, performance-based measures requires some investment, embracing the
teaching and assessment of twenty first century skills presents us with a unique opportunity to
leap ahead of our competition in the global community. The Partnership believes that the
movement to embrace twenty first century skills will be greatly enhanced by identifying ways to
measure these skills. Conversely, the lack of such assessments will hinder progress toward
widespread adoption of twenty first century skills.
[End of Page Thirty Seven]
[Page Thirty Eight]
Four Key Principles for Assessing Twenty First Century Learning.
In its effort to advance progress toward the teaching and assessment of twenty first century skills,
the Partnership has outlined the following key principles that assessments of twenty first century
learning should address.
(list of three bulleted items; item one has a sub-list)
* We need assessment tools that will:
(sub-list of four dashed items)
- Measure student mastery of twenty first century skills.
- Diagnose where students require intervention in terms of twenty first century skills.
- Measure the educational system’s effectiveness in teaching twenty first century skills.
- Permit students to demonstrate their proficiency in twenty first century skills to educational
institutions and prospective employers; high stakes assessments alone do not generate evidence of
the skill sets that the business and education communities believe will ensure success in the
twenty first century.
* No single assessment tool will accomplish all these objectives. A diverse menu of assessment
tools must be available.
* Technology should be integrated into assessment tools to effectively measure twenty first
century skills.
[End of Page Thirty Eight]
[Page Thirty Nine]
Assessing Twenty First Century Skills: Five Strategic Recommendations.
In order to realize this vision and build momentum behind the assessment of twenty first century
skills, the Partnership has identified the following steps as necessary actions that tap the expertise
of key education stakeholders.
(list of five bulleted items)
* Articulate and build national consensus around the assessment of twenty first century skills
through large-scale public education initiatives. Creating broad-based understanding of the
importance of twenty first century skills and the value of measuring these skills is essential to
developing public momentum.
* Implement a policy that supports a broad vision for the adoption of assessments of twenty first
century skills. Administrators and practitioners are reluctant to take on any additional
assessments, particularly when those assessments require more time out of the instructional day
and additional expenditures of funds for testing.
Presenting educators with a clear vision that outlines the benefits, diagnostic role, and the
integration of such assessments within the educational system would help address this issue.
* Develop and support an R and D infrastructure for building assessments that measure
cognitively complex and real-world-related tasks. Currently, there is not a sufficient base of
research and development to support the full range of tools that could be used to assess twenty first
century skills and content knowledge. The federal government should take the lead in establishing
this infrastructure, both to ensure a more uniform approach to measuring these skills across the
nation and to relieve states of the burden of developing an additional assessment when they are
already struggling to meet current testing demands. In some cases, however, states eager to meet
this twenty first century learning challenge in the near term may want to invest in developing
their own infrastructure or crafting an action agenda for their school systems. To develop and
validate the assessments, educational research institutions, both in higher education and the public
sector, should be engaged. The involvement of these institutions will be critical to the success of
these efforts.
[End of Page Thirty Nine]
[Page Forty]
* Develop support in the private and public sectors to create viable production and distribution
networks for assessment instruments and tools that measure twenty first century skills. With
greater harmony among efforts to ensure accountability and efforts to improve twenty first
century instruction, private sector companies in collaboration with the public sector could play
the vital role of supplying aligned assessments throughout the nation’s school system.
* Challenge every state to adopt a system that assesses the full range of twenty first century skills
and twenty first century content knowledge by two thousand ten. This instrument or set of
instruments should be designed to generate both trend and pattern data needed to direct policy
and administrative decisions, as well as instructionally relevant, real-time student and classroom
data needed to differentiate and strengthen classroom instruction.
[End of Page Forty]
[Page Forty One]
References.
* American Diploma Project. (two thousand four). Ready or not: Creating a high school diploma
that counts. Washington, D C: Achieve, Inc. Retrieved May six, two thousand five, from w w w
dot a c h i e v e dot o r g slash a c h i e v e dot n s f slash A m e r i c a n D i p l o m a P r o j e c t
question mark o p e n f o r m.
* Baker, R. S., Corbett, A. T., and Koedinger, K. R. (two thousand four). Pages fifty eight
through sixty five in Y. B. Kafai, W. A. Sandoval, N. Enyedy, A. S. Nixon, and F. Herrera
(Editors), Proceedings of the sixth international conference of the learning sciences. Mahwah,
New Jersey: Lawrence Erlbaum Associates.
* Braun, H. I. (two thousand three). Assessment and technology. Pages two hundred sixty seven
through two hundred eighty eight in M. Segers, P. Dochy, and E. Cascallar (Editors), Optimising
new modes of assessment. Dordrecht, Netherlands: Kluwer.
* Burns, H., Partlett, J. W., and Redfield, C. L. (Editors.). (nineteen ninety one). Intelligent
tutoring systems: Evolutions in design. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
* Clinton, J., Conway, C., and Hoke, L. (two thousand four, June). The globally competitive south
(under construction). Research Triangle Park, North Carolina: Southern Growth Policies Board.
* Cronbach, L. J. (nineteen seventy one). Test validation. Pages four hundred forty three through
five hundred seven in R. L. Thorndike (Editor), Educational measurement (second edition).
Washington, D C: American Council on Education.
* Danitz, T. (two thousand one, February twenty seven). Special report: States pay four hundred
million dollars for tests in two thousand one. Washington, D C: Stateline dot org. Available at w
w w dot s t a t e l i n e dot o r g.
* de Rijcke, F. (two thousand four, March). Assessing I C T and learning. Paper presented at the
two thousand four C o S N K through twelve School Networking Conference, Arlington,
Virginia.
* Feldt, L. S., and Brennan, R. L. (nineteen eighty nine). Reliability. Pages one hundred five
through one hundred forty six in R. L. Linn (Editor), Educational measurement (third edition).
New York: Macmillan.
* Good, R.H., Kaminski, R.A., Smith, S., Laimon, D., and Dill, D. (two thousand one). Dynamic
indicators of basic early literacy skills (D I B E L S) (fifth edition). Eugene, Oregon: University
of Oregon, Institute for Development of Educational Achievement. Retrieved August nine, two
thousand four, from d i b e l s dot u o r e g o n dot e d u.
[End of Page Forty One]
[Page Forty Two]
* General Accounting Office. (two thousand three, May). Title I: Characteristics of tests will
influence expenses; information sharing may help states realize efficiencies. Report to
Congressional requesters. Washington, D C: Author.
* International Baccalaureate Organization. (two thousand five a). The diploma program.
Retrieved January three, two thousand five, from w w w dot i b o dot o r g slash i b o slash i n d e
x dot c f m question mark p a g e equals sign slash i b o slash p r o g r a m m e s slash p r g
underscore d i p ampersand l a n g u a g e equals sign E N.
* Kagan, S. L., and Stewart, V. (two thousand four a). Putting the world into world-class
education. Phi Delta Kappan, eighty six (three), one hundred ninety five through one hundred
ninety seven.
* Kagan, S. L., and Stewart, V. (two thousand four b). International education in the schools: The
state of the field. Phi Delta Kappan, eighty six (three), two hundred twenty nine through two
hundred thirty six.
* Korean Ministry of Education and Human Resources Development. (two thousand three).
Adapting education to the information age. Seoul: Korean Ministry of Education and Human
Resources Development.
* Linn, R. L. (nineteen eighty nine). Educational measurement (third edition). New York:
Macmillan.
* Mandell, L. (two thousand four). Financial literacy, financial failure, and the failure of
financial education. Buffalo, New York: University at Buffalo, Department of Finance and
Managerial Economics.
* Messick, S. J. (nineteen eighty nine). Validity. Pages thirteen through one hundred three in R.
L. Linn (Editor), Educational measurement (third edition). New York: Macmillan.
* Mullis, I.V.S., Martin, M.O., Gonzalez, E.J., and Chrostowski, S.J. (two thousand four). T I M
S S two thousand three international mathematics report: Findings from I E A’s trends in
international mathematics and science study at the eighth and fourth grades. Chestnut Hill,
Massachusetts: Boston College.
* National Assessment Governing Board. (two thousand one). Two thousand ten National
Assessment of Educational Progress in geography assessment framework. Retrieved December
fifteen, two thousand four, from n c e s dot e d dot g o v slash n a t i o n s r e p o r t c a r d slash g
e o g r a p h y.
[End of Page Forty Two]
[Page Forty Three]
* National Commission on Asia in the Schools. (two thousand one). Preparing young Americans
for today’s interconnected world. New York: Asia Society.
* National Research Council. (two thousand one a). Knowing what students know: The science
and design of educational assessment. Committee on the Foundations of Assessment. Pellegrino,
J., Chudowsky, N., and Glaser, R. (Editors). Board of Testing and Assessment, Center for
Education. Division of Behavioral and Social Sciences and Education. Washington, D C:
National Academy Press.
* National Research Council. (two thousand one b). Classroom assessment and the national
science education standards. J. M. Atkin, P. Black, and J. Coffey (Editors). Center for Education,
Division of Behavioral and Social Sciences and Education. Washington, D C: National Academy
Press.
* Partnership for Twenty First Century Skills. (two thousand three). Learning for the Twenty
First Century. Washington, D C: Author. Available online at w w w dot two one s t c e n t u r y s
k i l l s dot o r g.
* Polson, M. C., and Richardson, J. J. (Editors). (nineteen eighty eight). Foundations in
intelligent tutoring systems. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
* Psotka, J., Massey, L. D., and Mutter, S. A. (Editors). (nineteen eighty eight). Intelligent
tutoring systems: Lessons learned. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
* Ridgeway, J., McCusker, S., and Pead, D. (two thousand four). Literature review on e-assessment. United Kingdom: Nesta Futurelab Series, Report Ten.
* Sleeman, D.H., and Brown, J.S. (nineteen eighty two). Intelligent tutoring systems. Orlando,
Florida: Academic Press.
* Spellings, M. (two thousand five). Is America really serious about educating every child?
(Prepared remarks for Secretary Spellings at the Education Writers Association National Seminar,
St. Petersburg, Florida, May six, two thousand five). Retrieved from: w w w dot e d dot g o v
slash n e w s slash s p e e c h e s slash two zero zero five slash zero five slash zero five zero six
two zero zero five dot h t m l.
* Stage, E.K. (two thousand five, Winter). Why do we need these assessments? The Natural
Selection: The Journal of BSCS. Pages eleven through thirteen.
[End of Page Forty Three]
[Page Forty Four]
* Stanley, J. J. (nineteen seventy one). Reliability. Pages three hundred fifty six through four
hundred forty two in R. L. Thorndike (Editor), Educational measurement (second edition).
Washington, D C: American Council on Education.
* Thorndike, R. L. (Editor). (nineteen seventy one). Educational measurement (second edition).
Washington, D C: American Council on Education.
* Yakimowski-Srebnick, M.E. (two thousand one). Security in a high-stakes environment: The
perceptions of test directors. New Orleans: National Council on Measurement in Education (N C
M E) Annual Meeting, National Association of Testing Directors Symposia.
[End of Page Forty Four]
[Page Forty Five]
Appendix A: Current Assessments.
Note: For an interactive and up-to-date view of current assessments related to twenty first
century skills, please visit w w w dot two one s t c e n t u r y s k i l l s dot o r g slash a s s e s s
two one.
Assessment of Global Awareness.
International Baccalaureate Diploma.
Type: Mixed format.
Target Grade Level: High School.
Initiated By: Non-Commercial.
Region: International slash multi-region.
Delivery System: Paper-and-pencil.
More info at: w w w dot i b o dot o r g slash i b o slash i n d e x dot c f m question mark p a g e
equals sign slash i b o slash p r o g r a m m e s slash p r g underscore d i p ampersand l a n g u a g
e equals sign E N.
The International Baccalaureate Organization (I B O), a non-profit organization established in
nineteen sixty eight, currently works with one thousand four hundred sixty two schools in one
hundred seventeen countries, offering multi-subject education programs at three stages: a
“Primary Years Programme” (P Y P) for students aged three through twelve, a “Middle Years
Programme” (M Y P) for students aged eleven through sixteen, and a Diploma Programme (D P)
for students aged sixteen through nineteen. The Diploma Programme (D P) - available only in
English, French and Spanish - culminates in a criterion-referenced, high-stakes examination that
is intended to fulfill the graduation requirements of participating students. The D P and its
associated exam attempt to “incorporate the best elements of national systems, without being
based on any one” (I B O web site two thousand five a).
N A E P Geography Assessment.
Type: Mixed format.
Target Grade Level: Elementary, Middle and High School.
Initiated By: Policymakers.
[End of Page Forty Five]
[Page Forty Six]
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: n c e s dot e d dot g o v slash n a t i o n s r e p o r t c a r d slash g e o g r a p h y.
The National Assessment of Educational Progress (N A E P) Geography Assessment is a set of
mixed-format exams designed to measure the geography knowledge and skills of fourth, eighth,
and twelfth-graders. The assessment targets three broad categories of cognitive skills
(“Knowing,” “Understanding,” and “Applying”), and three categories of content knowledge
(“Space and Place,” “Environment and Society,” and “Spatial Dynamics and Connections”). Each
question in the exam is intended to address at least one cognitive dimension and at least one
content dimension. The N A E P is administered by the Department of Education’s National
Center for Education Statistics; a bipartisan board appointed by the Secretary of Education sets
policy for the program.
N C G E Intermediate-Level Geography Test.
Type: Mixed format.
Target Grade Level: Middle School.
Initiated By: Non-Commercial.
Region: U S only.
Delivery System: Paper-and-pencil.
More info at: w w w dot n c g e dot n e t slash M e r c h a n t two slash m e r c h a n t dot m v c
question mark S c r e e n equals sign P R O D ampersand S t o r e underscore C o d e equals sign
N ampersand P r o d u c t underscore C o d e equals sign b p one one nine ampersand C a t e g o r
y underscore C o d e equals sign a t.
Created by the National Council for Geographic Education (N C G E) Test Development
Task Force, this test is keyed to the nineteen ninety four “Geography for Life”
National Geography Standards. The two-part, ninety question test encompasses six essential
elements: The World in Spatial Terms, Places and Regions, Physical Systems, Human Systems,
Environment and Society, and Uses of Geography. The test was revised and reassessed (for
content validity and reliability) in two thousand.
[End of Page Forty Six]
[Page Forty Seven]
Note: page forty seven is blank.
[End of Page Forty Seven]
[Page Forty Eight]
Assessment of Civic Engagement.
Assessment of Civic Skills Acquisition among Adolescents.
Type: Mixed format.
Target Grade Level: Elementary School.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot c i v i c y o u t h dot o r g slash g r a n t s slash i n p r o g r e s s slash h s
underscore c i v i c dot h t m number sign one two.
Principal Investigator Mary Kirlin, California State University Sacramento, has been funded by
the Center for Information and Research on Civic Learning and Engagement (C I R C L E) to develop
a set of instruments to measure civic skills acquisition within Kirlin’s framework of four core
civic skill areas: communication, organization, collective decision-making, and critical thinking.
This work is in progress.
C I R C L E Civic Engagement Quiz.
Type: Multiple choice slash forced-answer.
Target Grade Level: High School.
Initiated By: Non-Commercial.
Region: International slash multi-region.
Delivery System: Paper-and-pencil.
More info at: w w w dot c i v i c y o u t h dot o r g slash p r a c t i t i o n e r s slash i n d e x dot h
t m.
The C I R C L E Civic Engagement Quiz is a tool for the evaluation of young people’s civic
engagement as compared to a national sample. The quiz is focused on respondents’ civic behavior
(as opposed to their knowledge). The quiz asks about the character and frequency of the test-taker’s participation in civic activities, such as writing letters to the editor, voting, political
activism or volunteering. Teachers or program leaders administering the quiz may then use the
results to categorize respondents’ civic engagement within C I R C L E’s “Typology of
Engagement” (civic activities, electoral activities, political voice activities) and to classify each
respondent as “Disengaged,” an “Electoral Specialist,” a “Civic Specialist,” or a “Dual Activist.”
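The classification step can be pictured with a minimal Python sketch. The quiz’s actual cut-offs are not given in this report, and the typology’s three activity categories are simplified to two here, so the two-activity threshold below is assumed purely for illustration.

    def classify(civic_count, electoral_count, threshold=2):
        # civic_count: civic activities reported (e g volunteering);
        # electoral_count: electoral activities reported (e g voting).
        civic = civic_count >= threshold
        electoral = electoral_count >= threshold
        if civic and electoral:
            return "Dual Activist"
        if civic:
            return "Civic Specialist"
        if electoral:
            return "Electoral Specialist"
        return "Disengaged"

    print(classify(civic_count=3, electoral_count=0))  # Civic Specialist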
[End of Page Forty Eight]
[Page Forty Nine]
Civic Outcomes for Elementary School Students.
Type: Mixed format.
Target Grade Level: Elementary School.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot c i v i c y o u t h dot o r g slash g r a n t s slash i n p r o g r e s s slash h s
underscore c i v i c dot h t m number sign one five.
This two-year project comes out of a partnership between C I R C L E, the East Bay Conservation
Corps, Abt Associates, Inc., and Brandeis University. Researchers from Abt and Brandeis are
working with the educators at the Corps’ charter school to identify indicators and pilot
assessments of elementary school students’ civic knowledge and engagement. The aim of the
project is to create tools for researchers and educators assessing the impact of civic and service-learning programs. The project team will produce a conceptual framework for civic education at
this level, and a set of valid and reliable measures of civic knowledge, skills, attitudes, and
behaviors.
Databank of Civic Assessment Tools.
Type: Mixed format.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: Not Available (N/A).
In partnership with the National Center for Learning and Citizenship (N C L C) and the
Education Commission of the States (E C S), and funded by the Carnegie Corporation of
[End of Page Forty Nine]
[Page Fifty]
New York and C I R C L E, the Campaign for the Civic Mission of the Schools is currently
engaged in a major project to assemble a wide array of released test items to identify both
multiple choice and performance-based assessments in two broad areas: (a) student learning,
including students’ civics knowledge, skills and attitudes; and (b) institutional capacity for civics
education, including a school’s civic climate and investment in a civic mission. Items will cover
five categories: civic knowledge, civic thinking and participation skills, social responsibility,
civic disposition, and characteristics of school and curriculum.
IEA Civic Education Study (C I V E D).
Type: Multiple choice slash forced-answer.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: International slash multi-region.
Delivery System: Paper-and-pencil.
More info at: w w w dot i e a dot n l slash i e a slash h q.
From nineteen ninety four through two thousand two, the International Association for the
Evaluation of Educational Achievement (I E A) conducted the “largest and most rigorous study of
civic education ever conducted internationally” (Torney-Purta, Lehmann, Oswald and Schulz two
thousand one). The research tested and surveyed nationally representative samples consisting of
ninety thousand fourteen year-old students in twenty eight countries, and fifty thousand seventeen
to nineteen year-old students in sixteen countries. Questions in I E A’s survey instrument were
organized around issues of Democracy, National Identity, Social Cohesion and Diversity, and
Engagement in Civil Society. All of the nations participating in the study were democracies, and
thus the study was one of civic engagement in democratic societies only. The study questionnaire
was divided into three sections. The first section consisted of multiple choice items aimed at
testing knowledge of civic content (e g, the workings of laws and branches of government) and
skills in interpreting civics-related information. The second section consisted of background items
about test-takers. The third section included items about civic concepts, attitudes and behavior,
divided into sub-sections related to topics such as “Good Citizens,” “Political Action,” and
“Opportunities.”
[End of Page Fifty]
[Page Fifty One]
Items in this section were constructed in a variety of ways; many questions were Likert scaled,
others were yes slash no.
N A E P Civics Test (grades four, eight and twelve).
Type: Mixed format.
Target Grade Level: Elementary, Middle and High School.
Initiated By: Policymakers.
Region: U S only.
Delivery System: Paper-and-pencil.
More info at: n c e s dot e d dot g o v slash n a t i o n s r e p o r t c a r d slash c i v i c s.
The National Assessment of Educational Progress (N A E P) Civics exam is a mixed-format test
comprising multiple choice, short constructed-response and extended constructed-response
questions, organized into three main categories: civic knowledge, intellectual skills and civic
dispositions.
[End of Page Fifty One]
[Page Fifty Two]
Assessment of Financial, Economic, and Business Literacy.
Jump$tart Assessment of Financial Literacy.
Type: Multiple choice slash forced-answer.
Target Grade Level: High School.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot j u m p s t a r t dot o r g.
The Jump$tart Coalition has developed personal finance standards and is developing a bank of
financial-literacy items. The Jump$tart Assessment of Financial Literacy is a high school measure
of financial literacy that is administered to students in grade twelve (Mandell two thousand four).
It has been used since nineteen ninety seven. The items measure what Jump$tart Coalition
members believe to be essential financial knowledge. This content includes income, money
management, saving, and spending.
Junior Achievement Economics Test Bank.
Type: Multiple choice slash forced-answer.
Target Grade Level: Middle School.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot j a dot o r g slash p r o g r a m s slash p r o g r a m s underscore s u p p l
e m e n t s slash e t b dot s h t m l.
Junior Achievement produced the Economics Test Bank (E T B) as a tool to help assess
understanding of economics principles. Grounded in real applications to students’ lives, the tool
measures knowledge of micro, macro, and international economics. The E T B is aligned with the
Voluntary National Content Standards in Economics. It is a multiple choice test that covers
twenty economic concepts: scarcity and choice, benefits and costs, economic systems, incentives,
voluntary exchange, specialization, markets and prices, supply and
[End of Page Fifty Two]
[Page Fifty Three]
demand, competition, economic institutions, money, interest rates, income, entrepreneurs,
investment, government, economics and public policies, gross domestic product, unemployment,
and fiscal and monetary policy. The E T B focuses on middle school students.
N A E P Economics Assessment.
Type: Mixed format.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: U S only.
Delivery System: Paper-and-pencil.
More info at: n c e s dot e d dot g o v slash n a t i o n s r e p o r t c a r d slash e c o n o m i c s.
Acknowledging that economics knowledge is becoming increasingly important to effective
functioning in today’s society, the N C E S is developing a N A E P Economics assessment for
grade twelve students. The assessment will be administered for the first time in two thousand six.
As with the other N A E P tests, the National Assessment Governing Board (N A G B two
thousand two) has developed a framework to guide item construction. N A E P defines economic
literacy as the ability to identify, analyze, and evaluate the consequences of individual decisions
and public policy, including an understanding of: the fundamental constraints imposed by limited
resources and the resulting choices people have to make, and the trade-offs they face; how
economies and markets work and how people function within them; and the benefits and costs of
economic interaction and interdependence among people and nations.
The assessment will contain three types of items: sixty percent multiple choice, thirty percent
short constructed response, and ten percent extended constructed response. The cognitive
demands required of the items will be evenly distributed among knowing, reasoning, and
applying. The assessment will be divided into three content categories, based on the amount of
testing time per area: forty five percent on the market economy, forty percent on the national
economy, and fifteen percent on the international economy. These categories are further refined by
the standards on which they are grounded.
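The arithmetic behind those distributions is straightforward; the Python sketch below works it out for an assumed form of sixty items and one hundred twenty minutes of testing time (the percentages are from the framework described above, but the totals are hypothetical, as the report does not state them).

    # Shares taken from the framework described above.
    ITEM_FORMATS = {"multiple choice": 0.60,
                    "short constructed response": 0.30,
                    "extended constructed response": 0.10}
    CONTENT_TIME = {"market economy": 0.45,
                    "national economy": 0.40,
                    "international economy": 0.15}

    total_items, total_minutes = 60, 120  # assumed totals, illustration only

    for fmt, share in ITEM_FORMATS.items():
        print(f"{fmt}: {round(total_items * share)} items")      # 36 / 18 / 6
    for area, share in CONTENT_TIME.items():
        print(f"{area}: {round(total_minutes * share)} minutes")  # 54 / 48 / 18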
[End of Page Fifty Three]
[Page Fifty Four]
N C E E Assessments: Basic Economics Test, Test of Economic Knowledge, and Test of
Economic Literacy.
Type: Multiple choice slash forced-answer.
Target Grade Level: Elementary, Middle and High School.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil slash computer-delivered.
More info at: w w w dot n c e e dot n e t.
The National Council on Economic Education (N C E E) has developed three measures for pre-college audiences. The Basic Economics Test (B E T) is a test of economics understanding for
grades five and six. It is also a nationally normed and standardized test that contains twenty nine
multiple choice items. The Test of Economic Knowledge (T E K) is a middle-school test
designed to measure economics knowledge in grades seven through nine. It is a nationally
normed test with two forms, each containing forty multiple choice items. The Test of Economic
Literacy (T E L) is a high school level measure based on the national standards. It contains forty
multiple choice items that cover fundamental economic, microeconomic, macroeconomic, and
international economic concepts. The T E L is available online.
Assessment of Learning Skills.
Assessment of Analytic Reasoning and Communication Skills.
Type: Performance-based.
Target Grade Level: Higher Education.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Computer-delivered.
More info at: w w w dot c a e dot o r g.
[End of Page Fifty Four]
[Page Fifty Five]
The Council on Aid to Education (C A E)’s assessment measures analytic reasoning, critical
thinking, problem-solving, and applied written communication skills at the college level. Test-takers address realistic problems across a range of contexts (e g arts and humanities, business,
science, and social science) without need for specific content-area knowledge; test-takers must
apply analytic skills (such as analyzing the validity of an argument) and communicate their
thinking effectively in writing. Each of the assessment’s ninety-minute assignments requires
students to review relevant documents and present a recommendation by listing the major
arguments on both sides of an issue, describing the strengths and limitations of these arguments,
and then making and justifying a recommendation as to how to proceed. At the conclusion of
each assignment, test-takers’ responses are uploaded and assessed by live graders (though C A E
is experimenting with machine-grading). The assessment then provides summary results to
schools and students. Empirical data show the scores are reliable and positively correlated with
the students’ grades.
Cornell Critical Thinking Tests.
Type: Multiple choice slash forced-answer.
Target Grade Level: Elementary, Middle and High School.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot c r i t i c a l t h i n k i n g dot c o m slash s e r i e s slash zero five five
slash i n d e x underscore c dot h t m l.
The Cornell Critical Thinking Tests are published by Critical Thinking Press and Software. The
tests date from nineteen eighty five, having been developed by two Cornell University professors,
Robert Ennis and Jason Millman. Level X is appropriate for assessing the critical thinking
abilities of students in grades four through fourteen. It focuses on deduction, credibility, and
identification of assumptions. It is a multiple choice test. Earlier versions of the test focused on
specific components of critical thinking. The Cornell Class Reasoning Test (nineteen sixty four)
is a multiple choice test for grades four through fourteen that measures class (deductive)
reasoning, while the Cornell Conditional Reasoning Test (nineteen sixty four) measures
conditional (deductive) reasoning.
[End of Page Fifty Five]
[Page Fifty Six]
Ennis-Weir Critical Thinking Essay Test.
Type: Performance-based.
Target Grade Level: Middle and High School.
Initiated By: Non-Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: f a c u l t y dot e d dot u i u c dot e d u slash r h e n n i s slash A s s e s s m e n t dot
h t m l.
Published by Robert H. Ennis in nineteen eighty five, the test is intended for use in grades seven
through college to measure skills such as getting the point, seeing the reason and assumptions,
offering reasons, and seeing other possibilities and explanations.
Kit of Factor-Referenced Cognitive Tests (Nineteen Seventy Six Edition).
Type: Mixed-format.
Target Grade Level: High School.
Initiated By: Non-commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot e t s dot o r g slash r e s e a r c h slash e k s t r o m dot h t m l.
The Kit was originally published by Educational Testing Service in nineteen sixty three as the Kit
of Reference Tests for Cognitive Factors (French, Ekstrom and Price nineteen sixty three), and
was revised in nineteen seventy six. It is primarily a research tool to examine various cognitive
skills, in particular reasoning, verbal ability, spatial ability, memory, and other cognitive
processes. The Kit consists of seventy two tests that measure twenty three cognitive factors.
These factors include: flexibility of closure, speed of closure, verbal closure, associational
fluency, expressional fluency, figural fluency, ideational fluency, word fluency, induction,
integrative processes, associative
[End of Page Fifty Six]
[Page Fifty Seven]
memory, memory span, visual memory, number, perceptual speed, general reasoning, logical
reasoning, spatial orientation, spatial scanning, verbal comprehension, visualization, figural
flexibility, and flexibility of use. There are two to five tests to assess each of the factors. For
example, the logical reasoning factor is assessed by the Nonsense Syllogisms Test, Diagramming
Relationships, the Inference Test, and Deciphering Languages. The seventy two tests use a
variety of item formats, ranging from more traditional multiple choice to extensive production
responses. Many of the tests are quite creative in what they require of the test taker. Extensive
development and research have been carried out on these tests, and they are theoretically linked to
the cognitive literature. The tests can be used with school-age children up through adulthood.
New Jersey Test of Reasoning Skills.
Type: Performance-based.
Target Grade Level: Elementary School.
Initiated By: Non-commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: Not Available (n/a).
Produced in nineteen eighty three, this is a test for grades four through college to measure
syllogism, assumption identification, induction, reasons, kind, and degree.
Partnership for the Assessment of Standards-Based Science (P A S S) Assessment.
Type: Performance-based.
Target Grade Level: Elementary, Middle, and High School.
Initiated By: Non-commercial.
Region: U S-based.
Delivery System: Paper-and-pencil (pending conversion to computer delivery).
More info at: w w w dot w e s t e d dot o r g slash c s slash w e slash v i e w slash s e r v slash
nine.
[End of Page Fifty Seven]
[Page Fifty Eight]
The Partnership for the Assessment of Standards-Based Science (P A S S) testing program was
developed with a grant from the National Science Foundation and is currently administered by
WestEd. P A S S uses a variety of measures for
upper elementary, middle, and high school students to assess science content knowledge as well
as higher order thinking and communication skills. These measures are aligned with the National
Science Education Standards.
Some P A S S tasks require the student to conduct hands-on investigations with different kinds of
materials, while other tasks involve designing experiments, interpreting results, making
inferences and deductions, and explaining and communicating to others (in writing) the scientific
understandings the student acquired by working through the task. In P A S S's current form,
students write their answers in "lab notebooks" that are scored by science teachers. However, a
technology-based pilot study has established the feasibility of using computers to deliver the tasks
to students and to score their open-ended answers. P A S S measures have been used successfully
in statewide, district, and local assessment programs, and in evaluating the effectiveness of in-service teacher training programs. Statistical analyses have found that P A S S scores are very
reliable and appropriately sensitive to the effects of instruction.
P I S A Problem-Solving Assessment.
Type: Mixed-format.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: International, multi-region.
Delivery System: Paper-and-pencil.
More info at: w w w dot p i s a dot o e c d dot o r g slash d o c u m e n t slash five four slash zero
comma two three four zero comma e n underscore three two two five two three five one
underscore three two two three six one seven three underscore three four zero zero two five five
zero underscore one underscore one underscore one underscore one comma zero zero dot h t m l.
[End of Page Fifty Eight]
[Page Fifty Nine]
As part of the two thousand three implementation of the Program for International Student
Assessment (P I S A), the Organization for Economic Co-operation and Development (O E C D)
included measures of problem solving embedded within the curricular areas of mathematics,
science, and reading. P I S A’s objective was to include problem-solving items to examine
students’ ability to apply knowledge and skills to analyze, reason, and communicate within
content areas, and interpret problems in the context of real-world situations. The assessment
included three problem types - decision-making, systems analysis, and troubleshooting.
The paper-and-pencil assessment used multiple choice and constructed response item formats.
However, the format of item presentation was varied throughout the assessment to include written
passages as well as graphical displays, representing the sorts of information individuals are likely
to encounter in real life. Students' problem-solving ability received ratings on a four-point scale:
Level three - reflective, communicative problem solvers; Level two - reasoning, decision-making
problem solvers; Level one - basic problem solvers; and Below Level one - weak or emergent
problem solvers.
P I S A Self-Regulated Learning Assessment.
Type: Mixed format.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: International, multi-region.
Delivery System: Paper-and-pencil.
More info at: w w w dot o e c d dot o r g.
Self-report measures of self-regulated learning (S R L) were included as part of the P I S A
reading literacy assessment (O E C D two thousand four). The P I S A S R L assessment is
administered to representative international samples of fifteen year-old students from thirty two
countries. O E C D studied the literature on S R L and designed self-report items that would
reflect the current thinking about the construct. They identified four global components of S R L,
each with subcomponents totaling thirteen in number: cognitive and meta-cognitive learning
strategies
[End of Page Fifty Nine]
[Page Sixty]
(including memorization, elaboration, and control strategies); motivation and interest (including
instrumental motivation, interest in reading and mathematics, and effort and perseverance);
self-concept (including perceived self-efficacy, self-concept in reading and mathematics, and
academic self-concept); and preference for learning situations (including cooperative and
competitive learning).
Rainbow Project Instruments.
Type: Mixed-format.
Target Grade Level: High School.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil slash computer-delivered.
More info at:
The Rainbow Project is a collaborative effort sponsored by the College Board and led by Robert
Sternberg of Yale University. The objective of the Project is to develop a college admissions test
that will better predict success in college than the two currently used tests
(the S A T and A C T), while also minimizing group differences (i e, differences by gender,
ethnicity, and institution). The test measures creative and practical skills in addition to the traditional skills
tested by the S A T, also a College Board product. The theoretical framework for the test comes
from Sternberg's triarchic theory of intelligence - analytic, creative, and practical abilities - and
his ten criteria for academic performance (conscientiousness, academic motivation, adaptability,
perseverance, love of learning, desire to improve, ethics, respectful relationship with professors,
well-roundedness, and realism) and nine criteria for retention (conscientiousness slash respect,
self-respect, ethics, interpersonal skills, community engagement, adaptability, warmth, social
intelligence, and realism). The test will use a variety of traditional and creative item formats. Beta
testing has occurred both in paper-and-pencil and Web-based media. Because the project and its
instruments are still in progress, much of the work remains proprietary beyond the generalities
that have been published or have appeared in the public media. Specifics about the items and the
instruments have not been made public.
[End of Page Sixty]
[Page Sixty One]
Ross Test of Higher Cognitive Processes.
Type: Multiple choice slash forced-answer.
Target Grade Level: Elementary School.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: N slash A.
Produced in nineteen seventy six by Academic Therapy Publications for grades four through six,
this is a multiple choice test on verbal analogies, deduction, identifying assumptions, and
sufficiency of information.
Self-Directed Learning Inventory (S L I).
Type: Multiple choice slash forced-answer.
Target Grade Level: Middle School.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot m e t i r i dot c o m.
The S L I is a survey instrument being developed by the Metiri Group to provide an
assessment of self-directed learning skills for students in grades five through twelve. The
instrument is grounded in sound psychological theories developed by well-known researchers
such as Bandura, Zimmerman, and Midgley. As with many instruments, some items have been adapted
and modified from various sources, while others have been created specifically for this measure. The S L I
has three scales, each with several subscales - forethought, performance slash volitional control,
and self-reflection. The S L I contains items in which students report how they most likely feel or
what they most often do.
[End of Page Sixty One]
[Page Sixty Two]
The items use a seven-point Likert scale, ranging from completely false to completely true. This
instrument is still under development and remains proprietary.
Tasks in Critical Thinking.
Type: Performance-based.
Target Grade Level: High School.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot e t s dot o r g.
Produced in nineteen eighty nine by the Educational Testing Service (E T S), this is a ninety-minute
test that yields scores on inquiry, analysis, and communication. It contains nine essay
slash short answer tasks, three each in the humanities, social science, and natural science. Inquiry
relates to how well test takers are able to plan a search, use methods of discovery, comprehend
and extract, and sort and evaluate. Analysis involves the formulation of hypotheses and strategies,
application, demonstration of creativity, finding relationships, and drawing conclusions.
Communication concerns the presentation of quantitative and visual information.
Test of Enquiry Skills.
Type: Multiple choice slash forced-answer.
Target Grade Level: Middle School.
Initiated By: Non-commercial.
Region: Australia.
Delivery System: Paper-and-pencil.
More info at: N slash A.
Created in nineteen seventy nine for grades seven through ten in Australia, the Test of Enquiry
Skills is a multiple choice test produced by the Australian Council for Educational Research (A C
E R).
[End of Page Sixty Two]
[Page Sixty Three]
It focuses on the use of reference materials, interpreting and processing information, and
scientific thinking.
Watson-Glaser Critical Thinking Appraisal.
Type: Multiple choice slash forced-answer.
Target Grade Level: High School.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot p a n t e s t i n g dot c o m slash p r o d u c t s slash P s y c h C o r p
slash W G C T A dot a s p.
The Watson-Glaser Critical Thinking Appraisal, a paper-and-pencil instrument, is a commercial
product that assesses various aspects of reasoning ability. It was originally published by Harcourt
Brace Jovanovich and is now published by Psychological Corporation. It is an old measure,
dating back to the nineteen sixties. This measure is an instrument for high school and college
students to assess inference, recognition of assumptions, deduction, interpretation, and evaluation
of arguments. It also can be used to assess critical thinking in the workplace. Performance does
not rely on the test taker’s knowledge of course content or prior knowledge. Instead, it measures
the extent to which an individual processes information, can make judgments, and can think
through options and consequences. Test takers are asked to evaluate written passages that contain
problems, statements, arguments, and interpretations. Responses are in the form of forced choice.
Examinees are expected to answer eighty items in sixty minutes.
World Class Tests.
Type: Performance-based.
Target Grade Level: Middle School.
Initiated By: Policymakers.
Region: U K-based slash International, multi-region.
Delivery System: Computer-delivered.
More info at: w w w dot w o r l d c l a s s a r e n a dot o r g.
[End of Page Sixty Three]
[Page Sixty Four]
Originally devised by the British government’s Department for Education and Skills (D f E S)
and developed by the Qualifications and Curriculum Authority (Q C A), World Class Arena is
“an international initiative designed to identify and assess gifted and talented students around the
world.” World Class Arena has produced a group of online tests aimed at assessing “gifted”
students’ abilities in mathematics and problem solving. Most directly of interest to this survey are
the assessments in problem solving, in which students at the upper primary and lower-secondary
levels solve interactive mathematics-based problems. The tasks are not of the multi-staged,
research-and-communication based variety employed in the P I S A I C T literacy assessment or
the Q C A Key Stage three assessment. Rather they are interactive graphic “brain teasers.” For
example, a student might be presented with a rendering of a scale, two boxes of indeterminate
mass, and a set of weights. The student is then asked to determine the masses of the boxes by
balancing them on the scales along with different combinations of the weights. This problem - and others like it - takes advantage of the computer's capability to present an interactive puzzle,
as well as the ability of the computer-based test to record the steps students take to answer the
problem. The problems are intended to assess students’ ability to: think creatively and logically;
solve problems and answer questions on unfamiliar subjects; work out and respond to unfamiliar
information; and demonstrate clearly how they think through and solve problems. World Class
Tests have been privately licensed and are now in use in seventeen countries in Europe, Asia,
Australia and the Americas; sales are especially high in the Pacific Rim.
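To make the balance puzzle concrete, the short sketch below brute-forces the kind of deduction the task invites. It is an illustration only, not World Class Tests code: the candidate mass range and the two weighings are invented for this example.

```python
# Illustrative sketch only - not World Class Tests code. The candidate mass
# range and the two weighings are invented for this example.
# An observation is (left pan, right pan, outcome); a pan is a tuple of
# (copies of box A, copies of box B, known weight units added to the pan).
observations = [
    ((1, 0, 2), (0, 1, 0), "="),  # box A plus a two-unit weight balances box B
    ((0, 1, 0), (0, 0, 7), "<"),  # box B alone is lighter than a seven-unit weight
]

def pan_total(pan, a, b):
    """Total load on one pan, given candidate masses a and b for the boxes."""
    n_a, n_b, extra = pan
    return n_a * a + n_b * b + extra

def matches(left, right, outcome, a, b):
    """Does the candidate pair (a, b) reproduce one recorded weighing?"""
    l, r = pan_total(left, a, b), pan_total(right, a, b)
    return {"=": l == r, "<": l < r, ">": l > r}[outcome]

# Keep every pair of candidate masses consistent with all of the evidence.
solutions = [
    (a, b)
    for a in range(1, 11)
    for b in range(1, 11)
    if all(matches(left, right, out, a, b) for left, right, out in observations)
]
print(solutions)  # four pairs remain, so further weighings are still needed
```

Because two weighings still leave several consistent pairs, the student must plan additional weighings - exactly the step-by-step reasoning a computer-based test can record.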
[End of Page Sixty Four]
[Page Sixty Five]
Assessment of I C T Literacy.
C C E A A-Level in the Moving Image.
Type: Performance-based.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: U K-based.
Delivery System: Computer-delivered.
More info at: w w w dot c c e a dot o r g dot u k.
The Northern Ireland Council for the Curriculum, Examinations and Assessment (C C E A) is
piloting an A-level (the U K’s subject-based examinations for high school students approaching
graduation) test in Moving Image Arts that would allow students to demonstrate their mastery of
multimedia literacy and production. According to the C C E A, “Moving Image Arts allows
students to develop an understanding of the images they watch every day in films, television
programmes, music videos, computer games and on the internet." Assessment is two-fold and
addresses both the production and comprehension of media images. Students create their own
three-minute film production and take an online exam wherein they are asked to critically analyze
a series of film clips. Both elements of the assessment are marked by a central group of C C E A
evaluators.
Cisco Networking Academy Assessments.
Type: Mixed-format.
Target Grade Level: High School.
Initiated By: Commercial.
Region: International, multi-region.
Delivery System: Computer-delivered.
More info at:
One of the few comprehensive self-evaluation tools designed for students was developed in the
private sector by the Cisco Networking Academy.
[End of Page Sixty Five]
[Page Sixty Six]
While the survey items are self-reported - focusing on test takers’ perceptions of their feelings
and reported behavioral actions - they do offer an assessment window onto some of the learning
skills identified as important by the Partnership. The student survey addresses educational plans,
career plans, technical expertise, career self-efficacy, work responsibility, collaboration and
teamwork, lifelong learning, problem-solving confidence, motivation, and academic self-esteem.
The Student Engagement and Best Practices Survey also measures personal growth, learning
effort, active learning, and level of challenge. Some of these constructs are particularly relevant to
the skill areas identified by the Partnership, and cut across the categorization scheme used in this
document. For example, the collaboration and teamwork construct reflects one’s willingness to
cooperate with others in the workplace, and "academic self-esteem" is an individual's confidence
in his or her “ability to cope with academic learning tasks” (Kelley Executive Partners two
thousand four).
Educational Testing Service (E T S) I C T Literacy Assessment.
Type: Performance-based.
Target Grade Level: Higher Education.
Initiated By: Commercial.
Region: U S-based.
Delivery System: Computer-delivered.
More info at: w w w dot e t s dot o r g slash i c t l i t e r a c y.
The E T S assessment employs “scenario-based assignments” - tasks such as selecting the best
database for an information need, filling in a concept map, or editing a word processing document
- to evaluate test-takers’ abilities to use I C T to think critically and solve problems. Though this
evaluation is currently targeted only to college-age students, it may well have implications for
future assessment at the K through Twelve level, particularly in high schools.
[End of Page Sixty Six]
[Page Sixty Seven]
eViva (e V i v a).
Type: Performance-based.
Target Grade Level: High School.
Initiated By: Non-commercial.
Region: U K-based.
Delivery System: Computer-delivered.
More info at: two one zero dot four eight dot one zero one dot seven four.
Another Ultralab product - this one funded as a research study by the Q C A - eViva represents
the leading edge of the growing movement towards the use of “e-portfolios” to assess student
work. The eViva “facility” allows students to post examples of their digitally-produced work for
“annotation” by teachers and peers. The site provides a structured environment for students to
engage in dialogue with teachers and peers about their work in relation to research and
communication skills, data analysis, and presentation.
International Computer Driver’s License (I C D L).
Type: Mixed-format.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: International, multi-region.
Delivery System: Computer-delivered.
More info at: w w w two dot i c d l u s dot c o m.
The I C D L is a mixed-format test of Information Technology (I T) skills designed to certify a
learner’s understanding of basic I T concepts and his or her competence in using common
computer applications. The program is governed by the European Computer Driving License
Foundation (E C D L-F), a group that grew out of a European Commission task force established
in the mid-nineteen nineties to raise the level of I T skills in European industry. The Foundation
has presided over the design and growth of the Computer Driving License program from its start
as a pan-European effort to what is now probably the largest globally recognized standard for
baseline technical competence in computing. The E C D L-F now oversees I C D L programs in
one hundred forty countries in Europe, Africa, the Middle East, the Americas, and the Asia-Pacific Rim region.
[End of Page Sixty Seven]
[Page Sixty Eight]
While the I C D L was designed to help businesses standardize and certify the I T competency of
their employees, it has also come into use as a high school level certification course. The course
and assessment are “vendor-neutral” - adaptable to most major software platforms. Test-takers
move through seven modules: Concepts of Information Technology, Using the Computer and
Managing Files, Word Processing, Spreadsheets, Databases, Presentations, and Information and
Communication. The first module ends with a question-and-answer exam, and the rest end with
performance-based “practical skills tests.”
Internet and Computing Core Certification (I C Superscript Three).
Type: Mixed-format.
Target Grade Level: High School.
Initiated By: Commercial.
Region: International, multi-region.
Delivery System: Computer-delivered.
More info at: i n f o dot c e r t i p o r t dot c o m slash y o u r p e r s o n a l p a t h slash i c three C
e r t i f i c a t i o n.
Certiport’s I C Three certification course is an internationally implemented program relying on a
network of test centers to provide standardized assessment of learners’ baseline computer skills
knowledge and competence. The I C Three mixed-format test is divided into three major contentarea exams: “Computing Fundamentals” (identifying the basic elements of hardware and
software, and basic operating-system navigation); “Key Applications” (demonstrating capability
to operate word-processing, spreadsheet and presentation software); and “Living Online”
(demonstrating basic knowledge of networks, email applications, internet ethics and safety issues,
and basic internet use). The American Council on Education (A C E) has recognized the I C
Three as offering valid certification of computer competency, and as such, member institutions
offer college credit to students who have passed the I C Three exam. Key distinctions between the
International Computer Driver's License (I C D L) and I C Three are that I C Three comes out of
a commercial (rather than a policy) context, and unlike the I C D L, I C Three is platform-specific, basing its training and assessment on the Microsoft Windows operating system and the
Office suite of applications.
[End of Page Sixty Eight]
[Page Sixty Nine]
Key Stage Three I C T Literacy Assessment.
Type: Performance-based.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: U K-based.
Delivery System: Computer-delivered.
More info at: w w w dot q c a dot o r g dot u k slash two nine one four dot h t m l.
The British Qualifications and Curriculum Authority (Q C A)’s ambitious new I C T literacy
assessment - still in the pilot phase - will gauge students’ I C T capability at the end of “Key
Stage Three” (ages twelve through thirteen) in Great Britain’s national curriculum. The test will
not only assess students’ I C T skills, but also their ability to use those skills to solve a set of
complex problems involving research, communication, information management, and
presentation. The test will be graded automatically; human graders will then verify a sample. Test
results will provide both summative information - in the form of a national score for each student
- and detailed feedback about student performance that could be used formatively to inform future
teaching and learning.
The I C T test is set in a complex virtual world, within which students carry out tasks using a
‘walled garden’ of assets (e g, text, pictures, data and “canned” websites) to take the test without
access to the Internet. Students are also provided with a toolkit of applications to enable them to
complete the tasks; all of these assets are generic software programs developed by the Q C A to
provide the same capabilities as familiar productivity software on the level playing field of a non-brand-specific platform. As students work through the test session, their actions are tracked by the
computer and mapped against expected capabilities for each level of the national curriculum; this
includes both technical skills and learning skills, such as “finding things out,” “developing ideas”
and “exchanging and sharing information.” The information collected about a student’s
performance allows a score to be awarded along with a profile of individual strengths and
weaknesses.
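A minimal sketch of what such action-to-capability mapping might look like appears below. The capability strands come from the Q C A's published descriptions quoted above, but the logged action names and the mapping itself are hypothetical: the Q C A has not released its design.

```python
# Illustrative sketch only: the capability strands come from the Q C A's
# published descriptions, but the logged action names and this mapping
# are hypothetical.
from collections import defaultdict

ACTION_TO_STRAND = {
    "search_walled_garden": "finding things out",
    "filter_dataset": "finding things out",
    "edit_document": "developing ideas",
    "build_chart": "developing ideas",
    "send_message": "exchanging and sharing information",
}

def strand_profile(event_log):
    """Tally the evidence each logged action contributes to each strand."""
    evidence = defaultdict(int)
    for action in event_log:
        strand = ACTION_TO_STRAND.get(action)
        if strand is not None:
            evidence[strand] += 1
    return dict(evidence)

# A hypothetical session log for one student.
log = ["search_walled_garden", "filter_dataset", "build_chart", "send_message"]
print(strand_profile(log))
```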
[End of Page Sixty Nine]
[Page Seventy]
N E T S Online Technology Assessment.
Type: Performance-based.
Target Grade Level: Middle School.
Initiated By: Non-commercial.
Region: U S-based.
Delivery System: Computer-delivered.
More info at: w w w dot i s t e dot o r g slash r e s o u r c e s slash a s m t slash m s i s t e.
Developed jointly by the International Society for Technology in Education (I S T E) and
Microsoft, this battery of performance-based assessments is intended for eighth graders and
designed to align with I S T E's National Educational Technology Standards for Students (N E T S
dot S). In accordance with I S T E’s standards framework, which focuses on the use of I C T to
demonstrate achievement in analytic, production and communication skills, the assessment’s
twelve thirty-minute activities require students to use a variety of Microsoft's most commonly
used Office applications - Word, Excel, PowerPoint, Internet Explorer, Outlook, Access and
FrontPage - to complete authentic, real-world tasks. The assessments offer formative information
about students’ skills and have been offered as a tool for teachers and administrators to gauge
their students’ progress towards N C L B’s eighth grade technology literacy requirement. This
assessment tool is located on I S T E’s Web site, which also offers support resources detailing
each assessment’s points of alignment with I S T E standards and software applications.
North Carolina Test of Computer Skills.
Type: Mixed-format.
Target Grade Level: Middle School.
Initiated By: Policymakers.
Region: U S-based.
Delivery System: Paper-and-pencil.
More info at: w w w dot n c p u b l i c s c h o o l s dot o r g slash a c c o u n t a b i l i t y slash t e
s t i n g slash c o m p u t e r s k i l l s.
[End of Page Seventy]
[Page Seventy One]
North Carolina is the only state thus far to adopt a required test of computer skills. In order to
graduate with a high school diploma, all North Carolina students must pass the North Carolina
Test of Computer Skills, intended to assess achievement in three areas of computer literacy
defined by the North Carolina Standard Course of Study:
(list of three bulleted items)
* Understanding “the important issues of a technology-based society and to exhibit ethical
behavior in the use of computer technology.”
* Demonstrating “knowledge and skills in using computer technology.”
* Using “a variety of computer technologies to access, analyze, interpret, synthesize, apply, and
communicate information” (North Carolina Department of Public Instruction two thousand five).
The test is first administered in the eighth grade (though students may take it in following years if
they fail to pass) and consists of two parts - a multiple choice exam and a performance-based
exam. The multiple choice test is paper-based (though an online version is in development) and
machine-scored, and covers six subject areas: societal issues, databases, spreadsheets, keyboard
utilization slash word processing slash desktop publishing, multimedia presentation, and
telecommunications. The performance-based exam requires students to demonstrate basic
technical skills using desktop publishing software and to answer questions using a prepared
database and spreadsheet.
P I S A Two Thousand Computer Familiarity Questionnaire.
P I S A Two Thousand Three Information Communication Technology Questionnaire.
Type: Multiple choice slash forced-answer.
Target Grade Level: Middle School.
Initiated By: Policymakers.
Region: International, multi-region.
Delivery System: Paper-and-pencil.
More info at: w w w dot p i s a dot o e c d dot o r g slash d a t a o e c d slash five three slash one
eight slash three three six eight eight one three five dot p d f.
These two tools - jointly prepared by a project consortium within P I S A consisting of the
Australian Council for Educational Research (A C E R), the Netherlands National Institute for
[End of Page Seventy One]
[Page Seventy Two]
Educational Measurement (C I T O), the Educational Testing Service (E T S, U S), the
National Institute for Educational Research (N I E R, Japan) and Westat (U S) - are collections of
scaled-response questions aimed at gauging respondents’ knowledge of and familiarity with
computers and I C T.
The Computer Familiarity Questionnaire - focused on fifteen year-olds’ interest in computers;
their self-assessed attitudes and ability to work with computers; and their use of and experience
with computers - was included as a special section with the two thousand P I S A survey to two
hundred sixty five thousand students in thirty two countries in the Americas, Europe, Asia and
Australia. The Information Communication Technology Questionnaire was a refinement of this
same instrument.
P I S A I C T Literacy Assessment.
Type: Performance-based.
Target Grade Level: High School.
Initiated By: Policymakers.
Region: International, multi-region.
Delivery System: Paper-and-pencil (designed to be computer-delivered).
More info at: w w w dot p i s a dot o e c d dot o r g slash f i n d D o c u m e n t slash zero comma
two three five zero comma e n underscore three two two five two three five one underscore three
two two three five seven three one underscore one underscore one one nine eight two nine
underscore one underscore one underscore one comma zero zero dot h t m l.
This assessment was tested as part of a feasibility study conducted jointly by the United
States’ Educational Testing Service (E T S), Australia’s A C E R (Australian Council for
Educational Research), Japan’s N I E R (National Institute for Educational Policy Research), and
the international Organization for Economic Co-operation and Development (O E C D). After
completing an introductory questionnaire and a set of tasks designed to establish their baseline
level of technical I C T literacy, students were asked to complete a set of
complex tasks using a suite of generic (non-commercial) I T applications developed expressly for
the assessment.
[End of Page Seventy Two]
[Page Seventy Three]
Students’ performance would be assessed based on their ability to complete the tasks as well as
the manner in which they completed the tasks. The assessment targeted fifteen year-old students,
and included four major task stages:
(list of four numbered items)
One.
Introductory: Background questionnaire, baseline technical assessment tasks (click and drag,
inserting, deleting and changing text).
Two.
Short Scenarios: easy, discrete tasks - an email task (send an email to a friend, c c another friend),
web-abstract evaluation task (choose best abstract from a static page of web-search results), short
database task (sort database to determine number of items - jazz C Ds - below a certain price). All
tasks are highly directed.
Three.
Web search task: students search for three books that would make a suitable gift, matching
certain criteria for a friend interested in digital photography. Students must communicate by
email with another friend, take recommendations that introduce new criteria, and select the books.
Four.
Simulation task: understanding variables that influence the production of carbon dioxide in bread
dough. Students run experiments and display the resulting data. In the process, students must explore and
use an unfamiliar interface (the simulation) and choose the best way to illustrate their results
(table, bar graph or line graph).
Ultralab Certificate of Digital Creativity.
Type: Performance-based.
Target Grade Level: High School.
Initiated By: Non-commercial.
Region: U K-based.
Delivery System: Computer-delivered.
More info at: b l o g dot u l t r a l a b dot n e t slash p r o j e c t dot p h p question mark i d equals
sign two one.
ULTRALAB is a learning technology research centre based at Anglia Polytechnic University in
Chelmsford, England. The lab has developed and is currently piloting a number of innovative
assessments focused on students’ creative use of I C T skills, including the “International
Certificate in Digital Creativity,” an award designed to “accredit creative work
[End of Page Seventy Three]
[Page Seventy Four]
achieved through the application of digital technologies and the clear intention is that it will grow
to become a global award.” The certification process requires students to discuss and “defend”
their work (digitally produced film, artwork and music are all eligible for assessment) to a panel
of peers and professionals. The focus on creative work generated through the use of I C T and the
use of live review panels limits the potential scale of this assessment, but the hope for this project
is that it will begin to foster an international standard for the assessment of students’ I C T
productions.
[End of Page Seventy Four]
[Page Seventy Five]
Appendix B: Primer on Assessment: Key Measurement Principles, Methods, and Terms.
Over the past fifty years, spurred by public demand, political attention, and advances in cognitive
sciences, educational assessment has achieved greater prominence in our educational system.
Beginning in the nineteen fifties, when testing was used to select students to enter special
programs for the gifted and as a means for gauging ability to succeed in higher education
(Yakimowski-Srebnick two thousand one), the use of educational assessments has steadily grown
throughout the U S school system. By the nineteen eighties, many state legislatures took testing a
step further by implementing statewide competency testing programs designed to ensure that high
school graduates had obtained a minimum level of basic skills, generally in reading and
mathematics (Yakimowski-Srebnick two thousand one). And as public concern over the
preparation of students to enter today’s complex work environment mounted in the late nineteen
eighties and nineteen nineties, efforts to raise standards and establish higher outcome measures
became more widespread, leading up to the enactment of N C L B.
Broadly stated, educational assessments, specifically assessments of school achievement, seek to
measure student knowledge and skill at a particular point in time. Increasingly, educational
assessments are being used by educators and policymakers to make decisions about how to
improve student learning (National Research Council two thousand one a). It is important to first
draw a distinction between different types of assessments, which are often used to enhance
learning (formative assessment), measure individual student achievement (summative
assessment), evaluate programs (program evaluation), and assess aptitude (ability measures) for
the purpose of predicting performance in some future situation; an example is the use of the S A
T roman numeral one to predict college performance (National Research Council two thousand
one a).
While U S society traditionally has placed greater emphasis on large-scale, standardized
assessment than on classroom assessment, both have the potential to further student learning.
Advances in cognitive and measurement sciences have produced new theories and tools that
could improve classroom assessment of student understanding, while technology holds the
potential to enable large-scale assessment to produce more valid and useful inferences regarding
individual student learning. With greater alignment between what teachers do to improve student
learning during the course of instruction and what statewide tests assess, the potential exists for
better coherence and consistency.
[End of Page Seventy Five]
[Page Seventy Six]
Although the use of educational assessments has risen dramatically in the last two decades, the
understanding at large of how such educational assessments work, what they test, when they are
appropriate, and what they can tell us about what students know has proven elusive and difficult to
cultivate. The following section is an attempt to give leaders in the policy, education, and
business arenas a guide that highlights key principles, methods and terms in measurement with an
eye toward fostering informed discussion and further reading. The next section of this report
provides the reader with a basic framework for understanding the key dimensions of student-based assessments: terms, purposes, types, formats, delivery mechanisms, and costs that are most
typically used to assess different dimensions of students’ knowledge and learning.
Reliability and Validity.
Reliability and validity are two measurement principles that are essential to all tests.
While these terms can be defined in a highly technical manner (Cronbach nineteen seventy one;
Feldt and Brennan nineteen eighty nine; Linn nineteen eighty nine; Messick nineteen eighty nine;
Stanley nineteen seventy one; Thorndike nineteen seventy one), we posit simple definitions that
may be useful in applied settings.
Reliability refers to the stability or replicability of a test score. If a test, self-report, or observation
is given multiple times, reliability examines the extent to which the scores or responses will be
the same, or stable, over time and across replications. If classroom observations are conducted with two
observers, reliability addresses the degree of agreement the observers reach while rating the same
behavior. Although a student completes an examination on a particular day, the student’s exam
score is interpreted more generally as a measure of the student’s knowledge that is not limited to
a particular day. The reliability of the exam score describes how stable we could expect the scores
to be from one day to another. Generally speaking, the more items an assessment contains, the
more stable the evidence. Some methods of reliability examine the degree to which individual test
items effectively measure a specific construct - an inferred, psychological concept that underlies a
test - and whether there is consistency among those items. This is called internal consistency. For
example, the algebra section of a standardized test would have a high internal consistency if all
the questions relate directly to algebraic functions and problems.
[End of Page Seventy Six]
[Page Seventy Seven]
If a geometry question about dividing a circle in half were inserted, however, that internal
consistency would drop because the question is outside the specific construct targeted by the test.
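To make internal consistency concrete, the sketch below computes Cronbach's alpha, one widely used internal-consistency statistic (the primer above does not commit to any particular one). The response matrix is made up for illustration.

```python
# A minimal sketch of Cronbach's alpha over a made-up item-response matrix;
# the primer does not commit to this particular statistic.
import numpy as np

def cronbach_alpha(scores):
    """scores: one row per test taker, one column per item."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical test takers answering four algebra items (1 = correct).
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(responses), 2))  # values nearer one mean more consistent items
```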
Validity refers to the interpretations, uses, and consequences made from the outcomes of an
assessment. A test may be “valid” for some uses but not for others. In the case of a licensing test,
such as those used to certify teachers, it is not so much the score, per se, that is valid, but how that
score is interpreted and put to use. That is, if the test taker’s performance indicates that the
candidate has achieved a passing grade, the score provides valid evidence for the interpretation
that the individual is worthy of certification. This is a subtle idea, but an important
one, particularly when we routinely use testing information to judge the effectiveness of students,
schools, teachers, and districts. The licensing test may provide evidence valid for certification,
but it would be inappropriate to use as a measure of an individual’s performance in graduate
school or toward the fulfillment of degree requirements. It is important to recognize that
assessments can be used in inappropriate ways, violating the validity, not of the test, but of the
use and interpretation of the results. For example, even though some communities use S A T
scores as a barometer for the effectiveness of their schools, the S A T is designed to predict the
ability of individual students to succeed in college, not to measure school effectiveness.
In the traditional measurement literature there are many ways to classify tests.
Cronbach (nineteen seventy one), the foremost theorist in educational and psychological
measurement, makes an essential distinction between tests that measure maximum performance and
those that measure typical performance. Measures of maximum performance seek to assess what an individual can
do. Test takers are encouraged to do the best they can. Such tests are often referred to as ability
measures. An Advanced Placement test would be an example of maximum performance. In
taking the test, students seek to achieve their highest possible score (subject to a cutoff score) for
the purposes of receiving advanced college standing. In contrast, measures of typical
performance examine not the best a test taker can do but what he or she typically does. The
distinction between maximum and typical performance is an important contrast in assessment
because it reflects the types of demands and expectations placed on the test taker.
[End of Page Seventy Seven]
[Page Seventy Eight]
Purposes.
Braun (two thousand three), a former vice president for research at Educational Testing Service
and a statistician, delineates three purposes for which assessments are used: choosing slash
sorting, learning, and qualifying.
Choosing slash Sorting.
This type of measurement is used for the selection of an appropriate course of study based on
results of a relevant assessment. The highly publicized standardized achievement tests, such as
the S A T, Graduate Record Exam (G R E), L S A T (exam for law school admissions), and
Medical College Admissions Test (M C A T), fall into this category. These tests serve as
gateways along the road to higher education. Candidates who take these examinations are
given a score that places them within a certain percentile of all test takers, and higher education
institutions (i e, colleges, universities, graduate schools, law schools, business schools, and
medical schools) use the scores, along with other data, to make admissions decisions. The general
rule of thumb is that higher scores are better, though exceptions based on using the test score in
conjunction with other data to make decisions do occur. Using tests to sort and choose students is
also a common practice in business and the military. The classification of a trainee or recruit is
used to determine the type of job slot for which an individual may be appropriate.
Learning.
Assessments that fall into this category are most commonly used in classroom settings. These
assessments assist teachers in evaluating students' learning processes - how they go about solving
problems and answering questions - in ways that indicate what a student knows and understands. Teachers
regularly administer formal and informal assessments to gain an understanding of how their
students are doing, what they know and do not know, and what their strengths and weaknesses
are. A formal classroom assessment might be a typical paper-and-pencil test, a project, a problem,
or some other production task that enables the teacher to determine the extent of students’
learning. An informal assessment might involve the teacher asking probing questions and noting
students’ responses as she circulates around the room interacting with students. The key to this
type of assessment is that it is used to support learning and instruction.
[End of Page Seventy Eight]
[Page Seventy Nine]
Qualifying.
Qualification implies certification of accomplishment - a summative score that provides
documentation of performance. These assessments lead to certifications or degrees. Such
assessments also can be used to determine retention at a grade level - whether or not a student is
qualified to be promoted to the next level or should be retained at the current grade level. Tests
such as those in the Advanced Placement (A P) program provide high school students an
opportunity to show evidence of their learning in special courses. Students “qualify” for college
credit by virtue of their performance on the A P tests. In terms of professional education, the bar
examination, medical licensure, and the Praxis tests for teachers serve as gatekeepers to the
professions. Similar certification tests can be found in other professions such as nursing,
architecture, and aviation. Examinations in graduate schools, known as qualifying examinations,
certify that a candidate is competent to stand for the doctorate.
The three major purposes of assessment imply a continuum in which assessment is used
iteratively throughout a student’s educational career. Assessment for learning is an ongoing
process throughout education. Sorting and choosing occurs at specific points of demarcation, such
as entrance to college, to graduate education, or to specific training programs. Qualification is
summative, the end result of an educational or training program.
Braun’s major purposes focus squarely on the student. Given current efforts to foster greater data-driven decision-making among teachers, it is possible that a fourth purpose, instruction, could
become a part of a larger assessment purposes continuum as teachers use greater access to student
assessment data, which historically has been inaccessible to them, as a means for assessing and
refining their own instructional practice.
Types of Assessments.
There are many types of assessments, and these types often depend on the purpose of the
measurement. We describe several of the most widely used kinds of tests, including formative,
summative, diagnostic, norm-referenced, criterion-referenced, and assessments to evaluate
programs.
Formative assessments are used to help determine the strengths and weaknesses of a student and
to generate information that can be used to assist in further learning. Based on information that is
generated for the teacher or student, such assessments are typically used to guide learning and
instruction.
[End of Page Seventy Nine]
[Page Eighty]
Formative assessments recognize that students take different paths to learning, proceed at different
paces, have different learning styles, and follow different learning curves. Formative assessments
are both formal and informal, including information received from tests, assignments,
conversations, observations, and technology-based applications (Pellegrino et al. two thousand
one). Such information is interpreted based on prior knowledge of the student and also with
respect to context and content. For example, a teacher examines a student’s work product to
determine how well the student understands specific concepts. The strengths and weaknesses
identified through the assessment process can be used by the teacher to determine future
instructional strategies.
Summative assessments refer to cumulative testing to measure the end product of a learning
experience - achievement, placement, promotion, and accountability (National Research Council
two thousand one b). The S A T roman numeral two tests, formerly known as the College Board
Achievement Tests, measure student accomplishments in particular courses of study and play a
role in applicant selection to higher education. A driver’s licensing examination is a summative
measure to determine if a candidate is competent, both in terms of road performance and
knowledge of driving laws, before a state certifies that an individual is a legal driver. Summative
assessments can be high-stakes, standardized tests or can be classroom-based. For example, a
classroom teacher’s course final is summative. It determines the extent of students’
understanding, knowledge, and achievement in a particular class.
Programmatic evaluations are assessments used to determine the effectiveness or quality of an
educational intervention (National Research Council two thousand one b) or to make broad
comparisons among states or countries on specific educational markers. The best example of this
sort of assessment is the National Assessment of Educational Progress (N A E P), also known as
the Nation’s Report Card. N A E P is periodically administered in specific content areas to a
sample of students at grades four, eight, and twelve across the country. The intent is to provide a
representative sample of the nation’s school population. Although individual students are tested,
scores are reported at the school, district, and state level. Policymakers, stakeholders, and
educators interpret the data and then use it to make high-level policy statements such as, “In
general, children in California are improving their ability to read at the eighth-grade level.” An
important distinction needs to be made here.
[End of Page Eighty]
[Page Eighty One]
As a programmatic assessment, the N A E P aggregates data at a higher level than the individual
student. Data is reported, for example, on how well students in general understand eighth-grade
mathematics, rather than on how well particular students can exhibit eighth-grade
mathematics skills. This data is designed to identify patterns and trends, not to support classroom-level decision-making.
Diagnostic assessments are similar to formative assessment in that they are used specifically to
identify and diagnose a student’s strengths and weaknesses so that interventions can be
prescribed. Diagnostic assessments generate data that is immediately useful and can be translated
into actionable instructional steps for the student. For example, the field of early literacy
assessment is one in which a number of diagnostic tools have been developed. Perhaps the most
widely used measure is the Dynamic Indicators of Basic Early Literacy Skills (D I B E L S), which
was developed at the University of Oregon (Good, Kaminski, Smith, Laimon and Dill two
thousand one), and is designed to measure the child’s automaticity in phonemic awareness,
phonics, and oral reading fluency. Under the U S Department of Education’s Reading First
initiative, teachers are required to use assessments such as the D I B E L S routinely in grades K
through three.
Norm-referenced assessments report scores that are interpreted by comparison to a national or
other norm group. Tests of this type typically are designed to measure differences among
students, enabling the ranking of students along a continuum of performance. For example,
“George performed better in oral communication than ninety percent of the students at his grade
level,” is a norm-referenced interpretation of his individual performance on the test. The S A T is
norm-referenced, a property of the way its scores are statistically calculated: a test
taker’s score is meaningful only when it is interpreted relative to other test takers. For example,
what does a score of five hundred fifty (on a scale of two hundred to eight hundred) mean? When
that score is converted to a percentile, the meaning becomes more apparent. If a five hundred fifty
translates into the fifty-second percentile, this means that forty eight percent of the test takers who
took that particular administration of the S A T scored higher. Thus, it is important to note that
the distinction between norm-referenced and criterion-referenced assessments (see below) is
sometimes blurry. For example, with norm-referenced assessments, it may be possible to make
some criterion-referenced interpretations, depending on how the scores are reported. It is not so
much the test, per se, but the interpretation of the scores that determines into which category an
assessment falls.
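The percentile conversion described above is straightforward arithmetic; the sketch below illustrates it with an invented score distribution. Defining percentile rank as the percent scoring at or below a given score is one common convention, and operational programs may compute it differently.

```python
# A minimal sketch of the percentile arithmetic described above; the score
# distribution is invented for illustration.
def percentile_rank(score, all_scores):
    """Percent of test takers scoring at or below the given score."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100.0 * at_or_below / len(all_scores)

# A hypothetical administration: twenty five scaled scores between 200 and 800.
scores = [200, 250, 300, 330, 360, 390, 410, 430, 450, 470, 490, 500,
          510, 530, 550, 560, 580, 600, 620, 650, 680, 700, 730, 760, 800]
rank = percentile_rank(550, scores)
print(f"{rank:.0f}th percentile; {100 - rank:.0f} percent scored higher")
```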
[End of Page Eighty One]
[Page Eighty Two]
Criterion-referenced assessments refer to tests designed to measure performance against a goal,
a specific objective, or a standard. Such tests are designed to determine what test takers can do
and what they know. Criterion-referenced assessments determine how a student performs against
specific criteria, rather than the percentage of other test takers that scored higher or lower, as is
the case with a norm-referenced interpretation. A driver’s licensing examination is criterionreferenced with a cutoff score. The expectation is that a driver must achieve a specified score to
be deemed competent. A score below the cutoff point indicates that the individual is likely to be
an unsafe driver in terms of either driving ability, knowledge of the laws, or both. Similar
statements can be made for other licensing and educational tests, such as state bar examinations
and the Praxis teacher certification tests. Through complex prior analyses and standards set by
professional organizations, passing score criteria are established for each test. Many of the current
state tests are criterion-referenced. Specific scores for passing have been set in each tested content
area. Students must obtain these criterion scores to pass, be promoted, or graduate.
Assessment Formats.
As with assessment types, assessment formats also vary considerably. Many of the traditional
achievement tests, particularly in the past, have used what are called forced choice items. Forced
choice items present test takers with a set of options from which they must select the correct
answer. Everyone who is a product of the U S educational system is familiar
with multiple choice and true slash false items. Although no actual task production is required,
such items can still be cognitively demanding.
Constructed response items require test takers to construct or produce some sort of response, not
just select the correct answer. These responses could be in the form of writing an essay, working
through a mathematics word problem, or producing a graphical representation of a science
problem. Required responses can be quite elaborate or constrained. The underlying principle here
is that the test takers must produce or construct a representation of their knowledge through their
responses.
Performance-based assessments examine student performance using strategies that are often tied
to the completion of authentic tasks.
[End of Page Eighty Two]
[Page Eighty Three]
The essential elements of performance-based assessment are the specification of the skills to be
learned, the tasks that might effectively assess these skills, and the development of a scoring
rubric that ties the skills to intended outcomes. These types of assessments require students or test
takers to complete larger, more realistic production tasks to show evidence of learning or successful
performance. These tests or activities can be either formative or summative, depending upon the
purpose of the assessment. An example is designing a building as part of the National
Architectural Accrediting Board Examination.
Projective assessments encourage respondents to “project” their impressions upon ambiguous
stimuli, yielding unrestricted responses. The Rorschach test’s inkblots are perhaps the most
recognizable projective technique. In education, techniques such as Draw a Scientist or the
cartoon task are being used. With Draw a Scientist, students are asked to represent their image of
a scientist. This task sheds light on the conceptions students have about science. The cartoon task,
which can apply to different domains, requires respondents to provide a caption for a cartoon.
They project their perceptions of the meaning of the cartoon via their captions. There is no right
or wrong answer. Scorers assess performance based on previously outlined criteria and rubrics.
Other formats are worth mentioning here. Games, puzzles, and competitions also can serve as
informative assessment techniques. Group assessments and collaborative projects are important
because much work that occurs outside of school and after graduation is collaborative. Valuable
information on a variety of skills, constructs, and knowledge can be gathered at the group level.
Delivery Mechanisms.
Assessments are administered in a variety of ways and through traditional and technology-based
mechanisms. Although the majority of tests are still administered via paper-and-pencil,
technology is increasingly being used to enhance not only test administration but also test
development, item selection, implementation, and scoring, as technology affords new
psychometric models and methods that were heretofore impossible. We briefly describe five
of the most widely used delivery mechanisms, both traditional and technology-based.
[End of Page Eighty Three]
[Page Eighty Four]
As mentioned, the most prevalent delivery mechanism has been, and continues to be, paper-and-pencil. This is the case for high-stakes, standardized tests as well as classroom-based assessment.
In terms of standardized tests, assessments are administered on paper; responses are entered on
bubble sheets; and scores are calculated either by hand or by computer, using scanning
technology. The large-scale standardized tests such as the S A T and A C T use this mechanism.
Computer-based tests, still a relatively new format, take advantage of digital technology to
deliver assessments. One advantage of the computer-based medium is that scoring can
occur automatically and results can be returned to the test taker upon completion. In some cases, these tests
have simply translated paper-and-pencil-based assessments into the computer medium. S A T
practice tests are now available in computer formats, as are many other educational practice
examinations. Newer assessments are also beginning to pay greater attention to the feasibility of
using this testing medium.
A number of large-scale testing programs are moving to a delivery mechanism known as
computer adaptive testing (C A T). A C A T is a test in which the underlying psychometric
algorithms assess a test taker’s ability or performance based on the sequence of right and wrong
responses. The computer selects subsequent items based on the response to the previous items,
allowing for the generation of a test score on fewer items than would be possible in a paper
version of the same test. The G R E and G M A T, the graduate and business school admissions tests,
have recently been converted to C A Ts. These tests are more time-efficient in that test takers
receive fewer items from which scores are generated without sacrificing any of the test’s
psychometric properties (i e, reliability and validity).
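The sketch below reduces the adaptive loop to its bare logic: pick the item nearest the current ability estimate, then move the estimate by a shrinking step after each right or wrong answer. Operational C A Ts select items and estimate ability with item response theory rather than this toy rule, so treat it purely as an illustration.

```python
# A deliberately simplified illustration of the adaptive loop - not an
# operational engine. Real C A Ts use item response theory; here both item
# selection and ability estimation are reduced to a shrinking-step search.
def adaptive_test(answer_fn, n_items=8):
    """answer_fn(difficulty) -> True if the test taker answers correctly."""
    ability, step = 0.0, 1.0        # running estimate and step size
    for _ in range(n_items):
        item_difficulty = ability   # choose the item nearest the estimate
        if answer_fn(item_difficulty):
            ability += step         # correct: raise the estimate
        else:
            ability -= step         # wrong: lower the estimate
        step *= 0.7                 # shrink the step as evidence accumulates
    return ability

# A hypothetical test taker who answers correctly whenever the item is
# easier than a true ability of 1.2 on the same scale.
print(round(adaptive_test(lambda d: d < 1.2), 2))  # converges toward 1.2
```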
A fairly recent development is the use of handheld devices, such as Palm Pilots, to deliver and
score assessments. The handheld provides the opportunity for the assessor to quickly administer
and assess student performances, with rapid feedback between assessment and subsequent
instruction. The teacher or assessor uses the device to administer the test and collects student
responses on the handheld. The responses can then be downloaded to a computer for further
analysis, tracing student development over time, comparing student performance, and examining
other relevant data.
[End of Page Eighty Four]
[Page Eighty Five]
A final delivery mechanism, intelligent tutoring systems, is the most technically sophisticated and
has only recently been implemented in classroom settings. Intelligent tutoring systems embed
assessment within the instructional process in such a way that a student’s progress can be traced
based on a pattern of responses. Intelligent tutoring systems are based on models of student
learning, errors, and correct responses that generate appropriate instructional activities matched
to the progress of the student. Intelligent tutoring systems are founded on complex cognitive and
computer science models (Burns, Parlett and Redfield nineteen ninety one; Polson and
Richardson nineteen eighty eight; Psotka, Massey, and Mutter nineteen eighty eight; Sleeman and
Brown nineteen eighty two). Development costs are high and the technology necessary to run
such programs tends to be high-end. Researchers at Carnegie Mellon University recently created
intelligent tutoring systems that can be integrated into school settings (see for example Baker,
Corbett, and Koedinger, two thousand four), and the U K’s Qualifications and Curriculum
Authority has developed an intelligent computer architecture that is being used to assess ninth
grade students I C T literacy.
Costs.
It is hard to pin down exactly how much assessments cost because a variety of factors, some
easily defined and others harder to quantify, influence the amount required to produce,
administer, and maintain them. In particular, cost varies depending on the type of assessment.
Multiple choice, closed-answer assessments require smaller investments, while open-ended,
performance-based measures require greater investment. The delivery mechanism, whether paper
and pencil or computer adaptive; the purpose of the test, whether choosing and sorting or
qualifying; the size of the test population, whether the state of California or the city of
Milwaukee; and the scope of the test, whether an A P Biology exam or a statewide test, also
affect costs.
While determining the cost of testing is difficult because states and test development
organizations account for spending differently, a survey of state officials estimated a nationwide
total of four hundred million dollars spent on state testing in fiscal year two thousand one
alone (Danitz two thousand one).
[End of Page Eighty Five]
[Page Eighty Six]
According to a more recent survey of state education agency officials in all fifty states and two U
S territories conducted by the General Accounting Office (G A O), states plan to spend three
point nine billion dollars on Title roman numeral one assessments between fiscal years two
thousand two and two thousand eight (G A O, two thousand three). The G A O estimates that
if all states chose to use only multiple choice questions on their assessments, this total would
fall to one point nine billion dollars, while if all states implemented assessments with a mix of multiple
choice and open-ended questions, the estimate would grow to five point three billion dollars (two
thousand three: fourteen). Scoring expenditures largely account for the differences in these
estimates. While multiple choice-based tests can be scanned and scored by machines, open-ended
questions must be read and scored by people because students write out their responses to these
questions. However, it is worth noting that advances in computer technology are making the
prospect of scoring performance-based assessments electronically more feasible.
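To see why scoring drives these differences, consider the entirely invented per-item cost
comparison sketched below in Python. The unit costs and item counts are hypothetical and chosen
only to show the direction of the effect, not to reproduce the G A O figures.

# An illustrative (entirely invented) scoring cost comparison: multiple
# choice items are machine scanned cheaply, while open-ended responses
# must be read and scored by people.
STUDENTS = 1_000_000
MACHINE_COST_PER_ITEM = 0.02   # invented unit costs, dollars per response
HUMAN_COST_PER_ITEM = 1.50

def scoring_cost(n_multiple_choice, n_open_ended):
    per_student = (n_multiple_choice * MACHINE_COST_PER_ITEM
                   + n_open_ended * HUMAN_COST_PER_ITEM)
    return STUDENTS * per_student

print(f"all multiple choice: ${scoring_cost(50, 0):,.0f}")
print(f"mixed format:        ${scoring_cost(40, 10):,.0f}")

Even with only a fifth of the items open-ended, human scoring dominates the total, which is
consistent with the gap between the all-multiple-choice and mixed-format estimates above.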
It is even harder to estimate the costs of developing diagnostic assessments aligned with content
standards and twenty first century skills, though the development of such tests represents a
critical step toward ensuring that students are prepared to enter today's workforce. For
available estimates, we must look abroad to the United Kingdom, where perhaps the
most ambitious test of twenty first century skills, the Qualifications and Curriculum Authority’s I
C T Key Stage three Assessment, has entered the pilot stage with a nationwide rollout expected in
mid-two thousand six. The Key Stage three effort, which is described in greater detail in the I C T
literacy section of this report, represents a cost of twenty six million pounds to the British
government for the test materials and testing infrastructure. This does not include the costs to
individual schools, approximately sixty percent of which are upgrading their server capacity to
reach the benchmark the Education Ministry has set for running the tests (M. Ripley
November two thousand four).
[End of Page Eighty Six]
[Page Eighty Seven]
Appendix C: Key Organizations.
Global Awareness.
The American Forum for Global Education.
Rodney W. Nichols, Chair.
One Hundred Twenty Wall Street,
Suite Two Thousand Six Hundred.
New York, New York one zero zero zero five.
Phone number is two one two dash six two four dash one three zero zero.
Website address is w w w dot g l o b a l e d dot o r g.
The Asia Society.
Vivien Stewart, Vice President Education.
Seven Hundred Twenty Five Park Avenue at Seventieth Street.
New York, New York one zero zero two one.
Phone number is two one two dash two eight eight dash six four zero zero.
Website address is w w w dot a s i a s o c i e t y dot o r g.
Global Nomads Group.
Mark von Sponeck, Executive Director.
Three Hundred Eighty One Broadway,
Fourth Floor.
New York, New York one zero zero one three.
Phone number is two one two dash five two nine dash zero three seven seven.
Website address is w w w dot g n g dot o r g.
International Baccalaureate Organization (I B O).
Prof George Walker, Director General.
Route des Morillons Fifteen.
Grand-Saconnex, Geneve C H dash one two one eight.
Switzerland.
Phone number is plus four one dash two two dash seven nine one dash seven seven four zero.
George Pook, Assessment Director.
Curriculum and Assessment Centre.
[End of Page Eighty Seven]
[Page Eighty Eight]
Peterson House, Malthouse Avenue, Cardiff Gate.
Cardiff, Wales G B C F two three eight G L.
United Kingdom.
Phone number is plus four four dash two nine dash two zero five four dash seven seven seven
seven.
Website address is w w w dot i b o dot o r g.
International Education and Resource Network (i E A R N).
Peter Copen (Treasurer) - President, Copen Family Fund.
Four Hundred Seventy Five Riverside Drive,
Suite Four Hundred Fifty.
New York, New York one zero one one five.
Phone number is two one two dash eight seven zero dash two six nine three.
Website address is u s dot i e a r n dot o r g.
Southern Growth Policies Board.
Governor Bob Riley (A L), Chairman.
P O Box one two two nine three.
Research Triangle Park, North Carolina two seven seven zero nine.
Phone number is nine one nine dash nine four one dash five one four five.
Website address is w w w dot s o u t h e r n dot o r g.
[End of Page Eighty Eight]
[Page Eighty Nine]
Civic Engagement.
Campaign for the Civic Mission of Schools.
Council for Excellence in Government.
David Skaggs, Executive Director.
One Thousand Three Hundred One K Street, North West,
Suite Four Hundred Fifty West.
Washington, D C two zero zero zero five.
Phone number is two zero two dash seven two eight dash zero four one eight.
Website address is w w w dot c i v i c m i s s i o n o f s c h o o l s dot o r g.
Carnegie Corporation of New York.
Cynthia Gibson, Program Officer.
Four Hundred Thirty Seven Madison Avenue.
New York, New York one zero zero two two.
Phone number is two one two dash two zero seven dash six two seven two.
Website address is w w w dot c a r n e g i e dot o r g.
Center for Civic Education.
Charles Quigley, Executive Director.
Five Thousand One Hundred Forty Five Douglas Fir Road.
Calabasas, California nine one three zero two dash one four four zero.
Phone number is eight one eight dash five nine one dash nine three two one.
Website address is w w w dot c i v i c e d dot o r g.
Center for Information and Research on Civic Learning and Engagement (C I R C L E).
William A. Galston, Director, Institute for Philosophy and Public Policy.
School of Public Policy.
University of Maryland.
College Park, Maryland two zero seven four two.
Phone number is three zero one dash four zero five dash two seven nine zero.
Website address is w w w dot c i v i c y o u t h dot o r g.
[End of Page Eighty Nine]
[Page Ninety]
The Choices Program.
Susan Graseck, Director.
Box One Thousand Nine Hundred Forty Eight, Brown University.
Providence, Rhode Island zero two nine one two.
Phone number is four zero one dash eight six three dash three one five five.
Website address is w w w dot c h o i c e s dot e d u.
East Bay Conservation Corps.
Yolanda Peeks, President.
One Thousand Twenty One Third Street.
Oakland, California nine four six zero seven.
Phone number is five one zero dash nine nine two dash seven eight zero zero.
Website address is w w w dot e b c c dash s c h o o l dot o r g.
International Association for Evaluation of Educational Achievement (I E A).
Doctor Hans Wagemaker, Executive Director.
Forty Seven Tamworth Crescent.
Bellevue Estate.
Wellington, New Zealand.
Phone number is plus six four dash four four seven eight seven dash three nine five.
Website address is w w w dot i e a dot n l slash i e a slash h q.
National Center for Learning and Citizenship (N C L C).
Education Commission of the States.
Susan Vermeer Lopez, Policy Analyst.
Seven Hundred Broadway,
Suite One Thousand Two Hundred.
Denver, Colorado eight zero two zero three dash three four six zero.
Phone number is three zero three dash two nine nine dash three six zero zero.
Website address is w w w dot e c s dot o r g slash n c l c.
[End of Page Ninety]
[Page Ninety One]
Financial, Economic and Business Literacy.
Consortium for Entrepreneurship Education.
Cathy Ashmore, Executive Director.
Phone number is six one four dash four eight six dash six five three eight.
Website address is w w w dot e n t r e dash e d dot o r g.
D E C A Inc.
One Thousand Nine Hundred Eight Association Drive.
Reston, Virginia two zero one nine one.
Phone number is seven zero three dash eight six zero dash five zero zero zero.
Website address is w w w dot d e c a dot o r g.
Education, Training and Enterprise Center, Inc. (E D T E C).
Aaron A. Bocage, President.
Three Hundred Thirteen Market Street.
Camden, New Jersey zero eight one zero two.
Phone number is eight five six dash three four two dash eight two seven seven.
Website address is w w w dot e d t e c i n c dot c o m.
Enterprise Ambassador U S A.
Doctor Judith Stein, Director, National Institute for Educational Options.
Nova Southeastern University.
Fischler Graduate School of Education and Human Services.
One Thousand Seven Hundred Fifty N E One Hundred Sixty Seven Street.
North Miami Beach, Florida three three one six two dash three zero one seven.
Phone number is eight zero zero dash nine eight six dash three two two three.
Phone extension is five zero eight five.
Website address is w w w dot f c a e dot n o v a dot e d u slash o p t i o n s slash e a u s a.
[End of Page Ninety One]
[Page Ninety Two]
Federal Reserve Bank of Minneapolis.
Gary H. Stern, President.
Ninety Hennepin Avenue.
Minneapolis, Minnesota five five four zero one.
Phone number is six one two dash two zero four dash five zero zero zero.
Website address is w w w dot m i n n e a p o l i s f e d dot o r g.
Future Business Leaders of America.
Jean M. Buckley, President and C E O.
One Thousand Nine Hundred Twelve Association Drive.
Reston, Virginia two zero one nine one dash one five nine one.
Phone number is eight zero zero dash three two five dash two nine four six.
Website address is w w w dot f b l a dash p b l dot o r g.
Jump$tart Coalition for Personal Financial Literacy.
Laura Levine, Executive Director.
Nine Hundred Nineteen Eighteenth Street North West,
Suite Three Hundred.
Washington, D C two zero zero zero six.
Phone number is two zero two dash four six six dash eight six one zero.
Website address is w w w dot j u m p s t a r t dot o r g.
Junior Achievement.
David S. Chernow, President and C E O.
One Education Way.
Colorado Springs, Colorado eight zero nine zero six.
Phone number is seven one nine dash five four zero dash eight zero zero zero.
Website address is w w w dot j a dot o r g.
National Council on Economic Education.
Robert F. Duvall, President and C E O.
One Thousand One Hundred Forty Avenue of the Americas.
New York, New York one zero zero three six.
Phone number is two one two dash seven three zero dash seven zero zero seven.
Toll-free phone number is eight zero zero dash three three eight dash one one nine two.
Website address is w w w dot n c e e dot n e t.
[End of Page Ninety Two]
[Page Ninety Three]
National Foundation for Teaching Entrepreneurship (N F T E).
Steve Mariotti, Founder and President.
One Hundred Twenty Wall Street,
Twenty Ninth Floor.
New York, New York one zero zero zero five.
Phone number is two one two dash two three two dash three three three three.
Website address is n f t e dot c o m.
National Youth Employment Coalition.
David E. Brown, Executive Director.
One Thousand Eight Hundred Thirty Six Jefferson Place, N W.
Washington, D C two zero zero three six.
Phone number is two zero two dash six five nine dash one zero six four.
Website address is w w w dot n y e c dot o r g.
Practical Money Skills for Life (V I S A).
Attn Corporate Relations.
P O Box one nine four six zero seven.
San Francisco, California nine four one one nine dash four six zero seven.
Phone number is eight zero zero dash V I S A dash five one one.
Website address is w w w dot p r a c t i c a l m o n e y s k i l l s dot c o m.
Rural Entrepreneurship through Action Learning (R E A L Enterprises).
A program of the Corporation for Enterprise Development (C F E D).
Andrea Levere, President, C F E D.
Kim Pate, R E A L Director.
Seven Hundred Seventy Seven N Capitol North East,
Suite Eight Hundred.
Washington, D C two zero zero zero two.
Phone number is two zero two dash four zero eight dash nine seven eight eight.
Website address is w w w dot r e a l e n t e r p r i s e s dot o r g.
[End of Page Ninety Three]
[Page Ninety Four]
Note: page ninety four is blank.
[End of Page Ninety Four]
[Page Ninety Five]
Learning Skills.
Australian Council for Educational Research (Test of Enquiry Skills).
Professor Geoff Masters, C E O.
Nineteen Prospect Hill Road,
Private Bag Fifty Five.
Camberwell, Victoria three one two four.
Australia.
Phone number is plus six one dash three dash nine two seven seven dash five five five five.
Website address is w w w dot a c e r dot e d u dot a u.
The College Board.
Gaston Caperton, President.
Wayne Camara, Vice President for Research.
Forty Five Columbus Avenue.
New York, New York one zero zero two three dash six nine nine two.
Phone number is two one two dash seven one three dash eight zero zero zero.
Website address is w w w dot c o l l e g e b o a r d dot o r g.
Council for Aid to Education (C A E).
A subsidiary of the R A N D Corporation.
Roger Benjamin, President.
Two Hundred Fifteen Lexington Avenue,
Twenty First Floor.
New York, New York one zero zero one six dash six zero two three.
Phone number is two one two dash six six one dash five eight zero zero.
Website address is w w w dot c a e dot o r g.
The Critical Thinking Co (Cornell Critical Thinking Tests).
Robert Ennis and Jason Millman, Authors.
P O Box four four eight.
Pacific Grove, California nine three nine five zero dash zero four four eight.
Phone number is eight zero zero dash four five eight dash four eight four nine.
Website address is w w w dot c r i t i c a l t h i n k i n g dot c o m.
[End of Page Ninety Five]
[Page Ninety Six]
Educational Testing Service.
Kurt Landgraf, President and C E O.
Rosedale Road.
Princeton, New Jersey zero eight five four one.
Phone number is six zero nine dash nine two one dash nine zero zero zero.
Website address is w w w dot e t s dot o r g.
Ennis-Weir Critical Thinking Essay Test.
Robert Ennis and Eric Weir, Authors.
Four Hundred Ninety Five East Lake Road.
Sanibel, Florida three three nine five seven.
Phone number is two one seven dash three four four dash two zero three eight.
Website address is f a c u l t y dot e d dot u i u c dot e d u slash r h e n n i s slash a s s e s s m e n t
dot h t m l.
Institute for the Advancement of Philosophy for Children.
(New Jersey Test of Reasoning Skills).
Maughn Gregory, Director.
Montclair State University.
Fourteen Normal Avenue.
Montclair, New Jersey zero seven zero four three.
Phone number is nine seven three dash six five five dash four two seven seven.
Website address is w w w dot m o n t c l a i r dot e d u slash p a g e s slash i a p c.
Kit of Factor-Referenced Cognitive Tests.
Kurt Landgraf, C E O.
Educational Testing Service.
Rosedale Road.
Princeton, New Jersey zero eight five four one.
Phone number is six zero nine dash nine two one dash nine zero zero zero.
Website address is w w w dot e t s dot o r g.
[End of Page Ninety Six]
[Page Ninety Seven]
Metiri Group (Self-Directed Learning Inventory).
Cheryl Lemke, President and C E O.
Six Hundred Corporate Pointe,
Suite One Thousand One Hundred Eighty.
Culver City, California nine zero two three zero.
Phone number is three one zero dash nine four five dash five one five zero.
Website address is w w w dot m e t i r i dot c o m.
Organization for Economic Co-operation and Development.
Barry McGaw, Director for Education.
Two, Rue Andre-Pascal.
F dash seven five seven seven five Paris Cedex sixteen.
France.
Phone number is plus three three dash one dash four five dash two four dash eight two zero zero.
Website address is w w w dot o e c d dot o r g.
P R O - E D (Ross Test of Higher Cognitive Processes).
Eight Thousand Seven Hundred Shoal Creek Boulevard.
Austin, Texas seven eight seven five seven dash six eight nine seven.
Phone number is eight zero zero dash eight nine seven dash three two zero two.
Website address is w w w dot p r o e d i n c dot c o m.
Rainbow Project.
Doctor Robert Sternberg, Yale University Department of Psychology.
Box two zero eight two zero five,
Yale University.
New Haven, Connecticut zero six five two zero.
Phone number is two zero three dash four five two dash four six three three.
Website address is w w w dot y a l e dot e d u slash p a c e slash a s s e s s m e n t three dot h t m
l.
Watson-Glaser Critical Thinking Appraisal.
Jeff Galt, President and C E O.
Harcourt Assessment, Inc.
One Nine Five Zero Zero Bulverde Road.
San Antonio, Texas seven eight two five nine.
Phone number is eight zero zero dash two one one dash eight three seven eight.
Website address is w w w dot h a r c o u r t a s s e s s m e n t dot c o m.
[End of Page Ninety Seven]
[Page Ninety Eight]
World Class Arena (World Class Tests).
Richard Kree.
Eighty Three Piccadilly.
London W one J eight Q A.
England.
Phone number is plus four four dash (zero) two zero dash eight nine nine six dash eight four
four zero.
Website address is w w w dot w o r l d c l a s s a r e n a dot o r g.
[End of Page Ninety Eight]
[Page Ninety Nine]
I C T Literacy.
American Association of School Librarians (A A S L).
Julie A. Walker, Executive Director.
American Association of School Librarians.
Fifty East Huron Street.
Chicago, Illinois six zero six one one dash two seven nine five.
Phone number is eight zero zero dash five four five dash two four three three.
Extension is four three eight two.
Website address is w w w dot a l a dot o r g slash a l a slash a a s l slash a a s l i n d e x dot h t m.
Association for Educational Communications and Technology (A E C T).
Doctor Philip Harris, Executive Director.
One Thousand Eight Hundred North Stonelake Drive, Suite Two.
Bloomington, Indiana four seven four zero four.
Phone number is eight one two dash three three five dash seven six seven five.
Website address is w w w dot a e c t dot o r g.
British Educational Communications and Technology Agency (B E C T A).
David Hargreaves, Chairman.
Millburn Hill Road Science Park.
Coventry C V four seven J J.
United Kingdom.
Phone number is plus four four dash (zero) two four dash seven six four one dash six nine nine
four.
Website address is w w w dot b e c t a dot o r g dot u k.
Cisco Networking Academy.
Phone number is four zero eight dash five two six dash four zero zero zero.
Toll-free phone number is eight zero zero dash five five three dash six three eight seven.
Website address is w w w dot c i s c o dot c o m slash e n slash U S slash l e a r n i n g slash n e t
a c a d slash i n d e x dot h t m l.
[End of Page Ninety Nine]
[Page One Hundred]
European SchoolNet.
Ulf Lundin, Director.
Rue de Trèves, Sixty One.
One Thousand Forty Brussels.
Belgium.
Phone number is plus three two dash two dash seven nine zero dash seven five dash seven five.
Website address is w w w dot e u n dot o r g.
Finnish Ministry of Education.
Tuula Haatainen, Minister of Education and Science.
P O Box Twenty Nine.
F I - zero zero zero two three Government.
Finland.
Phone number is plus three five eight dash (zero) nine dash one six zero dash zero four.
Second phone number is plus three five eight dash (zero) nine dash five seven eight dash one
four.
Website address is w w w dot m i n e d u dot f i slash m i n e d u.
International Technology Education Association (I T E A).
Ethan B. Lipton, President.
One Thousand Nine Hundred Fourteen Association Drive,
Suite Two Hundred One.
Reston, Virginia two zero one nine one.
Phone number is seven zero three dash eight six zero dash two one zero zero.
Website address is w w w dot i t e a w w w dot o r g.
International Society for Technology in Education (I S T E).
Doctor Don Knezek, C E O.
One Thousand Seven Hundred Ten Rhode Island Avenue N W,
Suite Nine Hundred.
Washington, D C two zero zero three six.
Phone number is eight zero zero dash six five four dash four seven seven seven.
Website address is w w w dot i s t e dot o r g.
[End of Page One Hundred]
[Page One Hundred One]
Korean Ministry of Education and Human Resources.
Jin-Pyo Kim, Deputy Prime Minister and Minister.
Seventy Seven Sejongro, Chongro-Ku.
Seoul one one zero dash seven six zero.
The Republic of Korea.
Phone number is plus eight two dash two dash three seven zero three dash two one one seven
comma three one one four.
Website address is w w w dot m o e dot g o dot k r slash e n slash i n d e x dot h t m l.
Qualifications and Curriculum Authority.
Martin Ripley, Principal Manager for New Projects.
Eighty Three Piccadilly.
London W one J eight Q A.
England.
Phone number is zero two zero dash seven five zero nine dash five five five five.
Minicom number is zero two zero dash seven five zero nine dash six five four six.
Email is R i p l e y M at q c a dot o r g dot u k.
Website address is w w w dot q c a dot o r g dot u k.
[End of Page One Hundred One]
[Page One Hundred Two]
Partnership for Twenty First Century Skills.
Three Hundred Forty Five East Toole,
Suite One Hundred Five.
Tucson, Arizona eight five seven zero one dash one eight four two.
Phone number is five two zero dash six two three dash two four six six.
Email is i n f o at two one s t c e n t u r y s k i l l s dot o r g.
Website is w w w dot two one s t c e n t u r y s k i l l s dot o r g.
[End of Page One Hundred Two]
[End of Document]