
Learning Ecosystem Brief Design Paper: Mass Customized, Personalized Learning
Ramona Pierson (Ramona.pierson@prometheanworld.com)
Executive Summary
The goal of the Learning Ecosystem (LE) is to bring critical resources into the hands of teachers to
transform the teaching and learning moment. By leveraging a fully integrated learning ecosystem,
education will finally be able to fulfill the goal of developing a mass customized, personalized learning
solution at scale for all students and educators. Adaptive learning has capped the learning potential of
students within a prescribed box of learning assets. In addition, adaptive solutions do not consider
educators as part of the solution; thus such programs fail to create the systemic changes and transformation needed to meet the needs of our kids and schools.
By customizing delivery of learning assets for teachers and offering immediate and targeted professional development, teachers can, in turn, offer immediate and targeted instructional interventions to their students.
Such an approach promises to transform today's educational environments to better prepare our students
for the future and improve schools, districts and institutions of higher learning.
The Learning Ecosystem offers a toolkit to not only support the work of teachers to customize instruction
to their students, but it also offers customized professional development to teachers based on their
learning needs and the needs of their students. Intelligent professional development means content that is
recommended to teachers is associated with individual student and group learning needs. This content is
selected in alignment with curricular content specific to the learner’s style. Professional development is
delivered by a system using artificial intelligence to appropriately identify, align, and differentiate content
for teachers.
The LE is a complicated project with sophisticated computer engineering and empirically based learning and teacher progressions that has the potential to change the face of education. The premise, however, is simple: determine the needs of classrooms and individual students, and provide the teachers with the knowledge and strategies to meet these needs in real time. Attempts have been made to address this national issue; this project is unique in that it will utilize a complete team of professionals, involve all stakeholders in the development, pilot, and implementation phases, and has the capacity for replication and portability across the country, as the content will be open source.
The Learning Ecosystem will operate within a framework that is similar to RTI, for both the student as
well as the teacher. One of the most significant shifts in education policy of the past several decades has
been the implementation of RTI or Response to Intervention. The reauthorization of the Individuals with
Disabilities Education Improvement Act of 2004 (IDEA; P.L. 108-446) allows educators to use
responsiveness-to-intervention (RTI) as a substitute for, or supplement to, the IQ-achievement
discrepancy to identify students with learning disabilities (LD) (Fuchs and Fuchs, 2005). “An RTI model
uses a student’s lack of response to an evidence-based intervention that is implemented with integrity as
the basis for intensifying, modifying, or changing an intervention. RTI is based on the notion of
discrepancy between pre and post intervention levels of performance. RTI is also consistent with a
problem-solving model of intervention in which problems are defined as a discrepancy between current
and expected levels of performance” (Gresham, 2004). Although the Learning Ecosystem is not
specifically designed to serve children with special needs, it utilizes the RTI framework concept to
enhance the learning of individual children as well as entire classrooms by identifying gaps in learning
and instruction and providing educators with both insight and educational objects and strategies to
address these gaps in real time. The mass customized, personalized Learning Ecosystem design could
ultimately provide tailored learning experiences and intervention solutions for all students regardless of
learning needs through dynamically programmed learning progressions that automate the RTI process for all students and even educators. The human element is not eliminated; rather, the system supplies the information regarding student performance, the means for addressing gaps for children who might be struggling, and strategies for those students who are ready to excel. Below is a brief synopsis of
a larger design paper available upon request.
Project goals
The primary technology innovations will be to develop a scalable platform that will provide mass
customization and personalized learning for both students and educators, and support a sustainable
continuously improving system for any school or district that wishes to use it, regardless of resource
constraints.
Student and Teacher Learning Progressions
At the center of the Learning Ecosystem, driving and enabling our approach to using technologies to
fundamentally change how educators and students work together to learn, are a few key assumptions.
The first assumption is that students progress through learning skills and mastering content in an organic
and developmental fashion, and second, that the current educational system has been designed to
primarily meet the needs of the majority, “typical” students; to move large numbers of learners through
the system as efficiently as possible (an industrial model); and to rank and sort students to some extent in
service of an increasingly outdated view of limited post-secondary options and economic advantage.
Third, while education reform has been increasingly robust in recent decades, the fundamental nature of the roles that teachers, administrators, education support specialists, learners, and parents play in schools has not changed much.
The true potential of education lies in providing a framework for thinking differently about how educators
(of all sorts), students, and families work together to help learners find their strengths and progress toward
important and exciting learning goals. The idea of differentiating supports for students in a competency-based framework, extended to educators as well, fundamentally shifts the nature of the educational work and has the potential to uncap student learning beyond the pre-defined learning categories that
exist today. We need teachers who can be effective coaches and guides, who motivate and challenge, and
who share responsibility for high achievement with their students as well as the full array of members of
the learning and home communities who can support students’ growth. Teachers must understand their
students, they must understand the content and skills to be learned, and they must understand how to build
relationships with their students so that together, they can foster continued growth and achievement.
Student Learning Progressions (SLPs) are the basis for enabling this shift in the learning and teaching
work. Carefully developed to model common journeys that students make through learning a content
area, while also remaining actively responsive to individual and group student data, SLPs will anchor the
Recommendation Engines that help educators as well as learners and parents understand what supports a
learner needs to move to the next level. The current educational learning progressions that exist today fall
short of having the comprehensive information to support the new learning models being proposed in a
parallel project; therefore, the LE proposes to begin with the current progressions then migrate the final
LE solution to use the new model of learning progressions which incorporate the cognitive, neuroscience,
and learning research such that we begin to support learning from a deeply individualized perspective.
To facilitate effective management of the educational system and to help set equitable learning goals,
standards have become commonplace. Unfortunately, standards often become the sole goal set for
learning, at least for some students. Worse, standardized assessments—by definition even smaller subsets
of the learning domain identified by standards—have become for many the de facto curriculum standard
and sole intended outcome. Traditional standards and standardized assessments have also tended to
promote the assumption that learning for students is linear, the same for all students, isolated by content
domains and blocks of instructional time, and often addressed once then either forgotten or never
revisited. In the effort to achieve educational efficiency, we have confused the relationship between the
domains of knowledge and skills that can potentially be learned, and the standards and assessments,
which help students and their teachers approach learning and understand progress.
“Spiral” curriculum approaches and curriculum “scaffolded” to support students moving at different paces
have helped, but learning progression-style instructional design has typically been approximated in a non-empirical fashion, relying on past experience of learning which may or may not be relevant for today's
learners. For example, Popham (2007) notes that:
Typically, learning progressions are constructed on the basis of some sort of backward analysis.
An educator first identifies a significant curricular aim and then asks, “What does a student need
to know or be able to do to master this aim?” The educator identifies one necessary building
block and then asks, “What does a student need to know or be able to do to master this building
block?” This sort of backward analysis can isolate the key tasks a student must accomplish on the
way to mastery. Teachers should, of course, sequence the learning progression's building blocks
in a pedagogically defensible order. (p. 83)
The idea of building curriculum, instruction, and assessments based on learners’ progress is appropriate,
and the Learning Ecosystem takes that notion a leap forward by building and continuously refining
student learning progressions based on real-learner data from multiple, embedded assessment measures
based on a polymorphic learning model, which will allow student learning to develop new categories of
learning.
The Learning Ecosystem project is a way to better access and utilize the many resources and research that
exist in order to help personalize supports for students and their teachers, to take a more comprehensive
view of the students’ educational experience—e.g., that learning takes place in many ways, many places,
and with many formal and informal “educators”—and to refine how we choose to support students
through their learning journeys based on a growing body of data about how students actually learn. We
are approaching the creation of student learning progressions in a way that builds on traditions of practice
in standards-based education (a “linear” approach) while being flexible enough to recognize the variance
that occurs in real-world learning by real students with individual differences (a “responsive” approach)
and incorporating newer and deeper notions of learning that can only occur when neural and cognitive
science research data is combined with educational data. Project research could ultimately result in new sets of standards and assessments that unlock new doors for student and teacher success.
The Learning Ecosystem will build an increasingly robust perspective on SLPs, linked with Teacher
Learning Progressions (TLPs) to support the fundamental change in the learning-teaching relationship
that is needed for the 21st century age of information and innovation.
Although the TLPs and SLPs are functionally linked, clarity requires ordering. As a result, the development of SLPs will be presented first, then the development of TLPs, followed by the linking of the two.
Development of Learning Progressions & Framework
Unique to the Learning Ecosystem is the linkage between student learning progressions and teacher
learning progressions. “Learning progressions are descriptions of the successively more sophisticated
ways of thinking about a topic that can follow one another as students learn about and investigate a topic
over a broad span of time” (NRC, 2007, Taking Science to School). Both teacher and student learning
progressions will be initially plotted within the Learning Progression Framework. The Learning
Progression Framework (LPF) will provide the common construct of learner progression for the linking
of SLPs to TLPs. The full development and relationship between student and teacher learning
progressions will be an iterative process, beginning with an estimation of Student Learning Progressions
in a discipline content area of interest developed from extant research on learners’ development,
discipline “enduring ideas” (Wiggins & McTighe, 2001) and relationships of these to an increasingly
deep understanding of the content area, standards and curriculum maps. The focus of the LPF for the
purposes of the pilot will be the Common Core Standards for middle grades Mathematics (6-8).
Learning progressions (LPs), hypothesized descriptions of the successively more sophisticated ways of student thinking about an important domain of knowledge or practice, develop as students learn about and investigate that domain over an appropriate span of time. Learning progressions are grounded in four interrelated guiding principles (Hess 2008, 2010):
1. LPs are developed (and refined) using available research and evidence
2. LPs have clear binding threads that articulate the essential core concepts and processes of a
discipline (sometimes called the ‘big ideas’ of the discipline)
3. LPs articulate movement toward increased understanding (meaning deeper, broader, more
sophisticated understanding)
4. LPs go hand-in-hand with well-designed and aligned assessments.
The learning progressions frameworks developed in mathematics for this project build upon the concept
of the Assessment Triangle, first presented by Pellegrino, Chudowsky, and Glaser in Knowing What
Students Know/KWSK (NRC, 2001). “The assessment triangle explicates three key elements underlying
any assessment: ‘a model of student cognition and learning in the domain, a set of beliefs about the kinds
of observation that will provide evidence of students’ competencies, and an interpretation process’ for
making sense of the evidence” (NRC, 2001, p. 44). Thus, the LPFs offer a coherent starting
point for thinking about how students develop competence in an academic domain and how to observe
and interpret the learning as it unfolds.
“We see the LPFs as articulating conceptual hypotheses about mathematics learning to be validated with
evidence seen in student performance across the grades. As teachers and researchers continue to use the
LPF learning targets and progress indicators for each strand, we will refine our thinking about how
mathematics learning develops over time for different populations of learners” (Hess, 2010, p. 6).
Interdisciplinary Connections will also be identified and tagged, so that the Recommendation Engines
can enable cross-content curriculum and assessment recommendations as well as give teachers insights
into student learning (e.g., what is occurring if a student can conduct measurement tasks successfully in
science but is challenged by similar tasks in mathematics class?). Initial connections will be estimated by
expert input, utilizing the NCIEA Learning Progressions Framework (Hess, 2008, 2010). As the Learning
Ecosystem grows, interdisciplinary connections will be based on empirical evidence of learners’ growth
and interdisciplinary links.
Learning Progression Scaling Process
By using assessments as ways to place students on the learner progression maps, rather than presenting
assessments as the proxy for content, we will be able to empirically examine the assumed standards-based
progressions and also permit students to move in non-linear fashion through learning the content. Data
trump our estimations and assumptions. We will first populate our assessment layer with data from the
National Assessment of Educational Progress (NAEP), PISA, state assessments, and available data from
other national and international databases.
The process for developing learning progressions relies on a Rasch model for assessment item scaling to
the Learning Progressions Frameworks (LPF). The Rasch model is an item response model in which the
total score on a measure is transformed in a nonlinear fashion to locate a person or item on a trait
continuum. Both persons and items are placed on the same scale, which allows for visual comparison of
how closely items are located vis a vis person abilities and of how much of the latent trait continuum is
reflected in the items used. In contrast to other item response models (called two-parameter and three-parameter models), only the difference between item and person position is modeled, while extraneous
variables such as guessing or testwiseness are not used. The Rasch model is the simplest of item response
models and has the minimum number of parameters to estimate, thus making estimates obtained from its
use often more stable. It is particularly useful with smaller datasets or datasets with extensive missing
data that may stem from infrequent presentation of some items because they are very easy or very
difficult. Advantages the Rasch model shares with other item response models include conversion of raw
scores into interval-scaled measures and, particularly, invariance. When the data fit the model, the
location of a person or item on the continuum is independent of the particular set of items sampled or
population of persons who happened to respond, or invariant, making item response models useful for
computer-adaptive testing where different individuals respond to different items but receive a score (logit
position) on the same trait scale.
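To make the scaling mechanics concrete, the following is a minimal illustrative sketch, not the production implementation, of how a Rasch model places persons and items on a shared logit scale via a simplified joint maximum likelihood fit; the function names, learning rate, and iteration count are placeholders.

```python
import numpy as np

def rasch_probability(theta, b):
    """P(correct) under the Rasch model for person ability theta and item
    difficulty b, both expressed in logits on the same scale."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def fit_rasch(responses, n_iter=500, lr=0.02):
    """Estimate person and item locations from a 0/1 response matrix
    (rows = persons, columns = items) by gradient ascent on the joint
    likelihood. NaN entries are treated as missing and skipped, mirroring
    the model's tolerance of sparse data."""
    n_persons, n_items = responses.shape
    theta = np.zeros(n_persons)            # person positions (logits)
    b = np.zeros(n_items)                  # item positions (logits)
    observed = ~np.isnan(responses)
    filled = np.nan_to_num(responses)
    for _ in range(n_iter):
        p = rasch_probability(theta[:, None], b[None, :])
        resid = np.where(observed, filled - p, 0.0)
        theta += lr * resid.sum(axis=1)    # d logL / d theta = +residual sum
        b -= lr * resid.sum(axis=0)        # d logL / d b     = -residual sum
        b -= b.mean()                      # anchor: mean item difficulty = 0
    return theta, b
```

In practice, dedicated Rasch software with bias corrections and fit statistics would be used; the sketch only shows the shared-scale property that the scaling process relies on.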
The family of Rasch models allows scaling of dichotomous, polytomous, and frequency count data.
Measures using items with combinations of response scales can be accommodated as can missing data.
Further, individuals’ use of the response scale can be assessed; test developers often assume test-takers
use higher categories of response to reflect more of the trait. With a Rasch analysis, this assumption can
be tested. Also subject to test is whether the trait being measured (e.g., knowledge of Geometry) is
understood in the same manner by subgroups in the population. If items function differentially in
relationship to the subgroups responding, scores on the measure cannot be compared because the trait is
interpreted differently for, for example, girls compared to boys. Invariance is tested to ensure that items
function similarly for population subgroups.
To assist in determining whether data fit the Rasch model, indices such as overall fit, dimensionality, item
and person fit, response scale step fit, and logical layout of items based on knowledge of the content are
used.
This approach will eventually reach a mass and variety of data in the system such that estimated learning
progressions for students and sub-groups of students will become stable at least within sub-domain areas
and for student sub-group patterns. Scaling based on accurate placement in relation to the content field
and to other assessment and curriculum items will help refine the ontologies used for the
Recommendation Engines and also serve to provide a stable benchmark for estimating validity, reliability,
and difficulty/depth of new items that additional users might bring to the system as it grows.
Mapping the Standards & Domain Modeling
The Learner Progressions and Performance team will work with the Digital Content and Development
team to identify the “core of core” math curriculum standards. Initial pilot focus will be on math, grades
6-8. These will be the focus of testing and refining the initial ontologies, with existing learning assets harvested (with expert review) and new learning assets created and piloted with district partner educators.
The student learning progressions (SLPs) will be mapped using the Learning Progression Framework; assessments will need to be classified (and then tested and classifications refined as needed by the algorithm) by key content sub-domains (e.g., a standard “strand” for a grade level). The recommendation engine in turn delivers:
• Student-focused tutoring-type options
• Classroom-focused instructional / curriculum options for the teacher (based on the group of students, as well as on individual students’ needs)
• An indication of whether there might be more serious underlying issues for students, in which case they would be “bumped” to the second tier (and eventually third tier) of the Response to Intervention/Instruction (RtI) framework, as sketched below.
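The routing logic above can be sketched as follows; the thresholds and function names are invented for illustration and are not project-specified values.

```python
def route_recommendation(class_miss_rate, student_miss_rate,
                         tier1_threshold=0.4, escalation_threshold=0.75):
    """Illustrative mapping of assessment miss rates onto the three
    delivery paths listed above; thresholds are placeholders."""
    recommendations = []
    if student_miss_rate >= tier1_threshold:
        # Individual gaps: surface student-focused tutoring-type options.
        recommendations.append("student_tutoring_options")
    if class_miss_rate >= tier1_threshold:
        # Widespread gaps: surface classroom-focused instructional /
        # curriculum options for the teacher.
        recommendations.append("classroom_instructional_options")
    if student_miss_rate >= escalation_threshold:
        # Persistent non-response to Tier 1 supports: bump the student to
        # the second tier of the RtI framework.
        recommendations.append("escalate_to_rti_tier_2")
    return recommendations
```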
Domain Modeling
A quotation from Messick (1994) neatly summarizes the core idea of an assessment argument:
“A construct-centered approach would begin by asking what complex of knowledge,
skills, or other attributes should be assessed, presumably because they are tied to explicit
or implicit objectives of instruction or are otherwise valued by society. Next, what
behaviors or performances should reveal those constructs, and what tasks or situations
should elicit those behaviors? Thus, the nature of the construct guides the selection or
construction of relevant tasks as well as the rational development of construct-based
scoring criteria and rubrics.” (p. 17)
To the end of instantiating such an argument, an Evidence-Centered Design (ECD) framework provides
terminology and representations for layers at which fundamental entities, relationships, and activities
continually appear in assessments of all kinds.
Educational assessment involves characterizing aspects of student knowledge, skill, or
other attributes (KSAs), based on inferences from the observation of what they say, do, or
make in certain kinds of situations. The ECD framework can be thought of as an end-to-end process in
several conceptual layers (Mislevy, 1994; Mislevy, Steinberg, & Almond, 2002, 2003). The first step in
starting the assessment process is considering characteristics of the world that are relevant to the
assessment one wishes to construct. This process is represented by the top layers of the ECD model
(adapted from Mislevy & Riconscente, 2006): The first layer is marshalling facts and theory about the
domain and the second is organizing the information in the form of assessment arguments. The middle
layer, the Conceptual Assessment Framework (CAF), specifies more technical models for task creation;
evaluation procedures; measurement models; and the like—in essence, blueprints for the pieces and
activities that instantiate the argument in the real world. The next layer concerns the manufacturing of the
assessment artifacts and the specifics for their usage. The lower layer describes a four-process
architecture for understanding assessment delivery.
Domain analysis and domain modeling define the high-level content or experiential domains to be
assessed and document relationships among them. This is the content knowledge or subject matter
related to the educational enterprise; the ways people use it; and the kinds of situations they use it in.
What constitutes mathematics? What do we know about progressions of thought or skill; patterns of
errors; or common representations? For the current context, what knowledge, skills, goals, tools, enablers,
representations and possible distractions are relevant? How do students interact with the physical
environment, conceptual representations, and other individuals to accomplish certain feats? By what
standards are these efforts judged? To answer such questions, designers conduct a domain analysis. They
consider the domain from a number of perspectives, such as cognitive research; available curricula;
professional practice; ethnographic studies; expert input; standards and current testing practices; test
purposes; and the various requirements, resources and constraints to which the proposed assessment
might be subject.
Developments in psychology have had profound effects on domain analysis in recent decades (Mislevy,
2006). For example, domain analysis under behaviorist psychology focused on identifying concrete and
precisely defined actions in a domain, to be expressed as behavioral objectives (Mager, 1962). The
information-processing perspective of the cognitive revolution (Newell & Simon, 1972) called attention
to the internal and external knowledge representations that people work with; the procedures and
strategies they use; and the features of problems that make them hard, all holding clear implications for
assessment design. A socio-cognitive perspective further widens domain analysis to the range of
cognitive, cultural, and physical tools students use and the ways they interact with situations and each
other to accomplish goals in some sphere of activity (e.g., Engeström, 1999).
In domain modeling, designers organize information from domain analyses to describe the
relationship among capabilities, what we might see people say, do, or make as evidence, and situations
and features to evoke it—in short, the elements of assessment arguments. Graphical and tabular
representations and schemas are constructed to convey these relationships. Furthermore, prototypes may
be used to fix ideas or test assumptions. Among the representational forms that have been used to
implement ECD are claims and evidence worksheets; Toulmin diagrams for assessment arguments; and
design patterns for constructing assessment arguments for some aspect of capabilities, such as design
under constraints and model-based reasoning (Mislevy, Riconscente, & Rutstein, 2009).
Twenty-first century technology influences how assessments are delivered as conceptualized in the four-process model. Advances in computer-based semantic analysis and pattern matching allow for more
complex evidence identification procedures. In both digital and physical systems, pattern recognition is
used to map aspects of the work product to symbols that represent quality or quantity of relevant aspects
of the work product (i.e., values of observable variables). In physical scoring systems, the need for
mechanical simplicity in scoring often drives backward up the ECD chain to constrain tasks to simplified
formats consistent with simplified scoring, thereby constraining the kinds of evidence that can be
obtained. In digital systems, we can program rules and algorithms to identify and process a wider variety
of types of work products and apply scoring rules more flexibly.[1]
Infrastructure & Refinement of Learning Progressions
A key challenge is how to identify gaps and create content to meet the learning needs of teachers. We
propose a decision-making framework that is student-centric and combines monitoring and guiding
learner progression while attempting to optimize teacher development. The primary goal remains
optimizing long-term student learning experience. Teacher development will be driven by a machine
learning engine realized using nonlinear parameterized models, which are trained using supervised
learning schemes to map teacher features, student scores and PD selections to measured class
improvements. Such models will initially be determined by experts and later optimized for a particular
geographical region and/or pedagogical goals to further optimize the PD selection process. The nonlinear
relationship between the teacher, class and PD elements will be identified and serve as a powerful guide
for future decision making despite its inherent uncertainty.
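As a sketch only, such a supervised scheme could be prototyped with a generic nonlinear regressor; the feature layout below (teacher features, aggregate student scores, one-hot PD selections) and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the project's chosen model family.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder training data: each row concatenates teacher features,
# aggregate student scores, and a one-hot encoding of the PD selection;
# the target is the measured class improvement.
rng = np.random.default_rng(0)
X, y = rng.random((500, 20)), rng.random(500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)

# Rank candidate PD selections for a teacher/class by predicted improvement.
candidates = rng.random((10, 20))   # same layout, varying the PD encoding
best_candidate = candidates[np.argmax(model.predict(candidates))]
```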
The learning needs of teachers will be evaluated by measuring the impact PD selections had on the class
scores. While the educator lies at the center of the AI model proposed, the objective function that drives
all action (i.e., curricular and PD selections) is guided predominantly by student progression assessments.
[1] Excerpted from CRESST Report 778, http://www.cse.ucla.edu/products/reports/R778.pdf.
As additional data are collected, the AI system will improve its precision, leading to a more accurate decision-making process.
The process pursued and the automated analysis provided by the AI engine will be made available to the
primary stakeholders, including teachers and parents. This will enable the community at large to be
included in the evolution of the student and teacher learning progression.
This model will allow us to map out complex and interrelated learning progressions, which will grow in complexity, requiring a Hierarchical Deep Recurrent Neural Network (HDRN) to manage the dimensional complexity as additional subject areas are phased into the project. The HDRN is a biologically inspired computational architecture that will capture the spatiotemporal observations in the LPs as new layers of strands and content domains are added to the LP mappings. The top-down and bottom-up nature of the information processing will allow the LE to provide predictions on learning pathways for individual learners and associations to the content linked to each learning node.
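The HDRN itself is beyond the scope of this brief; the hierarchical idea, however, can be sketched with a minimal two-level recurrent model (forward pass only, untrained, with invented dimensions): a lower layer consumes per-assessment observations while an upper layer consumes the lower layer's states, capturing longer-range structure as new strands and domains are phased in.

```python
import numpy as np

class SimpleRNNLayer:
    """One recurrent layer: h_t = tanh(W_x @ x_t + W_h @ h_{t-1})."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_x = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))

    def run(self, sequence):
        h = np.zeros(self.W_h.shape[0])
        states = []
        for x in sequence:
            h = np.tanh(self.W_x @ x + self.W_h @ h)
            states.append(h)
        return states

# Lower layer: per-assessment observations; upper layer: the lower layer's
# states, i.e., a second temporal level across strands/content domains.
lower = SimpleRNNLayer(input_dim=8, hidden_dim=16)
upper = SimpleRNNLayer(input_dim=16, hidden_dim=16, seed=1)
observations = [np.random.rand(8) for _ in range(12)]   # placeholder LP data
pathway_state = upper.run(lower.run(observations))[-1]  # basis for predictions
```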
Gap Analysis and Refinement
Assessment items will initially be drawn from a high-quality assessment item bank such as ETS’s, which is
already aligned to the common core standards for Mathematics, allowing for immediate administration of
formative assessments. The Learning Ecosystem will continue building out assessment item resources
using NAEP, PISA, and state measures. Through expert vetting and alignment we will be able to test
alignment of these items to Common Core Standards through the Learning Ecosystem and comparisons
of Rasch modeling of item mapping along the Math LPF.
Analysis of items across learners and from multiple assessment banks will allow for scrutiny of item functioning. The LPP team will regularly assess item functioning (e.g., on a monthly basis) to
identify weak items within the system. Where items are found to have a weak fit, new assessment items
will be constructed and tested to align more closely with the CCSS and in turn refine and improve the
accuracy of the learning progression framework.
Through learning asset ratings, students and teachers will be able to give object-specific feedback. In pilot
stages, the LE teams will observe and work alongside teachers and students. Quantitative and qualitative
assessments of teacher and learning needs will prioritize the work of item and asset development for the
LE.
The learning assets of the system consist of layers of content, which include assessment items, student
learning objects, and teacher learning objects. Teacher learning objects consist of three types of content:
professional development for assessment, professional development for content knowledge, and
professional development for differentiation of instruction. Student learning objects consist of content
associated with the Common Core Standards. These student learning objects are also cross-correlated to
differentiate delivery based on student learning need.
Development of Teacher Learning Progressions (TLPs)
Teacher learning progressions are derived from student learning progressions in that the recommendation
engine pushes appropriate professional development content to the teacher based on the students'
performance across a learning progression. The system identifies Targets for Intervention (TfI) across two tiers: first, by content area based on assessment outcomes, and second, by learner function to allow for
differentiation of instruction by the necessary modality for the individual student. In this way, the
Learning Ecosystem can present Targets for Instruction (TfI) to teachers for aggregate outcomes and
group learning or on a more granular level for individualized student need.
The learning progressions will be developed in an XML format and directly connected to the LP platform
and the Learning algorithms in order to allow for the data feedback loop to drive the flexible adaptation of
the progression to meet the individual learning needs and pathways of the learners and educators in the
system. The system will develop a “profile” of the in-service educator from pre-existing data spanning three to five years of student achievement and assessment information, developing a learning-need pattern that indicates where to place the educator in the learning progression aspect of the content domain. (Pre-service
educators will be assumed to be developing their skills and will be assessed at baseline to determine their
mapping associated to pre-service learning). Curricular selections will be determined by the learning algorithm formulated as a dynamic programming problem whereby action-dependent state transitions are expressed as

P(s′ | s, a) = Pr(s_{t+1} = s′ | s_t = s, a_t = a),

the expected reward is given by

R(s, a) = E[r_{t+1} | s_t = s, a_t = a],

and the goal is then to maximize the sum of discounted rewards over time,

E[ Σ_{t≥0} γ^t r_t ], with discount factor 0 ≤ γ < 1.
The dynamic system requires “experts” to define the two constructs for the dynamic programming
problem in the “top down” process. A solution to the DP problem is finding the optimal action (i.e.
curricular selection) at every state of the learner progression trajectory. Once sufficient data is collected,
the system will adaptively personalize the constructs for each individual learner (educator / student) and
match the learner’s learning trajectory to its “N closest learners”; it will refine the dynamic programming
model by using data from the “N closest” learners; and optimize the professional development/curriculum
selection with the refined Dynamic Programming model.
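For illustration, the DP problem above can be solved with standard value iteration; in the sketch below, states stand for positions on the learner progression, actions stand for curricular selections, and the transition probabilities P and rewards R stand in for the two expert-defined constructs. All values are placeholders.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Solve the DP problem: P[a][s][s'] are action-dependent transition
    probabilities, R[s][a] is the expected reward (e.g., expected learning
    gain). Returns the state values and the optimal action (curricular
    selection) at every state of the progression trajectory."""
    n_actions, n_states = len(P), len(R)
    V = np.zeros(n_states)
    while True:
        Q = np.array([[R[s][a] + gamma * P[a][s] @ V
                       for a in range(n_actions)] for s in range(n_states)])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # values and optimal policy
        V = V_new

# Toy instance: two progression states, two curricular actions.
P = [np.array([[0.8, 0.2], [0.1, 0.9]]),    # transitions under action 0
     np.array([[0.5, 0.5], [0.0, 1.0]])]    # transitions under action 1
R = np.array([[0.0, 1.0], [2.0, 0.0]])      # R[s][a]: expected gain
values, policy = value_iteration(P, R)
```

Refinement with data from the “N closest” learners would then amount to re-estimating P and R from those learners' observed trajectories before re-solving.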
The process for the development of Teacher Learning Progressions (TLPs) will be parallel to that of
Student Learning Progressions. Both will be based on the Learner Progression Frameworks derived from
the common core standards to ensure content alignment. In addition, professional development learning
assets will be informed by the Curricular Focal Points for Mathematics developed by the National
Council of Teachers of Mathematics (NCTM). The secondary construct informing Teacher Learning
Progressions to effectively identify needs and deliver appropriate content is the associated parallel learner
function with targeted instructional function.
The Learning Ecosystem will activate professional development learning assets to educators according to
learner function and/or content area, based on system signals triggered by predetermined thresholds of student
outcomes. In this way, the Learning Ecosystem can present Targets for Instruction (TfI) to teachers for
aggregate outcomes and group learning or on a more granular level for individualized student need.
The system will use a cross-validation process of content ratings from teachers and students to assist in the calibration of the model parameters. The data will also cross-validate the value of the ratings against achievement outcome data, which will allow the system to gauge the meta-cognitive values and accuracy of self-reporting.
Teacher learning progressions, similar to student learning progressions, are driven by student learning
progression item mapping, which is scaled to the Learner Progression Framework. The Learning
Ecosystem will capture data from the student assessments and the recommendations and educator
professional development that are presented in response to student assessments. It will be possible to
analyze what teachers do with individual students as well as with the full classroom of students and subgroups of students with particular needs. If / when a teacher effectively adjusts differentiated (one-on-one
or teacher with small group, but temporary) and universal (every day, every year) practices, we should see
a pattern of improvement over time in their students’ being a) off target less often, and/or b) for shorter
periods of time. Adjustments by educators in response to student needs become more automatic, and/or
more systemically incorporated into the regular classroom practice.
Finally, in such a system, multiple resources abound to help students and educators refine their
understanding of the student’s current place in the learning progression. Not only formal summative
assessments, but informal assessments, formative assessments, tutoring programs, computer games, and
regular curriculum and instructional activities all have potential to inform choices about the next
educational steps. This challenges much of traditional thinking about the role of assessments and how
they are used. Assessments can be any intentional effort to help students and educators understand
student progress. The “stretch” offered by mapping standards, curriculum options, learning assets, and
assessments (diagnostic, formative, and summative) to our content field enables more differentiation of
instruction, including better targeted remediation and re-teaching in different modalities as well as
supporting extensions and deeper knowledge options for students who have already mastered the required
standard.
In a larger sense, evidence of meeting the learning needs of teachers will primarily be found in the
resulting performance of students following an instructional intervention. Specifically, the efficacy of
teaching and learning assets and professional development assets will be assessed through object rating
and feedback. Item feedback interfaces allow for communities of practice to evaluate and improve
existing system resources. Open resources will be vetted via a three-tier process detailed in the sections
on learning assets. In addition, formative assessment of the pilot implementation by the LE team will
evaluate professional development assets through observation of teacher use and classroom
implementation. The LE team will conduct gap analysis in the identified student learning needs and
outcomes and teacher learning needs and outcomes compared with asset use. In the first two years, monthly analyses of the ontological framework’s effectiveness, as well as asset-use analysis, will inform content improvement. As the system becomes more robust through use, the ability to meet the learning needs
of teachers will improve based on learner progression discrimination and item refinement and validation.
Ongoing evaluation of content will be built and sustained by use of the Learning Ecosystem as a site of
experiential learning for the students of the College of Education (in the Library Information Science
Program and Curriculum and Instruction Program). Finally, the team evaluators will engage stakeholders
and collect feedback data to inform development and refinement of the systems.
Additionally, the LE team will conduct observations to ensure fidelity to the model. Student outcomes
may not accurately represent the effectiveness of the Learning Ecosystem generally or the instructional
interventions specifically if they are not applied with fidelity. The LE team will develop a fidelity rating
system. Graduate students and peer coaches will use classroom observations to assist teachers in
implementing the instructional interventions with fidelity.
In addition, capturing the impressions of the teachers and other stakeholders will be crucial to determining
the success of the Learning Ecosystem. The LE team will document feedback regarding system
utilization as well as the perceived effectiveness of the system and the instructional interventions. The
teachers’ perceptions of effectiveness are as important as the outcome data from students, particularly in
the initial stages. If teachers do not perceive the system and the specific instructional interventions to be
useful, they could resist use, which would negatively impact the project. It is important to note that
significant change in student performance may not be observed immediately. Change and growth over
time are likely within this framework. Even with effective instructional techniques, growth can be
inconsistent. This again reinforces the need for teachers to feel that the system is working so that they
will persist through slower times of growth.
While learning progressions for teachers will be developed for in-service, practicing teachers initially, and
refined based on the growing database of information about educators and how they use the Learning
Ecosystem resources in support of their work with students, we will build on these in Phase II to
extrapolate the teacher learning progressions “down” into pre-service preparation. The existence in
Colorado of a “teacher identifier” that links teachers back to their pre-service preparation programs will
allow us to also trace back the learning progressions to preparation elements in Phase II.
Linking SLPs and TLPs to drive Recommendation Engines
In the pilot launch of the Learning Ecosystem, the LE team will be analyzing student learning
progressions and performance alongside teachers. A student’s failure to answer an item in a content target area
correctly could be an indicator of three possibilities: the assessment item was poorly constructed, the
student has not mastered the concept, and/or the concept was not taught effectively. Indeed, an incorrect
response may also be part of a standard error of measure or have occurred because of more than one of
the aforementioned possibilities. In any formative assessment of learning these possibilities exist; the
learning ecosystem offers immediate resources for targeting the necessary intervention.
Should a large enough sample of the teacher’s students miss items correlated to a content area, the
learning ecosystem recommendation engine would activate content for professional development for the
teacher (by content as well as order of magnitude cognitive modalities of the population of students in
that class).
Should the learning ecosystem identify a specific intervention for a smaller subgroup of students or
individual student, the delivered content would not only offer content differentiated for that student, but
the system would link to a teacher professional development learning asset for specific differentiation of
instruction for that individual student or subgroup of students.
Lack of proficiency by a class of students could suggest either a need for professional development (as
described above) or could simply be a need for additional resources based on that content area. Teachers
would be able to search for content tagged along the content standards of the learning progressions
framework and cross-correlated by modality to ensure available resources.
The cognitive coaching system will include two main components: a learner modeling module and a recommendation engine. The learner trace model tracks student and teacher progress mapped to the LPF by:
• building a probability model of where students may need built-in support, refining the model based on actual real-time data collected within the system;
• estimating directly for students through multiple assessment measures and indirectly through data on learning asset use; and
• estimating indirectly for teachers through patterns of students’ assessment results, teachers’ related choices of curricular and instructional adaptations, and the resulting impacts on their students’ learning progress.
In addition to data captured for students’ and teachers’ learning progressions, the cognitive coaching
system will include individuals’ background information. These data sources feed the
recommendation engine that will deliver learning assets for students (both individual and classroom or
group focused for use by a teacher in support of students) as well as related professional development
(PD) modules for teachers.
A key output of the learning ecosystem is the educator user interface that will identify Targets for
Intervention (TfI). The TfI will be accompanied by professional development modules, which correspond
to the core of core mathematics standards and the National Council of Teachers of Mathematics (NCTM)
curricular focal points. The professional development modules will provide differentiated instructional
learning assets to provide efficient and timely response to intervention.
To best support improved student learning, a teacher must be able to identify the location of disconnect in
understanding for an individual student, as well as his/her strengths upon which to build. In order to be
able to deliver instruction that differentiates for individual needs, differentiated professional development
modules must be available alongside of high-quality, reusable differentiated content.
The goal of the learner modeling module is to capture and track learning progressions of individual
learners (students and teachers), so that the recommendation engine can identify each learner’s current
needs and search for items that best fit the needs for students and in turn deliver appropriate content to
teachers.
Item maps showing student performance will aid teachers in isolating the “disconnect” in the learning
process and allow for efficient and targeted differentiated instruction. Item families are groups of related
items designed to measure the depth of student knowledge within a particular content area (vertical item
families) or the breadth of student understanding of specific concepts, principles, or procedures across
content areas (horizontal item families). Within a family, items may cross content areas, vary in
mathematical complexity, cross grade levels, and include cognitive demand data. Item maps will allow
for an aggregate visual assessment of key areas of instructional needs at different levels by, for example,
individual student, groups of students (by subpopulations), and class.
The recommendation engine will work slightly differently for students and teachers. In the case of a
student, it will first identify the learning needs based on his or her learning progression and personal
attributes. In the case of a teacher, the professional development areas will be identified based on both the
teacher’s learning progression and his or her students’ group data. For both students and teachers, the
recommendation algorithm will combine two approaches in finding learning assets or PD modules that
best fit current progression of the student or teacher: one is a content-based recommendation which
calculates how well the learning assets or PD modules match the student’s or teacher’s learning needs and personal traits; the other is collaborative filtering, which looks for successful recommendations previously made for other students or teachers with similar learning progressions. This hybrid approach is currently considered a highly effective method for recommendation problems (Melville et al., 2002).
Depending on the uncertainty of the attributes (i.e., whether the value of an attribute is represented as a
probability), different modeling methods are adopted. In this cognitive tutoring system, attributes of
students and teachers can be summarized at a high level.
For attributes with certain values, the system will record their values and update them as they change. For
an attribute with probabilistic value (e.g., a student is a visual learner with 80% probability), the system
will use a Bayesian Network (BN) to estimate its probability. The BN paradigm is so far the most
commonly used formal reasoning technique for uncertainty-based learner modeling and can handle
situations with missing data (Brusilovsky & Millan, 2007). BN will be combined with machine learning
algorithms to refine the model based on the learner’s performances on assessment. Freeware tools already
exist for BN such as GeNIe (SMILE) and UnBBayes that support BN models and flexible integration of
learning algorithms.
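A single step of such an uncertainty update reduces to Bayes' rule; the attribute, prior, and likelihoods below are invented for illustration (a full BN tool such as those named above would handle networks of interdependent attributes).

```python
def update_attribute(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update of a probabilistic learner attribute.
    prior:               P(attribute), e.g., P(visual learner) = 0.8
    likelihood_if_true:  P(observation | attribute is true)
    likelihood_if_false: P(observation | attribute is false)"""
    evidence = (likelihood_if_true * prior
                + likelihood_if_false * (1.0 - prior))
    return likelihood_if_true * prior / evidence

# A student believed to be a visual learner with probability 0.8 succeeds on
# a visually presented item, an outcome assumed twice as likely for visual
# learners (0.70 vs. 0.35). The estimate rises to about 0.89.
p = update_attribute(prior=0.8, likelihood_if_true=0.70,
                     likelihood_if_false=0.35)
```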
Specifically, the content-based recommendation will rank the learning assets or PD modules according to
the student’s or teacher’s learning needs, his or her unique personal traits and strengths, the attributes of
learning assets or PD modules, and (for students) the teacher’s learning progression. As a starting point, the
ranking of learning assets or PD modules will be determined by a weighted linear combination score
where the importance of each attribute will be tuned using simulated data and feedback collected from
teachers’ interactions with the system. Other information retrieval (IR)-based techniques (e.g., similarity
measures, probabilistic models) will also be explored to improve performance.
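A minimal sketch of the weighted linear combination score, with invented attribute names and weights, might look like the following; per-attribute match scores are assumed to lie in [0, 1].

```python
import numpy as np

def content_score(asset_attrs, learner_needs, weights):
    """Weighted linear combination of per-attribute agreement between an
    asset (or PD module) and a learner's needs; names are illustrative."""
    match = np.array([1.0 - abs(asset_attrs[k] - learner_needs[k])
                      for k in weights])         # agreement per attribute
    w = np.array(list(weights.values()))
    return float(match @ w / w.sum())            # normalized weighted sum

weights = {"difficulty": 0.5, "visual_modality": 0.3, "strand_coverage": 0.2}
asset = {"difficulty": 0.6, "visual_modality": 1.0, "strand_coverage": 0.8}
needs = {"difficulty": 0.7, "visual_modality": 0.9, "strand_coverage": 0.8}
score = content_score(asset, needs, weights)     # 0.92: a strong match
```

In the system, these weights would be tuned using simulated data and teacher feedback as described above.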
Collaborative filtering is used to find patterns among similar learners’ history that can be used to improve
classroom and group based supports. As the project proceeds, there will be more and more
students/teachers’ learning progressions recorded in the system. Recommendations made for other similar
students/teachers are valuable knowledge that can help improve subsequent recommendations. In this
approach, similar students/teachers are identified by considering attributes of their learning progressions.
At this point, the algorithm will calculate a score for previous recommendations adopted by similar
students/teachers based on how similar the students/teachers are to the student/teacher in question and
how often these previous recommendations have been adopted. Advantages include the ability for
classrooms and schools to benefit from trials of learning assets in other similar classrooms and schools,
and in helping to ensure that the learning assets and PD modules remain benchmarked to a widely
acceptable standard of quality.
The two scores from content-based recommendation and collaborative filtering will then be combined
using a weighted linear combination. The weights will be initially tuned by simulated data and then
adjusted using feedback collected from teachers’ interactions with the system.
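A hedged sketch of the collaborative filtering score and the final combination, with illustrative names throughout, follows.

```python
def collaborative_score(candidate, similar_learners, adoption_history):
    """Similarity-weighted adoption rate of a candidate asset among the N
    closest learners. similar_learners maps learner id -> similarity in
    [0, 1]; adoption_history maps learner id -> set of adopted assets."""
    num = sum(sim for lid, sim in similar_learners.items()
              if candidate in adoption_history.get(lid, set()))
    den = sum(similar_learners.values()) or 1.0
    return num / den

def hybrid_score(content, collaborative, alpha=0.6):
    """Weighted linear combination of the two scores; alpha is initially
    tuned on simulated data, then adjusted from teacher feedback."""
    return alpha * content + (1.0 - alpha) * collaborative
```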
Item maps will illustrate what students know and can do in subject areas by positioning descriptions of
individual assessment items along the Learner Progressions Framework. An item is placed at the point on
the scale where students are more likely to give successful responses to it. Assessment items from ETS
(initially), NAEP, and other state measures will be aligned with the common core standards to ensure
stability of scaling.
To map items to particular points on each subject area scale, a response probability convention had to be
adopted that would divide those who had a higher probability of success from those who had a lower
probability. Choosing a response probability convention has an impact on the mapping of assessment
items onto the scales. The underlying distribution of skills in the population does not change, but the
choice of a response probability convention does have an impact on the proportion of the student
population that is reported as “able to do” the items on the scales.
For identifying the “band” of performance that will shape the teacher learning progressions, a Rasch
model will be used to identify response probability conventions. NAEP has adopted two related response
probability conventions: 74 percent for multiple-choice items (to correct for the possibility of answering
correctly by guessing), and 65 percent for constructed-response items (where guessing is not a factor).
These probability conventions were established, in part, based on an intuitive judgment that they would
provide the best picture of students' knowledge and skills. For pilot purposes we will adopt and test these
probability conventions and adjust them vis a vis refinement of the learning progressions.
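Under a Rasch model, the mapped location of an item for a given response probability (RP) convention follows directly from the model's form: the ability at which a student responds correctly with probability RP is the item difficulty plus the logit of RP. The sketch below plugs in the NAEP-style conventions quoted above.

```python
import math

def item_map_location(difficulty_logit, response_probability):
    """Scale point at which an item is mapped under an RP convention:
    the ability at which P(correct) equals RP under the Rasch model."""
    rp = response_probability
    return difficulty_logit + math.log(rp / (1.0 - rp))

# An item of difficulty 0.0 logits maps at roughly +1.05 logits under the
# 74% multiple-choice convention, and at roughly +0.62 under the 65%
# constructed-response convention: the convention shifts item positions.
mc_location = item_map_location(0.0, 0.74)   # ~ 1.046
cr_location = item_map_location(0.0, 0.65)   # ~ 0.619
```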
Linking SLPs, TLPs, and data captured from Recommendation Engines with educator effectiveness
systems
Because the Learning Ecosystem has the capacity to capture a rich array of data about the work that
students and their teachers do together, it has the potential to drive a larger Educator Effectiveness system
that puts student achievement, professional development and continuous improvement of educators, and
local choices at the center of accountability policies. With data captured from multiple assessment
measures, frequent near-real-time assessments, the choices of curriculum and instruction and related
professional development efforts that educators make, links across classrooms and with education
specialists which allow richer data to follow students, and an unprecedented ability to look at
disaggregated subgroups of students, the Learning Ecosystem could prove central to changing not only
the classroom but educator effectiveness overall.
The concept is not new—many states have investigated the concept of being able to “drill down” to
individual student level data, and using that information as the starting point for accountability systems
for teachers, principals, schools, and districts. Value-added models have refined the usage of summative
assessments for such purposes. The real obstacle to success has been the ability to capture classroom data
closest to the students and their educators, and to have a way to simultaneously understand the effort that
educators are making so that they can be supported for continuous improvement and not merely judged.
These elements—a focus on the symbiotic learning and teaching relationship and on supporting efforts to
improve—are essential if a system is going to move from accountability and judgment to improving
effectiveness and responsibility for student achievement. The Learning Ecosystem, as described above,
has the capacity to capture rich data about student and teacher interactions, to use those data primarily to
drive new options for improvement, and to finally provide a more comprehensive picture of teacher,
classroom, and schoolwork for a fair and focused accountability system.
The LE project is being built in tandem with the creation of a “Balanced Assessment System” to focus on
teacher effectiveness. To ensure portability and scalability, we are also working closely with the
Colorado Department of Education and the State Council for Educator Effectiveness to help ensure that
the Learning Ecosystem has the capacity to serve as an effective accountability system should educators,
districts, or states wish to use it in that way.
The Learning Ecosystem has several features that make it particularly valuable to a state seeking an
accountability system that will improve student achievement:
• It captures a broader range of richer assessment data closer to the classroom and learning activities; this is much preferred to using end-of-year standardized assessments to infer performance.
• It first uses those data to recommend supports for improvement, and can track efforts by students and educators to improve; this puts improvement first, and allows recognition of educators’ efforts in context of their educational task.
• It facilitates sharing of resources, including assessments and learning assets, across classrooms, schools, and districts, helping to level the playing field and close achievement gaps.
• It enables local choices to count—districts, schools, and individual teachers can use their own materials and make their own choices, which are benchmarked to a more global standard.
• It helps address the unintentional reinforcement of achievement gaps, which occurs when educators, students, and families have no way to “recalibrate” their understanding of quality of achievement performance.
Beyond the basic operations of the systems, the Learning Ecosystem project will provide the option of
capturing data about teacher work via the Recommendation Engines described above. To illustrate the
process with a negative example, if we saw a regular pattern of assessments for students that caused
recommendations for improvement of instruction that the teacher was either unable or unwilling to
implement or master, we would want that aspect of teacher learning to flag an instructional coach or
mentor for additional attention, and eventually this might become a focus of evaluation and remediation.
On the positive side of this type of system: teachers who are especially effective with particular patterns of students would be recognized as such, even if these were the most challenging students with whom to foster growth, and might eventually be encouraged to work with these students to close achievement gaps. The evaluation-system data captured about their students’ assessments, the recommendations, and the teachers’ successful changes in practice would value that teacher, instead of putting him/her “at risk” in a
career because the summative state standardized assessments indicate continued learning challenges
among the students.
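To make the flagging logic concrete, the following is a minimal sketch in Python, assuming a simple log of (teacher, recommendation, implemented) records; the threshold, identifiers, and function names are illustrative, not part of the LE specification.

```python
from collections import Counter

# Minimal sketch (hypothetical names and threshold): flag an instructional
# coach when a teacher repeatedly receives the same instructional
# recommendation without evidence of implementation.

UNIMPLEMENTED_THRESHOLD = 3  # assumed cutoff, not an LE-specified value

def coach_flags(recommendation_log):
    """recommendation_log: iterable of (teacher_id, rec_id, implemented) records."""
    unimplemented = Counter()
    for teacher_id, rec_id, implemented in recommendation_log:
        if not implemented:
            unimplemented[(teacher_id, rec_id)] += 1
    # Flag any teacher/recommendation pair that crosses the threshold.
    return [pair for pair, n in unimplemented.items()
            if n >= UNIMPLEMENTED_THRESHOLD]

# Teacher "t42" has not acted on the same recommendation three times.
log = [("t42", "fractions-visual-models", False)] * 3 + [("t7", "number-line", True)]
print(coach_flags(log))  # [('t42', 'fractions-visual-models')]
```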
Beyond the Phase I capabilities, we hope to work closely with a state in Phase II to develop two
predictive Progressions for Instructional Effectiveness (PIEs) that are based on the data captured by the
Learning Ecosystem closer to the classroom. The first of these PIEs would compare the progress of a
teacher's students to the broader group of similar learners, once the student learning progressions have
stabilized in patterns based on a large mass of student participants. Analysis of variations in students’
progress can also take into account teachers’ professional development efforts and changes to improve
classroom curriculum, instruction, and assessment practices. Student patterns of learning progression
over time that lag similar students would be considered low value-added (even detracting); student
patterns that improve over time, especially those that improve at faster rates than similar student groups,
would be considered positive value-added. We also believe that a second PIE model could be conceived
with the Learning Ecosystem data, where individual students are linked to the full group of educators with
whom they work. In this case, student progress over time provides an indicator of the effectiveness of the
learning community, and could provide insights into principal effectiveness, school policy, opportunities
to learn, and effectiveness of educators in content areas with less assessment evidence.
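As a rough illustration of the first PIE, the sketch below compares the average learning-progression gain of a teacher's students with that of a matched group of similar learners; the simple mean-difference statistic and the data shapes are assumptions for illustration, not the model the LE would actually fit.

```python
from statistics import mean

# Minimal sketch of the first PIE under stated assumptions: gains are
# levels advanced on a learning progression per term, and value-added is
# the simple difference between a teacher's students and similar learners.

def pie_value_added(teacher_gains, similar_learner_gains):
    """Positive result: the teacher's students progressed faster than similar peers."""
    return mean(teacher_gains) - mean(similar_learner_gains)

teacher_gains = [1.2, 1.0, 1.4, 1.0]   # this teacher's students
peer_gains = [0.8, 1.0, 0.9, 0.7]      # matched similar learners
print(round(pie_value_added(teacher_gains, peer_gains), 2))  # 0.3 -> positive value-added
```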
Digital Content and Educator Professional Development
The LE team will provide full support for the implementation and execution of the learning
progressions system and recommendation engine, which are the central components of the Learning
Ecosystem (LE). Other automated personal cognitive tutoring systems have failed in ease of use,
integration into normative practice, and appropriate support for building school capacity. Thus,
development for teachers related to data-driven instruction will be approached immediately through the LE project for in-service teachers, but also through the realignment of teacher and leader preparation
programs for pre-service teachers within higher education.
To achieve authentic instructional development, a mutual partnership between the College of Education
and the school district will be facilitated and studied for continuous improvement. A framework for future
partnerships will help inform national scale-up and the sustainability of the shift this product will bring to the current educational paradigm. Due to the complexity and interdisciplinary nature of the Learning
Ecosystem, each partner Institution of Higher Education (IHE) will be expected to enter the Learning
Ecosystem Consortium as a collaborative unit, bringing forward internal relationships that further the
project’s capacity and sustainability (e.g., education and computer science). Without such
interdisciplinary requirements on IHEs, the "siloing" effects of higher education will echo here and limit
the impact of the Learning Ecosystem. Recent studies, including the Department of Education's Study of the Impact of Professional Development on Middle School Mathematics and the inTASC study of the e-Learning for Educators Initiative, suggest important considerations for how professional development is delivered for middle-grades math. While online delivery of professional development has shown
promising results for student achievement, greater emphasis must be placed on the quality and design of
professional development for math teachers as well as a focus on instructional skills over content
knowledge for educators. To account for these considerations, the learning ecosystem will allow for
research on the efficacy of the professional development learning assets used by educators as well as
ongoing refinement of professional development as the project scales.
The initial implementation of the Learning Ecosystem will be studied not only for impact on professional
development, but also to allow for ongoing feedback to inform and improve the learning assets available to
educators. How teachers use the system, along with their needs and feedback, will be an essential data source for informing improved versions of the learning progressions, educator interface, recommendation engine, and learning assets for scale-up of the Learning Ecosystem. Mechanisms for feedback will be built into the LE itself through a feedback interface for students and educators to rate learning assets, data logs of asset and item use for analysis of system performance, and the development of a social network layer for online communities of practice to sustain continuous system improvement. Teacher effectiveness is the driving concept behind the harvesting and development of learning assets, and the effectiveness of a teacher's professional development will be assessed through his or her students' results. The learning
ecosystem uniquely links student learning progressions to teacher learning progressions to drive the
appropriate delivery of these learning assets. To offer content and professional development that leads to
improved student achievement, the first priority of the LE team will be to identify and harvest existing
high-quality content sources.
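A minimal sketch of the built-in feedback mechanisms might look like the following; the record fields, role labels, and event names are illustrative assumptions rather than the LE's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical schema for the two feedback channels described above:
# explicit ratings submitted through the feedback interface, and usage
# logs captured automatically for system-performance analysis.

@dataclass
class AssetRating:
    asset_id: str
    rater_id: str        # a student or an educator
    rater_role: str      # "student" or "educator"
    score: int           # e.g., a 1-5 scale (assumed)
    comment: str = ""

@dataclass
class UsageLogEntry:
    asset_id: str
    user_id: str
    event: str           # e.g., "opened", "completed", "abandoned" (assumed)
    timestamp: datetime = field(default_factory=datetime.utcnow)

def average_rating(ratings: List[AssetRating], asset_id: str) -> float:
    """Average community score for one learning asset."""
    scores = [r.score for r in ratings if r.asset_id == asset_id]
    return sum(scores) / len(scores) if scores else 0.0
```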
Organization, harvesting, and development of Learning Assets
Data analysis of previous years’ performance growth as well as a baseline assessment of teacher learning
and teaching styles will inform an immediate picture of professional development needs. Educator
professional development will also be recommended based on individual and aggregate results for student
outcomes as the system is populated with data to allow for finer levels of granularity. A third source of
data to inform professional development will come through collaborative partnerships with educators and coaches from the pilot partner school district.
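A minimal sketch of how these three sources might be combined into one picture of professional development need appears below; the growth-percentile cutoff, field names, and simple union rule are illustrative assumptions.

```python
# Hypothetical sketch: combine the three data sources into one picture of
# professional development need. The cutoff, field names, and union rule
# are illustrative assumptions.

def pd_needs_picture(prior_year_growth, baseline_profile, coach_flags):
    """prior_year_growth: dict of content area -> growth percentile (0-100).
    baseline_profile: set of need areas from the baseline teacher assessment.
    coach_flags: set of need areas raised by district coaches."""
    low_growth = {area for area, pct in prior_year_growth.items() if pct < 40}
    return sorted(low_growth | baseline_profile | coach_flags)

print(pd_needs_picture(
    {"fractions": 25, "geometry": 70},   # previous years' performance growth
    {"differentiation"},                 # baseline learning/teaching-style assessment
    {"assessment_use"},                  # coach partnership input
))  # ['assessment_use', 'differentiation', 'fractions']
```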
The underlying intention of any professional development learning asset is the improvement of student
learning outcomes. To support this goal, the learning ecosystem is designed to diagnose not only learner needs but teacher needs as well, supporting effective teaching and subsequent instructional interventions.
To improve learning, a teacher must be able to identify where the disconnect in understanding occurs for an individual student. To deliver instruction that differentiates for individual needs, differentiated professional development modules must be available alongside high-quality, reusable differentiated content. A unique contribution of the learning ecosystem is the linkage between teacher learning progressions and student learning progressions across the learning progressions framework.
Item maps linking content performance by modality will aid teachers in isolating the “disconnect” in the
learning process and allow for efficient and targeted differentiated instruction. Thus, assessment results report not only content proficiency but also modality data.
The Common Core State Standards for Mathematics and the National Assessment of Educational Progress (NAEP) framework for Mathematics will be used for the assessment and mapping of student learning outcomes. Individual learner maps will also show the relationships between items that were answered correctly and items that were answered incorrectly, in order to isolate related areas of concern.
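One way to picture such an item map is the following sketch, in which each item carries both a content-strand tag and a modality tag so results can be rolled up either way; the tag names and data shape are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical item map: each assessment item carries a content-strand tag
# and a modality tag, so results roll up both ways to locate the disconnect.

def proficiency_by(items, key):
    """items: list of dicts with 'strand', 'modality', and 'correct' keys."""
    totals, correct = defaultdict(int), defaultdict(int)
    for item in items:
        totals[item[key]] += 1
        correct[item[key]] += item["correct"]
    return {k: correct[k] / totals[k] for k in totals}

results = [
    {"strand": "fractions", "modality": "visual",   "correct": 1},
    {"strand": "fractions", "modality": "symbolic", "correct": 0},
    {"strand": "fractions", "modality": "symbolic", "correct": 0},
    {"strand": "geometry",  "modality": "visual",   "correct": 1},
]
print(proficiency_by(results, "strand"))    # fractions look weak overall...
print(proficiency_by(results, "modality"))  # ...but the symbolic modality is the disconnect
```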
Learning Assets Rating & Refinement
Embedded in the overall workflow of the LE, from acquiring content and related metadata through management and delivery, the annotator/data curator and the community of learners will play a key role by providing feedback and assigning scores to learning assets based on a number of variables that address the alignment issue.
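As a hypothetical illustration of such scoring, the sketch below combines curator and community feedback on a few alignment variables into a single weighted asset score; the variables and weights are assumptions, not the LE's actual rubric.

```python
# Hypothetical scoring rubric: curator and community feedback on several
# alignment variables, combined into one weighted asset score. The
# variables and weights are illustrative assumptions.

ALIGNMENT_WEIGHTS = {
    "standards_alignment": 0.4,  # fit to the mapped learning progression
    "modality_fit": 0.3,         # fit to the tagged learning modality
    "community_rating": 0.3,     # averaged educator/student ratings
}

def asset_score(ratings):
    """ratings: dict of variable -> value on a common 0-1 scale."""
    return sum(ALIGNMENT_WEIGHTS[v] * ratings.get(v, 0.0) for v in ALIGNMENT_WEIGHTS)

print(round(asset_score({"standards_alignment": 0.9,
                         "modality_fit": 0.7,
                         "community_rating": 0.8}), 2))  # 0.81
```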
Professional development for using assessments and the Learning Ecosystem
Development of a strategic plan for assessment of professional development in schools and districts for
rollout in partnership with teachers, principals, psychologists and service professionals will include a
communication plan to support a feedback loop. We will collaborate with district coaches to develop
professional development modules focused on the topic of assessment to facilitate use of the assessment
system and learning assets. The team will coordinate with the teacher learning progressions team to develop a framework for assessments that ties to the natural order of magnitude of the learning ecosystem implementation. Ongoing
observation of the implementation of the learning ecosystem will inform improvement of assessment
learning assets to ensure ease of scalability to other districts.
Research methods faculty will lead teams of curriculum coaches from the school district as well as math
teachers to develop a framework for assessment professional development. The LE team will coordinate
and oversee the following:
• Assessment professional development modules will be constructed by the team with the
assistance of the LE partners and design specialists to ensure quality construction of
items.
• Assessment professional development modules will be tagged with appropriate metadata and validated in the repository (a minimal tagging sketch follows this list).
• Train-the-trainer professional development workshops will be created with high-definition video capture. Assessment professional development assets will be available in the following format types: video, audio, written materials, and example curricular plans, to meet the different needs of educators.
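The following minimal sketch illustrates what such metadata tagging and repository validation could look like; the required fields and the validation rule are illustrative assumptions.

```python
# Hypothetical metadata for an assessment professional-development module;
# the required fields and the validation rule are illustrative assumptions.

REQUIRED_FIELDS = {"title", "content_area", "grade_band", "format", "standards"}

def validate_module_metadata(metadata: dict) -> list:
    """Return the missing required fields (empty list means valid)."""
    return sorted(REQUIRED_FIELDS - metadata.keys())

module = {
    "title": "Interpreting Item Maps by Modality",
    "content_area": "mathematics",
    "grade_band": "6-8",
    "format": "video",
    "standards": ["CCSS.MATH.6.RP"],  # illustrative tag, not a vetted alignment
}
assert validate_module_metadata(module) == []  # module is ready for the repository
```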
Evaluation of the efficacy of assessment and LE use by teachers, principals, psychologists, and service professionals will continue through communities of practice. The LE team will collaborate with the whole child and community team and the learning progressions and performance team to evaluate effective application
of assessment in the classroom through qualitative evaluation of teacher and student impact and
quantitative analysis of teacher effectiveness and learning outcomes, respectively.
Digital Assets for professional development
In collaboration with the LE team, we will develop a framework for evaluation of learning assets so that they
align appropriately to learning and teaching modalities and cognitive styles. Specifically, math
instructional assets will be categorized according to the following to facilitate targeting of process points
for intervention: conceptual/theoretical, mechanical, procedural, application, and representation. These
teacher targets for intervention align with tags for student math competence: 1) declarative knowledge, or
math facts; 2) procedural knowledge, or goal-directed processes; 3) conceptual knowledge, or an
extensive network of concepts (e.g., ordinality, cardinality) for problem solving; 4) estimation skills; and
5) the ability to graphically depict and model mathematical relationships and outcomes (Bisanz, Sherman,
Rasmussen, & Ho, 2005; Byrnes, 2001). Along these lines, we will develop professional development frameworks for training in the learning ecosystem by content area and identify areas of need for learning assets to support educator professional development.
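To illustrate how these student competence tags might be routed to the teacher intervention categories above, the following sketch uses a simple lookup table; the specific pairings are illustrative assumptions, not an empirically validated mapping.

```python
# Hypothetical lookup from the student math-competence tags to the teacher
# intervention categories named above; the pairings are illustrative
# assumptions, not an empirically validated mapping.

COMPETENCE_TO_INTERVENTION = {
    "declarative_knowledge": ["mechanical"],
    "procedural_knowledge": ["procedural", "mechanical"],
    "conceptual_knowledge": ["conceptual/theoretical"],
    "estimation_skills": ["application"],
    "graphical_modeling": ["representation", "application"],
}

def intervention_targets(student_needs):
    """student_needs: iterable of competence tags flagged for a student."""
    targets = set()
    for need in student_needs:
        targets.update(COMPETENCE_TO_INTERVENTION.get(need, []))
    return sorted(targets)

print(intervention_targets(["procedural_knowledge", "graphical_modeling"]))
# ['application', 'mechanical', 'procedural', 'representation']
```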
In recent years, efforts to generate online professional development for STEM teachers have emerged
with promising results. Thus, the LE team will approach the harvesting and creation of professional
development learning assets in parallel with the approach used for student learning assets. A three-tier review team will ensure high-quality initial selection and development of assets.
• Teams of district specialists, teachers, and educator preparation faculty will develop a framework for professional development needs and assets.
• Vetting of professional development learning assets will take place through the three-tier review
process as well as through communities of practice and asset rating.
• Areas of need for learning assets will be identified and pedagogical experts will design learning
assets for professional development centered on differentiation techniques for multiple
modalities.
• These learning assets will be available in multiple formats for accessibility and for effective
teacher learning, including (but not limited to), video, audio, and portable documents.
Professional development sessions will be evaluated with the LE team through qualitative inquiry with
educators and students to understand impact on teaching and learning. Professional development learning
assets will also be evaluated through quantitative analysis of teacher and student learner progressions. The
learning assets will also have an online UI that allows for rating of the learning objects; this UI conforms to universal standards for learning object (LO) ratings and feeds ratings back to the system for asset improvement.
The Learning Ecosystem is as much about teacher learning as it is about student learning. To that end the
learning assets in the Learning Ecosystem include content-based and instructional professional
development assets for teachers. The teacher learning assets will be organized and categorized in correlation with student learning assets. One of the unique strengths and potential impacts of the Learning Ecosystem
is the ability to connect student learner needs with teacher professional development through the
recommendation engine known as the cognitive tutor.
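As a closing illustration of this linkage, the sketch below aggregates the flagged needs of a teacher's students and surfaces the professional development assets whose tags address the most common needs; asset identifiers, tags, and the scoring rule are illustrative assumptions, not the cognitive tutor's actual algorithm.

```python
from collections import Counter

# Hypothetical sketch of the linkage: aggregate the flagged needs of a
# teacher's students, then surface the professional-development assets
# whose tags address the most common needs. Asset identifiers, tags, and
# the scoring rule are illustrative assumptions.

def recommend_pd(classroom_needs, pd_assets, top_n=2):
    """classroom_needs: list of need tags across a teacher's students.
    pd_assets: dict of asset_id -> set of need tags the asset addresses."""
    need_counts = Counter(classroom_needs)
    scored = {aid: sum(need_counts[tag] for tag in tags)
              for aid, tags in pd_assets.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

needs = ["fractions_conceptual", "fractions_conceptual", "graphing"]
assets = {
    "pd-101-fraction-models": {"fractions_conceptual"},
    "pd-205-graphing-tools": {"graphing"},
    "pd-310-assessment-design": {"assessment"},
}
print(recommend_pd(needs, assets))  # ['pd-101-fraction-models', 'pd-205-graphing-tools']
```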
References
Bisanz, J., Sherman, J. L., Rasmussen, C., & Ho, E. (2005). Development of arithmetic skills and knowledge in preschool children. In J. Campbell (Ed.), The handbook of mathematical cognition (pp. 143–162). New York: Psychology Press.
Brusilovsky, P., & Millan, E. (2007). User models for adaptive hypermedia and adaptive educational systems. The Adaptive Web, Vol. 4321, pp. 3–53.
Byrnes, J. P. (2001). Minds, brains, and education: Understanding the psychological and educational
relevance of neuroscientific research. New York: Guilford.
Fox, L., Dunlap, G., Hemmeter, M. L., Joseph, G., & Strain, P. (2003). The Teaching Pyramid: A model for supporting social competence and preventing challenging behavior in young children. Young Children, 58(4), 48–53.
Fuchs, D., & Fuchs, L. (2005). Responsiveness to intervention: A blueprint for practitioners, policymakers, and parents. TEACHING Exceptional Children, 38(1), 57–61.
Gresham, F. M. (2004). Current status and future directions of school-based behavioral interventions. School Psychology Review, 33(3), 326–343.
Hess, K. (2008). Developing and using learning progressions as a schema for measuring progress. Paper presented at the 2008 CCSSO Student Assessment Conference, Orlando, FL. [online] available: http://www.nciea.org/publications/CCSSO2_KH08.pdf
Hess, K. K. (Ed.). (2010, December). Learning Progressions Frameworks Designed for Use with the Common Core State Standards in Mathematics K-12. National Alternate Assessment Center at the University of Kentucky and the National Center for the Improvement of Educational Assessment, Dover, NH (updated v.2).
Interstate Teacher Assessment and Support Consortium (InTASC). (2010, July). Model Core Teaching Standards: A Resource for State Dialogue. Council of Chief State School Officers, available online at: http://www.ccsso.org
Melville, P., Mooney, R. J., & Nagarajan, R. (2002). Content-Boosted Collaborative Filtering for
Improved Recommendations. Paper presented at the 18th National Conference on Artificial Intelligence,
Edmonton, Canada.
NRC. (2007). Duschl, R., Schweingruber, H., & Shouse, A. (Eds.), Board on Science Education, Center for Education, and Division of Behavioral and Social Sciences and Education. Taking Science to School: Learning and Teaching Science in Grades K-8. Washington, DC: The National Academies Press.
Popham, W. J. (2007). The lowdown on learning progressions. Educational Leadership, 64(7), 83–84.
Vandewaetere, M., Desmet, P., & Clarebout, G. (2011). Review: The contribution of learner characteristics in the development of computer-based adaptive learning environments. Computers in Human Behavior, 27(1), 118–130.
Wiggins, G. & McTighe, J. (2001). Understanding by Design. Alexandria, VA: Association for
Supervision and Curriculum Development.