Unit 5C


Part 4

Assessing Skills

There are two aspects to this:

• How do we know how well skills have been developed?

• Against what measures can skills be assessed?


So, how do we assess skills? We suggested earlier that the best way was to observe them being deployed.

Prof Laura Greenstein of the University of Connecticut advocates what she calls ‘authentic learning’, which provides contexts within which assessments of ‘mastery’ can take place.

Greenstein, L. (2012) Assessing 21st Century Skills: A Guide to Evaluating Mastery and Authentic Learning. Corwin Books. http://books.google.co.uk/books?hl=en&lr=&id=uHWu_pPEPiUC&oi=fnd&pg=PR1&dq=assessing+21st+century+skills+laura+greenstein&ots=u2BGDqqkor&sig=yvSYadAxTsWMZpDOPn_kwrTn9M#v=onepage&q=assessing%2021st%20century%20skills%20laura%20greenstein&f=false


‘Mastery Learning’ is often linked to a ‘competency’ approach, and Brian Male suggested this was “the ability to apply knowledge with confidence in a range of situations”. This implies the use of skills to apply knowledge, so the higher (or deeper) levels of knowledge are seen as skills-based (more on this in the next part).

Laura Greenstein sees ‘authentic learning’ as being located in a real or realistic setting, so that learning is not just abstract and theoretical but meaningful to the learner in their own context.

These settings then become the contexts within which skills can be deployed and so assessed. Without an authentic setting, the assessment is less valid.

Authentic learning and ‘authentic assessment’ are part of a world-wide movement.


Sheila Valencia is Professor of Education at the University of Washington.

Her 1993 book ‘Authentic Reading Assessment’ is interesting in that one might have thought that reading is always located in its own setting anyway. It is skills like problem solving that might vary greatly from setting to setting, and when you really have to solve a problem in real life, then it might be much easier (or harder!) than in the classroom setting.

However, in Unit 3 we looked at E.D. Hirsch’s research that showed that reading skills are, indeed, contextually related. So ‘authentic’ learning and assessment are really important.


The key point here is that if we want to assess skills – be they subject skills or more generic ‘21st Century’ skills – then the best approach is to observe those skills being deployed. The more authentic the situation in which they are deployed, the more valid the assessment is going to be.

Laura Greenstein’s point is that if skills are learned in an authentic setting, then they can be deployed in authentic settings.

This is not just an assessment point. If we want our students to be able to apply their skills in real life, then we need to make our learning contexts as close as possible to those real-life situations. Hence ‘authentic’ learning and assessment.


This still leaves us with the issue of what we are measuring skill performance against. How do we know how good the performance is when we see it?

There are two separate approaches here:

• A skills ladder

• Contextual performance

Of course, you will already have set learning objectives in your planning and these will be your key assessment criteria.


Did you notice this ladder by the way? What do you think it is based on?

Doesn’t it look a bit like Bloom?

You will be familiar with this approach. It takes a particular skill and imagines what the progressive levels might be. This then acts as a rubric or marking scheme, and so adds some structure and reliability to what would otherwise be a subjective judgment.
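As a minimal sketch of the idea (in Python, with an invented skill and invented level descriptors – not an official scheme), such a ladder might be represented and used like this:

# A skills ladder for 'investigation', used as a rubric.
# The levels and descriptors below are illustrative assumptions.
INVESTIGATION_LADDER = {
    1: "Carries out an investigation with step-by-step support",
    2: "Sets up an investigation and records what happens",
    3: "Controls relevant variables and explains the method",
    4: "Draws conclusions and evaluates the reliability of the evidence",
}

def record_judgement(level: int, evidence: str) -> str:
    """Attach observed evidence to a best-fit level on the ladder.
    The judgement itself is still subjective; the ladder adds structure."""
    return f"Level {level}: {INVESTIGATION_LADDER[level]} (evidence: {evidence})"

print(record_judgement(3, "controlled slope and surface in the ramp task"))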

The issue is the extent to which we get these levels right: whether there really are distinct levels, and whether skills can be considered (or even exist) outside the context in which they are deployed.


Do you remember E.D. Hirsch from Unit 2?

He suggested that skills are always contextually related and that it is impossible to create a skills ladder that does not take account of the context in which the skill is deployed.

For example, the way you carry out an investigation in science is different from the way you investigate in history. Yet investigation is a skill. The ability to think critically (another skill) depends upon having sufficient knowledge of the subject you are thinking about.

Brian Male (2013) has argued that there is seldom a need for a skills ladder, because the increasingly complex knowledge context provides the necessary progression.


In this approach, the skill is seen as staying essentially the same, but the context in which it is deployed becomes increasingly complex. For example, a young child can carry out an investigation of rolling cars down a slope, and can control variables such as slope and surface.

Increasingly, they will be able to carry out more complex investigations (possibly ending with the Large Hadron Collider!). The skill of investigation has stayed the same (setting things up, controlling variables, drawing conclusions etc). What has changed is the level of complexity of the context in which the skill is deployed.

Hirsch would argue that this applies to all skills. His research showed that even reading skills are related to the learner’s knowledge of the subject being read.


As with the three approaches to assessment, these two approaches are not being put forward to say that one is right and one wrong (although, intellectually, they do seem rather mutually exclusive!).

Skills ladders can be very helpful – so long as we remember that skills do not necessarily develop in such an orderly and hierarchical way.

It is also useful to consider the complexity of the context as the criterion of progression.

What is essential is to provide an authentic context in which the skills can be developed, and in which they can be assessed.

But skills are not the only aspect of development in which we are interested.


Part 5

A Balanced Scorecard


Adapted from Robert S. Kaplan and David P. Norton, “Using the Balanced Scorecard as a Strategic Management System,” Harvard Business Review (January–February 1996): 76.

The notion of a ‘balanced scorecard’ was developed for industry by Robert Kaplan and David Norton of Harvard Business School – but it has application for schools. It is an attempt to take account of complexity and avoid a unidimensional approach.


The detail of the Harvard model is not important. What is significant is taking account of a wide range of those aspects of development in which we are interested, such as:

• Knowledge

• Understanding

• Skills

• Attitudes

• Values

• Personal development

These are sometimes represented on a ‘star diagram’.


Each blue line here represents a different facet of development: skills, independence, reading ability… whatever you are interested in.

The red lines indicate how well this learner does on each aspect. The better they do, the further out is the mark.

So we end up with a shaped profile and can see at a glance the learner’s strengths and weaknesses.
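If you want to picture how such a profile is drawn, here is a minimal sketch in Python (assuming numpy and matplotlib are available; the facets and scores are invented for illustration):

import numpy as np
import matplotlib.pyplot as plt

# Facets of development (the blue lines) and one learner's ratings (the red line).
facets = ["Knowledge", "Understanding", "Skills",
          "Attitudes", "Values", "Personal development"]
scores = [4, 3, 5, 2, 4, 3]  # hypothetical ratings on a 1-5 scale

# Spread the facets evenly around a circle and close the polygon.
angles = np.linspace(0, 2 * np.pi, len(facets), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, color="red")             # the profile outline
ax.fill(angles, values, color="red", alpha=0.2)  # shade the profile
ax.set_xticks(angles[:-1])
ax.set_xticklabels(facets)
ax.set_ylim(0, 5)
plt.show()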

Of course, we had to go through all the processes of assessment to get to this stage.


Whatever approach you take, it is important to assess the things we value, and not just those things that are easy to assess.

There was a sentence in an Australian Government White Paper on Education in 2010 which said, ‘we must avoid the mistake of England, which is to over-test against too narrow a range of measures’.

We must be confident in the value of teacher assessments over tests. We must remember how effective our use of formative assessment can be. And we must remember all those aspects of development that we really value.


Part 6

References and Answers


Do we always measure something in assessment? It depends what you mean by ‘measure’! If we are trying to find out what a pupil has learned, then we do not necessarily need to measure in the sense of attributing a value or a number.

For example, we might want to find out whether a pupil knows the date of the Battle of Hastings. They either do or don’t know it – so we are not really measuring; we are just finding out. We are ‘ascertaining’ whether they know it or not.

This works for simple elements of knowledge. But when we come to more complex understandings, we might want to find out to what extent a pupil understands something, or how well they can do something.

This implies a measure or the attribution of a value – and the whole thing gets much more complex!
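A minimal sketch of that distinction (the function names and the rubric scale are invented for illustration):

def ascertain_fact(answer: str) -> bool:
    # Either the pupil knows the date of the Battle of Hastings or not -
    # no value is attributed; we simply find out.
    return answer.strip() == "1066"

def measure_understanding(rubric_points: int, max_points: int) -> float:
    # Attributing a value to 'how well' - this is where it gets complex.
    return rubric_points / max_points

print(ascertain_fact("1066"))        # True - no measuring involved
print(measure_understanding(7, 10))  # 0.7 on an assumed rubric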


There seemed to be sixteen different types of assessment. Did you get them all? So, what do they all mean? In reality, there are five different ways of grouping the types: according to purpose, type of questions, the learning being assessed, referencing, and setting. Here we go with the definitions:

Purpose: there are different reasons for carrying out assessments:

• Initial is to establish a baseline or starting point

• Formative is to check how learning is progressing to make adjustments to the course or to teaching as you go along

• Summative is to get a final picture of learning

Types of questions: there are two key types:

• Objective usually involves a single answer or response that does not require interpretation by the assessor

• Subjective is the opposite – requiring some judgement on the part of the assessor. This can be minimised by mark schemes and rubrics but an element of subjectivity will remain.


The learning being assessed falls into two broad categories:

• Standards-based refers to prescribed expectations in terms of knowledge and understanding

• Performance-based refers to the ability to perform a task or apply the knowledge and understanding

Assessments can be referenced, or compared, in different ways.

• Norm-referencing means comparing an individual’s performance to a group

• Criterion-referencing means comparing it to a set of given criteria

• Ipsative referencing is a comparison to the same individual’s previous performance
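To make the contrast concrete, here is a minimal sketch in Python (the scores, cohort, and pass mark are invented for illustration):

from statistics import mean, pstdev

def norm_referenced(score: float, cohort: list[float]) -> float:
    # Compare an individual's score with the group (here, as a z-score).
    return (score - mean(cohort)) / pstdev(cohort)

def criterion_referenced(score: float, pass_mark: float) -> bool:
    # Compare the score with a fixed criterion, regardless of the group.
    return score >= pass_mark

def ipsative(score: float, previous_scores: list[float]) -> float:
    # Compare the score with the same individual's previous best.
    return score - max(previous_scores)

cohort = [42.0, 55.0, 61.0, 48.0, 70.0]
print(norm_referenced(61.0, cohort))     # above or below the group mean
print(criterion_referenced(61.0, 60.0))  # True: the criterion is met
print(ipsative(61.0, [50.0, 58.0]))      # +3.0 on previous best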

Settings can be formal or informal – and they mean exactly what they say!


Here are the references for Dylan Wiliam:

The case for formative assessment

The evidence that formative assessment is a powerful lever for improving outcomes for learners has been steadily accumulating over the last quarter of a century. Over that time, at least 15 substantial reviews of research, synthesizing several thousand research studies, have documented the impact of classroom assessment practices on students:

(Fuchs & Fuchs, 1986; Natriello, 1987; Crooks, 1988; Bangert-Drowns, Kulik, Kulik & Morgan, 1991; Dempster, 1991, 1992; Elshout-Mohr, 1994; Kluger & DeNisi, 1996; Black & Wiliam, 1998; Nyquist, 2003; Brookhart, 2004; Allal & Lopez, 2005; Köller, 2005; Brookhart, 2007; Wiliam, 2007; Hattie & Timperley, 2007; Shute, 2008).


So, that’s it – which means it must be homework time.

This time you might like to go back to the unit that you planned and give consideration to the assessments that you will carry out.

We are usually very precise in the learning objectives we set for lessons, yet tend to be less precise about the objectives we set for longer pieces of work. Yet it is over the longer pieces that students make discernible progress, and assessment is much more meaningful.

We need to get away from WALT (We All Learned Today) and into WILITU (What I Learned In This Unit).

Happy assessing!

