Keynote speech by Lindsay Paterson, Professor of Educational Policy, University of Edinburgh

ASSESSMENT AND CURRICULUM FOR EXCELLENCE

[(1) Introduction]

My intention today is to ask some questions about the role of assessment in encouraging sound learning, with particular attention to Curriculum for Excellence.

I’ll be dealing with the actual proposals made for the National Qualifications, but I’ll also deal with the role of assessment more generally.

Two general principles seem to be at the heart of Curriculum for Excellence:

the importance of students’ believing in themselves, and the importance of student motivation.

Students have to believe in their capacities if they are to learn:

if you set out expecting failure, according to this view, then you will fail, and indeed you will lack even the motive to try.

And to be motivated to try to learn, students have to see that the task of learning has some point.

So my purpose today is to consider the implications of these two points for assessment:

what does assessment have to do to induce students to believe in themselves?

and what might assessment do to persuade students to learn anything worthwhile at all?

[(2) Reality checks]

The most basic point to make is that proper self-belief depends on the student’s passing frequent reality checks, and that successfully doing so is a most potent source of motivation.

Thus the judicious use of reality checks aids motivation by underpinning students’ self-belief, because they see that they have accomplished something truly significant.

But for this to happen, these reality checks have to be set at an appropriate level of difficulty:

they have to stretch the most accomplished to pull them onto the next level,

and those learners who have not reached the level to which they are aiming need to be challenged in ways that show them why they fall short and how they may try again.

This requires two things above all.


It requires, first, the matching of assessment to students’ capabilities:

if the purpose is to show what has been understood in order to lead a student forward by pointing constructively towards what has not been understood, then the assessment has to be finely chosen:

it must not be so easy that anyone vaguely near the right level can pass;

thus assessment must differentiate, a point to which I’ll return several times.

But assessment must also not be so difficult that it prevents reliable measurement of the current outer limits of a student’s understanding:

measuring where a student currently is requires measurement both of what she can do as well as of what she cannot.

This first principle might thus be called the matching of assessment to current understanding.


This also then requires, second, teachers who have expert understanding of the structure of knowledge, by which I mean the disciplines that have been built up and refined over the centuries, and that represent the embodiment of the best that the best minds and the best practitioners have thought, said and done.

Teachers with this disciplinary expertise are able to guide students through understanding successive zones of proximal development, not merely encourage students to set their own goals, and assessment is the means by which they do this.

We can call this second principle the grounding of assessment in the progressive structure of a discipline.

So the question is whether we can see a way in which these two principles might be consistent with what seem to be emerging as the roles for assessment in Curriculum for Excellence.

[(3) Levels of Assessment]

So far as the matching of assessment to students’ levels of understanding is concerned, there are such serious concerns about the proposed new National Qualifications as to render very dubious indeed the claims that they are an improvement on what we currently have or even that they are in any sense consistent with what Curriculum for Excellence seems to need.

The problem in practice is that the neat matching of tests to student ability that the designers of assessment might seek to achieve may well disintegrate in context if schools do not follow the centrally defined rules.

And the reason to fear that that will happen is the well-entrenched aspect of Scottish educational culture which has been called the tendency to over-presentation – to present students at the highest level at which they have even a small chance of succeeding, rather than at the level that the designers of the assessment have attuned to their current understanding.

That has been going on for a century, and so is not going to stop now.

 [The evidence on the operation of the Higher Still courses comes from research by Raffe, Howieson and Croxford.]

Consider the weight of evidence, working backwards in time:

We find it in the current system:

Of those students in local authority schools whose average Standard Grade attainment is General, one third take Highers in S5, which is one level beyond the level that is supposed to follow from General.

Of those whose average Standard Grade attainment is Foundation, over one third take Intermediate 2 courses or Highers, one or two levels beyond where they are meant to be.

The same continues into S6:

of those whose modal level of study in S5 is Intermediate 1, one quarter take Higher or Advanced Higher in S6, again at least one level beyond where they are meant to be.

The same phenomenon was evident at Standard Grade:

whereas the intention of Standard Grade was that the proportion who would gain Credit awards would be no more than about one fifth, currently over one third have average Standard Grade attainment at Credit.

Over-presentation at O Grade was the reason why we have Standard Grade in the first place:

by 1976, two thirds of S4 students were achieving at least one O grade, whereas the original intention was that it would be suitable for the top third only.

This history of schools’ presenting students at the highest feasible level is, more fundamentally, a consequence of schools’ role in social selection,

having to grade people in as finely distinguished a manner as possible in order to aid their recruitment into subsequent educational courses or into jobs:

indeed, in a system of comprehensive secondary schooling, the main sifting role is performed by terminal examinations.

It is no help to anyone – able students included – to have a clustering of attainment at the top end of any scale of measurement:

people do need to be stretched,

and assessment does have to differentiate.

We may infer two things about assessment and motivation from this long-standing Scottish predilection for presenting students at a level that will stretch them, even if that risks failure:

[(3.1) Over-presentation will continue]

One is that it is highly unlikely to go away now, and so it is as safe as any prediction can ever be in social science to expect that there will be no neat matching of National levels to prior attainment or onward to courses in S5:

many students who ought to be taking National 4 courses in S4 will do National 5 courses, and many who have only National 4 attainment in S4 will do a Higher in S5.

The situation may even be worse in this respect than in the current system insofar as the merging of Intermediates with Standard Grades removes one element of flexibility in S5:

whereas at present a student who just scrapes a Credit in Standard Grade might take an Intermediate 2 in S5 rather than go straight to a Higher, in the new system there will be nowhere to go after a National 5 course in S4 other than Higher.

If over-presentation does persist in these various ways, then any hope we have of using assessment to encourage motivation in carefully targeted ways will be futile.

[(3.2) Safety nets]

The second inference is the obverse of the risk-taking that is entailed by presenting students at the highest feasible level: the insistence on not by-passing any safety net that is on offer:

so, just as certain as presentation at ambitious levels is that there will be almost no by-passing of National 5 courses by able students en route to a Higher in S5; over-presentation thus paradoxically entails simultaneous under-presentation.

Moreover, the safety net sought will tend to be National 5, not National 4, creating further complications.

The reason is that the National 4 courses are to be un-graded and internally assessed, and hence will have lower status than the National 5 courses.

The tendency in this situation will therefore be to encourage presentation at the higher-status National 5 level even of very borderline candidates.

Moreover, despite this, the new structures will remove the safety nets of overlapping levels that have been at the heart of Standard Grade.

And so, oddly enough, a tendency that arises in order to secure a safety net associated with over-presentation might end up providing a less secure net than the General level of Standard Grade currently provides.

So, interfering with the neat designs, and hence with the capacity to use assessment to encourage learning, we will have over-presentation, because students will sit the highest feasible level at which their school judges them to have even a small chance of success.

We will have under-presentation, because students will simultaneously sit lower levels than they are capable of, in search of a safety net.

And we will have over-presentation within the under-presentation, because National 4 assessment will have lower status than National 5 assessment, or than the General level of Standard Grade.

And for all these reasons, there are very serious doubts as to whether the proposed new structure of National Qualifications can do much to improve students’ learning:

they are, so far as I can see from the very limited information we have been granted, not only inconsistent with the goals of Curriculum for Excellence to improve motivation and self-belief, but also entirely inferior to what we now have.

[(4) Literacy and Numeracy Tests]

There is a further concern, too, about the proposed new arrangements for National Qualifications, a concern about the proposed assessment of numeracy and literacy.

Here the concern is not so much with these tests themselves as with the vagueness of the Experiences and Outcomes of Curriculum for Excellence so far as all kinds of necessary technical competence are concerned.

In fact, far from being intrinsically a problem – despite the apparently widespread belief expressed in much public criticism of them – in one important sense the tests of numeracy and literacy are welcome.

They do recognise the importance of those detailed, technical, craft-like skills which underpin valid knowledge and the valid use of knowledge.

And – despite the criticism – these proposals seem to understand that a well-designed assessment can encourage students to learn:

adapting a cliché of this debate, if we know that our pig is to be weighed then we will do all we can to fatten it up.

It is also very welcome that the tests of literacy and numeracy are now to be absorbed into the disciplines where they belong, English and mathematics, although the role of portfolios and the other non-examination-based features of them still raises many unanswered questions about the validity of the assessments.

But there is still a deeper problem.

The tests, being only two in number, tend to discourage an understanding that all disciplines require distinctive technical skills:

the skills of the laboratory or other kinds of data-gathering;

the skills of deploying the body and the voice in drama;

the physical skills of sport;

the practical skills of art or engineering or cookery.

Let me explain this point by illustrating how the Experiences and Outcomes fail to allow appropriate importance to these crucial technical matters.


Applied projects without the technical underpinnings of the theory and of the practical context will be amateurish;

for example, under the topic ‘inheritance’ in ‘Biological Systems’ at level 4, one outcome is:

‘Through investigation, I can compare and contrast how different organisms grow and develop.’

Comparing and contrasting are not techniques in themselves:

what matters is valid comparison, and the gathering of valid evidence, and it is these specific technical skills that need to be assessed, not merely investigating and comparing, terms which could be as readily used of the Victorian gentleman amateur as of the modern professional scientist.


Ethical debates without clear and specific criteria of objectivity and evidence will be impossible to assess reliably;

for example, at level 4 of ‘People, Society and Business’, one outcome is

‘I can critically analyse the relative importance of the contribution of individuals or groups in bringing about change in a significant political event’, to which the response is: why only bringing about change? Why not resisting change? What do we mean by ‘critically analyse’? What does ‘significant’ mean?


Aims that, with the best of intentions, seek to get beyond the merely routine will skate over the surface unless the contributing expertise has been properly grounded;

for example, in Drama we are told that learners will ‘have rich opportunities to be creative and to experience inspiration and enjoyment’.

How do we assess learners’ experiences in any objective way?

If someone tells us that they have been inspired and have enjoyed themselves, isn’t that just the end of the story, regardless of what anyone else might judge?

Isn’t what is lacking here some objective criteria of performance and of aesthetic quality?

Why is this outcome identical for drama, dance, music and art and design? Doesn’t that raise the suspicion that the thinking has not been precise enough?

Most worthwhile learning has something analogous to the skills of literacy and numeracy, and although these two might be the most fundamental, those others need attention through assessment as well.

The Experiences and Outcomes pay inadequate attention to such skills, even in relation to literacy and numeracy:

thus we find almost no attention to grammar, and no clarity as to where the basic manipulative skills of arithmetic are to be learnt.

So, although the proposed assessment of literacy and numeracy is welcome for motivation because it does direct attention to technical details, it is far from being enough in the wider context of Curriculum for Excellence’s vagueness on technical skills in general.

[(6) Disciplines]

I suggested earlier that there are two important principles which assessment has to meet if it is to encourage motivation and self-belief in an educationally worthwhile way:

it has to be matched to the current understanding of the learner, which we have just seen is unlikely to happen in the proposed new systems;

and it has to be embedded in the progressive structure of a discipline, and it is to that which I turn now.

Indeed, the points just made about necessary technical skills are a preliminary to this.

Studying, practising and mastering technical details force students to pay attention to what is specific to each discipline, and are thus the necessary first steps to grasping disciplinary coherence.

Of course, much controversy has been generated by the place of the subject disciplines in Curriculum for Excellence, and when it has been claimed that it does not respect their integrity and importance, this has been strenuously denied by those in charge of the reform and by the advocates of the reform.

I don’t intend to return to this controversy today, but I do want to ask some questions about the relationship between assessment and the disciplines.

[(6.1) Assessment and the structure of disciplinary knowledge]

The first point is that meaningful assessment presupposes some structure of disciplinary knowledge:

unless tests are to be merely of self-contained small bits of knowledge, they are bound to relate to a wider structure of thought.

Take some examples:


Why do we test students on their knowledge of quadratic equations?

It’s not because these are like Sudoku puzzles, sufficient in themselves and pointing to nothing beyond themselves.

It’s because quadratics relate in several ways to more general principles: to the properties of all the higher-order polynomials, to the properties of graphs, to the workings of calculus.

And these in turn lead to the highest reaches of the mathematical discipline, to measure spaces and topology and functional analysis.

In other words, quadratic equations are propaedeutic, a way of starting on important paths that have no intrinsic limit even if most students will choose not to go very far along them.

Worthwhile assessment of a student’s knowledge of quadratics will therefore have to make sure that these principles are laid down.


Why do we ask students to prepare a folio of reading and writing about their reading?

It’s not as an exercise in taking part in a book-reading group, however enjoyable these might be.

It’s because the reading we do in our teenage years lays down the beginnings of an understanding of the techniques that imaginative writers deploy, of the genres in which they deploy them, and of the range of human dilemmas on which they exercise their powers.

These forms of understanding make full sense only in the context of a canon of defining works that display the language at the height of its expressive powers, and that provide the insights into the human condition of some of the finest minds that have thought about it.

Few students will follow these first glimpses right to the end, but the glimpses are not mere random flashes:

to be certificated as being literate requires that a student shows some understanding of what the language, at its best, is capable of.


Why do we ask students to develop some understanding of the facts of the natural world and of the theories that link these facts together?

It’s not a sort of pub quiz about animals, plants, elements or forces.

It’s because science is not only one of the supreme and beautiful intellectual accomplishments but also because it is uniquely powerful in explaining and manipulating the universe in which we live.

Understanding what electricity is does not merely offer opportunities for fun: the tests we make of whether students have understood the fun, as opposed to merely appreciating it as entertainment, must point to what electricity is an instance of:

the movement of electrons;

the properties of those classes of substance that we call conductors;

the power that such understanding gives us for good and bad acts.

These are the senses in which assessment presupposes a structure of disciplinary knowledge.

They are why a syllabus is required for any meaningful assessment that is able to lead onto anything further:

the map of knowledge (which is what a curriculum ought to be) shows not only the main routes across the countryside – the subjects – but also, in the syllabus, the places and vistas that we encounter along the way.

The syllabus spells out the detailed implications of the logic of the discipline, and we might know that an assessment is a valid test of a student’s knowledge of the discipline if the test selects bits from each major component of the syllabus.

Yet you will find nothing whatsoever about syllabuses in Curriculum for Excellence.

Curriculum for Excellence’s broad topic areas are not subjects but groups of subjects, and (even with these) it repeatedly says far more about cross-curricular themes than about specialist enquiry.

So I cannot see how its principles can be used to define a worthwhile system of assessment.

[(6.2) Comparison]

This necessary embedding of valid assessment in the progressive structure of disciplines has a further implication that is counter not only to the apparent philosophy of Curriculum for Excellence but also to the whole tenor of pedagogical principles that have come to dominate in recent decades.

This is that comparison and absolute standards of excellence, far from being invidious, are inseparable from sound learning.

The comparison that is most obvious perhaps is with a body of knowledge that the learner might acquire but hasn’t, or hasn’t yet.

There are real standards of accomplishment because there are real entities called subjects or disciplines, and one of the main purposes of assessment is to tell the learner how far he or she is towards reaching these standards:

– how thorough is their understanding of French grammar;

– how well they understand the unifying principles of calculus;

– how skilled they are in carrying out a scientific experiment or in performing on a musical instrument in public;

– how deeply they have responded to the complexities of Shakespeare’s plays.

In telling someone how far they are towards these absolute goals, we are of course also telling them how far they fall short:

failure is inevitable.

For most people for most of the time, assessment is bound to mean relative failure, in the sense that there is, for most of us for all the time, far more that we don’t know about a subject such as French or mathematics or science or literature than that we do.

That comparison with absolute standards might be controversial enough in our relativistic and post-modern age.

But it also points towards an even more unfashionable conclusion:

comparison with absolute standards of human accomplishment entails

comparison with accomplished people.

If we set an assessment criterion in a way that is sensible, not only must it not be too easy;

it also must be within the grasp of the best students.

Inevitably, therefore, those who are less than the best will be forced by assessment to compare what they have achieved with the accomplishments of the best.

Only if we adopt a philosophy in which anything goes, or in which everyone’s view is equally valid, including their view of what counts as success, can we avoid this comparing of students:

comparing students, I would reiterate, follows with ineluctable logic from the existence of absolute standards of excellence,

and from variation in people’s capacity to accomplish them.

Yet comparing students is seemingly one of the most heinous sins of fashionable pedagogy.

And that is daft for another reason too:

research suggests that comparing our performance to others’ is indeed deeply encouraging of achievement:

it encourages people to do better;

it encourages them to learn wise strategies from others for improving their learning, such as monitoring their own progress by seeing it through the eyes of others;

it encourages them to measure their understanding of their own performance against the judgement of people who appreciate why this performance matters, not just teachers but also fellow-students.

[(7) Conclusions]

So if assessment is to promote students’ motivation and their belief in their capacity to succeed at the levels of which they are capable, then it has to be properly challenging and it has to be authentic, and in both these respects it has to be grounded in the details of disciplinary distinctiveness.

Curriculum for Excellence evades the central point about assessment, which is that it tests expertise and therefore has to measure failure as well as success:

for assessment to be truly motivating and truly encouraging of self-belief, it has to be more than merely trite: to be meaningful, it has to be difficult.

That is what the point of learning is:

learning is, at some level, unavoidably and intrinsically excluding, and in the important and admirable aim of extending opportunity to everyone capable of benefiting, we must not confuse opportunity with success.

Whatever the potential may be of the new Curriculum, the actual proposals for assessment

– insofar as one can discern anything about them in the scant documentation we have been given to date –

are so flawed in the ways that I have outlined that I cannot see how they may be said to be ‘for learning’.

And without valid assessment, can Curriculum for Excellence really be said to be feasible at all?  


1 Lecture given by Lindsay Paterson (Edinburgh University) to the annual conference of the Scottish Secondary Teachers’ Association, 7 May 2010, Peebles. For sources of research relevant to the lecture, please contact him at .

Published on 12 May 2010 - Congress