Using feedback for more able learners to promote self-regulated quality

Posted By Robin Bevan, 08 March 2021
Dr Robin Bevan, Headteacher, Southend High School for Boys (SHSB)
 
One of the underlying tests of whether a student has fully mastered a new area of learning is whether they have the capacity to “self-regulate the production of quality responses” in that domain. At its simplest level, this would be knowing whether an answer is right or not, without reference to any third party or expert source. This develops and extends into whether the student can readily assess the validity of the reasoning deployed in replying to a more complex question. And, at its highest level, the student would be able to articulate why one response to a higher-order question is of superior quality to another.
 
Framed in another way, we teach to ensure that our pupils know how to answer questions correctly, know what makes their responses sound and, ultimately, understand the distinguishing features of the best quality thinking relevant to the context (and by this I mean far more than just the components of a GCSE mark scheme).
 
This hierarchy of desired learning outcomes not only provides an implicit structure for differentiating task outcomes, but also gives a strong steer regarding our approaches to feedback for the most able learners. Our intention for our most able learners is that they can reach the highest level of critical understanding in each topic. This is so much more than just getting the answers right, and it hints at why traditional tick/cross approaches to marking have often proved so ineffective (Ronayne, 1999).
 
These comments may be couched in different language, but there is a deep resonance between my observations and the clarion call – over two decades ago – for increased formative assessment that was published as Inside the Black Box (Black and Wiliam, 1998):
 
Many of the successful innovations have developed self- and peer-assessment by pupils as a way of enhancing formative assessment, and such work has achieved some success with pupils from age five upwards. This link of formative assessment to self-assessment is not an accident – it is indeed inevitable.
 
To explain this, it should first be noted that the main problem that those developing self-assessment encounter is not the problem of reliability and trustworthiness: it is found that pupils are generally honest and reliable in assessing both themselves and one another, and can be too hard on themselves as often as they are too kind. The main problem is different – it is that pupils can only assess themselves when they have a sufficiently clear picture of the targets that their learning is meant to attain. Surprisingly, and sadly, many pupils do not have such a picture, and appear to have become accustomed to receiving classroom teaching as an arbitrary sequence of exercises with no overarching rationale. It requires hard and sustained work to overcome this pattern of passive reception. When pupils do acquire such an overview, they then become more committed and more effective as learners: their own assessments become an object for discussion with their teachers and with one another, and this promotes even further that reflection on one's own ideas that is essential to good learning.
 
What this amounts to is that self-assessment by pupils, far from being a luxury, is in fact an essential component of formative assessment. Where anyone is trying to learn, feedback about their efforts has three elements – the desired goal, the evidence about their present position, and some understanding of a way to close the gap between the two (Sadler, 1989). All three must to a degree be understood before they can take action to improve their learning. (Black & Wiliam, 1998)

Understanding the needs of the more able: a tragic parody

Sometimes an idea can become clearer when we examine its opposite: when, that is, we illuminate how the more able learner can be starved of effective feedback. To illustrate this as powerfully as possible, I am going to employ a parody. It is a tragic parody, in that the disheartening description of teaching and learning that it includes is both frustratingly common and yet so easily amenable to fixing. Imagine the following cycle of teacher and pupil activity.

 

  1. The teacher identifies an appropriate new topic from the scheme of work. She delivers an authoritative explanation of the key ideas and new understanding. It is an accomplished exposition and the class is attentive.
  2. A set of response tasks is set for the class. These are graduated in difficulty. Every pupil is required to work in silence, unaided – after all, it has just been explained to them all! Each pupil starts with the first question and continues through the exercise. The work is completed for homework.
  3. The teacher collects in the homework, marks the work for accuracy of answers with a score out of 10. 
  4. In the next lesson, the class is given oral feedback by the teacher on the most common errors. The class proceeds to the next topic. The cycle then repeats.
This is probably not far removed from the way in which many of us were taught when we were at school. Let us examine this parody from the perspective of the more able.

 

  1. It is highly likely that the more able pupil already knows something, or a great deal, about this topic. Nonetheless, complicit in this well-rehearsed didactic model, the most able pupil sits patiently through the teacher’s presentation. A good proportion of this time is essentially wasted.
  2. Silent working prohibits the development of understanding that comes through vocal articulation and discussion. The initially easy exercise prevents, by its very design, the most able from exploring the implications and wider consequences of the topic. The requirement to complete all the questions, even the simplest, fills the time – unproductively. Then, the whole class faces the challenge of completing the harder questions, unsupported, away from the teacher’s expert assistance. For the most able, these harder questions are probably the richest source of potential new learning. But it is no surprise that for the class as a whole the success rate on the harder questions is limited.
  3. The most able pupil gains 8 or 9 out of 10; possibly even an ego-boosting 10. The pupil feels good and is inclined to see the task as a success. Meanwhile, the items discussed by the teacher are the questions that everyone else got wrong, not the learning needed to extend or develop the more able pupil.
  4. A new topic is started. The teacher has worked hard. The class has been well-behaved. The able pupil has filled their time with active work. And yet, so little has been learned.

Unravelling the parody

This article is intended to focus on the most effective forms of feedback for the more able learner; but it is clear from the parody that we are unlikely to create the circumstances for such high-quality feedback without considering, alongside this, elements such as: the diagnostic assessment of prior learning, structured lesson design, optimal task selection, and effective homework strategies. Each of these, of course, warrants an article of its own.
 
However, we cannot escape the role of task design altogether in effective feedback. A variety of routine approaches, often suited to homework, allow students to become accustomed to judging the quality expected of their assignments. For example:
 
a. Rather than being given marked exemplars, pupils are required to apply the mark scheme to samples of finished work. Their marking is then compared (moderated) before the actual standards are established. This ideally suits extended written accounts and practical projects.
 
b. Instead of completing a standard task, pupils are instructed to produce a mark scheme for it. Contrasting views and key features of the expected responses are developed, leading to a definitive mark scheme. (It may then be appropriate to attempt the task, or the desired learning may well have already been secured.) This ideally suits essays and fieldwork.

c. These approaches may be adapted by supplying student work to be examined by their peers: “What advice would you give to the student who produced this?” “What misunderstanding is present?” “How would you explain to the author the reasons for their grade?” This ideally suits more complex conceptual work, and lines of reasoning.

d. As a group activity, parallel assignments may be issued: each group is required to prepare a mark scheme for just one allocated task, and to complete the others. Once the mark schemes have been scrutinised, the completed tasks are submitted to the relevant group for assessment. This ideally suits examination preparation.
 
Although these are whole-class activities, they are particularly suited to the more able learner as they give access to higher-order reflective thinking and the tasks are oriented around the issue of “what quality looks like”.

Marking work or just marking time?

Teachers spend extended hours marking pupils’ work. It is a common frustration amongst colleagues that these protracted endeavours do not always seem to bear fruit. There are lots of reasons why we mark, including: to ensure that work has been completed; to determine the quality of what has been done; and to identify individual and common errors for immediate redress.
 
The list could be extended, but should be reviewed in the light of one pre-eminent question: to what extent does this marking enhance pupils’ learning? The honest answer is that there are probably a fair number of occasions when greater benefit could be extracted from this assessment process.
 
The observations of Ronayne (1999) illustrate this concern and have clear implications for our professional practice with all learners, but perhaps the most able in particular. In his study, Ronayne found that when teachers marked pupils’ work in the conventional way in exercise books, pupils could accurately recall, an hour later, only about one third of the written comments – although they recalled proportionately more of the “constructive” feedback and more of the feedback related to the learning objectives.
 
Ronayne also observed that a large proportion of written comments related to aspects other than the stated learning objectives of the task. Moreover, the proportion of feedback that was constructive and related to the objectives was greater in oral feedback than written; but as more lengthy oral feedback was given, fewer of the earlier comments were retained by the class. In contrast, individual verbal feedback, as opposed to whole-class feedback, improved the recollection of advice given.

So what then should we do?

It is usually assumed that assessment tasks will be designed and set by the teacher. However, if students understand the criteria for assessment in a particular area, they are likely to benefit from the opportunity to design their own tasks. Thinking through what kinds of activity meet the criteria does, itself, contribute to learning.
 
Examples can be found in most disciplines: pupils designing and answering questions in mathematics is easily incorporated into a sequence of lessons; so is the process of identifying a natural phenomenon that demands a scientific explanation; or selecting a portion of foreign language text and drafting possible comprehension questions.
 
For multiple reasons, the development of these approaches remains limited. There is no doubt that teachers would benefit from practical training in this area, and a lack of confidence can hold practice back. However, it is often the case that teachers are simply not convinced of the potency of promoting self-regulated quality expertise.
 
A study in Portugal, reported by Fontana and Fernandes in 1994, involved 25 mathematics teachers taking an INSET course to study methods for teaching their pupils to assess themselves. During the 20-week part-time course, the teachers put the ideas into practice with a total of 354 students aged 8-14. These students were given mathematics tests at the beginning and end of the course so that their gains could be measured. The same tests were taken by a control group of students whose mathematics teachers were also taking a 20-week part-time INSET course but this course was not focused on self-assessment methods. Both groups spent the same time in class on mathematics and covered similar topics. Both groups showed significant gains over the period, but the self-assessment group's average gain was about twice that of the control group. In the self-assessment group, the focus was on regular self-assessment, often on a daily basis. This involved teaching students to understand both the learning objectives and the assessment criteria, giving them an opportunity to choose learning tasks, and using tasks that gave scope for students to assess their own learning outcomes.
 
Other studies (James, 1998) report similar achievement gains for students who have an understanding of, and involvement in, the assessment process.
 
One of the distinctive features of these approaches is that the feedback to the student (whether from their own review, from a peer or from the teacher) focuses on the next steps in seeking to improve the work. It may be that a skill requires practice, it may be that a concept has been misunderstood, that explanations lack depth, or that there is a limitation in the student's prior knowledge.
 
Whatever form the feedback takes, it loses value (and renders the assessment process null) unless the student is provided with the opportunity to act on the advice. The feedback and the action are individual and set at the level of the learner, not the class.
 
In a similar vein, approaches to “going through” mock examinations and other tests require careful preparation. Teacher commentary alone, whilst resolving short-term confusion, is unlikely to lead to long-term gains in achievement. Alternatives are available:
 
i. Pupils can be asked to design and solve equivalent questions to those that caused difficulty;

ii. Pupils can, for homework, construct mark schemes for questions requiring a prose response, especially those which the teacher has identified as having been badly answered;

iii. Groups of pupils (or individuals) can declare themselves “experts” for particular questions, to whom others report for help and to have their exam answers scrutinised.
 
Again, in each of these practical approaches the most able are positioned close to the optimal point of learning as they articulate and demonstrate their own understanding for themselves or for others. In doing so, they can confidently approach the self-regulated production of quality answers.
 
Further reading
  • Black, P. & Wiliam, D. (1998). Inside the Black Box: Raising standards through classroom assessment. School of Education, King’s College, London.
  • Fontana, D. and Fernandes, M. (1994). ‘Improvements in mathematics performance as a consequence of self-assessment in Portuguese primary school children’. British Journal of Educational Psychology. Vol. 64 pp407-17.
  • James, M. (1998) Using Assessment for School Improvement. Heinemann, Oxford.
  • Ronayne, M. (1999). Marking and Feedback. Improving Schools. Vol. 2 No. 2 pp42–43.
  • Sadler, R. (1989). Formative assessment and the design of instructional systems. Instructional Science. Vol. 18 pp119-144.

Additional support

NACE Curriculum Development Director Dr Keith Watson is presenting a webinar on feedback on Friday 19 March 2021, as part of our Lunch & Learn series. Join the session live (with opportunity for Q&A) or purchase the recording to view in your own time and to support school/department CPD on feedback. Live and on-demand participants will also receive an accompanying information sheet, providing an overview of the research on effective feedback, frequently asked questions, and guidance on applications for more able learners. Find out more

Tags:  assessment  feedback  independent learning  metacognition  research 
