Authentic, inclusive assessment – takeaways from a workshop

Yesterday the School of Business and Economics was privileged to host Prof. Pauline Kneale, PVC for Teaching & Learning at Plymouth University (PU), as speaker at a seminar and workshop on authentic, inclusive assessment. PU has, in recent years, completely overhauled its institutional assessment policy, and PU’s teaching and learning support team has produced some excellent resources to help staff and students manage assessment better. We wanted to hear from Pauline what the main changes were that Plymouth had made, and what we could learn from their experience about enhancing our own assessment at Loughborough.

At the risk of oversimplifying the very rich discussion we had, I will summarise Pauline’s main points under seven key themes below:

  1. What is the best kind of assessment for learning – as opposed to the best assessment of learning? As soon as we frame assessment in this way, we have to ask ourselves why we are doing many things that we take for granted as part of ‘normal’ teaching and assessment.
  2. Assessment for learning requires us to think about inclusivity and fairness. PU found they had an average of 8-10% of students per cohort with special needs, for example requiring additional invigilators and infrastructure for exams. They decided to stop producing modified exams, and instead to create a single assessment that would be applicable to everybody. This had the dual effect of making the standards more consistent for all students and making the assessment tasks more interesting, flexible and varied. One way they achieved this was to give students choices regarding the type of assessment (e.g. an exam or a portfolio); another solution was to allow flexible time frames for exams (e.g. a 24-hour, open-book, non-invigilated exam).
  3. Thinking about assessment for learning also leads to authentic assessment tasks – i.e. tasks that would be done in the real world. Pauline gave examples of undergraduate assessments that involved analysing real data sets (e.g. the data set from the lecturer’s own PhD thesis – even if the research was done 30 years ago!) and coming up with new interpretations. Other examples involved accessing relevant data sets from employers on real problems they were trying to solve.
  4. The advantage of authentic assessment tasks is that they tend to be more challenging and interesting for students than tasks contrived by lecturers for assessment purposes, and they also serve the purpose of increasing work-readiness. As an added bonus, they are more interesting for lecturers to mark!
  5. Authentic, inclusive tasks often require students to carry out group work. This is both a good reflection of the world of employment and an efficient way of managing assessment in large cohorts. The most common mistake in designing group work is to set a task that is not challenging enough – the task needs to be so big that it cannot possibly be done by one person, and complex enough that every group could potentially approach it from a different angle. This keeps all individuals engaged, and also makes the sharing/presenting of group work much more interesting, because every group is keen to see how the others tackled the task.
  6. Policy and rules (both at institutional and departmental/School level) need to be in place to support the development of assessment for learning. Needless to say, if any rules (or perceived rules) exist that run counter to the spirit of assessment for learning (for example, students not being allowed to see their exam scripts after marking), these need to be changed.
  7. Effective assessment requires planning and organisation. Time needs to be allocated to marking and giving feedback, and postgraduate students need to be trained and supported to help with marking for larger cohorts (over 50 students). If a module is being ‘over-assessed’, time needs to be allocated for the module leader and other colleagues who teach on the programme to review the module and brainstorm solutions. A common problem is that module outlines contain too many ‘knowledge’ Intended Learning Outcomes (ILOs), so that students are forced into regurgitating content in exams, rather than developing skills (teamwork, report-writing, critical thinking, etc.) by working on meaty tasks.

The workshop provided plenty of food for thought. The simple act of asking ourselves how we can assess for learning can have a powerful effect on the way we design courses and programmes.