Developing consistent marking and feedback in Learn

Background

More and more Schools within Loughborough University are looking at ways in which they can develop consistency within marking and feedback. Additionally, they are moving towards online submission to support this. As a result, colleagues are looking at ways that they can use rubrics or grid marking schemes to feedback electronically in an efficient and timely manner.

Phillip Dawson (2017) reported that:

“Rubrics can support development of consistency in marking and feedback; speed up giving feedback to save time for feed-forward to students; and can additionally be used pre-assessment with students to enable a framework for self-assessment prior to submission.” (pp. 347–360)

There are several types of rubrics and marking guides available within Learn, and these take on different forms within different activities. Each has different requirements and results. This can make transitioning to online marking daunting and, as we found recently, it requires a carefully thought-out approach.

Loughborough Design School recently made the move to online submission and online marking using the Learn Assignment Activity. Following this decision, we ran several workshops to assist staff with the transition, including a dedicated rubric workshop. This blog post explores and explains the issues we encountered in the School, and that we are facing more widely across the University, and offers some options for addressing them.

What is the challenge?

Staff are already using hard-copy versions of feedback sheets that replicate the aims of having a rubric (i.e. consistency of marking and feedback), but many of these existing rubrics do not neatly transition into the Learn Assignment Activity and require a blend of features.

For example, a common feature of rubrics is that, as well as providing a set of levels for each criterion, they often include a space to record a specific mark, e.g. 9 out of 10 for a particular criterion. This level of granularity can be the difference between a first-class honours degree and a 2:1 and, crucially, it allows students to see where they can gain marks. Rubrics in the Learn Assignment Activity do not allow for this type of granularity – you can assign a range to a level, e.g. 60–70%, but not a specific mark within that range.

What’s the difference between the Learn Assignment Activity and Turnitin Feedback Studio rubrics?

What’s the difference between a rubric and a marking guide?

A rubric aligns marking criteria with fixed levels of attainment. For instance, a rubric may feature several criteria with attainment levels ranging from Fail, through Poor, Average and Good, to Excellent, and within each level a description informs the student (and tutor) of where they have been awarded and lost marks:

A marking guide is more flexible and simpler in what it offers. You still have criteria, but instead of levels the tutor gives a qualitative summary of how they feel the student performed, together with a mark for each criterion:

 

For both the rubric and the marking guide, criteria can be weighted to reflect each component's importance in the overall mark.
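To make the weighting concrete, here is a minimal sketch of the standard weighted-average idea (the criteria names and weights below are hypothetical examples, not taken from Learn itself):

```python
# Illustrative weighted-average calculation for rubric/marking guide
# criteria. Criteria names and weights are hypothetical examples.

def overall_mark(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion percentage scores into an overall percentage,
    weighting each criterion by its relative importance."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

scores = {"Analysis": 65, "Structure": 72, "Referencing": 58}   # marks out of 100
weights = {"Analysis": 50, "Structure": 30, "Referencing": 20}  # relative weights

print(round(overall_mark(scores, weights), 1))  # → 65.7
```

A heavily weighted criterion (Analysis, at 50%) pulls the overall mark towards its score, which is exactly the behaviour weighting is meant to produce.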

Moving forward

The Centre for Academic, Professional and Organisational Development plan to offer a new Rubric workshop in Semester 2 of the 2018/19 academic year. The aim of this workshop will be to provide clear guidance on the benefits, use and technical considerations behind rubrics and marking guides. Existing workshops can be found on the following page: https://www.lboro.ac.uk/services/cap/courses-workshops/

We’ll continue to work with Schools and support academics on a one-to-one basis where requested. We recognise that every case is different and recommend getting in touch with the Technology Enhanced Learning Officer and Academic Practice Developer within your School for further support.

Discussions will also continue with Turnitin and the community around Moodle (the system behind Learn) so we can stay ahead of changes and new rubric features as they arrive.

References

Dawson, P. (2017) Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment &amp; Evaluation in Higher Education, 42(3), pp. 347–360.


Supporting students’ learning using STACK: Application on a Finance module

Kai-Hong Tee (School of Business and Economics) has been using the STACK question type in Learn to generate mathematical questions. In the post below, Kai explains how it works and why it benefits students. 

Background
I currently teach “Financial Management”, a core module for our second-year students on the “Accounting and Financial Management” (AFM) and “Banking and Financial Management” (BFM) degree programmes, which are currently among the best in the UK: the latest Guardian ranking places them second after the University of Bath. Because the AFM degree is accredited by a professional accounting body, we follow strict criteria in how we teach and assess the students. However, as this module is also open to students whose degree programmes are less mathematical in nature, such as Marketing and International Business, it is important that “Financial Management” provides sufficient support for a wide range of students of different capabilities. This prompted me to re-think and re-design an existing self-assessment exercise that was already available to students on Learn. Figure 1 shows the questions and answers I have been using to help students on Learn. The assessment is based on one topic (equity valuation) in which most students don’t feel very confident, based on my observation and years of experience teaching “Financial Management”.

Figure 1. 

 

As you can see from Figure 1, this self-assessment exercise consists of a limited number of questions. The aim, however, is to encourage students to gain a better understanding of the topic by practising the problems. For weaker students, having just 4 or 5 questions may not be enough to develop the skills required for this topic.

Why use the STACK question type?
To supply a larger number of questions efficiently, so that students have more chances to practise and acquire the skills, I adopted the STACK question type in the Learn Quiz activity, which enables mathematical questions to be generated automatically. STACK is already in use in several Schools, including in the subject areas of engineering and mathematics. Figure 2 shows a snapshot of a question set up using STACK on Learn. Figure 3 shows that, using STACK, a similar question can be generated with different information, to be attempted again. This allows students additional practice.

Figure 2.

Figure 3. 

I’ve applied this idea to “Financial Management”. Figure 4 is an example of a question. Figure 5 shows the feedback students receive on the answers they provide.

Figure 4.

Figure 5. 

New questions are generated automatically and can be viewed by clicking “start a new preview” (see Figure 3); implementing STACK therefore involves some programming. Figure 6 gives a snapshot of the programming screen. Here, every question is treated as a “model”, and the “numbers” in the question as “variables”, since they will be different each time a student starts a new preview. Figure 6 shows the range of “variables” (inputs) you set for your questions, which determine the “new” question generated on each preview.

Figure 6.
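To give a flavour of how “variables” drive question generation, here is a simplified sketch written in Python for illustration only (STACK itself uses Maxima syntax, and the equity-valuation ranges below are hypothetical):

```python
import random

# Simplified sketch of STACK-style question randomisation. Each call with
# a new seed plays the role of "start a new preview": the question text
# stays the same, but the "variables" (numbers) change.

def new_question(seed=None):
    rng = random.Random(seed)
    dividend = round(rng.uniform(1.0, 3.0), 2)    # next year's dividend, in pounds
    growth = round(rng.uniform(0.01, 0.04), 3)    # constant dividend growth rate
    required = round(rng.uniform(0.06, 0.12), 3)  # required rate of return
    text = (f"A share is expected to pay a dividend of £{dividend:.2f} next year, "
            f"growing at {growth:.1%} per year. If the required return is "
            f"{required:.1%}, what is the share worth today?")
    answer = dividend / (required - growth)       # Gordon growth (equity valuation) model
    return text, round(answer, 2)

text, answer = new_question(seed=1)
print(text)
print("Model answer:", answer)
```

Because the model answer is computed from the same variables, every randomised instance can be marked automatically – which is what makes this approach scale to large numbers of practice questions.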

What’s next?
One issue I face here is that a “new” question only changes the “numbers”, not the text, so there may be a familiarity effect after a few rounds of practice if there are insufficient questions. To increase effectiveness, more questions can therefore be included to reduce the bias of “getting right answers because of familiarity”. Questions of different levels of difficulty are then re-shuffled for each new attempt made by the students.

An area worth developing further is incorporating adaptive learning pathways. I would only need, for example, 5 questions of different levels of difficulty, and could then create pathways based on the feedback (answers) from the students, which indicates which level of difficulty is reasonable for assessing them next. It may be possible for STACK to be developed so that students are led to attempt questions at an appropriate level, with their standard monitored and matched, so that the right skills are developed alongside sufficient practice on more suitable questions. This will be an area for future investigation. If workable, it should better support the weaker students: they could practise more questions and become more skilful at a level matched to their understanding, rather than simply practising any available questions without an adequate sense of their own standard.
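As a minimal sketch of this idea (hypothetical logic, not an existing STACK feature), an adaptive pathway over five difficulty levels might simply move a student up a level after a correct answer and down after an incorrect one:

```python
# Hypothetical adaptive pathway: five difficulty levels (0-4); a correct
# answer moves the student up a level, an incorrect one moves them down.
# An illustration of the idea only, not existing STACK behaviour.

def next_level(current: int, correct: bool, n_levels: int = 5) -> int:
    """Return the difficulty level of the next question to serve."""
    step = 1 if correct else -1
    return max(0, min(n_levels - 1, current + step))

# A student starting at level 2 who answers right, right, wrong:
level = 2
for correct in [True, True, False]:
    level = next_level(level, correct)
print(level)  # → 3
```

Even this crude rule keeps students near the difficulty they can handle; a real implementation would likely average over several recent answers rather than reacting to each one.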

Further information on STACK can be found on the following page:
 http://www.stack.ed.ac.uk 

Tutorials on STACK are available on:  https://stack2.maths.ed.ac.uk/demo/question/type/stack/doc/doc.php/Authoring/Authoring_quick_start.md 

The STACK project team also recently won a HEA CATE award to disseminate STACK across other institutions: https://www.heacademy.ac.uk/person/open-university-stack 

EAT – it’s good for you!

Loughborough University and the Higher Education Academy Community of Practice on Assessment and Feedback are pleased to offer a one-day event focusing on developing and implementing a self-regulatory approach to assessment. The event is taking place on Wednesday 20th September 2017 in the Stewart Mason Building and is being facilitated by the Centre for Academic Practice.

The day will be split into two parts:

Developing a Self-Regulatory Approach to Assessment: The EAT Framework (10.30 – 12.30, SMB 0.14)
Professor Carol Evans, University of Southampton

Assessment practice is a key driver in promoting high impact pedagogies and student engagement in learning. A step change is needed to advance how higher education institutions implement assessment practice to enhance student engagement and to maximise student learning outcomes. The session will describe how the EAT self-regulatory framework, a holistic inclusive assessment feedback framework, has evolved and how it can be used to support student and staff development of assessment literacy, feedback and design in partnership with each other as part of sustainable assessment feedback practice. Core to the development of this approach is an understanding of cognitive, metacognitive, and emotional regulation of learning informed by the Personal Learning Styles Pedagogy Framework (Waring & Evans, 2015).

Lunch will be provided from 12.30 – 1.30

Implementing EAT: Key lessons in scaling-up (1.30 – 3.30, SMB 0.02)
Professor Carol Evans, University of Southampton

This session is designed for Associate Deans and all those responsible for leading assessment and feedback practice. In the session, key considerations in scaling-up assessment and feedback practices mindful of institutional and faculty priorities and specific disciplinary needs will be explored, with the intention of identifying strategies to support key priorities as an integral part of ‘the fabric of things’ within the university. The potential of being a core member of the HEA Assessment and Feedback online Community of Practice will also be highlighted.

You can book onto this event on My.HR by following this link: https://myhr.lboro.ac.uk/tlive_ess/ess/index.html#/summary/careerdev/scheduledlearningactivity/474418AXK5

Bridging the Feedback Gap

It is common to hear staff express concerns about how feedback is used, but it’s often unclear what the expectations around feedback are for both students and staff.

Simon Martin, Department of Materials (AACME), recently conducted a survey aimed at establishing just how much student and staff attitudes to feedback differ, and how these gaps might be bridged. With help from the Materials Programme President, Alex Marrs, a short online survey was sent out to students within the Department. Materials staff were invited to take part in an identical survey.
The concerns and issues reported by staff and students around assessment feedback showed many similarities and a few differences, offering potential clues to ways of improving the effectiveness of feedback.

The results of the survey were shared with School staff at a recent lunchtime Learning and Teaching workshop aimed at finding ways to make feedback more relevant, effective and meaningful for students whilst also making it manageable and sustainable for academics to deliver.

AACME’s regular L&T workshops focus on considering, challenging and developing practice.

If you wish to know more about the survey results, methodology and indicated outcomes Simon Martin is happy to be contacted directly (s.j.martin@lboro.ac.uk) for further information.

Feedback practice was also the focus of a staff/student Teaching Innovation Award last year in SSEHS. The final report of Harry Lane, Emma Giles, Dr Emma Haycraft and Dr Hilary McDermott’s project ‘Developing a common language: Enhancing communication and feedback’ is available on the 2015 awards section of the CAP website (http://www.lboro.ac.uk/services/cap/procedures-schemes/teaching-awards/teaching-innovation-awards/)

CAP Forum Friday 6th Feb – Focus On Assessment

Date: Friday 6th February
Venue: BRI.2.08 – Bridgeman Building
Time: 14.00 – 16.00

Come and discuss assessment!

 

During January the Centre for Academic Practice (CAP) has been running a series of workshops and other activities focused on assessment. The CAP Forum, which will be the first in a regular series, will be an opportunity for any staff with an interest in assessment to share ideas and experiences.

There will be a number of short presentations around the theme followed by a structured discussion. Refreshments will be served.

The presentations will include:

  • Dr Nick Allsopp (CAP) – Diversifying assessment and inclusive assessment
  • Dr Mike Waring (SSEHS) – A framework for developing effective formative assessment and feedback practice

All are welcome but please e-mail cap@lboro.ac.uk to indicate if you’re intending to come along.

Making assessments work

Exhausted from a deluge of assessment marking? You may agree with the saying often attributed to Einstein: “It is simply madness to keep doing the same thing, and expect different results.” As Fisch and McLeod put it back in 2007…

…we are at risk if we are using the same assessment practices to prepare students for a very different world. Much of the information and knowledge once taught and assessed is now accessible online via an easy Google search, which doesn’t require learning or even engagement.

For us all, annually reviewing our assessments is a chance to think of new questions, but also to question what and how we are assessing.

What do we want to assess? Are we assessing what our students remember of what we know? Or what students found on the Internet? Or what our students can do with the information they’ve accessed from all sources?

If memory is important then we can set an unseen exam, but if critical thinking and analytical thought are key then case studies, open-book exams or live project assessments are often more effective. Interestingly, many academics believe their accrediting bodies require exams, but many of these bodies say they now see traditional exams as less relevant today than more innovative assessments demonstrating independent learning, attitude and knowledge, along with proficiencies and skills including digital competencies.

Our assessments are key elements of learning – the activities and developmental advantages of assessment are immense (assessment for learning). There’s also the judgment of what students know and what they need to develop (assessment of learning).  Thinking about how students can be involved in assessment in terms of peer and self-assessment, type of assessment, or assignment question development all support learning and help spread the assessment load.

Loads are important too – we need to ask whether the assessment, or the resulting marking, is excessive, and whether it could be lightened to ensure the best investment of staff and student time. Quality rather than quantity is what we want to assess, and we need to take action to make assessment work most efficiently. With the world’s best brains available in academia, this has to be the place where we can make assessment the most effective.

This post is part of our January ‘Focus On…’ activities around assessment.

The Centre for Academic Practice offers a range of workshops on learning and teaching including assessment  http://www.lboro.ac.uk/services/cap/courses-workshops/workshops/  and is focusing on a different topic each month during the first half of 2015 http://www.lboro.ac.uk/services/cap/courses-workshops/focus/

JISC (2013) Supporting assessment and feedback practice with technology: from Tinkering to Transformation.  Available online http://repository.jisc.ac.uk/5450/4/Jisc_AF_Final_Synthesis_Report_Oct_2013_v2.pdf

CEDE project reports on Developing Assessment Criteria http://eden-share.lboro.ac.uk/id/item/61

Quality Assurance Agency guidance – Quality Code B6 Assessment  of students and the recognition of prior learning http://www.qaa.ac.uk/en/Publications/Documents/quality-code-B6.pdf

Transforming the experience of students through assessment: TESTA

TESTA is a methodology designed to address assessment and feedback issues at the programme-level. It is built on a robust, triangulated research methodology with qualitative and quantitative elements, and underpinned by educational principles and research literature.

What problem is TESTA addressing?

• The disconnect between assessment innovations at the individual module level and assessment problems at the programme-level
• The imbalance between summative assessment and formative assessment
• The lack of clarity students have over understanding the goals and standards they should be orienting their overall effort towards (often a result of the QAA course specification requirements)
• The lack of co-ordinated programme-wide assessment policy and practice

The main aim of TESTA is to enhance the student learning experience from assessment by providing evidence to programme teams about assessment and feedback patterns and to help teams to identify ways of improving assessment design in the interests of better learning outcomes. The approach of TESTA has been to collect programme data, analyse and collate this into a readable case study, and engage in a conversation with the whole programme team about the findings.

TESTA has been implemented on over 100 programmes across 40 national and international universities, which reflects the level of success the approach has achieved.

The aim at Loughborough University is to pilot the approach and in light of this I am seeking to identify programmes that may be suitable. If you are interested or if you would like more information then please get in touch. h.k.basra [at] lboro.ac.uk

Assessment is this month’s ‘Focus On…’ theme in the Centre for Academic Practice. Find out more here.

Assessment that is “equitable, valid and reliable”

Assessment is a complex topic since it involves two distinct aspects. First, it forms an essential element of the learning process. Students learn both from assessment activities and from their interaction with staff about their performance in those activities. Second, it is the means by which academic staff form judgements as to what extent students have achieved the intended learning outcomes of a module or programme. These judgements form the basis for the grading of student performance through the allocation of marks and through the award of credit or qualification.

Constructing assessment tasks which test students appropriately, can be marked efficiently and meet ‘quality’ expectations is a challenge. In regard to the latter, the Quality Assurance Agency (QAA) for Higher Education has developed The UK Quality Code for Higher Education, which is a definitive reference point for all those involved in delivering UK higher education. It makes clear what institutions are required to do, what they can expect of each other, and what the general public can expect of all higher education providers. In regard to assessment, it is a key expectation of the Quality Code that:

“Higher education providers operate equitable, valid and reliable processes of assessment … which enable every student to demonstrate the extent to which they have achieved the intended learning outcomes for the credit or qualification being sought.”

The University provides academic staff with policies, regulations and guidance to help ensure that assessment at Loughborough is “equitable, valid and reliable”, which can be accessed via the following links:

The Centre for Academic Practice is organising a number of activities throughout January to support academic colleagues to consider the assessments they use. Follow this link to explore how you can participate in these activities.

Focus On… Assessment

Starting this month, the Centre for Academic Practice is going to have a different thematic focus every month throughout the academic year. The focus for January 2015 is ‘assessment’, which seems appropriate as we approach the examination period.

For each theme we’ll have a range of workshops, ‘coffee and cake’ sessions as well as blog posts and tweets. Take a look at the CAP website (http://www.lboro.ac.uk/services/cap ) to see what’s coming up.