I’m going to start this post with the assumption that there is universal agreement that assessment should include helpful and constructive feedback to learners.
There is no clause in the Standards for Registered Training Organisations (RTOs) that refers directly to assessment feedback, but the User’s Guide does provide some guidance.
In the introduction to Chapter 4 – Training and assessment, the User’s Guide states the following:
Students have told ASQA that it’s important to them that:
- their teachers, trainers and assessors are professional and knowledgeable about their subjects and industry areas
- the amount of training is enough to allow them to practise new skills before they are assessed
- they can access good-quality learning resources and facilities
- assessment activities are fair and well explained and students are given helpful feedback
Chapter 4 of the User’s Guide also provides “Tips on compliant practice – Effective assessment”. One of these tips is as follows:
- Make sure assessments enable a student to consistently demonstrate competence. Assessment should:
  - be practised in multiple situations
  - be practised over time
  - incorporate the provision of feedback.
So, in total, the User’s Guide tells us the following:
- Students want helpful feedback
- ASQA recommends (provides a ‘tip’) that assessments should incorporate the provision of feedback.
These points are inarguable, so we should all still be on the same page when it comes to assessment feedback.
Defining ‘helpful feedback’
I think most of us would agree that providing helpful feedback within the assessment process is really important.
I could spend the rest of this post defining feedback, but there are great explanations elsewhere. I wish the VET sector had a definitive place where we could go to define helpful or effective feedback, but as far as I know, that doesn’t exist. So, instead, I recommend you visit the NSW Education Standards Authority, which provides a great rundown on effective feedback.
Implementing systems to ensure the provision of helpful feedback
Here is what the standards, and ASQA, do not tell us about assessment feedback:
- What defines helpful feedback
- How much feedback should be provided
- When and where feedback should be given
And this is also where all agreement on assessment feedback falls apart, because from this point, we’re just standing on our own opinions of best practice.
From here, I’m just going to share my own opinion. I do not state that my opinion is superior to anyone else’s, but I have put some thought into this. If nothing else, I hope this blog gets us all thinking about how we can set up systems that provide students with helpful and effective feedback.
Overdoing feedback systems
Setting up assessment systems and tools that require assessors to provide feedback comments for every single question and criterion WILL NOT result in helpful feedback. I’m sorry, but all you are going to get is a bunch of unhelpful, generic, cut-and-paste feedback comments written to tick off a (non-existent) compliance box.
Requiring assessors to write feedback comments for every question and criterion is not helpful! It leaves learners with the task of reading mountains of feedback comments and trying to work out which feedback is actually important and helpful. In fact, if assessors are forced to contrive or write unnecessary feedback, many learners will notice, and eventually won’t bother to review the feedback at all.
Written question tasks
If a written question asks the learner to identify the name of a piece of legislation, how can you expect an assessor to write ‘helpful and constructive’ feedback for a correct answer? It’s right or it’s wrong! Making assessors write some kind of feedback sentence for each question makes zero sense to me. What are you expecting? “Great work!”, “Well done!”, “Great answer!” – a smiley stamp? In that context, this type of feedback is not helpful. Frankly, it borders on condescending.

If a student answered the question satisfactorily, marking them as satisfactory is more than enough feedback. The student got the answer right. This is adult education; most students don’t need a smiley stamp on each correct answer!
Students may need feedback if there is an improvement opportunity, and of course they definitely need feedback if they answered the question incorrectly.

I believe assessors should be providing feedback for written questions under the following circumstances:
- The answer is incorrect / not satisfactory
- The answer or demonstration was correct or satisfactory, but there is an opportunity for the learner to improve – for example, going beyond minimum requirements to achieve best practice.
- The learner response is particularly insightful, thoughtful, innovative or shows improvement, and deserves recognition.
- Some of the student’s answers/responses may have room for improvement while others are of a high standard. Point that out to the student, and let them know which answers they should ‘benchmark’ for future assessment tasks.
This type of feedback is aimed at helping the learner. If assessors are creating feedback to keep an overly cautious compliance manager happy, rather than to help the learner… that is not a good result.
When reviewing feedback for written answers, ask yourself: how did that feedback actually help the learner? If there are a whole bunch of “great work!” comments, those ARE NOT helpful. Why is it great work? What did the learner specifically do to make the submission great? Was it actually great, or was the assessor just pasting in generic feedback so the feedback box wasn’t empty?
Checklists
Projects should have some kind of matrix or criteria checklist attached to them. Assessors should assess the project submission against those criteria. Likewise, observation checklists are sets of criteria that assessors use to determine satisfactory performance.
I’m not a believer in marking feedback against every single criterion either. If the learner performed particularly well, performed particularly poorly, showed particular insight, or has room for improvement – let them know. They can use that information to benchmark future performance – it’s helpful. But otherwise, what is the point of the feedback? Again, if the point is to fill an empty box, then the point is being missed!
Context and environment versus feedback
The checklist below is a format I’ve often seen. There is a little box next to each criterion for ‘comments’. The size of the box is usually determined by the length of the criterion text (not by what the criterion actually requires!).

What are these comment boxes actually for? Are they for feedback to the learner, or are they where the assessor provides information on the context and environment of the assessment? Maybe the assessor has to write down what they observed? Or maybe it’s a bit of everything? Often the assessor guide doesn’t clarify what should go in these comment boxes either. I’ve validated a truckload of these types of tools, and I’m still confused! Everyone has a vague but different opinion on what should go into those comment boxes.
Personally, I think this format has been around for so long that no one bothers to question it any more. I suspect that they are there for context information – to describe what the learner actually did. For example:

Tip – context and feedback are different things!
Summary
“Great work!”? Just no. It’s childish and mindless, and not helpful for an adult learner.
Don’t set up systems that generate unhelpful, cut-and-paste feedback comments just so every box is completed. If there are too many feedback boxes, remove some! Focus on ensuring that the feedback provided is genuinely useful to the student’s learning journey.
ASQA does not (and cannot) find you non-compliant because every question and criterion doesn’t have written feedback. They simply want to see that you are providing learners with quality training and assessment, which includes providing learners with helpful feedback. That cannot be measured by the number of feedback boxes you put into your assessment templates or the number of words your assessors use when providing feedback.
Let’s stop overcomplicating compliance requirements! When it comes to assessment, just make sure the Principles of Assessment and the Rules of Evidence are covered. Adding extra requirements, like feedback on every single item being assessed, does not help ensure compliance. It just makes your assessors’ jobs harder, and the end result will be useless, unhelpful, generic feedback. No one wins, and ASQA won’t give a toss about all of your little feedback boxes being filled in.
Do make feedback requirements crystal clear to your assessors. Ensure that they are providing helpful, constructive and genuine feedback WHEN IT IS APPROPRIATE.
Over and out.
PS – happy to hear your feedback!
