Aim: Foodservice is a key component of dietetics education and practice internationally, yet benchmarks for competency
are limited. This study sought to review and moderate an assessment artefact of foodservice work-integrated
learning (WIL) to develop a shared understanding of one tool which may be used in a suite of evidence to demonstrate competence.
Methods: The foodservice curricula and assessment artefacts were described for the foodservice program at each of
four participating universities. An assessment artefact from WIL, the report, was identified as an indicator of foodservice
competence common to each program. Each university provided four purposively sampled WIL reports,
assessed in duplicate by two academics from other participating universities using the corresponding university
assessment rubric. Collated assessment results, along with the original assessment, were presented back to assessors.
A semi-structured group discussion explored variations in assessment results, factors influencing decisions,
and potential changes needed for assessment documentation.
Results: There was variation in assessment outcomes between independent assessors. In some instances, assessors
neither delivered the same assessment outcome nor ranked students in the same order of performance.
This variation was smaller where an absolute satisfactory/unsatisfactory rating was applied. The assessor discussion
revealed three key concepts: the importance of understanding the project scope; the challenges influencing assessment
decision making; and the importance of understanding the broader program of assessment.
Conclusions: Assessment inconsistencies emphasise the importance of multiple assessors and assessment artefacts
across a programmatic assessment model, and the need for a clear understanding of competence in nutrition and
dietetics foodservice.