One of the problems that very nearly forced us to abandon our first test of Rogo yesterday was our lack of understanding of how important paper ‘type’ is to assessment delivery in Rogo – and, arguably, how accustomed we are to the way Questionmark Perception does things.
| Test Type | Feedback? | Restart? | Fire Exit? | Multiple Attempts? | Review Previous Attempts? |
|---|---|---|---|---|---|
| Self-Assessment Test (Quiz) | Yes, granular control (Yes) | No (Yes if SAYG) | No (n/a) | Yes (Yes) | Yes (Yes if enabled) |
| Progress Test (Test) | No – but options shown in Assessment Properties screen (User decides) | Yes (Yes if SAYG) | No (n/a) | No (Yes) | No (Yes if enabled) |
| Summative (Exam) | No (No) | Yes (Yes if SAYG) | Yes (n/a) | No (Yes) | No (Yes if enabled) |
The three main assessment types in Rogo, with the equivalent Perception behaviour in brackets (SAYG: Save As You Go)
In Questionmark Perception, ‘Assessment Type’ is a convenience method for setting various parameters of assessment delivery. However, those parameters are set explicitly, are visible to administrators, and remain individually configurable regardless of assessment type. In Rogo, paper type is considerably more important: although it sets very similar parameters to those in Perception, they do not then seem to be independently configurable or, crucially, visible to administrators. As a result it is very easy to change, inadvertently but radically, the way in which an assessment is delivered – or, as we found, to discover that the assessment cannot be delivered in the required way at all.
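To make the contrast concrete, here is a purely illustrative sketch of the two models. None of these names comes from either product’s actual code; they simply show the difference between a type that supplies visible, overridable defaults and a type that determines behaviour outright.

// Purely illustrative sketch; the names below are our own inventions,
// not identifiers from Perception or Rogo.

// Perception-style: the assessment type supplies visible defaults which the
// administrator can still override individually.
$type_defaults = array(
    'quiz' => array('feedback' => true,  'save_as_you_go' => true),
    'test' => array('feedback' => true,  'save_as_you_go' => true),
    'exam' => array('feedback' => false, 'save_as_you_go' => true),
);
$admin_overrides = array('feedback' => true);  // e.g. an exam that still shows feedback
$settings = array_merge($type_defaults['exam'], $admin_overrides);

// Rogo-style, as we experienced it: the paper type *is* the behaviour, and the
// equivalent switches are neither visible nor overridable.
$settings = $type_defaults['exam'];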
We wanted to deliver a formative paper under exam conditions: one that would display marks and feedback to students at the end, but would also allow students to restart their assessment if something went wrong before they had finished. We began by setting the paper type to ‘Progress Test’, as this gave us the feedback we required, but then realised it wouldn’t allow students to restart in the event of a hardware failure. So we tried ‘Summative’ but, despite having ticked the two feedback tick boxes, no feedback appeared. Luckily, since we were only testing the system, we could nip in and alter the offending bit of code (paper/finish.php, line 234) to allow feedback with a summative paper:
$show_feedback = true;  // temporary local hack: force feedback on for our summative paper
but this wouldn’t be acceptable on a production system.
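A less drastic change – sketched here purely as an illustration, with flag names of our own invention rather than anything actually in paper/finish.php – would be for a summative paper to honour the two feedback tick boxes instead of ignoring them:

// Hypothetical sketch only; these flag names are ours, not Rogo's.
// Honour the two feedback tick boxes on the assessment properties screen
// rather than hiding feedback purely because the paper type is summative.
$show_feedback = ($display_question_feedback || $display_paper_feedback);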
It seems to me that, in this respect, the Questionmark Perception model is better – paper type should help by suggesting appropriate settings, not by constraining how an assessment can be delivered.