Accessibility – Rogo vs Perception

When comparing Rogo and Questionmark Perception, one thing we looked at was the accessibility options available in each. As would be expected, both give consideration to accessibility requirements, but the approaches and options vary.

Questionmark Perception (V5)

Rogo

In Rogo, accessibility settings are set for an individual, rather than for an assessment/schedule. Once specified, these settings are used for every assessment that the user sits. The following settings are available:

  • Extra Time (doesn’t actually do anything as assessments are not timed)
  • Font Size
  • Typeface
  • Background Colour
  • Foreground Colour
  • Marks Colour
  • Heading/Theme Colour
  • Labels Colour

Users cannot adjust any settings for themselves within exam delivery, but this is unlikely to be a problem, as accessibility requirements are almost always known prior to an assessment. Furthermore, the settings can easily be changed ‘on the fly’.

Personalising WebLearn (Sakai) – The BMS Portal page

This year (2011-12), a new course, Biomedical Sciences, started within the Medical Sciences Division. This course combines teaching specific to the course with teaching shared with other courses. In response to this, we wanted to ensure that the students’ experience of the course in WebLearn (Oxford’s Sakai-based VLE) was coherent and personalised, and didn’t require them to search through different parts of WebLearn to find what they needed.

Therefore, we decided to create a portal page that makes it easy for students to access the information – timetables, documents, etc – relevant to them. We wanted the page, and all of the content, to remain in WebLearn, to ensure that managing the content and the users remained straightforward for lecturers and administrators accustomed to using WebLearn.

The Biomedical Sciences Portal Page (screenshot)

The resulting portal page, shown above, provided students with a slick, modern-looking page, on which they could see any recent announcements, view their timetable and access documents both relating to their course and from their personal site within WebLearn.

In order to achieve this, it was necessary to create a multi-level structure for the site, with the main site containing a subsite for each year of the course, and each year site containing a subsite for each module.
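In outline, the structure looks something like this (the year and module labels are illustrative):

    Biomedical Sciences (main site)
    ├── Year 1 site
    │   ├── Module site
    │   └── Module site
    └── Year 2 site
        ├── Module site
        └── Module site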

To dip quickly into the technical aspects, the portal page makes significant use of JavaScript, in particular the jQuery library. Where possible, the content, along with the user’s status and year-group, is gathered using Ajax requests to ‘WebLearn direct’ URLs, which return information, such as a user’s recent announcements, in a machine-readable format, e.g. JSON. A brief summary of how the different sections of the page are created is given below:
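As a rough sketch of the general direct-URL pattern (the URL, the JSON field names and the site-naming convention below are assumptions for illustration, not the exact ones we use):

    // Determine the user's year group from their site memberships by calling
    // a 'direct' URL that returns JSON. URL and field names are assumptions.
    var yearGroup = null;
    $.getJSON('/direct/membership.json', function (data) {
        $.each(data.membership_collection, function (i, membership) {
            // Hypothetical site-naming convention: sites called 'bms-year1' etc.
            var match = /bms-year(\d)/.exec(membership.locationReference);
            if (match) {
                yearGroup = parseInt(match[1], 10);
            }
        });
        // yearGroup now determines which year subsite's content is loaded.
    });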

Announcements

WebLearn’s direct methods are used to get a user’s announcements, specifying the number and age to show. These are then presented to the user in an ‘accordion’, where clicking on an announcement title expands further details of that announcement.
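A minimal sketch of the idea, assuming the accordion widget comes from jQuery UI (the direct URL and its parameters are illustrative):

    // Fetch the user's recent announcements as JSON and build an accordion:
    // each title becomes a clickable header that expands to show the details.
    $.getJSON('/direct/announcement/user.json', { n: 5, d: 14 }, function (data) {
        var container = $('#announcements');
        $.each(data.announcement_collection, function (i, ann) {
            container.append($('<h3>').text(ann.title)); // accordion header
            container.append($('<div>').html(ann.body)); // hidden until clicked
        });
        container.accordion({ collapsible: true }); // jQuery UI accordion widget
    });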

Calendar

The requirement for the calendar was to bring together multiple module calendars into a single view, with a different colour for each module. This was achieved as follows:

  • The calendars for each module reside in the module sites.
  • A Google account is subscribed to the calendar (ICS) feed provided by WebLearn for each module.
  • A Google-calendar view of all the module calendars, with each one assigned a different colour, is embedded into the page (see the sketch after this list).
  • In order to combine the multiple feeds back into a single ICS feed that students could subscribe to, e.g. on a smartphone, we used a tool called MashiCal. However, MashiCal requires manual input of the feeds to be ‘mashed’ – this has not been a problem so far, as the students all do the same module in Year 1.
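A sketch of how the embedded view can be built up, assuming Google Calendar’s embed URL accepts repeated src and color parameters (the calendar IDs are placeholders):

    // Build a Google Calendar embed URL that overlays several module
    // calendars, each in its own colour, then insert it as an iframe.
    var calendars = [
        { id: 'moduleA@group.calendar.google.com', color: '%232952A3' }, // placeholder IDs
        { id: 'moduleB@group.calendar.google.com', color: '%23B1440E' }
    ];
    var src = 'https://www.google.com/calendar/embed?ctz=Europe/London';
    $.each(calendars, function (i, cal) {
        src += '&src=' + encodeURIComponent(cal.id) + '&color=' + cal.color;
    });
    $('#timetable').append($('<iframe>').attr({ src: src, width: 800, height: 600 }));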

Course Docs

Documents and resources are held in the subsites for each year/module, with some general resources in the top-level site. At the time of creating the portal page, there were no direct methods for accessing resources, so a somewhat clunkier method was used. The portal page requests the web view (an HTML page) of the appropriate resources and then uses jQuery to dig down through the folder structure, extract the links to all of the resources and present them in a tree view.
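In outline, the scraping works something like this (the web-view URL shape and the trailing-slash folder convention are assumptions; the real markup varies between WebLearn versions):

    // Request the HTML 'web view' of a resources folder and use jQuery to
    // extract the links, recursing into subfolders to build a tree view.
    function loadFolder(url, $treeNode) {
        $.get(url, function (html) {
            $(html).find('a').each(function () {
                var href = $(this).attr('href');
                if (/\/$/.test(href)) {
                    // Trailing slash: treat the link as a subfolder and recurse.
                    var $child = $('<ul>').appendTo($('<li>').appendTo($treeNode));
                    loadFolder(href, $child);
                } else {
                    // Otherwise it is a document: add a leaf node linking to it.
                    $('<li>').append($('<a>').attr('href', href).text($(this).text()))
                             .appendTo($treeNode);
                }
            });
        });
    }
    loadFolder('/access/content/group/bms/', $('#course-docs')); // placeholder site ID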

My Stuff

This provides a view of everything in a student’s My Workspace resources folder, produced in the same way as the Course Docs. Students can only view their resources from the portal page – they have to actually go to their workspace to upload/edit resources.

Future Developments

  • Accessing resources for Course Docs and My Stuff using direct methods (now available after a recent upgrade), as the current process of extracting links from HTML pages is slow and error-prone.
  • Extending the functionality of My Stuff, in particular enabling drag-and-drop upload of files, so students can quickly upload files from any computer, e.g. results in the lab.
  • Creating our own ‘calendar aggregator’, to automatically combine ICS feeds for each student based on the modules they are studying.

Safe Exam Browser

Because Rogo doesn’t come with a secure browser of its own, we had to look for a third-party alternative. A helpful pointer from Farzana Khandia of Loughborough, on the ever-useful QUESTIONMARK@JISCMAIL.AC.UK list, led us to Safe Exam Browser (SEB). This is a fantastic bit of free software which has a number of advantages over QuestionMark Secure (QMS), including:

  1. Once installed by an administrator, SEB uses a Windows service to prevent access to the Task Manager, Alt-Tab, etc. This means a typical computing lab ‘user’ will be able to run it with no further work. In contrast, QMS requires that typical user to be given read/write permissions on a number of registry keys – a fiddly process and one which upsets many IT officers.
  2. SEB is launched from a desktop shortcut and then calls the assessment server (or other system specified in an ini file before installation). It then carries on running until an IT officer closes it down. QMS starts and stops when a secure assessment is opened and submitted respectively. This leaves the machine unsecured once a participant has submitted.
  3. SEB lets administrators grant access to ‘Permitted Applications’, such as the system calculator and Notepad – this is not possible in the version of QMS that we are using.

The only disadvantages over QMS that we have discovered so far are:

  1. The requirement to enter a key sequence to close down SEB slightly increases the time required to reset computers between sittings of students.
  2. If the machine crashes or is switched off while SEB is running, a .bat file needs to be run to re-enable the functions disabled by SEB, i.e. SEB only re-enables them itself when it is closed down normally.

We’re now considering whether we could use SEB instead of QMS, even with Perception-delivered assessments as it would save us the extra annual subscription for support on that proportion of our licence.


The importance of paper type in Rogo

One of the problems that very nearly forced us to abandon our first test of Rogo yesterday was our lack of understanding of the importance of paper ‘type’ in Rogo’s assessment delivery – and, arguably, the degree to which we are used to the way that QuestionMark Perception does things.

Self-Assessment Test (Perception equivalent: Quiz)
  • Feedback? Yes, granular control (Perception: Yes)
  • Restart? No (Perception: Yes if SAYG, i.e. save-as-you-go)
  • Fire Exit? No (Perception: n/a)
  • Multiple Attempts? Yes (Perception: Yes)
  • Review Previous Attempts? Yes (Perception: Yes if enabled)

Progress Test (Perception equivalent: Test)
  • Feedback? No, but options shown in the Assessment properties screen (Perception: user decides)
  • Restart? Yes (Perception: Yes if SAYG)
  • Fire Exit? No (Perception: n/a)
  • Multiple Attempts? No (Perception: Yes)
  • Review Previous Attempts? No (Perception: Yes if enabled)

Summative (Perception equivalent: Exam)
  • Feedback? No (Perception: No)
  • Restart? Yes (Perception: Yes if SAYG)
  • Fire Exit? Yes (Perception: n/a)
  • Multiple Attempts? No (Perception: Yes)
  • Review Previous Attempts? No (Perception: Yes if enabled)

The three main assessment types in Rogo (with a comparison with Perception in brackets)

In Questionmark Perception, ‘Assessment Type’ is a convenience method for setting various parameters of assessment delivery. However, those parameters are set explicitly, are visible to administrators and remain individually configurable regardless of assessment type. In Rogo, paper type is considerably more important: although it sets very similar parameters to those in Perception, they do not then seem to be independently configurable or, crucially, visible to administrators. As a result it is very easy to inadvertently, but radically, change the way in which an assessment is delivered – or, as we found, to discover that the assessment cannot be delivered in the required way at all.

We wanted to be able to deliver a formative paper under exam conditions which would display marks and feedback to students at the end but which would also allow students to restart their assessment if something went wrong before they had finished. We began by setting paper type to ‘Progress Test’ as this gave us the feedback we required but then realised this wouldn’t allow students to restart in the event of a hardware failure. So we tried ‘Summative’ but, despite having ticked the two feedback tick boxes, no feedback appeared. Luckily, since we were only testing the system, we could nip in and alter the offending bit of code (paper/finish.php, line 234) to allow feedback with a summative paper:

$show_feedback = true;

but this wouldn’t be acceptable on a production system.

It seems to me that, in this respect, the QuestionMark Perception model is better – paper type should help by suggesting appropriate settings, not by constraining how an assessment can be delivered.

Hurray – Rogo performed brilliantly in its first real test at Oxford

Monday 23rd April saw a total of 73 first-year medics, half of each of two sittings, take their voluntary assessment in Organisation of the Body on Rogo, while the other half of each sitting used QuestionMark Perception as normal.

After a longer-than-usual introduction (see below) to explain the differences between this and ‘normal’ online assessments, we started the group in two halves, approximately 10 seconds apart. There was no perceptible delay, despite the fact that both the application and the database are running on a single server that is more than three years old.

This was a great outcome given that we very nearly abandoned this test of Rogo at the last minute because of serious potential problems – one to do with server errors after amending ‘locked’ questions, the other to do with paper ‘types’. Disaster was averted by my colleague Jon Mason, who spotted and corrected both problems just in time.

Extra instructions at the beginning of the assessment:

“You are in the very privileged position today to be the first group of students to try out a new assessment system, Rogo. This works in more or less the same way as the ‘normal’ system, Perception, except that:
1. The assessment will not submit itself at the end, we will ask you to click ‘Finish’ after 45 minutes;
2. Because it doesn’t time itself, I will tell you when there are 5 minutes remaining to you. There is a clock at the bottom left of your screen – I suggest you make a note of your start time as you would with a normal exam.
3. The questions will be presented on 6 ‘screens’ which you can move between (backwards and forwards) using the controls at the bottom right.
4. When you go back to a screen you have previously visited, unanswered questions will be highlighted in pink.
5. Please make sure you do not click the ‘Finish’ button until you have answered all questions as you will not then be able to return to them.
6. We have appended three questions to the end of the assessment which ask for your thoughts on this new software. You do not have to answer these but we would be grateful for any feedback you can give us – it will help us to decide whether this is a viable alternative to the existing system.”


Question Analysis – Difficulty/Facility and Discrimination

Difficulty/Facility

This is simply a measure of the proportion of students that answered a question correctly, and has a value between 0 and 1. It is calculated by dividing the sum of the marks achieved by all candidates by the maximum available mark multiplied by the number of candidates.
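In symbols (a minimal formulation; the notation is ours, not Rogo’s or Perception’s):

    F = \frac{\sum_{i=1}^{n} m_i}{M \times n}

where m_i is the mark achieved by candidate i, M is the maximum available mark and n is the number of candidates. For example, for a 1-mark question sat by 40 candidates, 28 of whom answer correctly, F = 28 / (1 × 40) = 0.7.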

It is often referred to as difficulty, but should probably be known as facility, as a value of 0 means that no-one answered the question correctly, and a value of 1 means that everyone answered the question correctly.

General wisdom seems to be that questions with facility values around the pass mark for the assessment (e.g. pass mark = 70%, facility = 0.7) will give the most useful information about the candidates.

Discrimination

The purpose of item discrimination is to identify whether good students perform better than, worse than, or the same as poor students on a question. Following Kelley (1939), good and poor students are defined as the top and bottom 27% of candidates by overall assessment mark.

Discrimination for a particular answer option is then calculated by subtracting the fraction of the bottom group who gave the answer from the fraction of the top group who gave the answer (see the worked example after this list). So:

  • A positive item discrimination means a higher proportion of people in the top group chose the answer than in the bottom group. A high positive value for the correct answer generally means the question is a good discriminator, which is what we want (but is very difficult to achieve!). A positive discrimination for an incorrect answer may suggest an issue, but could equally just mean that it is a good distractor.
  • An item discrimination of 0 means the same number of people from each group gave the answer, so the answer doesn’t discriminate at all. Questions where everyone got the correct answer will always have a discrimination of 0.
  • A negative item discrimination means a higher proportion of people in the bottom group chose the answer. This would be expected for an incorrect answer. A negative discrimination on a correct answer may indicate something is wrong, as more able students are choosing an incorrect answer.
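In symbols, for a given answer option (again, the notation is ours):

    D = \frac{U}{N_U} - \frac{L}{N_L}

where U and L are the numbers of top-group and bottom-group candidates who chose the option, and N_U and N_L are the sizes of the two groups. For example, if 20 of 27 top-group candidates and 8 of 27 bottom-group candidates choose the correct answer, D = 20/27 − 8/27 ≈ 0.44, a healthily positive discrimination for that question.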

To make a question a good discriminator, the correct answer should have a high positive discrimination, and the incorrect answers should have a negative discrimination.