PhoneGap for Android: JSCallback error meaning onDeviceReady not called

Struggled for hours last night while trying to migrate the great example app Sample App Using the PhoneGap Database API to work on Android using Eclipse and an Android emulator. Everything was going well until I tried to link to a second page, using a query string parameter to pass through the id of the employee. This resulted in:

JSCallback Error: Request failed. at file:///android_asset/www/js/cordova.js:3698

Eventually stumbled across PhoneGap – migrating iOS applications to Android (Part 1) which pointed out that this is a documented Android issue and proposed the following workaround:

function loadEmployeeDetail(id) {
    localStorage.setItem("employeeId", id);
    navigator.app.loadUrl("file:///android_asset/www/employeedirectory/employeedetails.html");
}

The id is then read back in employeedetails.html using:

id = localStorage.getItem("employeeId");
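Putting the two halves together, the hand-off looks like the sketch below. Since localStorage and navigator.app only exist inside the WebView, this sketch uses a hypothetical in-memory stand-in (localStorageStub) so the pattern can be run anywhere; in the app itself you would use window.localStorage and the loadUrl call exactly as shown above.

```javascript
// Hypothetical in-memory stand-in for window.localStorage; the real API has
// the same setItem/getItem signatures and, like this stub, stores strings.
const localStorageStub = {
  store: {},
  setItem(key, value) { this.store[key] = String(value); },
  getItem(key) { return key in this.store ? this.store[key] : null; }
};

// employeedirectory.html side: stash the id instead of using a query string
function loadEmployeeDetail(id) {
  localStorageStub.setItem("employeeId", id);
  // In the app: navigator.app.loadUrl("file:///android_asset/www/employeedirectory/employeedetails.html");
}

// employeedetails.html side: read the id back
loadEmployeeDetail(42);
const employeeId = localStorageStub.getItem("employeeId");
console.log(employeeId); // "42" – note that localStorage always returns strings
```

One thing worth remembering: the id comes back as a string, so compare or parse it accordingly on the details page.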

However, I was still having problems getting onDeviceReady() called until I stumbled across deviceReady not working in PhoneGap application, how to? which suggested that adding brackets after the function name in the addEventListener call might make the difference, giving:

document.addEventListener("deviceready", onDeviceReady(), false);

Problem solved.
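For the record, those brackets change the meaning of the line: with them, onDeviceReady() is evaluated immediately and its return value (undefined), not the function itself, is what gets registered. A minimal sketch using a hypothetical stand-in event registry (not the PhoneGap API) shows the difference:

```javascript
// Hypothetical stand-in for an event registry, so we can inspect what
// addEventListener-style calls actually receive in each case.
const listeners = [];
function addListener(name, handler) { listeners.push(handler); }

let called = 0;
function onDeviceReady() { called += 1; }

// Without brackets: the function itself is registered, and would run only
// when the event is later dispatched.
addListener("deviceready", onDeviceReady);

// With brackets: onDeviceReady() runs right now, and its return value
// (undefined) is what gets registered as the listener.
addListener("deviceready", onDeviceReady());

console.log(called);       // 1 – the bracketed form has already run
console.log(listeners[1]); // undefined
```

Presumably the bracketed form "worked" here because the device was already ready by the time the script ran, so invoking the handler immediately did the right thing even though nothing useful was registered as a listener.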


Excel ‘helpfully’ converting numbers longer than 15 digits to scientific notation when importing from .csv

When you attempt to open or import .csv data into Excel, long numbers are helpfully converted into scientific notation. In most situations this is fine – it makes it easier to see the number in a cell, and the original number can be seen in full if the cell number format is changed.

However, when numbers are longer than 15 digits, Excel silently zeroes everything after the 15th significant digit – a real problem with GUIDs in general and QuestionMark Perception Question IDs in particular.

There are a number of potential solutions to this, any of which may work for you:

  1. Change the file’s .csv extension to .txt, or choose Data | Get External Data | From Text, to bring up the Text Import Wizard. In Step 3 of 3, where you set the data format, select the appropriate column (or several columns with Shift held down) and set the Column data format to Text. However, this appears to fail with very high numbers of columns (in this case >200), leading me to solution 2:
  2. Upload the .csv file into Google Docs, making sure that Convert documents, presentations, spreadsheets, and drawings to the corresponding Google Docs format is ticked. Then export as Excel via File | Download As | Microsoft Excel. Thanks to kpm for this. However, this still seems to randomly convert a few GUIDs into 15-significant-figure scientific-notation numbers.
  3. The solution which worked for me was to open the .csv file in OpenOffice 3 Calc and, as with Excel, in the Text Import dialogue, select all the columns in the Fields section and then change their Column type to Text. Unlike Excel, this reliably imported all >200 columns without converting any GUIDs.
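For anyone wondering why 15 digits is the magic number: Excel stores numbers as IEEE-754 double-precision floats, which only guarantee about 15 significant decimal digits. The same limit is easy to demonstrate in any language that uses doubles – a quick JavaScript sketch (the 19-digit value is just an illustrative ID, not a real Question ID):

```javascript
const asText = "1234567890123456789";   // 19 significant digits
const asNumber = Number(asText);        // parsed into a double, as Excel does

// Digits beyond the 15th are lost in the conversion, so the value can no
// longer round-trip back to its original text form.
console.log(asNumber.toString() === asText); // false
```

This is why the Text-column approaches above work: keeping the column as Text means the digits are never parsed into a float at all.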


Getting excited by maths handling in MathAssessEngine

Just been trying out some sample QTIv2.1 questions sent to me by Dr Sue Milne from ELandWeb Ltd which address some issues with maths in online assessment that we have been struggling with for some time:

  • How do we allow students to choose the units in which they answer a numeric question?
  • How do we allow students to answer an algebraic question with an algebraic answer?
  • How can we randomise the values presented whilst testing the same mathematical concept from student to student and from attempt to attempt (to allow students to retest themselves)?
The answer appears to be that what we need is MathAssessEngine from the University of Edinburgh and QTIv2.1.

Answering a question with algebra in MathAssessEngine

We haven’t yet had time to look at this in any great detail, but these two examples demonstrate that there is a whole new world of assessments out there waiting to be explored. qtiv21_examples contains two examples:

Example 1 (SineRule-002-mathOpWithSol.xml): demonstrates a question that ‘understands’ units and the impact they will have on the expected answer.

Example 2 (mela012252.xml): demonstrates answering a question with algebra – in this case ‘sqrt’. As the participant types, the answer the computer will mark is displayed in MathML.

Both questions randomise the variables shown each time you retake the question. To give them a go yourself, unzip the file, then visit MathAssessEngine, ‘Choose file’ under ‘Upload Assessment Items and Tests’, and click ‘Go’.


Safe Exam Browser

Rogo doesn’t come with a secure browser of its own, which prompted us to look for something third-party. A helpful pointer from Farzana Khandia of Loughborough on the very helpful QUESTIONMARK@JISCMAIL.AC.UK list led us to Safe Exam Browser (SEB). This is a fantastic piece of free software which has a number of advantages over QuestionMark Secure (QMS), including:

  1. Once installed by an administrator, SEB uses a Windows service to prevent access to the Task Manager, Alt-Tab, etc. This means a typical computing lab ‘user’ will be able to run it with no further work. In contrast, QMS requires the typical user to be given read/write permissions on a number of registry keys – a fiddly process, and one which upsets many IT officers.
  2. SEB is launched from a desktop shortcut and then calls the assessment server (or another system specified in an ini file before installation). It then carries on running until an IT officer closes it down. QMS, in contrast, starts when a secure assessment is opened and stops when it is submitted, leaving the machine unsecured once a participant has submitted.
  3. SEB allows administrators to grant access to ‘Permitted Applications’ such as the system calculator and Notepad – this is not possible in the version of QMS that we are using.

The only disadvantages over QMS that we have discovered so far are:

  1. The requirement to enter a key sequence to close down SEB slightly increases the time required to reset computers between sittings of students.
  2. If the machine crashes or is switched off while SEB is running, a .bat file needs to be run to re-enable the functions disabled by SEB, i.e. it only re-enables them itself when it is closed down normally.

We’re now considering whether we could use SEB instead of QMS, even with Perception-delivered assessments, as it would save us the extra annual subscription for support on that proportion of our licence.

The importance of paper type in Rogo

One of the problems which very nearly forced us to abandon our first test of Rogo yesterday was our lack of understanding of the importance of paper ‘type’ in assessment delivery in Rogo and, arguably, the degree to which we are used to the way that QuestionMark Perception does things.

| Test Type | Feedback? | Restart? | Fire Exit? | Multiple Attempts? | Review Previous Attempts? |
| --- | --- | --- | --- | --- | --- |
| Self-Assessment Test | Yes, granular control | No | No | Yes | Yes |
| (Quiz) | (Yes) | (Yes if SAYG) | (n/a) | (Yes) | (Yes if enabled) |
| Progress Test | No – but options shown in Assessment properties screen | Yes | No | No | No |
| (Test) | (User decides) | (Yes if SAYG) | (n/a) | (Yes) | (Yes if enabled) |
| Summative | No | Yes | Yes | No | No |
| (Exam) | (No) | (Yes if SAYG) | (n/a) | (Yes) | (Yes if enabled) |

The three main assessment types in Rogo (with a comparison with Perception in brackets)

In Questionmark Perception, ‘Assessment Type’ is a convenience for setting various parameters of assessment delivery. However, the parameters are set explicitly, are visible to administrators, and are all individually configurable regardless of assessment type. In Rogo, paper type is considerably more important: although it sets very similar parameters to those in Perception, they do not seem to be independently configurable or, crucially, visible to administrators. As a result it is very easy to inadvertently, but radically, change the way in which an assessment is delivered – or, as we found, to be unable to deliver the assessment in the required way at all.

We wanted to be able to deliver a formative paper under exam conditions which would display marks and feedback to students at the end but which would also allow students to restart their assessment if something went wrong before they had finished. We began by setting paper type to ‘Progress Test’ as this gave us the feedback we required but then realised this wouldn’t allow students to restart in the event of a hardware failure. So we tried ‘Summative’ but, despite having ticked the two feedback tick boxes, no feedback appeared. Luckily, since we were only testing the system, we could nip in and alter the offending bit of code (paper/finish.php, line 234) to allow feedback with a summative paper:

$show_feedback = true;

but this wouldn’t be acceptable on a production system.

It seems to me that, in this respect, the QuestionMark Perception model is better – paper type should help by suggesting appropriate settings not by constraining how an assessment can be delivered.

Hurray – Rogo performed brilliantly in its first real test at Oxford

Monday 23rd April saw a total of 73 first year medics, half of each of two sittings, take their voluntary assessment in Organisation of the Body on Rogo while the other half of each sitting used QuestionMark Perception as normal.

After a longer than usual introduction (see below) to explain the differences between this and ‘normal’ online assessments, we started the group in two halves, approximately 10 seconds apart. There was no perceptible delay, despite the fact that both the application and the database are running on a single server that is more than three years old.

This was a great outcome given that we very nearly abandoned this test of Rogo at the last minute because of serious potential problems – one to do with server errors after amending ‘locked’ questions, the other to do with paper ‘types’. Disaster was averted by my colleague Jon Mason, who spotted and corrected both problems just in time.

Extra instructions at the beginning of the assessment:

“You are in the very privileged position today to be the first group of students to try out a new assessment system, Rogo. This works in more or less the same way as the ‘normal’ system, Perception, except that:
1. The assessment will not submit itself at the end – we will ask you to click ‘Finish’ after 45 minutes.
2. Because it doesn’t time itself, I will tell you when there are 5 minutes remaining. There is a clock at the bottom left of your screen – I suggest you make a note of your start time as you would in a normal exam.
3. The questions will be presented on 6 ‘screens’ which you can move between (backwards and forwards) using the controls at the bottom right.
4. When you go back to a screen you have previously visited, unanswered questions will be highlighted in pink.
5. Please make sure you do not click the ‘Finish’ button until you have answered all questions as you will not then be able to return to them.
6. We have appended three questions to the end of the assessment which ask for your thoughts on this new software. You do not have to answer these but we would be grateful for any feedback you can give us – it will help us to decide whether this is a viable alternative to the existing system.”