Excel ‘helpfully’ converting numbers longer than 15 digits to scientific notation when importing from .csv

When you attempt to open or import .csv data into Excel, long numbers are helpfully converted into scientific notation. In most situations, this is fine – it makes it easier to see the number in a cell, and the original number can be seen in full if the cell’s number format is changed.

However, when numbers are longer than 15 digits, Excel silently rounds anything after the 15th digit – a real problem with GUIDs in general and Questionmark Perception Question IDs in particular.

There are a number of potential solutions to this, any of which may work for you:

  1. Change the file’s .csv extension to .txt, or choose Data | Get External Data | From Text, to bring up the Text Import Wizard. In Step 3 of 3, where you set the data format, select the appropriate column (or multiple columns with Shift held down) and set the Column data format to Text. However, this appears to fail with very high numbers of columns (in this case >200), leading me to solution 2:
  2. Upload the .csv file into Google Docs, making sure that ‘Convert documents, presentations, spreadsheets, and drawings to the corresponding Google Docs format’ is ticked. Then export as Excel via File | Download As | Microsoft Excel. Thanks to kpm for this. However, this still seems to randomly convert a few GUIDs into 15-significant-figure scientific-notation numbers.
  3. The solution which worked for me was to open the .csv file in OpenOffice 3 Calc and, as with Excel, in the Text Import dialogue, select all the columns in the Fields section and change their Column type to Text. Unlike Excel, this reliably imported all >200 columns without converting any GUIDs.
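A further workaround, if you need to hand someone a .csv that Excel will open safely by double-clicking, is to pre-process the file so that long numeric fields are written as ="…" formulas, which Excel treats as text. A minimal sketch in JavaScript (it naively splits on commas, so it assumes no quoted fields containing commas):

```javascript
// Wrap any all-digit field longer than 15 characters in ="..."
// so Excel imports it as text rather than rounding it.
function protectLongNumbers(csvLine) {
  return csvLine
    .split(',')
    .map(field => /^\d{16,}$/.test(field) ? `="${field}"` : field)
    .join(',');
}

// Example: a GUID-like 32-digit ID survives untouched.
console.log(protectLongNumbers('q1,12345678901234567890123456789012,correct'));
```

Each line of the file would be passed through this function before writing the protected copy back out.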


Early Thoughts on LTI

We have been investigating using the Learning Tools Interoperability (LTI) tool available through our VLE, WebLearn (Sakai), in order to authenticate users for various services that we provide (or are considering providing), including Rogo, iCases and OxPro.  We initially thought that this would mean we could take user/authentication control completely out of these systems, but unfortunately (in terms of ease, but fortunately in terms of security) it is not so easy.

The LTI specification has been created by the IMS Global Learning Consortium and uses OAuth to sign requests for accessing a tool. The Tool Provider (TP), e.g. Rogo, iCases or OxPro, stores a secret and an ‘oauth_consumer_key’ for each Tool Consumer (TC), e.g. WebLearn. When a tool is launched from the TC through LTI, a signed request is sent to the TP. The TP uses the oauth_consumer_key sent with the signed request to look up the secret, recreate the signature and check that the signatures match. Matching signatures indicate a valid request, and the user is allowed to access the tool. If everything within the tool is available to every user, this is as complicated as it gets. It is also possible to differentiate access rights based on a user’s role, which should (i.e. it is recommended by the LTI specification) be passed through in the LTI launch data. Neither of these scenarios requires any user information or privileges to be stored by the TP.

The situation gets more complicated when a user should only be allowed to access certain resources within the TP. This is the situation with Rogo, for example, as a student who is allowed to access Rogo should not necessarily be allowed to access every paper within Rogo. Therefore, the Rogo team at the University of Nottingham have implemented LTI in such a way that it is necessary for Rogo to contain its own user information. When a user first accesses Rogo through LTI, they must log in separately to Rogo. Rogo then stores a link between the user_id passed through in the LTI launch data (which, in combination with the oauth_consumer_key, uniquely identifies the user) and the user within the Rogo system. Thereafter, that user should never again need to log in to Rogo when coming in through LTI from the same TC.
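Rogo’s linking scheme, as I understand it, amounts to a lookup table keyed on the (oauth_consumer_key, user_id) pair. A hypothetical sketch (an in-memory Map stands in for Rogo’s database):

```javascript
// Map "<consumer_key>:<lti_user_id>" -> local Rogo user id.
const ltiLinks = new Map();

// On launch: return the linked local user, or null to trigger
// the one-off separate login so the link can be created.
function resolveLtiUser(consumerKey, ltiUserId) {
  return ltiLinks.get(`${consumerKey}:${ltiUserId}`) ?? null;
}

// After that first successful login, store the link; subsequent
// launches from the same TC then log the user straight in.
function linkLtiUser(consumerKey, ltiUserId, localUserId) {
  ltiLinks.set(`${consumerKey}:${ltiUserId}`, localUserId);
}
```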

An LTI link to Rogo only provides a user with access to a single paper. In order to define which paper is linked, the first time an administrator/module convenor (as defined in Rogo) accesses the LTI link (which should be when the link is set up in the TC), they choose the paper from the list of papers for which they are an editor. Thereafter, students coming in through this LTI link will only be able to access that paper (and then only if they are allowed to access that paper within the Rogo system, i.e. they are a member of the module to which the paper belongs).

The situation with Rogo causes problems for us, as we want our students to be able to log in using the University of Oxford single sign-on system (which is used to log in to WebLearn), and not to need a separate set of login details for Rogo, even if they would only need them for their first login. Therefore, we are looking into logging users into Rogo directly through the single sign-on system, rather than using LTI.

For iCases, we do not want to have to pre-register all users within the system, as is necessary in Rogo. Unlike in Rogo, we are planning to allow access only through LTI. We will then create a user profile when a user first accesses the system, and this user profile will be linked to the LTI user_id/oauth_consumer_key combination. WebLearn passes through sufficient information (email address, name and, hopefully, single sign-on username) that we can identify a student from the LTI launch data, so we can monitor students’ activity and performance. This information is not required under the LTI specification, but we will make other institutions who use iCases aware that, in order to track students’ activity and performance, their TC will need to provide this information; otherwise they will need to identify users based on the user_id passed through in the launch data.
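Our planned iCases behaviour differs from Rogo’s only in what happens on a first launch: instead of asking for a login, we would create a profile from the launch data. A sketch, assuming the standard LTI 1.x launch field names (lis_person_contact_email_primary and lis_person_name_full), which a TC may or may not send:

```javascript
// Map "<consumer_key>:<user_id>" -> auto-provisioned profile.
const profiles = new Map();

// Create (or fetch) a profile on first launch, using whatever
// personal data the TC chose to send; fields may be absent.
function profileForLaunch(consumerKey, launch) {
  const key = `${consumerKey}:${launch.user_id}`;
  if (!profiles.has(key)) {
    profiles.set(key, {
      email: launch.lis_person_contact_email_primary || null,
      name: launch.lis_person_name_full || null,
    });
  }
  return profiles.get(key);
}
```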

An LTI link to iCases will only allow a student access to a single iCase. If a user wants to access a different scenario, they will have to come in via a different LTI link, with the appropriate resource_link_id. We would prefer not to maintain a list of user/scenario associations, which would enable users to access multiple scenarios through a single link. However, we are still in the early stages of implementing LTI for iCases, and it may be something we choose to do further down the line.


Zoomify – Flash vs HTML5

We use Zoomify as part of our CSlide project, to enable viewing of microscope slides scanned at up to 40x resolution. It works in a similar way to Google Maps, loading small image tiles to make up a larger image, without the need to load the entire image (up to 20,000 megapixels!) in one go. This makes it much quicker to load the section of the image that the user wants to look at.
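The tiling idea can be illustrated with a little arithmetic: at each zoom level the image is divided into fixed-size tiles, and the viewer requests only the tiles intersecting the current viewport. A sketch (the 256-pixel tile size is an assumption; Zoomify’s actual tile naming and sizing scheme differs):

```javascript
const TILE = 256; // assumed tile edge in pixels

// Return the [column, row] indices of the tiles covering a viewport,
// given its top-left corner and size in image pixels.
function tilesForViewport(x, y, width, height) {
  const tiles = [];
  for (let row = Math.floor(y / TILE); row <= Math.floor((y + height - 1) / TILE); row++) {
    for (let col = Math.floor(x / TILE); col <= Math.floor((x + width - 1) / TILE); col++) {
      tiles.push([col, row]);
    }
  }
  return tiles;
}
// A 600x300 viewport at the origin needs only 6 tiles,
// however large the full image is.
```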

Zoomify have just released an HTML5 version of the viewer, in addition to the Flash-based viewer that has been available since Zoomify was first released. The major advantage of this from our point of view is that it enables microscope slides to be viewed on mobile devices, in particular iPads and iPhones, that do not support Flash. The two viewing options provide a similar viewing experience, but there are a few differences, as follows:

  • Mouse scroll wheel zooming – in the Flash viewer it is possible to zoom in and out using the mouse scroll wheel. This does not seem to be possible using the HTML5 viewer, although I am not aware of any reason why this could not be made possible.
  • Full screen view – in the Flash viewer, going to ‘full screen’ view mode results in the viewer taking over the entire screen. In the HTML5 viewer, going to ‘full screen’ view mode results in the viewer taking up the entirety of the browser window. These behaviours are a result of the nature of the viewing options: HTML5 exists entirely within the browser and is therefore constrained by the browser dimensions; Flash on the other hand is, to some extent, ‘outside’ of the browser, and so can occupy the entire screen. My personal opinion is that the HTML5 viewer way of doing this is preferable, as it enables a user to see a large view, but also switch to other programs/browser tabs without needing to close ‘full screen’ view.
  • Annotation Viewing – currently the Flash annotation viewer has a few more capabilities, as described below. However, I do not feel that any of these represent significant advantages for normal use. Screenshots of the two annotation viewers are shown at the end of the post:
    • Display options menu – the Flash annotation viewer has a display options menu, with various options that are not available in the HTML5 annotation viewer, e.g. adding a pointer, or saving the currently shown section of the image to file. However, the most useful of the display options in the Flash annotation viewer, ‘full screen view’, is available on the toolbar in the HTML5 annotation viewer, so full screen mode can be activated in a single click, whereas it requires two clicks in the Flash annotation viewer.
    • Ruler – The Flash annotation viewer has a ruler panel, to give an indication of scale at the current zoom level. This is a potentially handy, but not essential, feature.
    • Point of Interest preview – the Flash annotation viewer shows a preview of each Point of Interest, so a user can see what they are going to be taken to before they choose to go to a point of interest. Again this is handy, but not essential.
  • Annotation Editing – currently, the only annotation editing that can be done using the HTML5 viewer is to add polygons, or change the shape of existing polygons. It is not possible to add points of interest, labels or notes. Therefore the HTML5 viewer falls a long way short in this respect, and, at present, we are not going to enable any editing using the HTML5 version. However, Zoomify are presumably (I am awaiting confirmation of this) planning to extend the annotation editing capabilities in HTML5, to bring them in line with the Flash viewer.
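On the mouse-scroll-wheel point above: wiring wheel zooming into an HTML5 viewer is straightforward in principle. A hypothetical sketch of the core calculation (the viewer’s actual zoom API is unknown to me, so the setViewerZoom/getViewerZoom names in the comment are illustrative):

```javascript
// Map a wheel delta to a multiplicative zoom factor, clamped
// to the viewer's supported range.
function nextZoom(current, deltaY, min = 1, max = 40) {
  const factor = deltaY < 0 ? 1.1 : 1 / 1.1; // wheel up zooms in
  return Math.min(max, Math.max(min, current * factor));
}

// In the page this would be attached roughly as:
//   viewerDiv.addEventListener('wheel', e => {
//     e.preventDefault();
//     setViewerZoom(nextZoom(getViewerZoom(), e.deltaY));
//   });
```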
One advantage of the HTML5 viewer over the Flash viewer, from our perspective at least, is that we are more familiar with HTML/JavaScript/CSS, on which the HTML5 viewer is based, than we are with Flash. This will make it easier for us to modify the HTML5 viewer, if required. Furthermore, since HTML5, CSS and JavaScript are open and universally supported standards, I believe HTML5 is the best long-term solution.
Zoomify HTML5 Annotation Viewer Screenshot
Zoomify Flash Annotation Viewer Screenshot

Accessibility – Rogo vs Perception

When considering and comparing Rogo and Questionmark Perception, one thing we looked at was the accessibility options available in both. As would be expected, both give consideration to accessibility requirements, but the approaches and options vary.

Questionmark Perception (V5)

Rogo

In Rogo, accessibility settings are set for an individual, rather than for an assessment/schedule. Once specified, these settings are used for every assessment that the user sits. The following settings are available:

  • Extra Time (doesn’t actually do anything as assessments are not timed)
  • Font Size
  • Typeface
  • Background Colour
  • Foreground Colour
  • Marks Colour
  • Heading/Theme Colour
  • Labels Colour

Users cannot adjust any settings for themselves within exam delivery, but this is unlikely to be a problem, as accessibility requirements are almost always known about prior to an assessment. Furthermore, the settings can easily be changed ‘on the fly’.

Personalising WebLearn (Sakai) – The BMS Portal page

This year (2011-12), a new course, Biomedical Sciences, started within the Medical Sciences Division. This course combines teaching specific to the course with teaching shared with other courses. In response to this, we wanted to ensure that the students’ experience of the course in WebLearn (Oxford’s Sakai-based VLE) was coherent and personalised, and didn’t require them to search through different parts of WebLearn to find what they needed.

Therefore, we decided to create a portal page that makes it easy for students to access the information – timetables, documents, etc – relevant to them. We wanted the page, and all of the content, to remain in WebLearn, to ensure that managing the content and the users remained straightforward for lecturers and administrators accustomed to using WebLearn.

The Biomedical Sciences Portal Page (click to enlarge)

The resulting portal page, shown above, provided students with a slick, modern-looking page, on which they could see any recent announcements, view their timetable and access documents both relating to their course and from their personal site within WebLearn.

In order to achieve this, it was necessary to create a multi-level structure for the site, with the main site containing a subsite for each year of the course, and each year site containing a subsite for each module.

To dip quickly into the technical aspects, the portal page makes significant use of JavaScript, in particular the jQuery library. Where possible, the content, along with the user’s status and year-group, is gathered using Ajax requests to ‘WebLearn direct’ URLs, which return information, such as a user’s recent announcements, in a computer-friendly format, e.g. JSON. A brief summary of how the different sections of the page are created is given below:

Announcements

WebLearn’s direct methods are used to get a user’s announcements, specifying the number and age to show. These are then presented to the user in an ‘accordion’, where clicking on an announcement title expands further details of that announcement.
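As a flavour of what this looks like, the following sketch turns announcement JSON into accordion markup. The field names (title, body) are illustrative, not WebLearn’s actual response format, and in the real page the array comes from a jQuery Ajax call to a direct URL, with jQuery handling the expand/collapse behaviour:

```javascript
// Build accordion HTML from an array of announcement objects:
// a heading per announcement, followed by a collapsible body div.
function accordionHtml(announcements) {
  return announcements
    .map(a =>
      `<h3 class="announcement-title">${a.title}</h3>` +
      `<div class="announcement-body">${a.body}</div>`)
    .join('');
}
```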

Calendar

The requirement for the calendar was to bring together multiple module calendars into a single view, with a different colour for each module. This was achieved as follows:

  • The calendars for each module reside in the module sites.
  • A Google account is subscribed to the calendar (ICS) feed provided by WebLearn for each module.
  • A Google-calendar view of all the module calendars, with each one assigned a different colour, is embedded into the page.
  • In order to combine the multiple feeds back into a single ICS feed that students could sign up to, e.g. on a smartphone, we used a tool called MashiCal. However, MashiCal requires manual input of the feeds to be ‘mashed’ – this has not been a problem so far, as the students all do the same module in Year 1.
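Our eventual ‘calendar aggregator’ would essentially do what MashiCal does: splice the VEVENT blocks from several ICS feeds into one calendar. A simplified sketch (real ICS handling has more edge cases, e.g. folded lines and per-feed timezone definitions):

```javascript
// Extract the VEVENT blocks from one ICS feed.
function events(ics) {
  return ics.match(/BEGIN:VEVENT[\s\S]*?END:VEVENT/g) || [];
}

// Merge several feeds into a single calendar.
function mergeIcs(feeds) {
  const all = feeds.flatMap(events);
  return ['BEGIN:VCALENDAR', 'VERSION:2.0', ...all, 'END:VCALENDAR'].join('\r\n');
}
```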

Course Docs

Documents and resources are held in the subsites for each year/module, with some general resources in the top-level site. At the time of creating the portal page, there were no direct methods for accessing resources, so a somewhat clunkier method was used. The portal page requests the web view (an HTML page) of the appropriate resources and then uses jQuery to dig down through the folder structure to extract the links to all of the resources and present them in a tree view.
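The ‘dig down through the folder structure’ step boils down to scraping anchor tags out of the web-view HTML. In the page this is done with jQuery selectors; a regex-based sketch of the same idea (fragile, which is exactly why we want to switch to direct methods):

```javascript
// Pull (href, text) pairs for document links out of a web-view page.
function extractLinks(html) {
  const links = [];
  const re = /<a\s[^>]*href="([^"]+)"[^>]*>([^<]*)<\/a>/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    links.push({ href: m[1], text: m[2].trim() });
  }
  return links;
}
```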

My Stuff

This provides a view of everything in a student’s My Workspace resources folder, produced in the same way as the Course Docs. Students can only view their resources from the portal page – they have to actually go to their workspace to upload/edit resources.

Future Developments

  • Access resources for Course Docs and My Stuff using direct methods (now available after a recent upgrade), as the current process of extracting links from HTML pages is slow and error-prone.
  • Extending functionality of My Stuff, in particular enabling drag-and-drop upload of files, so students can quickly upload files from any computer, e.g. results in the lab.
  • Creation of our own ‘calendar aggregator’, to automatically combine ICS feeds for each student based on the modules they are studying.

I am a Grammar Nazi – who knew?

I set about entering some questions in Rogo today which will form the basis of a two part test for school children, testing, from an experimental psychology perspective, their understanding of grammar.

As this was my first foray into creating a paper, I went at it by using the Rogo documentation – and fell at the second hurdle.

I began by opening “Help and Support” and read with interest the Getting Started with Formative Quizzes and Getting Started with Summative Exams pages. Emboldened, I clicked on the link “Making a New Paper” on the latter page – and got a 404 (it does actually work as a link from the Formative Quizzes page, though).

In any event, it was good that I applied to Jon, because it turned out that I needed to create a complete hierarchy of Faculty, School, Course and Module before I could get into the paper. Once that was created, I set to work on the paper itself and that went swimmingly. I was able to choose between Matrix and Dichotomous questions for what I wanted; I decided I wanted the flexibility of using my own labels and went with the Matrix.

This was very straightforward – my main difficulty was that the text boxes for entering options are rather small. That wasn’t too much of a problem as I was cutting and pasting, but I think that I would find it very frustrating if I was inputting text and was only able to see approximately 11 characters at a time.

I was amused to discover a mis-placed apostrophe as I was entering the questions – and in a test on grammar too!

And then, as I checked the test, I noticed the sign off message:
NOTE:
Please complete all questions before clicking ‘Finish’, you will not be able to go back.

We need to get that weak comma changed, I think. I’d like to see a conjunction added so that it reads:

NOTE: Please complete all questions before clicking ‘Finish’, as you will not be able to go back.

Yes, I am being pedantic, I’m afraid. Comes of being an ex-English teacher.

One thing that is clear to me as a user – Rogo is a system that stands on its own merits.  For me, conceptualising in terms of “Element A is similar to Element Y in Perception” is not really valid, and just holds me back. I think I need to explore Rogo as an entity rather than as a comparison if its full power is to be understood!


Getting excited by maths handling in MathAssessEngine

Just been trying out some sample QTIv2.1 questions sent to me by Dr Sue Milne from ELandWeb Ltd which address some issues with maths in online assessment that we have been struggling with for some time:

  • How do we allow students to choose the units in which they answer a numeric question?
  • How do we allow students to answer an algebraic question with an algebraic answer?
  • How can we randomise the values presented whilst testing the same mathematical concept from student to student and from attempt to attempt (to allow students to retest themselves)?
The answer appears to be that what we need is MathAssessEngine from the University of Edinburgh and QTIv2.1.
Answering a question with algebra in MathAssessEngine
We haven’t yet had time to look at this in any great detail, but these two examples demonstrate that there is a whole new world of assessments out there waiting to be explored.
The qtiv21_examples zip file contains two examples:
Example 1 (SineRule-002-mathOpWithSol.xml): Demonstrates a question that ‘understands’ units and the impact they will have on the expected answer.
Example 2 (mela012252.xml): Demonstrates answering a question with algebra – in this case ‘sqrt’. As the participant types, the answer the computer will mark is displayed in MathML.
Both questions randomise the variables shown each time you retake the question. To give them a go yourself, unzip the file, then visit MathAssessEngine, ‘Choose file’ under ‘Upload Assessment Items and Tests’ and click ‘Go’.
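To see what ‘randomise the values while testing the same concept’ means in practice, here is a hypothetical generator for a sine-rule item like Example 1: pick random known values, then derive the expected answer the engine will mark against. (QTI v2.1 expresses this declaratively via template variables and templateProcessing; this sketch is just the underlying arithmetic, with illustrative value ranges.)

```javascript
// Generate one instance of a sine-rule question: given side a and
// angles A and B (in degrees), the expected answer is b = a*sin(B)/sin(A).
function sineRuleInstance(rand = Math.random) {
  const deg = d => (d * Math.PI) / 180;
  const a = 5 + Math.floor(rand() * 10);   // side a: 5..14
  const A = 30 + Math.floor(rand() * 40);  // angle A: 30..69 degrees
  const B = 30 + Math.floor(rand() * 40);  // angle B: 30..69 degrees
  const b = (a * Math.sin(deg(B))) / Math.sin(deg(A));
  return { a, A, B, expected: b };
}
```

Every call produces a fresh set of values, but marking always checks the same relationship, which is exactly what lets students retest themselves.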


The Curious Case of the Disappearing Image

During our first live test of Rogo, we were somewhat alarmed when the students pointed out that one of the questions was missing an image. Although far from ideal, it was thankfully not disastrous, as it was noticed fairly early on and we were able to print out a copy of the image for all the students.

The question is, what happened to this image? The paper was checked thoroughly against the Questionmark Perception version of the paper immediately after it was imported, so it would have been spotted at that point had it not come through in the import. However, I clearly did not check the paper thoroughly enough immediately before the exam, as the image was not there.

So the image was lost from the question between it being imported into Rogo and the paper being taken. To find out when and why, I reimported just this question. The image is there in the QTI XML export from Perception, and was there in the question immediately after import. However, after saving the question from the edit screen, without having made any changes to the question, the image had disappeared. The table cell that should contain the image was left containing just a non-breaking space (&nbsp;).

We presumed there was something in the imported HTML that was causing this to occur. Initially we thought it might be the missing closing tag for the <IMG …> (there was no closing slash at the end of the tag either), but adding one made no difference, and other questions with the same missing closing tag had no problems. On closer inspection of the HTML for the questions, we noticed that there was a span in the question intro of the faulty question that was not there in any others. This span contained some attributes that came from MS Word:

<SPAN style="mso-bidi-font-family: Arial; mso-hansi-font-family: Arial" lang=EN-GB><STRONG>diaphragm <BR></STRONG></SPAN>

Removing the span altogether fixed the problem, so we drilled down in order to identify what aspect of the tags was causing it. We looked into the following:

  • The <br> inside the span? Moving it outside made no difference
  • No quotation marks around EN-GB? Adding them made no difference
  • The lang attribute? Removing it made no difference
  • Non-standard CSS properties in style attribute? Removing the styles fixed the problem

Although it is nice to have found the specific cause, there is no reason for this span to exist at all, so we have removed it completely from the imported version. Hopefully questions containing spans such as this will be few and far between, but since many questions are copied in from Word, care needs to be taken, as it seems that the Rogo text editor, not unusually, does not like them.

For future imports, a quick find/replace through the QTI XML exports when we transfer questions en masse from Perception to Rogo should enable us to remove problematic spans. We will have to do some modification of the XML anyway (e.g. changing the path to images), so it will not be much more work to sort this out as well. I have also blogged about other issues relating to importing questions.
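The find/replace we have in mind can be scripted. A sketch that strips Word-flavoured spans from a QTI XML export while keeping their contents (the presence of an mso- attribute is the heuristic; it handles the simple, non-nested spans we have seen so far):

```javascript
// Remove <SPAN ...mso-...> wrappers but keep what is inside them.
function stripMsoSpans(xml) {
  return xml.replace(/<SPAN[^>]*mso-[^>]*>([\s\S]*?)<\/SPAN>/gi, '$1');
}
```

Spans without Word-specific styles are left untouched, so legitimate formatting survives the clean-up.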

Importing Questions into Rogo

In preparation for running our first live test of Rogo, we needed to get an exam paper that had been created in Questionmark Perception into Rogo. This seemed simple enough: just export the QTI XML from Perception and import it into Rogo. This generally worked reasonably well, but there were a few issues that had to be dealt with:

  • Not surprisingly, images needed special attention. The links to the images in Perception (e.g. “%SERVER.GRAPHICS%topicresources/1065007843”, where the number at the end is the topic id) were replaced (using a blanket find/replace) with “/media(/subdirectorypath)”, and the images copied into the appropriate subdirectory.
  • There were some issues with tables where an image was shown side by side with the answer options, which required a hack in Perception. For some questions there were problems in the HTML in Perception (left over from the dark ages), so fixing the HTML in Perception fixed the import problems relating to this.
  • EMQ feedback – the general question feedback from Perception is given as the feedback for all stems in Rogo, which does not have general feedback for EMQs. This can be fixed by removing the feedback from all except the last question, so that it just appears at the end of the responses. Alternatively, the feedback can be split up to give each stem its own feedback. Either of these options would be laborious for large numbers of questions, so the best long-term alternative may be to alter the import code to deal with this.
  • MCQ feedback – when importing MCQs (including those created in Rogo, exported and imported back in without any modification), in which only general feedback is provided, the general feedback is put in the answer feedback for each answer and general feedback is left empty. This has been added to the Rogo Trac system (#683,  https://suivarro.nottingham.ac.uk/trac/rogo/ticket/683)
  • Correct answers for EMQs – after importing all the correct answers are set to A. This is also the case if a question is created in Rogo, exported from Rogo and imported back in. This issue was added to Trac (#682, https://suivarro.nottingham.ac.uk/trac/rogo/ticket/682) and has now been fixed by the Rogo team, presumably for a future release. For this assessment, fixing this was simply, but annoyingly, a case of going through all the questions and changing the correct answers.
  • Rogo does not have a multiple MCQ question type, i.e. several MCQs that share the same stimulus and so are combined into a single question, so that it looks like an EMQ, but there are different answer options for each stem. Such questions are recognised as Matrix questions by Rogo when they are imported, but this format is not appropriate. There is no way of modifying an EMQ in Rogo so that subsets of the answer options are only available for certain stems. We have a large number of questions of this format, and currently the only option is to separate them out into MCQs, which is not a sustainable solution. Therefore, we may need to create a multiple MCQ question type, which hopefully would not be too difficult using the EMQ question type as the basis.

Having fixed all these problems, it seemed as though we had an exact copy of the assessment, with everything working as it should. The only difference between the questions as far as the students were concerned was that the multiple MCQ question (there was only one of this type in this paper) was split into 5 separate MCQs.

However, while running the assessment, we were to come across the Curious Case of the Disappearing Image, which, it turned out, was caused by a problem related to importing.