Meeting the Reading List Challenge 2013 announced

The annual Meeting the Reading List Challenge event will be taking place at Loughborough again. The announcement about the free event can be seen below.

Meeting the Reading List Challenge
Keith Green Building, Loughborough University
Thursday 4th April 2013, 10:30am – 3:30pm

This free showcase event will highlight experiences from a number of institutions in their use and development of resource/reading list management systems.  There will be five presentations (further details to follow) throughout the day.  In addition, a buffet lunch will be provided, during which there will be a suppliers’ exhibition.

This is a free event.  If you would like to attend please email Gary Brewerton (g.p.brewerton@lboro.ac.uk) to reserve a place stating your name, institution and any specific dietary requirements.

More details of this year’s event and the results of previous years’ events can be seen on the Meeting the Reading List Challenge website.

LORLS v7 now available

With great pleasure (trumpet fanfare not included) I can now announce the release of LORLS version 7. Key features include:

  • Import of Harvard citations from docx documents
  • Academic dashboard displaying usage data and other key metrics (see image to right)
  • Collapsible sections on reading lists
  • Improved support for diacritics
  • Allow students to like/dislike items on a list
[Image: view of the new academic dashboard]

LORLS v7 is available for download from this website.

Trouble installing? Disable SELinux

We are currently working on a new distribution of LORLS (that’s right, version 7 is coming soon) and, to test the installer’s ability to update an existing installation, we needed a fresh v6 install to test on.  So I dropped into a fresh virtual machine we have dedicated specifically to this kind of activity, downloaded the version 6 installer and ran through the installation, only to find that, while it had created the database and loaded the initial test data just fine, it hadn’t installed any of the system files.

So for the next three hours I was scouring Apache’s logs, checking the usual culprits for these sorts of issues and debugging the code.  One of the first things I did was check the SELinux configuration: it was set to permissive, which means that it doesn’t actually block anything, just warns the user.  This led me to discount SELinux as the cause of the problem.

After three hours of debugging I finally reached the stage of having a test script that would work when run by a user but not when run by Apache.  The moment I had this output I realised that while SELinux may be configured to be permissive, it only picks up this change when the machine is restarted.  So I manually disabled SELinux (as root, use the command ‘echo 0 > /selinux/enforce’) and then tried the installer again.

Needless to say the installer worked fine after this, so if you are installing LORLS and find that it doesn’t install the files, check that SELinux is disabled.

Another new reading list system

This is becoming a bumper year for RLMSs, with the launch of Rebus:list earlier this summer and now Unilibri:

unilibri is launching a new reading list management system, ready for launch in Semester 2 of this academic year. Take a look at our website (www.unilibri.com).

unilibri takes a new approach to the provision of an RLMS. Provided as software as a service hosted in the cloud, it focuses on ensuring academic and student engagement whilst providing the fundamental services every library and institution needs.

unilibri has an exciting roadmap planned that will ensure it is providing the best service to all the key stakeholders of reading lists. unilibri’s aim is to improve students’ educational experience whilst increasing library efficiencies and cost-effectiveness.

Free workshop at Loughborough

For the second year running we’re holding a free workshop at Loughborough to discuss reading list management. The announcement about the event can be seen below.

Meeting the reading list challenge 2012: A workshop
Keith Green Building, Loughborough University
Thursday 16th August 2012, 10:00am – 3:30pm

How do you get stakeholders interested in online reading lists? And how does
a reading list management system relate to other systems?

These issues, and many others, will be discussed at this forthcoming workshop.

The morning session will explore academic engagement with reading lists whilst
the afternoon will focus on how a reading list management system (RLMS) could
interact with other systems. Both sessions will include presentations and group
discussions.

There will be a buffet lunch provided, during which there will be a poster session.
All attendees are encouraged to submit a poster highlighting what they expect
from an RLMS, what their experiences of an RLMS are, or what interesting things
they’ve done with an RLMS.

This is a free event. If you would like to attend please email Gary Brewerton
(g.p.brewerton@lboro.ac.uk) to reserve a place stating your name, institution
and any specific dietary requirements.

A new reading list system added to the list of other systems

Today PTFS Europe announced their new reading list management system – Rebus:list.  So we have added it to our list of other systems (alongside Telstar, Talis Aspire, etc.)

If anyone knows of any reading list management systems not included in our list of other systems, then please let us know and we will add them to the list.

Academic dashboard in beta

One area of discussion at last year’s Meeting the reading list challenge workshop was around the sort of statistics that academics would like about their reading lists.  Since then we have put code in place to log views of reading lists and other bits of information.

Two months ago, having collected ten months of statistics, we decided to put together a dashboard for owners of reading lists.  The current beta version of the dashboard contains the following information:

  • Summary – a quick summary of the list and the type of content on it
  • Views of the reading list – how many people have viewed the list
  • URLs – details of the number of items on the list with a URL, and details of any URLs that appear to be broken (with an option to highlight those items with broken URLs on the reading list)
  • Best rated items – the top user-rated items on the list (again with an option to highlight those items on the reading list)
  • Worst rated items – the lowest user-rated items on the list (yet again with an option to highlight those items on the reading list)
  • Item Composition – a graphical representation of the types of items on the reading list
  • Item Loans – a graph showing the number of items from the reading list loaned out by the Library over the past year.

Big changes under the hood and a couple of minor ones above

A few weeks ago we pushed out an update to our APIs on our live instance of LORLS and this morning we switched over to a new version of our front end (CLUMP).  The changes introduced in this new version are the following:

  • Collapsible sub-headings
  • Improved performance
  • Better support for diacritics

Collapsible sub-headings

Following a suggestion from an academic we have added a new feature for sub-headings.  Clicking on a sub-heading will now collapse all the entries beneath it.  To expand the section out again the user simply needs to click on the sub-heading again.  This will be beneficial to both academics maintaining large lists and students trying to navigate them.

Improved performance

In our ever-present quest to improve the performance, both actual and perceived, we decided to see if using JSON instead of XML would help.  After a bit of experimentation we discovered that using JSON and JSONP would both reduce the quantity of JavaScript code in CLUMP’s routines and significantly improve the performance.

Adjusting Jon’s APIs in the back end (LUMP) to return in either XML, JSON or JSONP format was quite easy once the initial code had been inserted in the LUMP module’s respond routine.  Then it was simply a matter of adding 4 lines of code to each API script.
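We haven’t reproduced the actual LUMP code here, but a minimal Perl sketch of that sort of format dispatch (the parameter names and data layout are assumptions for illustration, not the real API) might look something like this:

    use CGI;
    use JSON;
    use XML::Simple;

    # Hypothetical respond() routine: pick the output format based on a
    # CGI parameter, defaulting to XML for older clients.
    sub respond {
        my ($data) = @_;
        my $cgi    = CGI->new;
        my $format = $cgi->param('format') || 'xml';

        if ($format eq 'json') {
            print $cgi->header('application/json'), to_json($data);
        } elsif ($format eq 'jsonp') {
            # Wrap the JSON in the caller-supplied callback function name.
            my $callback = $cgi->param('callback') || 'callback';
            print $cgi->header('application/javascript'),
                  $callback, '(', to_json($data), ');';
        } else {
            print $cgi->header('text/xml'),
                  XMLout($data, RootName => 'response');
        }
    }

In a sketch like this each API script just builds its result as a Perl data structure and hands it to respond(), which is in keeping with only needing a handful of extra lines per script.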

Switching CLUMP to using JSONP was a lot more time consuming.  Firstly, every call had to have all its XML parsing code removed and then the rest of the code in the routine needed to be altered to use the JavaScript object received from the API.  This resulted in JavaScript that was both nicer to write and read, and in smaller functions.

Secondly, a number of start-up calls had been synchronous, so the JavaScript wouldn’t continue executing until the response from the server had been received and processed.  JSONP calls don’t have a synchronous option.  The solution in the end was to use a callback from the function that processes the JSONP response from the server and, with a clever bit of coding, this actually enabled us to make a number of calls in parallel and continue only after all of them had completed.  Previously the calls were made one after the other, each having to wait for the preceding call to complete before it could start.  While this only saved about half a second on the start-up of CLUMP, it made a big difference to the user’s perception of the system’s performance.

Better support for diacritics

This was actually another beneficial side-effect of switching to JSONP from XML for most of our API calls.  We discovered that in Internet Explorer some UTF-8 diacritic characters in the data would break its XML parser, but because JSONP doesn’t use XML these UTF-8 characters are passed through and displayed fine by the browser.  Of course we do sometimes find legacy entries in a reading list, created many years ago in a previous version of LORLS, which are in the Latin-1 character set rather than UTF-8, but even these don’t break the JavaScript engine (though they don’t necessarily display the character that they should).

 

Item Ratings

One of the things on the LORLS “to do” list from last summer’s Meeting the Reading List Challenge workshop was to have ratings on lists and/or items. Gary and I had a chat about this earlier today and decided that if we’re going to do it, it would probably be better just on items rather than lists. That way students are commenting on how useful they found individual books, articles, etc rather than the academic’s reading list as a whole, so there would probably be more acceptance from academics.

I’ve thus produced two new API calls to support this:

  • GetSURating – get the ratings for particular structural units and/or users. If you give it a “suid” parameter with a comma-separated list of structural unit IDs it will try to find ratings for those. You can also ask for ratings set by one or more users by specifying the “user_id” parameter with a comma-separated list of user IDs (the latter mostly because I thought we might want to allow folk in the future to see which books they’d rated, a bit like LibraryThing does). The script normally returns some XML with the ratings for each matched SUID (good and bad). You can give it a “details” parameter set to ‘Y’, in which case it will just splurge out XML with all the matching records in (including creation/modification times, etc so we could do fancy time-based rating analysis).
  • Editing/EditSURating – create/edit a rating. Needs to have the structural unit ID sent in via the “suid” parameter and the rating itself (either “Good” or “Bad”) via the “rating” parameter. No user_id parameter as it takes the logged-in user as the person to create the rating from. It returns a summary of the current ratings for the structural unit after updating for this user. If you’re not logged in it does nothing.
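As a rough illustration of how these might be called from a script (the base URL below is made up, the real deployment path will differ, and a real client would also need to send its login session credentials for EditSURating to do anything):

    use LWP::UserAgent;

    my $ua   = LWP::UserAgent->new;
    my $base = 'https://lorls.example.ac.uk/API';   # hypothetical base URL

    # Ask for the ratings of a couple of structural units.
    my $response = $ua->get("$base/GetSURating?suid=12345,67890");
    print $response->decoded_content if $response->is_success;

    # Record a "Good" rating for one of the structural units.
    $response = $ua->post("$base/Editing/EditSURating",
                          { suid => 12345, rating => 'Good' });
    print $response->decoded_content if $response->is_success;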

Each user can click on ratings for a particular structural unit as many times as they like, but they’ll only have one active record. That means that you can rate something as “Bad” at first, then re-read it later, decide that you were wrong and it’s actually “Good”, and re-rate it. Your old “Bad” rating is replaced by the “Good” rating.
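Under the hood that “one active record per user per item” rule is just an update-or-insert. A minimal sketch of the idea (the column names are illustrative assumptions; only the user_item_rating table mentioned further down is real) might be:

    use DBI;

    # Replace any existing rating by this user for this structural unit,
    # otherwise create a new one.
    sub set_rating {
        my ($dbh, $user_id, $suid, $rating) = @_;   # $rating is 'Good' or 'Bad'

        my ($existing_id) = $dbh->selectrow_array(
            'SELECT id FROM user_item_rating WHERE user_id = ? AND suid = ?',
            undef, $user_id, $suid);

        if (defined $existing_id) {
            $dbh->do('UPDATE user_item_rating SET rating = ? WHERE id = ?',
                     undef, $rating, $existing_id);
        } else {
            $dbh->do('INSERT INTO user_item_rating (user_id, suid, rating)
                      VALUES (?, ?, ?)',
                     undef, $user_id, $suid, $rating);
        }
    }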

We’ve already been talking about how we can make use of the data once the students start rating items. For example we could have a graph or scatter chart in the academic’s dashboard showing them four quadrants: rarely borrowed items that aren’t liked, rarely borrowed items that are liked, heavily borrowed items that aren’t liked and heavily borrowed items that are liked.  This would provide some feedback to academics on how useful the students found the material on their reading lists, and would also potentially supply some useful information to library staff.  For instance, a very expensive book that the library has put off buying, but which is heavily liked by people who’ve acquired or seen copies elsewhere, might persuade library staff to order a copy.

I’ve got a proof of concept up and running now on our test/dev server.  This shows thumbs up/down on the bibliographic details popup in CLUMP for leaf items (books, journals, etc). As it requires a change to the LUMP database schema (in fact a whole new user_item_rating table), this isn’t going to be a LORLS v6.x thing but instead a LORLS v7 feature.

Oh crumbs, I’ve started working on the next version of LORLS already! 🙂

Extracting Harvard citations from Word documents

Over the years, one question that we’ve had pop up occasionally from academics and library staff is whether we could import reading lists from existing Microsoft Word documents. Many academics have produced course handouts including reading material in Word format and some still do, even though we’ve had a web based reading list system at Loughborough for over a decade now, and a VLE for a roughly similar period.

We’ve always had to say no in the past, because Microsoft Word’s proprietary binary format was very difficult to process (especially on the non-Microsoft platforms we use to host our systems) and we had other, more important development tasks.  Also we thought that extracting the variety of citation/bibliography formats that different academics use would be a nightmare.

However with the new LUMP based LORLS now well bedded in at Loughborough and Microsoft basing the document format of newer versions of Word on XML, we thought we’d revisit the idea and spend a bit of time to see what we could do.

Microsoft Office Word 2007 was introduced as part of the Office 2007 suite using a default file format based on XML, called Office Open XML Format, or OpenXML for short.  A Word 2007 document is really a compressed ZIP archive containing a directory structure populated with a set of XML documents conforming to Microsoft’s published XML schemas, as well as any media files required for the documents (images, movies, etc).  Most academics are now using versions of Microsoft Word that generate files in this format, which can be identified easily by looking for the “.docx” filename extension.

The XML documents inside the zipped .docx archive contain the text of the document, styling information and properties about the document (i.e. who created it and when).  There’s actually quite a lot of structural information stored as well, which Microsoft explains how to process in order to work out how different parts of the document are related to each other.  Some of this is rather complex, but for a simple “proof of concept” what we needed was the actual document text structure.  By default this lives in a file called “word/document.xml” inside the ZIP archive.

The document.xml file contains an XML element called <w:body></w:body> that encapsulates the actual document text.  Individual paragraphs are then themselves held in <w:p></w:p> elements and these are then further broken down based on styling applied, whether there are embedded hyperlinks in the paragraph, etc, etc.  Looking through a few sample reading lists in .docx format gave us a good feel for what sort of structures we’d find.  Processing the .docx OpenXML using Perl would be possible using the Archive::Any module to unpack the ZIP archive and then the XML::Simple module to process the XML data held within into Perl data structures.
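As a rough sketch (not the production code, and with no error handling), pulling the paragraph text out of a .docx file with those two modules might look like this:

    use Archive::Any;
    use XML::Simple;
    use File::Temp qw(tempdir);

    my $docx    = shift @ARGV;                    # path to the .docx file
    my $workdir = tempdir(CLEANUP => 1);

    # A .docx file is just a ZIP archive; unpack it somewhere temporary.
    my $archive = Archive::Any->new($docx);
    $archive->extract($workdir);

    # Parse word/document.xml; XML::Simple leaves the "w:" prefixes on the keys.
    my $doc = XMLin("$workdir/word/document.xml",
                    ForceArray => ['w:p', 'w:r', 'w:t'], KeyAttr => []);

    # Each <w:p> is a paragraph; its text lives in <w:t> elements inside <w:r> runs.
    foreach my $para (@{ $doc->{'w:body'}{'w:p'} || [] }) {
        my $text = '';
        foreach my $run (@{ $para->{'w:r'} || [] }) {
            foreach my $t (@{ $run->{'w:t'} || [] }) {
                # <w:t> is plain text unless it carries attributes, in which
                # case XML::Simple puts the text under the 'content' key.
                $text .= ref $t eq 'HASH' ? ($t->{content} // '') : $t;
            }
        }
        print "$text\n" if $text =~ /\S/;
    }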

The next issue was how do we find the citations held inside the Word documents and turn them into Structural Units in LORLS?  We decided to aim to import Harvard style citations and this is where we hit the first major problem: not everyone seems to agree on what a Harvard style bibliographic reference should look like.  For example some Harvard referencing texts say that author names in books should be capitalised, publication dates should follow in brackets and titles underlined like this:

WILLS, H., (1985), Pillboxes: A Study Of U.K. Defences 1940, Leo Cooper, London.

whereas other sources don’t say anything about author capitalisation or surname/firstname/initial ordering but want the title in italics, and no brackets round the publication date:

Henry Wills, 1985, Pillboxes: A Study Of U.K. Defences 1940, Leo Cooper, London.

When you start to look at real lists of citations from academics it becomes clear that many aren’t even consistent within a single document, which makes things even more tricky.  Some of the differences may be down to simple mistakes, but others may be due to cutting and pasting between other documents with similar, but not quite the same, Harvard citation styles.

The end result of this is that we need to do a lot of pattern matching and also accept that we aren’t going to get a 100% hit rate (at least not straight away!).  Luckily the LORLS back end is written in Perl and that is a language just dripping with pattern matching abilities – especially its powerful regular expression processor. So for our proof of concept we took some representative OpenXML format Word .docx files from one of our academic departments and then used them to refine a set of regular expressions designed to extract Harvard-esque citations from each paragraph, trying to work out what type of work it is (book, article, etc) based on the ordering of parts of the citation and the use of italics and/or underlining.
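For example, a single (much simplified) pattern for the first book citation style shown above might look like the following; the real code uses a whole family of such expressions, and the capture names here are just for illustration:

    # Match something like: WILLS, H., (1985), Title, Publisher, Place.
    my $citation = 'WILLS, H., (1985), Pillboxes: A Study Of U.K. Defences 1940, Leo Cooper, London.';

    if ($citation =~ /^
            (?<authors>[A-Z][A-Za-z'\-]+,\s+(?:[A-Z]\.\s*)+)   # WILLS, H.
            ,?\s*\(?(?<year>\d{4})\)?,\s*                      # (1985),
            (?<title>[^,]+(?:,\s+[^,]+)*?),\s*                 # the title
            (?<publisher>[^,]+),\s*                            # Leo Cooper
            (?<place>[^,.]+)\.?\s*$                            # London.
        /x) {
        printf "Book: %s (%s) \"%s\" -- %s, %s\n",
               $+{authors}, $+{year}, $+{title}, $+{publisher}, $+{place};
    }

Real reading lists need looser variants of this (different bracketing, name ordering, italics in place of underlining and so on), which is where the 80-90% hit rate mentioned below comes from.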

The initial proof of concept was a command line script that was given the name of a .docx document and would then spit out a stream of document text interspersed with extracted citations.  Once we’d got the regular expressions tweaked to the point where our test set of documents was generating 80-90% hit rates, we took this code and turned it into a CGI script that could then be used as an API call to extract citations from an uploaded document and return a list of potential hits in JSONP format.

One thing to note about file uploading to an API is that browser security models don’t necessarily make this terribly easy – a simple HTML FORM file upload will try to refresh the page rather than do an AJAX style client-server transaction.  The trick, as used by a wide variety of web sites now, is to use a hidden IFRAME as the target of the file upload output, or some XHR scripting.  Luckily packages such as jQuery now come with support for this, so we don’t need to do too much heavy lifting to make it work from the JavaScript front end client.  Using JSONP makes this a bit easier as well, as it’s much easier to handle the JSON format in JavaScript than if we returned XML.

The JSONP results returned from our OpenXML processing CGI script provide structures containing the details extracted from each work.  This script does NOT actually generate LUMP Structural Units; instead we just return what type we think each work is (book, etc) and then some extracted citation information relevant to that type of work.

The reasons for not inserting Structural Units immediately are threefold.  Firstly, because we can’t be sure we’ve got the pattern matching working 100%, it makes sense to allow the user to see what we think the matched citations are and let them select the ones they want to import into LORLS.  Secondly, we already have API calls to allow Structural Units to be created and edited, so we probably shouldn’t reinvent the wheel here – the client already knows how to talk to those to make new works appear in reading lists.  Lastly, by not actually dealing with LUMP Structural Units, we’ve got a more general purpose CGI script – it could be used in systems that have nothing to do with LORLS or LUMP, for example.

So that’s the current state of play.  We’re gathering together more Word documents from academics with Harvard style bibliographies in them so that we can test our regular expressions against them, and Jason will be looking at integrating our test Javascript .docx uploader into CLUMP.  Hopefully the result will be something the academics and library staff will be happy to see at last!

 
