On July 14th 2011 a workshop was held at Loughborough University entitled “Meeting The Reading List Challenge”.  42 people attended and, after a couple of presentations on reading lists in the morning, the afternoon was spent in group discussions looking at various aspects of reading list design and implementation.

Each group was asked two questions, and each question was put to two groups.  The questions were:

  1. What makes a perfect reading list? And how can an academic keep it relevant?
  2. Who should be involved in the development of a reading list and what are their roles?
  3. Who do you want to view a reading list and who don’t you want to see it?
  4. How do you get your whole institution engaged with reading lists?
  5. Is there a formula that describes the relationship between reading list content and library stock?
  6. What other systems does a resource/reading list management system need to interact with and why?

You can see the posters made from the results of the discussion online.

After the workshop, Gary, Jason and I sat down and had a think about how some of the ideas that had come out of the discussions could be implemented in LORLS, and whether they were things that we might find useful at Loughborough.  As a result we’ve got a list of some new things to investigate and potentially implement:

  1. Produce a report, emailed to library staff and/or academics, that flags when a new edition of an existing work becomes available.
  2. Report back to academics on the usage that their reading list is getting.  As we don’t ask the students to log into our LORLS installation, this will have to be anonymous usage information, either from the web server logs or from data recorded by the API; a rough log-counting sketch appears after this list.
  3. Look at options for purchasing formulae to assist library staff in placing orders for works.  These formulae would be based on various facets such as the number of reading lists a work is on, how many students are on the corresponding modules, the importance attached to the work by the academic(s), the cost of the work, etc.  We might even factor in some simple machine learning so that past purchasing decisions can help inform the system about the likely outcome of future ones.  A toy scoring sketch follows this list.
  4. Import works from existing bibliographic management tools, especially from RIS/RefWorks format (a minimal RIS parsing sketch appears below the list).
  5. Provide the students with an ability to rate items and/or lists.  This would provide academics with feedback on how useful the students found the works on the reading lists and might also help the purchasing decisions.
  6. Do some work on the back end to get cookies, Shibboleth SSO and JSON(P) supported to provide a more integrated system (JSONP is illustrated in a sketch below the list).
  7. Send suggestion emails to academics when new works that cover similar topics to ones already on their reading lists are added to library stock.
  8. Do some W3C accessibility and mobile web support testing.
  9. Introduce a ‘tickstamp’ data type that is set with the current date/time when someone ticks a check box.  This could then help support workflow for the librarians (i.e. a list of check boxes that have to be ticked off for each list and/or item); a brief sketch of the idea follows the list.
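
For the anonymous usage reporting in item 2, something along these lines could pull per-list view counts out of a standard Apache access log.  This is only a sketch under assumptions we haven’t pinned down yet: the log location and the URL pattern for viewing a list (`/lorls/list/<id>` here) are placeholders rather than anything actually in LORLS.

```python
import re
from collections import Counter

# Hypothetical URL pattern for a reading list view; the real LORLS
# URLs may well differ, so treat this as a placeholder.
LIST_VIEW = re.compile(r'"GET /lorls/list/(\d+)[^"]*" 200 ')

def usage_counts(log_path):
    """Count anonymous views per reading list from an Apache access log."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = LIST_VIEW.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for list_id, views in usage_counts("/var/log/apache2/access.log").most_common():
        print(f"Reading list {list_id}: {views} views")
```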
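
The purchasing formulae in item 3 haven’t been worked out yet, but a first stab might simply combine the facets mentioned above into a single score, with cost pulling against demand.  All the weights and example figures below are made up for illustration; they’re not values we’ve agreed on.

```python
def purchase_score(lists_count, student_count, importance, unit_cost,
                   w_lists=1.0, w_students=0.05, w_importance=2.0, w_cost=0.1):
    """Toy scoring formula: demand-side facets push the score up,
    the cost of the work pulls it down.  All weights are placeholders."""
    demand = (w_lists * lists_count
              + w_students * student_count
              + w_importance * importance)
    return demand - w_cost * unit_cost

# Example: a work on 3 lists, 250 students across those modules,
# flagged as high importance (3) and costing 45 pounds.
score = purchase_score(lists_count=3, student_count=250, importance=3, unit_cost=45)
print(f"Purchase score: {score:.1f}")
```

Past purchasing decisions could then serve as training data for a simple classifier in place of hand-tuned weights, which is where the machine learning idea would come in.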
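
For the RIS/RefWorks import in item 4, records in a RIS file end with the `ER` tag and each field line starts with a two-letter tag followed by `  - `.  The sketch below only collects the tagged fields it sees and skips continuation lines; a real importer would need to map many more tags onto LORLS data elements.

```python
def parse_ris(text):
    """Tiny RIS parser: returns a list of records, each a dict mapping a
    two-letter tag to a list of values.  Only a sketch; it ignores
    continuation lines and the many tags a real RIS file can contain."""
    records, current = [], {}
    for line in text.splitlines():
        if len(line) < 5 or line[2:5] != "  -":
            continue                      # not a "TAG  - value" line
        tag, value = line[:2], line[5:].strip()
        if tag == "ER":                   # end of record
            if current:
                records.append(current)
            current = {}
        else:
            current.setdefault(tag, []).append(value)
    return records

sample = """TY  - BOOK
AU  - Smith, A.
TI  - An Example Work
PY  - 2011
SN  - 9780000000000
ER  -
"""
for record in parse_ris(sample):
    print(record.get("TI"), record.get("AU"), record.get("PY"))
```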
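
Item 6 mentions JSON(P) support.  The LORLS back end isn’t written in Python, so the snippet below is just a language-neutral illustration of what JSONP adds over plain JSON: if the client supplies a callback parameter, the JSON payload is wrapped in a function call so it can be pulled in cross-domain via a script tag.

```python
import json
import re

CALLBACK_OK = re.compile(r'^[A-Za-z_][A-Za-z0-9_.]*$')  # reject suspicious callback names

def render_response(data, callback=None):
    """Return (body, content_type) for a plain JSON or JSONP response.
    Illustrative only; this is not how the LORLS API layer is written."""
    body = json.dumps(data)
    if callback and CALLBACK_OK.match(callback):
        return f"{callback}({body});", "application/javascript"
    return body, "application/json"

# A page on another domain could then load the API with something like:
#   <script src="https://example.org/api/list?id=42&callback=showList"></script>
print(render_response({"list_id": 42, "items": 17}, callback="showList")[0])
```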
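
Finally, the ‘tickstamp’ idea in item 9 is basically a check box that remembers when it was ticked.  A rough sketch of the behaviour, quite separate from how LORLS actually stores its data types:

```python
from datetime import datetime, timezone

class Tickstamp:
    """A check box value that records when it was ticked.
    Sketch only; LORLS data types are not implemented like this."""

    def __init__(self):
        self.ticked_at = None            # None means still unticked

    def tick(self):
        if self.ticked_at is None:       # record the first tick only
            self.ticked_at = datetime.now(timezone.utc)

    def untick(self):
        self.ticked_at = None

    @property
    def ticked(self):
        return self.ticked_at is not None

# Librarian workflow: steps that have to be ticked off for each reading list.
workflow = {"stock checked": Tickstamp(), "orders placed": Tickstamp()}
workflow["stock checked"].tick()
for step, box in workflow.items():
    print(step, "-", box.ticked_at if box.ticked else "not done yet")
```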

We’re not at the stage of attaching time scales to the development of any of these, and indeed we might find that we don’t actually implement all of them.  However, this list does give an idea of where we’re looking to take LORLS now that we have v6 out in production use at Loughborough.