Other Systems

The middle of the end

So today the University’s new reading list system (https://lboro.rl.talis.com) went live and we set redirects from LORLS to point to it. We’ll still be running the LORLS server for a couple more months for the benefit of library staff, but then it will be decommissioned.

The data migration exercise seemed to go very well, although it was interesting to note that, despite the two systems having very similar internal data structures, there were issues around our allowance for “lists within lists” and how the two systems treat items at draft status. Luckily these issues were easily resolved with a little effort.

Integrating LORLS with Koha

At Loughborough we use Ex Libris’s Aleph as our Library Management System (LMS). Our LORLS reading list system also makes use of Aleph in a number of ways:

  1. To provide bibliographic information via Z39.50, allowing people to quickly add works to reading lists by simply entering the ISBN,
  2. In batch jobs to check which works the library holds, again using Z39.50,
  3. To find how many items for a work are held and how many are available for loan which is displayed on a work’s detail pop up in CLUMP.  This has been done using the Ex Libris X Server product (not to be confused with X11 servers!),
  4. To build up lists of current and past loans, used for purchase prediction, high demand reporting and usage tracking.  This is mostly done using custom local Perl scripts that interrogate Aleph’s Oracle database directly and then provide an XML API that LORLS code calls.

We’ve recently had another site say they are interested in using LORLS, but they use the Koha LMS.  They asked us what would be needed to allow LORLS to integrate with Koha?  Luckily Koha is open source, so we could just download a copy and install it on a new virtual machine to have a play with and see what was needed.

Looking up bibliographic information was pretty simple to support.  Koha has a built in Z39.50 server, so all we needed to do was tweak the LUMP.pm file on our dev server so that Z3950Hostname(), Z3950Port() and Z3950DBName() pointed at our new VM, the port that Koha’s Z39.50 server is running on and the new Z39.50 database name (which appears to default to “biblios”).  That seemed to work a treat.
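
For illustration, the LUMP.pm end of that change just needs the three accessors to return the Koha details.  A minimal sketch, assuming the stock LORLS accessor-style subs (the hostname and port here are made up):

sub Z3950Hostname { return 'koha-vm.example.ac.uk'; }  # our Koha VM (hypothetical name)
sub Z3950Port     { return 2100; }                     # whatever port Koha's Z39.50 server listens on
sub Z3950DBName   { return 'biblios'; }                # Koha's default Z39.50 database name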

Getting item holdings for works was a bit more involved.  Koha obviously doesn’t have Ex Libris’s X Server, so we needed another way to get similar data.  Luckily Koha does implement some of the Digital Library Federation Integrated Library System – Discovery Interface (ILS-DI) recommendations.  One of these Application Programming Interfaces (APIs) is called GetRecords() and, given a set of system record identifiers, will return an XML document with just the sort of information we need (eg for each item linked to a work we get things like item type, whether it can be loaned, what its due date is if it is on loan, if it is damaged, lost, etc).
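
For the curious, Koha exposes the ILS-DI services via its ilsdi.pl CGI script (once they are switched on in the system preferences), so a GetRecords() call is just an HTTP GET.  A rough sketch (the hostname and biblionumbers are made up):

#!/usr/bin/perl
use strict;
use LWP::UserAgent;

# Ask Koha's ILS-DI interface for the item details of a set of
# system record identifiers (biblionumbers), '+' separated.
my @ids = (1234, 5678);   # hypothetical biblionumbers
my $url = 'http://koha-vm.example.ac.uk/cgi-bin/koha/ilsdi.pl'
        . '?service=GetRecords&id=' . join('+', @ids);

my $ua = LWP::UserAgent->new;
my $response = $ua->get($url);
die 'GetRecords failed: ' . $response->status_line
    unless $response->is_success;

my $xml = $response->decoded_content;   # XML document describing each item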

Unfortunately LORLS doesn’t know anything about Koha’s system record identifiers, and the GetRecords() ILS-DI API doesn’t appear to allow searches based on control numbers such as ISBNs.  To get the system record numbers we can however fall back on Z39.50 again, which Koha uses to implement the ILS-DI SRU type interfaces.  Searching for a Bib-1 attribute of type 1007 with the value set to be the ISBN gets us some nice USMARC records to parse.  We need to look at the 999 $c field in the MARC record as this appears to be the Koha system record identifier.
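
A rough sketch of that lookup using the ZOOM Perl binding and MARC::Record (again, the hostname and port are made up):

#!/usr/bin/perl
use strict;
use ZOOM;
use MARC::Record;

my $isbn = '9781565928282';
my $conn = ZOOM::Connection->new('koha-vm.example.ac.uk', 2100,
                                 databaseName => 'biblios');
$conn->option(preferredRecordSyntax => 'usmarc');

# Bib-1 use attribute 1007 (standard identifier) search on the ISBN.
my $rs = $conn->search_pq('@attr 1=1007 ' . $isbn);

my @biblionumbers;
for my $i (0 .. $rs->size() - 1) {
    my $marc = MARC::Record->new_from_usmarc($rs->record($i)->raw());
    # Koha keeps its system record identifier in 999 $c.
    push @biblionumbers, $marc->subfield('999', 'c');
}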

Just to make things interesting we discovered accidentally that in Koha you can end up with more than one work for the same ISBN (and each work can then have multiple physical items).  I guess this is a flexibility feature in some way, but it means that we need to make sure that we get all the system record identifiers that match our ISBN from LORLS and then pass all of these to the Koha GetRecords() API.  Luckily the API call can take a whole set of system record identifiers in one go, so this isn’t too much of a problem.

One thing we do have to have in the code though is some way of distinguishing between loan categories (long loan, short loan, week loan, reference, etc).  In Koha you can create an arbitrary number of item types which can correspond to loan categories, to which you can then assign things like differing loan rules. We slipped in:

  • BK – normal long loan book (the default in Koha it seems)
  • REFBOOK – a book that can’t be loaned
  • SL BOOK – a short loan book (usually loaned for less than a day – our “high demand” stock)
  • WL BOOK – a book that can be loaned for a week (effectively moderately in demand works)

Our code currently then has these hard coded in order to return the same sort of holdings Perl structure that Aleph did, as sketched below.  Extra item types assigned in Koha will need to be inserted into this code – we might have to think of a “nice” way of doing this if folk make lots of these changes on a regular basis, but I suspect item types are one of those things that are configured when an LMS is set up and rarely, if ever, tweaked again.
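
In other words, the hard coding amounts to little more than a lookup table along these lines (the LORLS-side category names are purely illustrative):

# Map Koha item types onto the loan categories LORLS expects.
my %loan_category = (
    'BK'      => 'Long Loan',    # Koha's default book item type
    'REFBOOK' => 'Reference',    # can't be loaned
    'SL BOOK' => 'Short Loan',   # high demand stock
    'WL BOOK' => 'Week Loan',    # moderately in demand works
);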

In the StructuralUnit.pm Perl module I created a new method called Koha_ILS_DI_Holdings() to implement the new Koha item holdings and availability code.  The existing Holdings() method was renamed to Aleph_Holdings() and a new Holdings() method implemented that checks a new Holdings() method in LUMP.pm for the name of a holdings retrieval algorithm to use (currently either “Koha:ILS-DI”, which selects the new code, or anything else, which defaults back to the old Aleph_Holdings() method).  This means that if someone else comes along with XYZ Corp LMS that uses some other whacky way of retrieving holdings availability we can simply write a new method in StructuralUnit.pm and add another if clause to the Holdings() method to allow a quick LUMP.pm change to select it.  The advantage of this is that other code in LORLS that uses the Holdings() method from StructuralUnit.pm doesn’t have to be touched – it is insulated from the messy details of which implementation is in use (ooh, object oriented programming at work!).
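
In outline the dispatch looks something like this (a sketch only – the way the algorithm name is fetched from LUMP.pm is an assumption here):

sub Holdings {
    my $self = shift;

    # LUMP.pm's Holdings() names the retrieval algorithm for this site.
    if (LUMP::Holdings() eq 'Koha:ILS-DI') {
        return $self->Koha_ILS_DI_Holdings(@_);
    }

    # Anything else falls back to the original Aleph X Server code.
    return $self->Aleph_Holdings(@_);
}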

This appears to work on our test set up and it means we can just ship a new StructuralUnit.pm and LUMP.pm file to the folk with Koha and see how they get on with it.  Our next trick will be getting loan history information out of Koha – this may take a bit more work as it’s really replacing functionality that we’ve mostly implemented solely for Loughborough (using our custom scripts, as even Aleph didn’t provide a usable API for what we needed).  It doesn’t appear at first glance that Koha’s ILS-DI APIs cover this use case – I guess we’re a bit odd in being interested in work loan histories!

Prototyping Goodreads integration

Part of my day job is developing and gluing together library systems. This week I’ve been making a start on doing some of this “gluing” by prototyping some code that will hopefully link our LORLS reading list management system with the Goodreads social book reading site. Now most of our LORLS code is written in either Perl or JavaScript; I tend to write the back end Perl stuff that talks to our databases and my partner in crime Jason Cooper writes the delightful, user friendly front ends in JavaScript. This means that I needed to find a way for a Perl CGI script to take some ISBNs and then use them to populate a shelf in Goodreads. The first prototype doesn’t have to look pretty – indeed my code may well end up being a LORLS API call that does the heavy lifting for some nice pretty JavaScript that Jason is far better at producing than I am!

Luckily, Goodreads has a really well thought out API, so I lunged straight in. They use OAuth 1.0 to authenticate requests to some of the API calls (mostly the ones concerned with updating data, which is exactly what I was up to) so I started looking for a Perl OAuth 1.0 module on CPAN. There’s some choice out there! OAuth 1.0 has been round the block for a while so it appears that multiple authors have had a go at making supporting libraries with varying amounts of success and complexity.

So in the spirit of being super helpful, I thought I’d share with you the prototype code that I knocked up today. It’s far, far, far from production ready and there are probably loads of security holes that you’ll need to plug. However it does demonstrate how to do OAuth 1.0 using the Net::OAuth::Simple Perl module and how to do both GET and POST style (view and update) Goodreads API calls. It’s also a great way for me to remember what the heck I did when I next need to use OAuth calls!

First off we have a new Perl module I called Goodreads.pm. It’s a subclass of the Net::OAuth::Simple module that sets things up to talk to Goodreads and provides a few convenience functions. It’s obviously massively stolen from the example in the Net::OAuth::Simple perldoc that comes with the module.

#!/usr/bin/perl

package Goodreads;

use strict;
use Carp;                          # for croak() in make_restricted_request()
use base qw(Net::OAuth::Simple);

# Constructor: hand our tokens through to Net::OAuth::Simple along
# with the Goodreads OAuth 1.0 endpoint URLs.
sub new {
    my $class = shift;
    my %tokens = @_;

    return $class->SUPER::new( tokens => \%tokens,
                               protocol_version => '1.0',
                               return_undef_on_error => 1,
                               urls => {
                                 authorization_url => 'http://www.goodreads.com/oauth/authorize',
                                 request_token_url => 'http://www.goodreads.com/oauth/request_token',
                                 access_token_url => 'http://www.goodreads.com/oauth/access_token',
                               });
}

sub view_restricted_resource {
    my $self = shift;
    my $url = shift;
    return $self->make_restricted_request($url, 'GET');
}

sub update_restricted_resource {
    my $self = shift;
    my $url = shift;
    my %extra_params = @_;
    return $self->make_restricted_request($url, 'POST', %extra_params);
}

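# Build and sign an OAuth 1.0 protected resource request, then send it
# and hand the raw HTTP::Response back to the caller.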
sub make_restricted_request {
    my $self = shift;
    croak $Net::OAuth::Simple::UNAUTHORIZED unless $self->authorized;

    my( $url, $method, %extras ) = @_;

    my $uri = URI->new( $url );
    my %query = $uri->query_form;
    $uri->query_form( {} );

    $method = lc $method;

    my $content_body = delete $extras{ContentBody};
    my $content_type = delete $extras{ContentType};

    my $request = Net::OAuth::ProtectedResourceRequest->new(
        consumer_key => $self->consumer_key,
        consumer_secret => $self->consumer_secret,
        request_url => $uri,
        request_method => uc( $method ),
        signature_method => $self->signature_method,
        protocol_version => $self->oauth_1_0a ?
            Net::OAuth::PROTOCOL_VERSION_1_0A :
            Net::OAuth::PROTOCOL_VERSION_1_0,
        timestamp => time,
        nonce => $self->_nonce,
        token => $self->access_token,
        token_secret => $self->access_token_secret,
        extra_params => { %query, %extras },
    );
    $request->sign;
    die "COULDN'T VERIFY! Check OAuth parameters.\n"
        unless $request->verify;

    my $request_url = URI->new( $url );

    my $req = HTTP::Request->new(uc($method) => $request_url);
    $req->header('Authorization' => $request->to_authorization_header);
    if ($content_body) {
        $req->content_type($content_type);
        $req->content_length(length $content_body);
        $req->content($content_body);
    }

    my $response = $self->{browser}->request($req);
    return $response;
}
1;

Next we have the actual CGI script that makes use of this module. This shows how to call the Goodreads.pm module (and thus Net::OAuth::Simple) and then make the Goodreads API calls:

#!/usr/bin/perl

use strict;
use CGI;
use CGI::Cookie;
use Goodreads;
use XML::Mini::Document;
use Data::Dumper;

my %tokens;
$tokens{'consumer_key'} = 'YOUR_CONSUMER_KEY_GOES_IN_HERE';
$tokens{'consumer_secret'} = 'YOUR_CONSUMER_SECRET_GOES_IN_HERE';

my $q = CGI->new;
my %cookies = CGI::Cookie->fetch;

if($cookies{'at'}) {
    $tokens{'access_token'} = $cookies{'at'}->value;
}
if($cookies{'ats'}) {
    $tokens{'access_token_secret'} = $cookies{'ats'}->value;
}
if($q->param('isbns')) {
    $cookies{'isbns'} = $q->param('isbns');
}
my $oauth_token = undef;
if(defined $q->param('authorize') &&
   $q->param('authorize') == 1 && $q->param('oauth_token')) {
    # Back from Goodreads with an authorized request token.
    $oauth_token = $q->param('oauth_token');
} elsif(defined $q->param('authorize') && !$q->param('authorize')) {
    # The user refused to authorize us.
    print $q->header,
          $q->start_html,
          $q->h1('Not authorized to use Goodreads'),
          $q->p('This user does not allow us to use Goodreads'),
          $q->end_html;
    exit;
}

my $app = Goodreads->new(%tokens);

unless ($app->consumer_key && $app->consumer_secret) {
    die "You must go get a consumer key and secret from Goodreads\n";
}

if ($oauth_token) {
    if(!$app->authorized) {
        GetOAuthAccessTokens();
    }
    StartInjection();
} else {
    my $url = $app->get_authorization_url(callback => 'https://example.com/cgi-bin/goodreads-inject');
    my @cookies;
    foreach my $name (qw(request_token request_token_secret)) {
        my $cookie = $q->cookie(-name => $name, -value => $app->$name);
        push @cookies, $cookie;
    }
    push @cookies, $q->cookie(-name => 'isbns',
                              -value => $cookies{'isbns'} || '');
    print $q->header(-cookie => @cookies,
                     -status=>'302 Moved',
                     -location=>$url,
                     );
}

exit;

sub GetOAuthAccessTokens {
    foreach my $name (qw(request_token request_token_secret)) {
        my $value = $q->cookie($name);
        $app->$name($value);
    }
    ($tokens{'access_token'},
     $tokens{'access_token_secret'}) =
        $app->request_access_token(
            callback => 'https://example.com/cgi-bin/goodreads-inject',
            );
}

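# Called once we hold access tokens: stash them in cookies, then look up
# the user's Goodreads ID, find (or create) their wishlist shelf and add
# the requested ISBNs to it.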
sub StartInjection {
    my $at_cookie = CGI::Cookie->new(-name => 'at',
                                     -value => $tokens{'access_token'});
    my $ats_cookie = CGI::Cookie->new(-name => 'ats',
                                      -value => $tokens{'access_token_secret'});
    my $isbns_cookie = CGI::Cookie->new(-name => 'isbns',
                                        -value => '');
    print $q->header(-cookie=>[$at_cookie,$ats_cookie,$isbns_cookie]);
    print $q->start_html;

    my $user_id = GetUserId();
    if($user_id) {
        my $shelf_id = LoughboroughShelf(user_id => $user_id);
        if($shelf_id) {
            my $isbns = $cookies{'isbns'}->value;
            print $q->p("Got ISBNs list of $isbns");
            AddBooksToShelf(shelf_id => $shelf_id,
                            isbns => $isbns,
                           );
        }
    }

    print $q->end_html;
}

sub GetUserId {
    my $user_id = 0;
    my $response = $app->view_restricted_resource(
                       'https://www.goodreads.com/api/auth_user'
                   );
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $user_xml = $xml->toHash();
        $user_id = $user_xml->{'GoodreadsResponse'}->{'user'}->{'id'};
    }
    return $user_id;
}

sub LoughboroughShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = 0;
    my $user_id = $params->{'user_id'} || return $shelf_id;

    my $response = $app->view_restricted_resource('https://www.goodreads.com/shelf/list.xml?key=' .  $tokens{'consumer_key'} . '&user_id=' . $user_id);
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $shelf_xml = $xml->toHash();
        foreach my $this_shelf (@{$shelf_xml->{'GoodreadsResponse'}->{'shelves'}->{'user_shelf'}}) {
            if($this_shelf->{'name'} eq 'loughborough-wishlist') {
                $shelf_id = $this_shelf->{'id'}->{'-content'};
                last;
            }
        }
        if(!$shelf_id) {
            $shelf_id = MakeLoughboroughShelf(user_id => $user_id);
        }
    }
    print $q->p("Returning shelf id of $shelf_id");
    return $shelf_id;
}

sub MakeLoughboroughShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = 0;
    my $user_id = $params->{'user_id'} || return $shelf_id;

    my $response = $app->update_restricted_resource('https://www.goodreads.com/user_shelves.xml?user_shelf[name]=loughborough-wishlist');
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $shelf_xml = $xml->toHash();
        $shelf_id = $shelf_xml->{'user_shelf'}->{'id'}->{'-content'};
        print $q->p("Shelf hash: ".Dumper($shelf_xml));
    }
    return $shelf_id;
}

sub AddBooksToShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = $params->{'shelf_id'} || return;
    my $isbns = $params->{'isbns'} || return;
    foreach my $isbn (split(',',$isbns)) {
        my $response = $app->view_restricted_resource('https://www.goodreads.com/book/isbn_to_id?key=' . $tokens{'consumer_key'} . '&isbn=' . $isbn);
        if($response->content) {
            my $book_id = $response->content;
            print $q->p("Adding book ID for ISBN $isbn is $book_id");
            $response = $app->update_restricted_resource('http://www.goodreads.com/shelf/add_to_shelf.xml?name=loughborough-wishlist&book_id='.$book_id);
        }
    }
}

You’ll obviously need to get a developer consumer key and secret from the Goodreads site and pop them into the variables at the start of the script (no, I’m not sharing mine with you!). The real work is done by the StartInjection() subroutine and the subordinate subroutines that it then calls once the OAuth process has been completed. By this point we’ve got an access token and its associated secret so we can act as whichever user has allowed us to connect to Goodreads as them. The code will find this user’s Goodreads ID, see if they have a bookshelf called “loughborough-wishlist” (and create it if they don’t) and then add any books that Goodreads knows about with the given ISBN(s). You’d call this CGI script with a URL something like:

https://example.com/cgi-bin/goodreads-inject?isbns=9781565928282

Anyway, there’s a “works for me” simple example of talking to Goodreads from Perl using OAuth 1.0. There’s plenty of development work left in turning this into production level code (it needs to be made more secure for a start off, and the access tokens and secret could be cached in a file or database for reuse in subsequent sessions) but I hope some folk find this useful.

Reading lists and high demand items

Having a reading list management system has a number of advantages, including providing the Library with better management information to help manage its stock.  At Loughborough we’ve been working on a couple of scripts that tie LORLS into our Aleph library management system.  These are pretty “bespoke” and probably won’t form part of the normal LORLS distribution as they rely quite a bit on the local “information landscape” here, but it might be useful to describe how they could help.

A good example is our new high demand items report.  This script (or rather interlinked set of scripts as it runs on several machines to gather data from more than one system) tries to provide library staff with a report of items that have recently been in high demand, and potentially provide hints to both them and academics about reading lists that might benefit from these works being added.

The first part of the high demand system goes off to our Aleph server to peer into both requests for items and loan information.  We want to look for works that have more than a given number of hold requests placed on them in a set time period and/or more than a threshold number of simultaneous loans made.  These parameters are tunable, and the main high demand script calls out via an XML API to another custom Perl script that lives on the Aleph server and does the actual Oracle SQL jiggery-pokery.  This is actually quite slow – it’s potentially trawling through a lot of data, especially if the query covers a long period of time (we tended to limit queries to less than a week, otherwise the whole process can take many hours to run).

This script returns a list of “interesting” items that match our high demand criteria and the users that have requested and/or borrowed them.  The main script can then use this user information to look up module membership.  It does this using LDAP queries to our Active Directory servers, as every module appears as a group in the AD with users being group members.  We can then look for groups of users interested in a particular item that share module membership.  If we get more than a given (again tunable) number we check whether this work is already mentioned on the reading list for the module and, if not, we can optionally make a note of the work’s ISBN, authors, title, etc in a new table.  This can then be used in the CLUMP academic dashboard to show suggestions about potentially useful extra works that might be suitable for the reading list.
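
The module membership end of this is bog standard LDAP.  A cut down sketch using Net::LDAP (the server name, bind DN and directory layout are all made up here):

#!/usr/bin/perl
use strict;
use Net::LDAP;

my $user_dn = 'CN=abc123,OU=Users,DC=example,DC=ac,DC=uk';  # hypothetical borrower

my $ldap = Net::LDAP->new('ad.example.ac.uk') or die "$@";
$ldap->bind('CN=lorls-lookup,OU=Service,DC=example,DC=ac,DC=uk',
            password => 'secret');

# Every module appears as a group in the AD, so ask which module
# groups this user is a member of.
my $result = $ldap->search(
    base   => 'OU=Modules,DC=example,DC=ac,DC=uk',
    filter => "(&(objectClass=group)(member=$user_dn))",
    attrs  => [ 'cn' ],
);
my @modules = map { $_->get_value('cn') } $result->entries;
$ldap->unbind;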

The main script then marshals the retrieved data and generates an HTML table out of it, one row for each work.  We include details of ISBN, author, title, shelfmark, number of active reservations, maximum concurrent loans, the different types of loan, an average price estimate (in case extra copies may be required) and what reading lists (if any) the work is already on.  We can also optionally show the librarians a column for the modules that users trying to use this work are studying, as well as two blank columns for librarians to make their own notes in (when printed out – some still like using up dead trees!).

The main script can either be run as a web based CGI script or from the command line.  We mostly use the latter (via cron) to provide emailed reports, as the interactive CGI script can take too long to run, as mentioned earlier.  It does demonstrate how reading lists, library management systems and other University registration systems can work together to provide library staff with new views on how stock is being used and by whom.

Another new reading list system

This is becoming a bumper year for RLMSs, with the launch of Rebus:list earlier this summer and now Unilibri:

unilibri is launching a new reading list management system, ready for launch in Semester 2 of this academic year. Take a look at our website (www.unilibri.com).

unilibri takes a new approach to the provision of the RLMS. Provided as software as a service hosted in the cloud, it focuses on ensuring academic and student engagement whilst providing the fundamental services every library and institution needs.

unilibri has an exciting roadmap planned that will ensure it is providing the best service to all the key stakeholders of reading lists. unilibri’s aim is to improve students’ educational experience whilst increasing library efficiencies and cost-effectiveness.

Free workshop at Loughborough

For the second year running we’re holding a free workshop at Loughborough to discuss reading list management. The announcement about the event can be seen below.

Meeting the reading list challenge 2012: A workshop
Keith Green Building, Loughborough University
Thursday 16th August 2012, 10:00am – 3:30pm

How do you get stakeholders interested in online reading lists? And how does
a reading list management system relate to other systems?

These issues, and many others, will be discussed at this forthcoming workshop.

The morning session will explore academic engagement with reading lists whilst
the afternoon will focus on how a reading list management system (RLMS) could
interact with other systems. Both sessions will include presentations and group
discussions.

There will be a buffet lunch provided during which there will be a poster session.
All attendees are encouraged to submit a poster highlighting what they expect
from an RLMS, what their experiences of an RLMS are, or what interesting things
they’ve done with an RLMS.

This is a free event. If you would like to attend please email Gary Brewerton
(g.p.brewerton@lboro.ac.uk) to reserve a place stating your name, institution
and any specific dietary requirements.

A new reading list system added to the list of other systems

Today PTFS Europe announced their new reading list management system – Rebus:list.  So we have added it to our list of other systems (alongside Telstar, Talis Aspire, etc.)

If anyone knows of any reading list management systems not included in our list of other systems, then please let us know and we will add them to the list.

Open source seminar

Jason and I attended an open source seminar from PTFS Europe last week. Conveniently for us the seminar was being hosted by the Department of Information Science at Loughborough University so all we had to do was depart the office, go up a single flight of stairs and we were there.

There were presentations from PTFS about open source in general as well as specific case studies about implementing Koha at Halton and Staffordshire University. The afternoon included a session about other products which PTFS are hoping to support including VuFind, Cufts, Godot and reading lists!

The original publicity for the event indicated they were considering adopting List8D, but this isn’t the case as they’re developing their own system; after all, in their own words, “developing a reading list system isn’t exactly rocket science!” The system will be a hosted solution and so would be a direct competitor to Talis Aspire.

We look forward to seeing how their reading list system develops over the coming months.

Meeting the Reading List Challenge workshop

On July 14th 2011 a workshop was held at Loughborough University entitled “Meeting The Reading List Challenge”.  42 people attended and, after a couple of presentations on reading lists in the morning, the afternoon was spent in group discussions looking at various aspects of reading list design and implementation.

The groups were each asked two questions, and each question was asked of two groups.  The questions were:

  1. What makes a perfect reading list? And how can an academic keep it relevant?
  2. Who should be involved in the development of a reading list and what are their roles?
  3. Who do you want to view a reading list and who don’t you want to see it?
  4. How do you get your whole institution engaged with reading lists?
  5. Is there a formula that describes the relationship between reading list content and library stock?
  6. What other systems does a resource/reading list management system need to interact with, and why?

You can see the posters made from the results of the discussion online.

After the workshop, Gary, Jason and myself sat down and had a think about how some of the things that had come out of the discussions could be implemented in LORLS, and if they were things that we might find useful at Loughborough.  As a result we’ve got a list of some new things to investigate and potentially implement:

  1. Produce a report that is emailed to library staff and/or academics that flags when a new edition of an existing work is available.
  2. Report back to academics on the usage that their reading list is getting.  As we don’t ask the students to log into our LORLS installation, this will have to be anonymous usage information, either from the webserver or from data recorded by the API.
  3. Look at options for purchasing formulae to assist library staff in placing orders for works.  These formulae would be based on various facets such as the number of reading lists a work is on, how many students are on the corresponding modules, the importance attached to the work by the academic(s), the cost of the work, etc.  We might even factor in some simple machine learning so that past purchasing decisions can help inform the system about the likely outcome of future ones.
  4. Importing works from existing bibliographic management tools, especially from RIS/Refworks format.
  5. Provide the students with an ability to rate items and/or lists.  This would provide academics with feedback on how useful the students found the works on the reading lists and might also help the purchasing decisions.
  6. Do some work on the back end to get cookies, Shibboleth SSO and JSON(P) supported to provide a more integrated system.
  7. Sending suggestion emails to academics when new works are added to library stock that cover similar topics as ones already on their reading lists.
  8. Do some W3C accessibility and mobile web support testing.
  9. Introduce a ‘tickstamp’ data type that is set with the current date/time when someone ticks a check box.  This could then help support workflow for the librarians (ie a list of check boxes that have to be ticked off for each list and/or item).

We’re not at the stage of attaching time scales to the development of any of these, and indeed we might find that we don’t actually implement all of them.  However this list does give an idea of where we’re looking to take LORLS now that we have v6 out in production use at Loughborough.

Momentous events

Well, OK maybe they’re not that momentous but…

A couple of months ago we (Jason and I) met up with Ian Corns of Talis Aspire fame and had a bit of a catch-up session. Much has changed at Talis: their Library Management System division has been sold off, what was Talis Aspire is now called Talis Aspire Campus Edition, they are launching talisaspire.com and Ian has a new job title (which is no laughing matter :-)).

We also bemoaned the lack of any reading list events happening this Summer. So in light of that we were particularly pleased when the Department of Information Science at Loughborough University (i.e. them upstairs) decided to host a workshop on “Meeting the reading list challenge”, especially as Ian and I will be giving a presentation on reading list systems at the event.

As we’re now involved in helping to organise the event I thought advertising it here might be a good idea!

Meeting the reading list challenge: A workshop
Department of Information Science, Loughborough University
Thursday 14th July 2011, 10:30am – 3:00pm

Do you know what resources your academics are recommending to students? How easy do your students find it to locate these key resources?

These issues (and many others) will be discussed at this forthcoming workshop.

Your host for the day will be Dr Ann O’Brien from the Department of Information Science, Loughborough University. The morning session will consist of presentations on “What is a reading list?” and “A magical mystery tour of resource/reading list management systems” given by Gary Brewerton, Project Manager for LORLS (Loughborough online reading list system) and Ian Corns, Customer Liaison Manager for Talis Aspire.

A free buffet lunch will be provided, after which there will be wide-ranging discussions on topics such as: what makes a good list? How do you engage with academic staff? And what roles does the library actually have with regard to reading lists? There will also be opportunities for you to ask questions of those present.

This is a free event. If you would like to attend please email Sue Manuel (s.manuel@lboro.ac.uk) to reserve a place stating your name, institution and any specific dietary requirements.

We look forward to hearing what others have to say about reading lists and associated systems on the day.

Twitter hash tag for event: #mtrlc
