Prototyping Goodreads integration

Part of my day job is developing and gluing together library systems. This week I’ve been making a start on some of this “gluing” by prototyping code that will hopefully link our LORLS reading list management system with the Goodreads social book reading site. Most of our LORLS code is written in either Perl or JavaScript; I tend to write the back end Perl that talks to our databases, while my partner in crime Jason Cooper writes the delightful, user friendly front ends in JavaScript. This means I needed a way for a Perl CGI script to take some ISBNs and use them to populate a shelf in Goodreads. The first prototype doesn’t have to look pretty – indeed my code may well end up being a LORLS API call that does the heavy lifting for some nice pretty JavaScript that Jason is far better at producing than I am!

Luckily, Goodreads has a really well thought out API, so I plunged straight in. They use OAuth 1.0 to authenticate requests to some of the API calls (mostly the ones concerned with updating data, which is exactly what I was up to), so I started looking for a Perl OAuth 1.0 module on CPAN. There’s some choice out there! OAuth 1.0 has been around the block for a while, so it appears that multiple authors have had a go at writing supporting libraries, with varying amounts of success and complexity.

So in the spirit of being super helpful, I thought I’d share with you the prototype code that I knocked up today. It’s far, far, far from production ready and there are probably loads of security holes that you’ll need to plug. However it does demonstrate how to do OAuth 1.0 using the Net::OAuth::Simple Perl module and how to make both GET and POST style (view and update) Goodreads API calls. It’s also a great way for me to remember what the heck I did when I next need to use OAuth calls!

First off we have a new Perl module I called Goodreads.pm. It’s a subclass of the Net::OAuth::Simple module that sets things up to talk to Goodreads and provides a few convenience functions. It’s obviously massively stolen from the example in the Net::OAuth::Simple perldoc that comes with the module.

#!/usr/bin/perl

package Goodreads;

use strict;
use Carp;                      # for croak() in make_restricted_request()
use base qw(Net::OAuth::Simple);

sub new {
    my $class = shift;
    my %tokens = @_;

    return $class->SUPER::new( tokens => \%tokens,
                               protocol_version => '1.0',
                               return_undef_on_error => 1,
                               urls => {
                                 authorization_url => 'http://www.goodreads.com/oauth/authorize',
                                 request_token_url => 'http://www.goodreads.com/oauth/request_token',
                                 access_token_url => 'http://www.goodreads.com/oauth/access_token',
                               });
}

sub view_restricted_resource {
    my $self = shift;
    my $url = shift;
    return $self->make_restricted_request($url, 'GET');
}

sub update_restricted_resource {
    my $self = shift;
    my $url = shift;
    my %extra_params = @_;
    return $self->make_restricted_request($url, 'POST', %extra_params);
}

sub make_restricted_request {
    my $self = shift;
    croak $Net::OAuth::Simple::UNAUTHORIZED unless $self->authorized;

    my( $url, $method, %extras ) = @_;

    my $uri = URI->new( $url );
    my %query = $uri->query_form;
    $uri->query_form( {} );

    $method = lc $method;

    my $content_body = delete $extras{ContentBody};
    my $content_type = delete $extras{ContentType};

    my $request = Net::OAuth::ProtectedResourceRequest->new(
        consumer_key => $self->consumer_key,
        consumer_secret => $self->consumer_secret,
        request_url => $uri,
        request_method => uc( $method ),
        signature_method => $self->signature_method,
        protocol_version => $self->oauth_1_0a ?
            Net::OAuth::PROTOCOL_VERSION_1_0A :
            Net::OAuth::PROTOCOL_VERSION_1_0,
        timestamp => time,
        nonce => $self->_nonce,
        token => $self->access_token,
        token_secret => $self->access_token_secret,
        extra_params => { %query, %extras },
    );
    $request->sign;
    die "COULDN'T VERIFY! Check OAuth parameters.n"
        unless $request->verify;

    my $request_url = URI->new( $url );

    my $req = HTTP::Request->new(uc($method) => $request_url);
    $req->header('Authorization' => $request->to_authorization_header);
    if ($content_body) {
        $req->content_type($content_type);
        $req->content_length(length $content_body);
        $req->content($content_body);
    }

    my $response = $self->{browser}->request($req);
    return $response;
}
1;

Next we have the actual CGI script that makes use of this module. This shows how to call into Goodreads.pm (and thus Net::OAuth::Simple) and then make the Goodreads API calls:

#!/usr/bin/perl

use strict;
use CGI;
use CGI::Cookie;
use Goodreads;
use XML::Mini::Document;
use Data::Dumper;

my %tokens;
$tokens{'consumer_key'} = 'YOUR_CONSUMER_KEY_GOES_IN_HERE';
$tokens{'consumer_secret'} = 'YOUR_CONSUMER_SECRET_GOES_IN_HERE';

my $q = new CGI;
my %cookies = fetch CGI::Cookie;

if($cookies{'at'}) {
    $tokens{'access_token'} = $cookies{'at'}->value;
}
if($cookies{'ats'}) {
    $tokens{'access_token_secret'} = $cookies{'ats'}->value;
}
if($q->param('isbns')) {
    $cookies{'isbns'} = $q->param('isbns');
}
my $oauth_token = undef;
if($q->param('authorize') == 1 && $q->param('oauth_token')) {
    $oauth_token = $q->param('oauth_token');
} elsif(defined $q->param('authorize') && !$q->param('authorize')) {
    print $q->header,
    $q->start_html,
    $q->h1('Not authorized to use Goodreads'),
    $q->p('This user does not allow us to use Goodreads');
    $q->end_html;
    exit;
}

my $app = Goodreads->new(%tokens);

unless ($app->consumer_key && $app->consumer_secret) {
    die "You must go get a consumer key and secret from Appn";
}

if ($oauth_token) {
    if(!$app->authorized) {
        GetOAuthAccessTokens();
    }
    StartInjection();
} else {
    my $url = $app->get_authorization_url(callback => 'https://example.com/cgi-bin/goodreads-inject');
    my @cookies;
    foreach my $name (qw(request_token request_token_secret)) {
        my $cookie = $q->cookie(-name => $name, -value => $app->$name);
        push @cookies, $cookie;
    }
    push @cookies, $q->cookie(-name => 'isbns',
                              -value => $cookies{'isbns'} || '');
    print $q->header(-cookie => @cookies,
                     -status=>'302 Moved',
                     -location=>$url,
                     );
}

exit;

sub GetOAuthAccessTokens {
    foreach my $name (qw(request_token request_token_secret)) {
        my $value = $q->cookie($name);
        $app->$name($value);
    }
    ($tokens{'access_token'},
     $tokens{'access_token_secret'}) =
        $app->request_access_token(
            callback => 'https://example.com/cgi-bin/goodreads-inject',
            );
}

sub StartInjection {
    my $at_cookie = new CGI::Cookie(-name=>'at',
                                    -value => $tokens{'access_token'});
    my $ats_cookie = new CGI::Cookie(-name => 'ats',
                                     -value => $tokens{'access_token_secret'});
    my $isbns_cookie = new CGI::Cookie(-name => 'isbns',
                                       -value => '');
    print $q->header(-cookie=>[$at_cookie,$ats_cookie,$isbns_cookie]);
    print $q->start_html;

    my $user_id = GetUserId();
    if($user_id) {
        my $shelf_id = LoughboroughShelf(user_id => $user_id);
        if($shelf_id) {
            my $isbns = $cookies{'isbns'}->value;
            print $q->p("Got ISBNs list of $isbns");
            AddBooksToShelf(shelf_id => $shelf_id,
                            isbns => $isbns,
                           );
        }
    }

    print $q->end_html;
}

sub GetUserId {
    my $user_id = 0;
    my $response = $app->view_restricted_resource(
                       'https://www.goodreads.com/api/auth_user'
                   );
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $user_xml = $xml->toHash();
        $user_id = $user_xml->{'GoodreadsResponse'}->{'user'}->{'id'};
    }
    return $user_id;
}

sub LoughboroughShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = 0;
    my $user_id = $params->{'user_id'} || return $shelf_id;

    my $response = $app->view_restricted_resource('https://www.goodreads.com/shelf/list.xml?key=' .  $tokens{'consumer_key'} . '&user_id=' . $user_id);
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $shelf_xml = $xml->toHash();
        foreach my $this_shelf (@{$shelf_xml->{'GoodreadsResponse'}->{'shelves'}->{'user_shelf'}}) {
            if($this_shelf->{'name'} eq 'loughborough-wishlist') {
                $shelf_id = $this_shelf->{'id'}->{'-content'};
                last;
            }
        }
        if(!$shelf_id) {
            $shelf_id = MakeLoughboroughShelf(user_id => $user_id);
        }
    }
    print $q->p("Returning shelf id of $shelf_id");
    return $shelf_id;
}

sub MakeLoughboroughShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = 0;
    my $user_id = $params->{'user_id'} || return $shelf_id;

    my $response = $app->update_restricted_resource('https://www.goodreads.com/user_shelves.xml?user_shelf[name]=loughborough-wishlist',
        );
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $shelf_xml = $xml->toHash();
        $shelf_id = $shelf_xml->{'user_shelf'}->{'id'}->{'-content'};
        print $q->p("Shelf hash: ".Dumper($shelf_xml));
    }
    return $shelf_id;
}

sub AddBooksToShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = $params->{'shelf_id'} || return;
    my $isbns = $params->{'isbns'} || return;
    foreach my $isbn (split(',',$isbns)) {
        my $response = $app->view_restricted_resource('https://www.goodreads.com/book/isbn_to_id?key=' . $tokens{'consumer_key'} . '&isbn=' . $isbn);
        if($response->content) {
            my $book_id = $response->content;
            print $q->p("Adding book ID for ISBN $isbn is $book_id");
            $response = $app->update_restricted_resource('http://www.goodreads.com/shelf/add_to_shelf.xml?name=loughborough-wishlist&book_id='.$book_id);
        }
    }
}

You’ll obviously need to get a developer consumer key and secret from the Goodreads site and pop them into the variables at the start of the script (no, I’m not sharing mine with you!). The real work is done by the StartInjection() subroutine and the subordinate subroutines that it calls once the OAuth process has been completed. By this point we’ve got an access token and its associated secret, so we can act as whichever user has allowed us to connect to Goodreads on their behalf. The code finds this user’s Goodreads ID, sees if they have a bookshelf called “loughborough-wishlist” (creating it if they don’t) and then adds any books that Goodreads knows about with the given ISBN(s). You’d call this CGI script with a URL something like:

https://example.com/cgi-bin/goodreads-inject?isbns=9781565928282

Anyway, there’s a “works for me” simple example of talking to Goodreads from Perl using OAuth 1.0. There’s plenty of development work left to turn this into production level code (it needs to be made more secure for a start, and the access token and secret could be cached in a file or database for reuse in subsequent sessions) but I hope some folk find this useful.
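
For example, one simple way to do that caching would be to stash the tokens in a Storable file keyed on the Goodreads user ID. This is just a sketch of the idea – the file name, location and structure below are illustrative rather than anything LORLS actually does:

#!/usr/bin/perl

# Hypothetical sketch only: cache OAuth access tokens between sessions.
# The cache file name and layout are illustrative, not part of LORLS.

use strict;
use Storable qw(lock_store lock_retrieve);

my $cache_file = '/var/cache/goodreads-tokens.storable';

sub save_tokens {
    my ($user_id, $access_token, $access_token_secret) = @_;
    my $cache = -e $cache_file ? lock_retrieve($cache_file) : {};
    $cache->{$user_id} = {
        access_token        => $access_token,
        access_token_secret => $access_token_secret,
    };
    lock_store($cache, $cache_file);
}

sub load_tokens {
    my ($user_id) = @_;
    return undef unless -e $cache_file;
    my $cache = lock_retrieve($cache_file);
    return $cache->{$user_id};
}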

Embedding content

Originally the only content that was embedded in CLUMP was book covers and book previews/content available in Google Books.  Access to other resources required users to follow links out of the system. Improving the content that can be embedded into CLUMP has always been something that we have wanted to tackle when we had the time.

When it comes to embedding content in a reading list we wanted to make it as easy as possible for the users.  Most users have no problem using an embedded player, but adding one to some web systems can be very difficult.  We could ask the user for specifics (like what site the content is on, what its unique id is, what format the content is in, etc.), but that would be both awkward and time consuming for academics/librarians and force them to go back through their lists to update all the relevant resources. They would also have to do that each time support was added for new embeddable content.

We also didn’t want to have specific embedded content types for list items as any content that can be embedded already fits into our existing content types.  Adding extra metadata to the existing content types was also ruled out as this would make adding new items to lists far more cumbersome.

Our solution, which we have just implemented in our development version of CLUMP, is that embeddable content should be identified and handled by the system.  It should recognise when an item’s URL points to a resource that it can embed and then take the necessary steps to embed the content.  This way, not only is it easy for academics/librarians to add embedded content, but existing resources will start having their content embedded in their item level popups without anyone having to make any changes to the metadata.

New resources that will appear as embedded content include YouTube and Vimeo videos, any video files in mp4, WebM or ogv format and any audio files in mp3, wav or ogg format.  The YouTube and Vimeo content is displayed using their embedded players, while the video and audio files use the HTML5 video and audio tags to give the user a player in the browser without needing any additional plugins.
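
The detection itself boils down to pattern matching on the item’s URL. As a rough sketch of the sort of classification involved (in Perl, to match the rest of this blog; the regular expressions and type names are illustrative rather than the actual CLUMP code):

use strict;

# Illustrative sketch of URL-based embed detection; not the actual CLUMP code.
sub embed_type {
    my ($url) = @_;

    return 'youtube' if $url =~ m{youtube\.com/watch\?v=|youtu\.be/};
    return 'vimeo'   if $url =~ m{vimeo\.com/\d+};

    # HTML5 <video> and <audio> candidates, keyed on file extension
    return 'html5_video' if $url =~ m{\.(mp4|webm|ogv)(\?|$)}i;
    return 'html5_audio' if $url =~ m{\.(mp3|wav|ogg)(\?|$)}i;

    return undef;    # nothing we know how to embed
}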

Using the HTML5 video and audio capabilities has both pros and cons.  Not needing to include any plugins is an advantage, especially with the increasing number of devices that don’t support Flash content.  Conversely, older browsers lack support for the new tags and so won’t display the content, and each browser that does support the tags supports a different range of media types.

Luckily, when a browser doesn’t support the tags or the media type of the resource it doesn’t produce an error, it just doesn’t show the embedded content.  Of course, if the user’s browser doesn’t support any of the embedded content they can still use the URL to access/download the content.

Example of embedded content

Reading lists and high demand items

Having a reading list management system has a number of advantages, including providing the Library with better management information to help manage its stock.  At Loughborough we’ve been working on a couple of scripts that tie LORLS into our Aleph library management system.  These are pretty “bespoke” and probably won’t form part of the normal LORLS distribution as they rely quite a bit on the local “information landscape” here, but it might be useful to describe how they could help.

A good example is our new high demand items report.  This script (or rather interlinked set of scripts as it runs on several machines to gather data from more than one system) tries to provide library staff with a report of items that have recently been in high demand, and potentially provide hints to both them and academics about reading lists that might benefit from these works being added.

The first part of the high demand system goes off to our Aleph server to peer into both requests for items and loan information.  We want to look for works that have had more than a given number of hold requests placed on them in a set time period and/or more than a threshold number of simultaneous loans.  These parameters are tunable, and the main high demand script calls out via an XML API to another custom Perl script that lives on the Aleph server and does the actual Oracle SQL jiggery-pokery.  This is actually quite slow – it’s potentially trawling through a lot of data (especially if the query covers a long period of time; we tend to limit queries to less than a week, otherwise the whole process can take many hours to run).

This script returns a list of “interesting” items that match our high demand criteria and the users that have requested and/or borrowed them.  The main script can then use this user information to look up module membership.  It does this using LDAP queries to our Active Directory servers, as every module appears as a group in the AD with the users as group members.  We can then look for groups of users interested in a particular item that share module membership. If we get more than a given (again tunable) number, we check whether this work is already mentioned on the reading list for the module and, if not, we can optionally make a note of the work’s ISBN, authors, title, etc. in a new table.  This can then be used in the CLUMP academic dashboard to show suggestions about potentially useful extra works that might be suitable for the reading list.
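
The module membership lookup is just a standard LDAP search against the AD. As a hedged sketch of the idea (the server name, base DN, credentials and group naming convention below are placeholders for our local setup, not part of LORLS):

#!/usr/bin/perl

# Hedged sketch of the Active Directory lookup; the server name, base DN,
# bind credentials and group naming convention are placeholders rather than
# anything from the LORLS distribution.

use strict;
use Net::LDAP;

sub modules_for_user {
    my ($username) = @_;

    my $ldap = Net::LDAP->new('ad.example.ac.uk') or die "$@";
    $ldap->bind('CN=lookup,DC=example,DC=ac,DC=uk', password => 'secret');

    my $result = $ldap->search(
        base   => 'DC=example,DC=ac,DC=uk',
        filter => "(sAMAccountName=$username)",
        attrs  => ['memberOf'],
    );

    my @modules;
    foreach my $entry ($result->entries) {
        foreach my $group ($entry->get_value('memberOf')) {
            # Assume module groups are named something like "CN=MODULE-ABC123,...".
            push @modules, $1 if $group =~ /^CN=MODULE-([^,]+),/;
        }
    }
    $ldap->unbind;
    return @modules;
}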

The main script then marshals the retrieved data and generates an HTML table out of it, one row for each work.  We include details of ISBN, author, title, shelfmark, number of active reservations, maximum concurrent loans, the different types of loan, an average price estimate (in case extra copies may be required) and what reading lists (if any) the work is already on.  We can also optionally show the librarians a column for the modules that users trying to use this work are studying, as well as two blank columns for librarians to make their own notes in (when printed out – some still like using up dead trees!).

The main script can either be run as a web based CGI script or from the command line.  We mostly use the latter (via cron) to provide emailed reports, as the interactive CGI script can take too long to run, as mentioned earlier.  It does demonstrate how reading lists, library management systems and other University registration systems can work together to provide library staff with new views on how stock is being used and by whom.

CLUMP Improvements

It has been a while since I posted any updates on improvements to CLUMP (the default interface to LORLS) and, as we have just started testing the latest beta version, it seemed a good time to make a catch-up post.  So other than the Word export option, what other features have been introduced?

Advanced sort logic

With the addition of sub-headings in version 7 of LORLS it soon became obvious that we needed to improve the logic of our list sorting routines, as they were no longer intuitive.  Historically our sorting routines have treated the list as one long list (as without sub-headings there was no consistent way to denote a subsection of a list other than using a sub-list).

The new sorting logic is to break the list into sections and then sort within each section.  In addition to keeping the ordering of the subsections, any note entries at the top of a section are considered to be sticky and as such won’t be sorted.

Finally, if there are any note entries within the items to be sorted, the user is warned that these will be sorted along with the rest of the subsection’s entries.
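
As a rough illustration of the new logic, a sectioned sort might look something like the sketch below. This isn’t the actual LORLS code – the item structure and field names are made up for the example:

use strict;

# Illustrative sketch of the sectioned sort; the item structure and field
# names are assumptions, not the real LORLS code. Each item is a hashref
# like { type => 'sub-heading' | 'note' | 'citation', title => '...' }.
sub sort_list {
    my (@items) = @_;

    # Break the list into sections, starting a new one at each sub-heading.
    my @sections = ([]);
    foreach my $item (@items) {
        push @sections, [] if $item->{type} eq 'sub-heading';
        push @{ $sections[-1] }, $item;
    }

    my @sorted;
    foreach my $section (@sections) {
        my @entries = @{$section};
        my @sticky;

        # The sub-heading itself and any notes at the top of the section
        # are sticky, so they keep their place.
        push @sticky, shift @entries
            while @entries && $entries[0]{type} =~ /^(?:sub-heading|note)$/;

        # Everything else in the section gets sorted (any remaining notes
        # are sorted along with the rest, which is what the warning is about).
        push @sorted, @sticky,
            sort { lc($a->{title}) cmp lc($b->{title}) } @entries;
    }
    return @sorted;
}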

Article suggestions

Another area that we have been investigating is suggesting items to list owners that they might want to consider for their lists.  The first stage of this is the inclusion of a new question on the dashboard: “Are there any suggested items for this list?”.  When the user clicks on this question they are shown a list of suggested articles based upon ExLibris’s bX recommender service.

To generate the article suggestions, the current articles on the list are taken and bX is queried for recommendations for each of them.  All of the returned suggestions are then sorted so that the more common recommendations are suggested first.
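
The ranking step is simply counting how often each suggestion comes back across all of the list’s articles. A minimal sketch of that (assuming the bX lookups have already been flattened into a list of suggestion identifiers; this isn’t the real CLUMP code):

use strict;

# Illustrative sketch of ranking bX recommendations by how often they occur.
# @recommendations is assumed to be a flat list of suggestion identifiers,
# built up by querying bX once for each article already on the reading list.
sub rank_suggestions {
    my (@recommendations) = @_;

    my %count;
    $count{$_}++ foreach @recommendations;

    # Most frequently recommended items first.
    return sort { $count{$b} <=> $count{$a} } keys %count;
}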

Default base for themes

CLUMP now has a set of basic styles and configurations that are used as the default options.  These defaults are then overridden by the theme in use.  This change was required to make the task of maintaining custom themes easier.  Previously, missing entries in a customised theme would have to be identified and updated by hand before the theme could be used; now those custom themes will still work, as any missing entries fall back to the system default.

The back button

One annoyance with CLUMP has been that, due to it being AJAX based, the back button would take users to the page they were on before they started looking at the system, rather than the Department, Module or List they were looking at previously.  This annoyance has finally been removed by using the hash (fragment) part of the URL to identify the structural unit currently being viewed.

Every time the user views a new structural unit the hash is updated with its ID.  Instead of reloading the page when the user or browser changes the hash (e.g. by clicking the back button), a JavaScript event is triggered.  The handler attached to this event parses the hash and extracts the structural unit ID, which is used to display the relevant structural unit.

Purchasing Prediction

For a while now we’ve been feverishly coding away in the top secret LORLS bunker on a new library management tool attached to LORLS: a system to help predict which books a library should buy and roughly how many copies will be required.  The idea behind this was that library staff at Loughborough were spending a lot of time trying to work out what books needed to be bought, mostly by trawling through lots of data from systems such as the LORLS reading list management system and the main Library Management System (Aleph in our case).  We realised that much of this work was suitable for “mechanization” and that we could extract some rules from the librarians about how to select works for purchase and then do a lot of the drudgery for them.

We wrote the initial hack-it-up-and-see-if-the-idea-works-at-all prototype about a year ago.  It was pretty successful and was used in Summer 2012 to help the librarians decide how to spend a bulk book purchasing budget windfall from the University.  We wrote up our experiences with this code in a D-Lib article which was published recently.  That article explains the process behind the system, so if you’re interested in the basis of this idea it’s probably worth a quick read (or to put it in other terms, I’m too lazy to repeat it here! 😉 ).

Since then we’ve stepped back from the initial proof of concept code and spent some time re-writing it to be more maintainable and extensible.  The original hacked-together prototype was just a single monolithic Perl script that evolved and grew as we tried ideas out and added features.  It had to call out to external systems to find loan information and book pricing estimates, with bodges stuffed on top of bodges.  We always intended it to be the “one we threw away”, and after the summer tests it became clear that purchase prediction could be useful to the Library staff and it was time to do version 2.  And do it properly this time round.

The new version is modular… using good old Perl modules! Hoorah! But we’ve been a bit sneaky: we have a single Perl module that encompasses the purchase prediction process but doesn’t actually do any of it.  This is effectively a “wrapper” module that a script can call, and which knows what other purchase prediction modules are available and what order to run them in.  The calling script just sets up the environment (parsing CGI parameters, command line options, hardcoded values or whatever is required), instantiates a new PurchasePrediction object and repeatedly calls that object’s DoNextStep() method with a set of input state (options, etc.) until the object returns an untrue state.  At that point the prediction has been made and suggestions have, hopefully, been output.  Each call to DoNextStep() runs the next Perl module in the purchase prediction workflow.
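
A calling script therefore ends up looking something like the sketch below. This is written from the description above rather than copied from the real code, so treat the module name, constructor and state keys as illustrative:

#!/usr/bin/perl

# A hedged sketch of a purchase prediction driver script; the constructor
# arguments and state keys are illustrative rather than the definitive API.

use strict;
use PurchasePredictor;

my $predictor = PurchasePredictor->new();

# Input state: ask for works edited in the last day, output as HTML.
my $state = {
    edited_within_days => 1,
    output_format      => 'html',
};

# Each call to DoNextStep() runs the next module in the workflow and hands
# back the (possibly modified) state; an untrue return means the workflow
# has finished and the suggestions have been output.
while (my $next = $predictor->DoNextStep($state)) {
    $state = $next;
}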

The PurchasePrediction object comes with sane default state stored within it, which provides the standard workflow for the process and some handy information that we use regularly (for example, ratios of books to students in different departments and with different levels of academic recommendation in LORLS; you tend to want more essential books than optional ones, obviously, whilst English students may need to use individual copies for longer than Engineering students do).  This data isn’t private to the PurchasePredictor module though – the calling script could, if it wished, alter any of it (maybe change the ratio of essential books per student for the Physics department, or even alter the workflow of the purchase prediction process).

The workflow that we currently follow by default is:

  1. Acquire ISBNs to investigate,
  2. Validate those ISBNs,
  3. Make an initial estimate of the number of copies required for each work we have an ISBN for,
  4. Find the cost associated with buying a copy of each work,
  5. Find the loan history of each work,
  6. Actually do the purchasing algorithm itself,
  7. Output the purchasing suggestions in some form.

Each of those steps is broken down into one or more (mostly more!) separate operations, with each operation then having its own Perl module to instantiate it.  For example to acquire ISBNs we’ve a number of options:

  1. Have an explicit list of ISBNs given to us,
  2. Find the ISBNs of works on a set of reading list modules,
  3. Find the ISBNs of works on reading lists run by a particular department (and optionally a stage within that department – e.g. first year Mech Eng),
  4. Find ISBNs from reading list works that have been edited within a set number of days,
  5. Find all the ISBNs of books in the entire reading list database,
  6. Find ISBNs of works related to works that we already have ISBNs for (e.g. using LibraryThing’s FRBR code to find alternate editions, etc.).

Which of these actually do anything depends on the input state passed into the PurchasePredictor’s DoNextStep() method by the calling script – they all get called, but some will just return the state unchanged whilst others will add new works to be looked at later by subsequent Perl modules.
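
Each workflow module therefore just follows a simple convention: take the state, check whether its trigger flags are present, and either hand the state back untouched or add to it. A hypothetical skeleton for one of the ISBN acquisition steps might look roughly like this (again, the package name, method name and state keys are assumptions for illustration, not the real code):

package PurchasePredictor::Step::RecentlyEditedISBNs;

# Illustrative skeleton only: the package layout, method name and state keys
# are assumptions about the workflow convention, not code from LORLS itself.

use strict;

sub new { return bless {}, shift }

sub run {
    my ($self, $state) = @_;

    # Only do anything if the calling script asked for recently edited works.
    return $state unless $state->{edited_within_days};

    # The real module would query the LORLS database for works edited in the
    # last N days; the lookup itself is elided here.
    my @isbns = ();
    push @{ $state->{isbns} }, @isbns;

    return $state;
}

1;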

Now this might all sound horribly confusing at first but it has big advantages for development and maintenance. When we have a new idea for something to add to the workflow we can generate a new module for it, slip it into the list in the PurchasePredictor module’s data structure (either by altering this on the fly in a test script or, once the new code is debugged, altering the default workflow data held in the PurchasePredictor module code) and then pass some input to the PurchasePredictor module’s DoNextStep() method that includes flags to trigger this new workflow.

For example, until today the workflow shown in the second list above did not include the step “Find ISBNs from reading list works that have been edited within a set number of days”; that was added as a new module, written from scratch to a working state in a little over two hours.  And here’s the result, rendered as an HTML table:

Purchase Predictor pp2 HTML showing suggested purchases for works in LORLS edited in the last day.


As you can see the purchase predictor HTML output in this case tries to fit in with Jason’s new user interface, which it can do easily as that’s all encompassed in Perl modules as well!

There’s still lots more work we can do with purchase prediction.  As one example, the next thing on my ‘To Do’ list is to make an output module that generates emails for librarians, so that it can be run as a batch job by cron every night.  The librarians can then sip their early morning coffee whilst pondering which book purchasing suggestions to follow up.  The extensible modular scheme also means we’re free to plug in different purchasing algorithms… maybe even incorporating some machine learning, with feedback provided by the actual purchases approved by the librarians against the original suggestions that the system made.

I knew those undergraduate CompSci Artificial Intelligence lectures would come in handy eventually… 🙂

Exporting to a Word document

A new feature we have been working on in our development version of CLUMP is the option for a list editor to export their reading list as a Word document (specifically in docx format).  This will be particularly beneficial for academics who want to extract a copy of their list in a suitable format for inclusion in a course/module handbook. A key requirement we had when developing it was that it should be easy to alter the styles used for headings, citations, notes, etc.

As previously mentioned by Jon, the docx format is actually a zip file containing a group of XML files.  The text content of a document is stored within the “w:body” element in the document.xml file.  The style details are stored in another of the XML files.  Styles and content being stored in separate files allows us to create a template.docx file in Word, in which we define our styles.  The export script then takes this template and populates it with the actual content.

When generating an export, the script takes a copy of the template file, treats it as a zip file and extracts the document.xml file.  It then replaces the contents of the w:body element in that extracted file with our own XML, before overwriting the old document.xml with the new one.  Finally, we pass this docx file from the server’s memory to the user.  All of this is done in memory, which avoids the overheads associated with generating and handling temporary files.
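
As a rough sketch of that in-memory juggling using Archive::Zip (this isn’t the exact CLUMP export code; the template path and the way the new body XML is produced are simplified):

use strict;
use Archive::Zip qw(:ERROR_CODES);

# Hedged sketch of the in-memory docx rebuild; the paths and $new_body_xml
# are placeholders, not the real CLUMP export code.
sub build_docx {
    my ($template_path, $new_body_xml) = @_;

    my $zip = Archive::Zip->new();
    $zip->read($template_path) == AZ_OK or die "Can't read template";

    # Swap the contents of the w:body element in word/document.xml.
    my $document = $zip->contents('word/document.xml');
    $document =~ s{<w:body>.*</w:body>}{<w:body>$new_body_xml</w:body>}s;
    $zip->contents('word/document.xml', $document);

    # Write the modified zip back out to an in-memory buffer rather than a
    # temporary file, ready to be sent to the user.
    my $buffer = '';
    open my $fh, '>', \$buffer or die "Can't open in-memory file: $!";
    binmode $fh;
    $zip->writeToFileHandle($fh) == AZ_OK or die "Can't write docx";
    close $fh;

    return $buffer;
}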


To adjust the formatting of the styles in the template, it can simply be loaded into Word, where the desired changes to the styles can be made.  Once it has been saved, it can be uploaded to the server to replace the existing template.

 

Meeting the Reading List Challenge 2013 announced

The annual Meeting the Reading List Challenge event will be taking place at Loughborough again. The announcement about the free event can be seen below.

Meeting the Reading List Challenge
Keith Green Building, Loughborough University
Thursday 4th April 2013, 10:30am – 3:30pm

This free showcase event will highlight experiences from a number of institutions in their use and development of resource/reading list management systems.  There will be five presentations (further details to follow) throughout the day.  In addition there will be a buffet lunch provided during which there will be a suppliers’ exhibition.

This is a free event.  If you would like to attend please email Gary Brewerton (g.p.brewerton@lboro.ac.uk) to reserve a place stating your name, institution and any specific dietary requirements.

More details of this year’s event and the results of previous years’ events can be seen on the Meeting the Reading List Challenge website.

LORLS v7 now available

With great pleasure (trumpet fanfare not included) I can now announce the release of LORLS version 7. Key features include:

  • Import of Harvard citations from docx documents
  • Academic dashboard displaying usage data and other key metrics (see image to right)
  • Collapsible sections on reading lists
  • Improved support for diacritics
  • Allow students to like/dislike items on a list
View of the new academic dashboard

LORLS v7 is available for download from this website and our online demo has been updated to this release.

Trouble installing? Disable SELinux

We are currently working on a new distribution of LORLS (that’s right, version 7 is coming soon) and, to test the installer’s ability to update an existing installation, we needed a fresh v6 install to test against.  So I dropped into a fresh virtual machine we have dedicated specifically to this kind of activity, downloaded the version 6 installer and ran through the installation, only to find that, while it had created the database and loaded the initial test data just fine, it hadn’t installed any of the system files.

So for the next three hours I was scouring Apache’s logs, checking the usual culprits for these sorts of issues and debugging the code.  One of the first things I did was check the SELinux configuration; it was set to permissive, which means that it shouldn’t actually block anything, just warn the user.  This led me to discount SELinux as the cause of the problem.

After three hours of debugging I finally reached the stage of having a test script that would work when run by a user but not when run by Apache.  The moment I had this output I realised that, while SELinux may be configured to be permissive, it only picks up this change when the machine is restarted.  So I manually disabled SELinux (as root, use the command ‘echo 0 > /selinux/enforce’) and then tried the installer again.

Needless to say the installer worked fine after this, so if you are installing LORLS and find that it doesn’t install the files check that SELinux is disabled.

Another new reading list system

This is becoming a bumper year for RLMSs, with the launch of Rebus:list earlier this summer and now Unilibri:

unilibri is launching a new reading list management system, ready for launch in Semester 2 of this academic year. Take a look at our website (www.unilibri.com)

unilibri takes a new approach to the provision of the RLMS. Provided as a software as a service hosted in the cloud; it focuses on ensuring academic and student engagement whilst providing the fundamental services every library and institution needs.

unilibri has an exciting roadmap planned that will ensure it is providing the best service to all the key stakeholders of reading lists. unilibri’s aim is to improve students’ educational experience whilst increasing library efficiencies and cost-effectiveness.
