Moving from LORLS v5 to v7

If you’ve got an existing LORLS v5 installation, you’ll probably want to export the contents of the old LORLS v5 database into the new format that was originally introduced in LORLS v6 and then extended in the current LORLS v7 release.  You don’t need to worry about this if you’ve already migrated from LORLS v5 to LORLS v6 – the LORLS v7 installer can deal with altering the database schema from v6 to v7 for you.

We provide a conversion script to help with migrating from LORLS v5 – it’s called create_structures and is usually installed in the bin directory under wherever you’ve asked for the Perl library files to live (e.g. /usr/local/LUMP by default).

This script assumes we’re starting with an empty LUMP database for LORLS v7 – if you’ve loaded the test data, beware: it will get merged into your converted “live” data!  You might also want to edit a couple of things in the script before running it.

One of these things is where the LORLS v5 library routines are stored.  You’ll need to have these on the server already – if you’re migrating to a new machine as well as a new LORLS version, you’ll need to temporarily install at least the ReadingLists.pm and ReadingListsItem.pm Perl library files.  To tell create_structures where to find the old LORLS v5 code, tweak the line near the top that says:

use lib "/usr/local/ReadingLists";

changing the /usr/local/ReadingLists part to wherever you put the two LORLS v5 modules.

To run it, open a normal shell on your server (we’re assuming you’re running a Linux or UNIX system here – we’ve not tried to run LORLS v7 on a Windows server!) and then run the script. It can take some optional parameters that let you specify the name of your institution, the URL of its home page, the URL of a logo image and a debug level flag. The debug flag is a number formed by summing the following values:

  • 1 : turn on warnings
  • 2 : disable any SQL updates (so it won’t do any writes into the database)
  • 4 : always write SQL dump files, even if it wouldn’t normally be necessary

The debugging flags are really for us developers – you’ll probably want to ignore them to start with, and just turn on warnings if things don’t work out for you for some reason.
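For example, a first dry run from the shell might look something like this.  Note that the argument handling shown here is only an illustration (we’re assuming the parameters are given in the order listed above, and the URLs are just placeholders) – check the script’s own usage notes for the exact syntax in your version:

cd /usr/local/LUMP/bin
./create_structures "Loughborough University" "https://www.example.ac.uk/" "https://www.example.ac.uk/logo.png" 3

The debug level of 3 here is simply 1 + 2: warnings turned on and SQL updates disabled, so nothing is written to the database while you check that the output looks sane.  Once you’re happy, drop the debug level and run it again for real.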

Once running, create_structures generates an SQL transaction file called transaction.<PID>.sql, where PID is the unique process ID of the create_structures instance.  It’ll also generate some intermediate SQL files for the various tables it needs to load up – these are all amalgamated into the transaction.<PID>.sql file at the end of the run.

Next, launch a mysql command line client pointing at your LUMP database and source this transaction.<PID>.sql file into it.  We do things this way as it’s actually far, far quicker to do batch inserts than it is to do individual INSERT/UPDATE statements as we go along.  It also means that you can nip into the transaction.<PID>.sql file with a text editor and make changes if you want!
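Something along these lines usually does the trick – the database name comes from your LORLS install (LUMP by default), but the MySQL user name and the actual PID in the file name will depend on your own setup:

mysql -u <your mysql user> -p LUMP
mysql> source transaction.<PID>.sql

The source command reads and executes the whole file inside the client, so depending on the size of your old reading lists database it may take a while to finish.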

One thing to note is that different MySQL installations have different limits on the maximum size of inserts.  You can either alter these in the MySQL configuration files on your server and then restart the database server, or alter the line that says:

my $MAX_INSERT_SIZE = 10000;

in the create_structures script to decrease (or increase!) the number of rows that will be inserted at once.
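If you’d rather change the limit on the server side, the setting that normally matters here is max_allowed_packet.  On many installations you could add (or raise) something like the following in my.cnf and then restart MySQL – the 64M value is only a suggestion, not something LORLS specifically requires:

[mysqld]
max_allowed_packet = 64M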

Of course you may wish to do some local tweaks on the data once you’ve loaded it into LUMP.  Having a different data structure in a system with new capabilities might make this a good time to review how you use the system.  At Loughborough we did several such local tweaks once we’d imported the database with create_structures – things like changing or adding data types, creating new types of structural unit, and so on.

 
