Editing API

With EditDataElement under our belt, it’s time to turn our thoughts to more of the editing/creating side of the system. Gary, Jason and I had a sit-down thinking session this afternoon and came up with four things we need an API for:

  1. Given a SUT ID (or a SU ID from which we can find a SUT ID), what SUTs can be children? We need this in order to produce a drop down list of possible SUTs that can be added under an SU in an editor,
  2. Given a SUT ID, return an XML structure describing the data type groups and data types that can be under it. This is needed so that an editor can work out what should be in the form, what format it has, etc,
  3. Create an SU with a given SUT ID (and possibly use the same API for editing an SU with a given SU ID?). This needs to be told the parent SU ID and check that the requested SUT ID is a valid child, and also that the user has permission to create children. If everything is OK it generates a new SU of the given type and links it to the parent SU in the structural_unit_link table. It then has to copy the ACLs from the parent SU to the new SU before returning the new SU ID.
  4. An API for managing ACLs. Needs to allow new ACL lines to be added and existing ones changed and/or deleted. We had a talk about how we envisage ACLs working and it seems to be OK as far as we can see: ACLs on new SUs will by default inherit from the parent SU so that the same people have the same rights (ie if you can view or edit a reading list, you’ll be able to view or edit a book attached to it). Users can remove ACLs for groups of the same rank or lower than they are (so academics can’t remove the Sysadmin or Departmental Admin groups’ rights, for example).
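The flow of API number 3 can be sketched as follows. This is only an illustration, not the real LUMP code: all the variable and function names here are assumptions, and simple hashes stand in for the structural_unit, structural_unit_link and access_control_list tables.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy in-memory stand-ins for the real tables (all names assumed).
my %valid_children = ( 1 => [ 2, 3 ] );      # SUT 1 may contain SUTs 2 and 3
my %acls           = ( 100 => [ 'Sysadmin:edit', 'Academic:edit' ] );
my %su_type        = ( 100 => 1 );           # SU 100 is of SUT 1
my %links;
my $next_id = 101;

sub create_su {
    my (%p) = @_;   # parent_su_id, sut_id, can_edit
    my $ptype = $su_type{ $p{parent_su_id} };

    # Check the requested SUT is a valid child of the parent's SUT...
    die "invalid child type\n"
        unless grep { $_ == $p{sut_id} } @{ $valid_children{$ptype} || [] };
    # ...and that the user may create children here.
    die "permission denied\n" unless $p{can_edit};

    my $new_id = $next_id++;
    $su_type{$new_id} = $p{sut_id};
    push @{ $links{ $p{parent_su_id} } }, $new_id;        # structural_unit_link row
    $acls{$new_id} = [ @{ $acls{ $p{parent_su_id} } } ];  # inherit parent's ACLs
    return $new_id;
}

my $id = create_su(parent_su_id => 100, sut_id => 2, can_edit => 1);
print "new SU $id, ACLs: @{ $acls{$id} }\n";
```

The key point the sketch captures is the ordering: validity and permission checks first, then creation, linking and ACL copying, with the new SU ID as the return value.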

This evening a first cut at the first two of those has been knocked out. This should let Jason play with front end form ideas for the editors, even if he can’t actually create new SUs yet. Creating SUs shouldn’t be a big deal (it’s very similar to EditDataElement) but the ACL management API is going to have to be carefully thought out, especially if we might want to add new capabilities in the future (for example a “can_create_children” ACL, so that you can edit an existing SU but not add structure beneath it, and maybe a “can_delete” one as well so that Academics can allow research students to tweak typos in their lists but not add or remove items). Another suggestion from Gary was a “can_publish” ACL type so that only specified folk can authorise the publication/unpublication of an SU (and its children).

Talking of deleting, we also tweaked the structural_unit table today by adding two new attributes: deleted and status. The deleted attribute indicates whether an SU has been, well, deleted. We don’t want to actually do a database delete as academics and librarians have in the past had “ID10T” moments and deleted stuff they shouldn’t, and getting it back from backups is a pain. By having a simple flag we can allow sysadmins to “undelete” easily – maybe with a periodic backend script that really deletes SUs that were flagged as deleted several months ago.

The status attribute allows us to flag the publication status of the SU. By default this will start as “draft” and so we can ensure that student facing front ends don’t see it. When ready it can be changed to “published” which allows it to be viewed by guests normally (subject to ACLs of course). Lastly there is a “suppressed” status that is intended to allow published SUs to be temporarily unpublished – for example if a module is not running for one year but will need to reappear the next. Not entirely sure if the status attribute isn’t just replicating what we can do with ACLs though – I’ll need to chew that one over with the lads.
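Concretely, the tweak amounts to something like the following, though the exact column types and names in the real schema may differ from this sketch:

```sql
-- Sketch only: real column definitions may vary.
ALTER TABLE structural_unit
  ADD COLUMN deleted CHAR(1) NOT NULL DEFAULT 'N',
  ADD COLUMN status  ENUM('draft','published','suppressed')
                     NOT NULL DEFAULT 'draft';
```

A student facing front end would then only select rows where deleted is ’N’ and status is ’published’, subject to the usual ACL checks.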

Surviving Database problems

The importation of LORLS v5 code has hit a snag: the object oriented nature of the Perl code created large numbers of connections and would eventually, after processing a largish number of reading lists, blow up because the MySQL database would not handle any more connections. A way round this is to replace the Perl DBI connect() method with a similar call to Perl’s DBIx modules – DBIx supports reconnections.

Some cunning tweaking of the BaseSQL init() method is also required so that we can keep on reusing the same database connection over and over again, as there’s little point generating a new one for each Perl module in use. BaseSQL now uses a cunning Perl hack based on the behaviour of unnamed embedded subroutines (ie closures) and variable scoping/referencing. This allows a connection to be set up if one doesn’t already exist, but also allows the database handle for the connection to be shared amongst all instances of the Perl modules that inherit from BaseSQL in a process. This means that the LUMP side of things only uses at most one MySQL database connection, and will reconnect if the database goes away (it’ll try 500 times before it gives up, so that covers the database being restarted during log rotations for example, which is what originally highlighted this problem).
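The closure trick can be sketched like this. It isn’t the real BaseSQL code – the sub names are made up and a stub coderef stands in for the DBIx connect-with-retries call – but it shows how a lexical variable captured by the subs acts as a process-wide cache, so every module sharing these subs gets the same handle:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The bare block limits $shared_dbh's scope: only the subs defined
# inside it (which close over it) can see the cached handle.
{
    my $shared_dbh;          # the one shared connection for the process
    my $connect_count = 0;   # just for demonstration

    sub get_dbh {
        my ($connector) = @_;    # coderef that makes a real connection
        if ( !defined $shared_dbh ) {
            $shared_dbh = $connector->();   # connect only once
            $connect_count++;
        }
        return $shared_dbh;
    }

    sub connection_count { return $connect_count }
}

# In the real system the connector would wrap the DBIx connect() call
# with the 500-attempt retry loop; here a stub stands in for it.
my $fake_connect = sub { return { handle => 'db' } };

my $h1 = get_dbh($fake_connect);
my $h2 = get_dbh($fake_connect);

print "same handle\n" if $h1 == $h2;    # both callers share one handle
print 'connections made: ', connection_count(), "\n";
```

Because the handle lives in a lexical rather than in each object, every instance of every module inheriting the accessor sees the same connection, which is what keeps the process down to a single MySQL connection.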

However all is not rosy in the garden of SQL: the two old LORLSv5 modules that are needed to read the old reading lists can also generate large numbers of connections. I’m experimenting with closing the handles they create as soon as I’ve issued a new() method call and then telling them to use a hand crafted DBIx connection that is already set up. Seems to work but I keep finding more bits where it sets up new connections unexpectedly – not helped by the recursive nature of the build_structures LUMP import script. Aaggghhhh! 🙂

Creating Indexes

Whilst writing a little dev script to nullify Moodle IDs for Jason, I realised that things could be a bit slow sometimes searching for SUs when the system had a large number of reading lists in it. Once again I’d made the schoolboy error of forgetting to create indexes in the DB schema for the foreign keys in the various tables. To that end the schema now has indexes on any ID fields in any tables, plus a couple of obvious ones we might want to search on. We’ll probably have to add more in the future, especially on large tables (structural_unit, data_element, access_control_list, user, usergroup for example).
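The sort of thing added is sketched below for a couple of the tables; the index and column names here are assumptions rather than the real schema:

```sql
-- Sketch only: index and column names assumed for illustration.
CREATE INDEX su_sut_idx ON structural_unit (structural_unit_type_id);
CREATE INDEX de_su_idx  ON data_element (structural_unit_id);
CREATE INDEX acl_su_idx ON access_control_list (structural_unit_id);
```

Without these, every lookup of an SU’s data elements or ACLs is a full table scan, which is exactly what was making searches crawl once the table sizes grew.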

Also been trying to track down uninitialised variables in the build_structures LORLSv4 import script – it works at the moment but looks messy and I’d rather have proper error checking code.

The EditDataElement API and Moodle IDs

Today we added the first web API call that could actually change things in the LUMP database (as opposed to just reading from it). The driver for this came from integration with Moodle. Today we’d got a LUMP reading list to display from within Moodle, but we needed to be able to link a Moodle ID to a reading list SU. The resulting EditDataElement API call allows some flexibility in how the data element in question is specified. It could be by the data element’s ID, in which case the value is simply updated. However the client code might not know the data element ID or it might require a new one, so it can create/edit data elements by specifying the required new value, the ID of the SU that it is within and the data type name of the data element.

One subtle finesse required is that some DTs are repeatable but others are not. For repeatable DTs the SU ID and DT name can be used to simply create a new DE. However for non-repeatable DTs, a flag can be provided to indicate whether to replace any existing value. If a DE of the required DT already exists in this SU and this flag is not set to ‘Y’, a permission denied error is returned.
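The repeatable/non-repeatable rule can be sketched as below. This is not the real EditDataElement code – a plain hash stands in for the data_element table and the parameter names are assumptions – but it shows the behaviour described above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# %store stands in for the data_element table:
# $store{su_id}{dt_name} = [ values... ]
my %store;

sub edit_data_element {
    my (%p) = @_;   # su_id, dt_name, value, repeatable, replace
    my $slot = \$store{ $p{su_id} }{ $p{dt_name} };

    if ( $p{repeatable} ) {
        push @{ $$slot }, $p{value};     # repeatable DT: just add a new DE
        return { ok => 1 };
    }
    if ( @{ $$slot || [] } && ( $p{replace} // '' ) ne 'Y' ) {
        return { error => 'permission denied' };   # refuse to clobber
    }
    $$slot = [ $p{value} ];              # create, or replace with consent
    return { ok => 1 };
}

# A repeatable DT (a note, say) just accumulates values...
edit_data_element(su_id => 1, dt_name => 'note', value => 'a', repeatable => 1);
edit_data_element(su_id => 1, dt_name => 'note', value => 'b', repeatable => 1);

# ...but overwriting a non-repeatable moodleId needs replace => 'Y'.
edit_data_element(su_id => 1, dt_name => 'moodleId', value => 42);
my $r = edit_data_element(su_id => 1, dt_name => 'moodleId', value => 43);
print $r->{error}, "\n";   # permission denied
edit_data_element(su_id => 1, dt_name => 'moodleId', value => 43, replace => 'Y');
```

The point of the flag is that a client which genuinely wants to update a value must say so explicitly, rather than silently losing data because it didn’t know a DE already existed.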

Tweaking output when rendering

Had to slightly change the output XML when using the GetStructuralUnit CGI script API. Originally if asked for HTML the code would just generate the HTML (or whatever format) for each child to the required depth level and just concatenate it. However this meant that higher levels couldn’t make formatting decisions based on details of lower levels – for example reading list SUs need to know when to turn list tags on or off in the HTML depending on whether children are notes or book/journal/article/etc SUs. The tweak was to always have the child return a structure:

$child->{su}->{render} = formatted output
$child->{su}->{id} = child SU's ID

The child SU’s ID field lets the parent’s code make more intelligent rendering decisions.
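For example, a parent could walk the child structures and only wrap runs of citation-type children in list tags. This is a sketch rather than the real rendering code – the %is_item lookup stands in for whatever type check the real code does via the child’s ID:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Children come back as { su => { id => ..., render => ... } }.
my @children = (
    { su => { id => 1, render => 'Some note text' } },
    { su => { id => 2, render => '<li>Book one</li>' } },
    { su => { id => 3, render => '<li>Book two</li>' } },
);
my %is_item = ( 2 => 1, 3 => 1 );   # which child SUs are book/article SUs

my ( $out, $in_list ) = ( '', 0 );
for my $child (@children) {
    my $item = $is_item{ $child->{su}{id} };
    $out .= "<ul>\n"  if $item  && !$in_list;   # entering a run of items
    $out .= "</ul>\n" if !$item && $in_list;    # leaving a run of items
    $in_list = $item ? 1 : 0;
    $out .= $child->{su}{render} . "\n";
}
$out .= "</ul>\n" if $in_list;
print $out;
```

With only concatenated HTML the parent had no way to make this decision; passing the ID alongside the rendered fragment is what makes it possible.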

Moodle Proof Of Concept

Today I got a very simple proof of concept together for a Moodle plugin. I thought it best to do it at this stage, as one of the long term goals of redeveloping LORLS is to enable multiple front ends to work with the same back end. The main problem encountered was finding a way to link the resources in Moodle to the reading lists in LUMP.

The good news was that this was the sort of change that LUMP was designed to handle.  A few minutes later and reading lists in LUMP had a moodleId data element which can be used to link each Moodle resource to the related reading list.

Deciding how to import LORLS materials into LUMP

Had to produce the rules to attempt to convert LORLS’ rather lax material descriptions into SUs of the appropriate type (book, book chapter, journal, journal article, electronic resource or note). This decision is currently made using a variety of hints. For example book chapters are distinguished from journal articles by the lack of an issue number, by whether the partauthor field is filled in, or by whether the control number looks like an ISSN rather than an ISBN. It probably isn’t perfect but hopefully it will get 90% of the works right. And as long as most of the rest look OK when rendered, most folk won’t notice anyway!
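The kind of hint-based guessing involved can be sketched like this. The field names (issue, partauthor, control_number) and the exact rule ordering are assumptions for illustration, and the ISBN check in particular is deliberately crude:

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub looks_like_issn {
    my $n = shift // '';
    # Eight digits, optional hyphen, X allowed as the check digit.
    return $n =~ /^\d{4}-?\d{3}[\dXx]$/;
}

sub looks_like_isbn {
    my $n = shift // '';
    $n =~ s/[-\s]//g;
    # Crude sketch: ISBNs are 10 or 13 characters once separators go.
    return length($n) == 10 || length($n) == 13;
}

sub guess_type {
    my (%rec) = @_;
    return 'journal article'
        if $rec{issue} || looks_like_issn( $rec{control_number} );
    return 'book chapter' if $rec{partauthor};
    return 'book'         if looks_like_isbn( $rec{control_number} );
    return 'note';
}

print guess_type( issue => 3 ), "\n";
print guess_type( partauthor     => 'Smith',
                  control_number => '0-201-53082-1' ), "\n";
print guess_type( control_number => '0-201-53082-1' ), "\n";
```

Heuristics like these will misfile the odd record, which is why “looks OK when rendered” is the real acceptance test.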

Acting as other users

We realised that we need to allow members of the Admin group to be able to act as other users. For example Moodle won’t know the user’s AD log in information but we still want LUMP to allow the Moodle logged in user to create, edit, delete and view LUMP objects as themselves (rather than just a default Moodle user). Thus we need to allow admin users to log in with their own credentials (using one of the supported authentication mechanisms) and then switch to act as another user. In the web API this is achieved by passing an “act_as” CGI parameter filled in with the username that the script should appear to be running as. Currently only members of the Sysadmin group will have this power (which will need to include the Moodle block as “trusted” code).
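The check itself is simple and can be sketched as below; the group structure and function name are assumptions, not the real LUMP authentication code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Decide which user a request should run as: normally the real user,
# but a Sysadmin member may supply act_as to run as someone else.
sub effective_user {
    my ( $real_user, $act_as, $groups_of ) = @_;
    return $real_user unless defined $act_as && length $act_as;
    return $act_as
        if grep { $_ eq 'Sysadmin' } @{ $groups_of->{$real_user} || [] };
    die "permission denied: $real_user may not act as $act_as\n";
}

my %groups_of = (
    moodle_block => ['Sysadmin'],    # the "trusted" Moodle code
    eve          => ['Academic'],
);

print effective_user( 'moodle_block', 'jbloggs', \%groups_of ), "\n";
```

So the Moodle block authenticates once with its own credentials, then every API call it makes on behalf of a student or academic carries act_as with their username.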

Meeting with Talis

Had a really interesting meeting this afternoon with a couple of folk from Talis. Mark Bush, who is new to the company, contacted me a few weeks ago to arrange it. The purpose of the meeting was to gain greater understanding of different approaches to reading list management. Mark was accompanied by Ian Corns who despite his job title of “User Experience Champion” didn’t arrive wearing a cape or with his underpants over his clothes.

The first part of the meeting turned into a show and tell: with me detailing the birth of LORLS and our ongoing project to redevelop the system, and Mark showing me a little of Aspire, Talis’ replacement for their existing Talis List product. The thing that struck me was how similar in concept their new solution is to what we’re doing with LORLS.

The second half of the meeting was given over to discussing the various possible strategies to use when implementing a reading list solution. Obviously selecting an appropriate system is important. However many of the key issues that will determine whether it is a success are not necessarily system dependent.

For a listing of some of these key issues send £12.99 and a stamped self-addressed envelope to Ga… OK OK I’ll tell, please just stop hitting me Jon 🙂

  1. Involve all stakeholders as soon as possible in the implementation process – pretty obvious I know but still important to remember
  2. From a Library perspective it’s much easier to work with academics if you’re not seen as the ones forcing the system upon them
  3. Pump priming the system with academics’ existing (often Word based) reading lists can be a real winner – once a list is on the system it is much easier to get academics to update it, or at least be aware when they haven’t so you can then nag them about it!
  4. Training, training, training
  5. Local champions can often do more for the success of a project than official promotions – identify your champions and support them
  6. It’s important to know the lie of the land – what may work with one department won’t necessarily work with another. For example Engineers have a very different approach to reading lists than Social Scientists.
  7. Competition between academic departments or faculties can be a useful means of encouraging adoption of the system but needs to be done with care
  8. Use every opportunity to stress the importance of reading lists to academic departments. For example: bad module feedback? That’s down to your lack of reading lists on the system. External review approaching? Why not invest some time in updating your reading lists to demonstrate clear communication between academics, librarians and students.

Why formatting Perl is held in the database

The current ER model includes tables that hold formatting information in order to allow the layout of reading lists in different formats (HTML, BibTeX, Refer, etc). This formatting information is currently planned to be snippets of Perl code, allowing some serious flexibility in formatting the output.

However the question is, “why is this code in the database?”. Here at Loughborough we could just as easily have it stored in the filesystem along with all the other Perl code for the back end. The reason for having the Perl fragments held in the database is that it will potentially make life easier at other institutions, where there is a separation between the folk who run servers and install the software, and the folk who actually manage the library side of the database. In some cases getting software updates loaded on machines can be tricky and time consuming, so having the formatting code held in the database will allow the library administrators to tweak it without having to jump through hoops.
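Mechanically, running a snippet pulled from the database comes down to a string eval with the SU’s data in scope. This is a sketch of the idea rather than the real rendering code – the snippet below stands in for a row from the formatting tables:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pretend this string was just fetched from a formatting table row.
my $snippet = q{ '<li>' . $su->{title} . '</li>' };

my $su   = { title => 'Brain of the Firm' };
my $html = eval $snippet;    # run the stored Perl fragment
die "bad format snippet: $@" if $@;
print $html, "\n";           # <li>Brain of the Firm</li>
```

The flexibility cuts both ways of course: whoever can edit the formatting rows can run arbitrary Perl in the back end, which is another reason access to that table needs to be restricted to the library administrators.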
