Improving performance
As we seemed to have reached a stable period for our codebase, I decided it was time to start digging into LUMP’s code to see where we could make some performance improvements.
I took a standard approach to performance tuning: installing a profiler (NYTProf) on LUMP’s development server and configuring Apache to use it for Perl CGI scripts. After a short period of profiling, three places where performance could be improved stood out.
The first improvement was removing a system call (using backticks) to the whoami program to identify a user’s login. Normally this wouldn’t make much difference, but the call was in a subroutine invoked 30 or 40 times by the FastGetStructuralUnit script, and spawning a new process for each whoami run pushed this line to the top of the profiler’s output. A quick change from `whoami` to getlogin() and the line disappeared from the profiler’s list of time-consuming lines.
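The difference is easy to see side by side. Below is a minimal sketch (the sub names are illustrative, not LUMP’s actual code); note that getlogin() can return undef in some environments, such as processes without a controlling terminal, so falling back to getpwuid() is a common precaution:

```perl
use strict;
use warnings;

# Before: backticks spawn a new process on every call
sub get_login_slow {
    my $login = `whoami`;
    chomp $login;
    return $login;
}

# After: a direct library call, no process spawn.
# getlogin() may return undef (e.g. no controlling terminal),
# so fall back to looking up the real user ID.
sub get_login_fast {
    return getlogin() || ( getpwuid($<) )[0];
}
```

Called 30 or 40 times per request, avoiding the fork/exec per call is exactly the kind of win the profiler highlighted.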
The second improvement was tucked away in the BaseSQL module. The MySQL call to get the ID of the last inserted row regularly appeared in the profiler’s output as a slow call. Digging into the code, I found the line
$sql = "select LAST_INSERT_ID() from ".$self->{_table};
This is an instinctive line to write when you want the ID of the last item inserted into a specific table (almost always inserted by the previous SQL call). The problem is that LAST_INSERT_ID() returns the ID of the last row inserted in the current session, not in the specified table, so the FROM clause is pure overhead. Here is an example that demonstrates why it was a waste of time.
mysql> select count(LAST_INSERT_ID()) from data_element;
1933973
By calling it the way we were, MySQL would pass back a row, each containing the same LAST_INSERT_ID() value, for every row in the table we specified. The solution was simply to change the line to
mysql> select count(LAST_INSERT_ID());
1
This change knocked a second off the running time of some of our slower API calls and also reduced the load on the MySQL server.
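In Perl terms the fix was a one-line change to the SQL string. A sketch of the before and after (the commented DBI alternatives are an aside; the post doesn’t say whether LUMP uses them):

```perl
use strict;
use warnings;

# Before: the FROM clause makes MySQL return the same value once per
# row in the named table -- ~2 million rows in data_element's case.
my $slow_sql = "select LAST_INSERT_ID() from data_element";

# After: a constant one-row result, regardless of table size.
my $fast_sql = "select LAST_INSERT_ID()";

# With DBI the extra query can be skipped entirely:
#   my $id = $dbh->last_insert_id(undef, undef, undef, undef);
#   my $id = $dbh->{mysql_insertid};   # DBD::mysql-specific
```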
The third factor that stood out as slowing down most API calls was the module used to build the XML response, XML::Mini. XML::Mini is a powerful module for processing very complex XML documents, but we needed little of that power, as our XML structure was deliberately kept simple. An hour later we had produced XMLify, which takes the same hash reference we passed to XML::Mini and produces compatible output. By using our own XMLify subroutine we reduced the run time of our large FastGetStructuralUnit calls by over a second.
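The post doesn’t show XMLify itself, but the idea — walking a hash reference and emitting elements directly — can be sketched in a few lines. This is purely illustrative: the real XMLify presumably also handles arrays, attributes, and character escaping, which this sketch omits.

```perl
use strict;
use warnings;

# Illustrative sketch of a hash-to-XML serialiser in the spirit of
# XMLify: hash keys become element names, nested hashes recurse, and
# scalar values become text content. Keys are sorted for stable output.
# No escaping of &, <, > -- a real implementation would need it.
sub hash_to_xml {
    my ($data) = @_;
    my $xml = '';
    for my $key ( sort keys %$data ) {
        my $value = $data->{$key};
        if ( ref $value eq 'HASH' ) {
            $xml .= "<$key>" . hash_to_xml($value) . "</$key>";
        }
        else {
            $xml .= "<$key>$value</$key>";
        }
    }
    return $xml;
}
```

For a deliberately simple, fixed structure like LUMP’s responses, a direct serialiser like this avoids all the generality (and overhead) of building an intermediate document tree the way XML::Mini does.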