Implementing Change Management in RT

We moved (back) to using the Best Practical Request Tracker (aka ‘RT’) for managing our service desk incident tickets last year. RT is a great open source ticketing system that Best Practical make available for free if you want to support and extend it yourself, or they’ll happily sell you good value support contracts and hosting packages if you’d like them to manage the technical details. Being a bunch of little Perl code monkeys in the MALS team, we went for the “host and support it ourselves” option, especially as one of our number had run RT before so was able to get us up to speed fast.

Once incident ticketing was working successfully, interest within our department turned to using RT to help manage the change request process we have. The previous commercial service desk systems offered some support for change management but it was… erm… “less than friendly” might be a polite way of putting it. It was actively putting people off from submitting change requests. We needed something that IT staff could understand easily, yet would be able to handle our rather complex change workflow.

RT 4.4 comes with an Approvals system built in. This lets tickets in queues enter states that need one or more levels of management approval. RT also supports custom-defined “life cycles” for queues. This means that the states a ticket can have, the transitions between those states, and the rights that users and groups have to make those transitions can all be customised, and different queues in the system can have different ticket status life cycles. This is quite flexible and configurable and is often enough for many sites, but from a bit of a play we knew we’d be doing some local modifications to support the complexity of our change process workflow.

This post details the Loughborough IT Services change workflow and provides a broad overview of how we went about implementing this in RT.

The Loughborough Change Workflow

Figure 1 below shows the workflow we have for change management here at Loughborough. This is the basis of the change life cycle in RT.

Figure 1: Loughborough University IT Services change management workflow

When a new change is created it has a status of draft. It stays in draft until it is either marked as deleted or submitted for approval. Our changes come in three types: “pre-approved”, “normal” and “emergency”, and part of the change request drafting process includes selecting the type (which we keep in one of several custom fields described later). For “pre-approved” changes (typically changes that happen regularly, have low impact elsewhere and have been well tested), the change request workflow is simple and automated: as soon as the request ticket changes state to submitted, a scrip changes the status to open so that it can be worked on.
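
To give a flavour of how little is involved, the guts of that scrip look something like the sketch below (the statuses are ours, but the “Change Type” custom field name is just an illustrative assumption, and our production scrip does rather more error checking):

# Custom condition: fire when a change request ticket moves into "submitted".
return 0 unless $self->TransactionObj->Type eq 'Status';
return 0 unless $self->TransactionObj->NewValue eq 'submitted';
return 1;

# Custom action (commit code): "pre-approved" changes skip the approvals and open immediately.
my $ticket = $self->TicketObj;
return 1 unless ( $ticket->FirstCustomFieldValue('Change Type') || '' ) eq 'pre-approved';
my ( $ok, $msg ) = $ticket->SetStatus('open');
RT->Logger->error("Failed to open pre-approved change: $msg") unless $ok;
return 1;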

For a “normal” change, we have a two-level approval process. Firstly the manager of the team that the person submitting the change belongs to has to check that the team has sufficient resources and availability to make the change. They have three options open to them. They can approve the request, in which case the change request ticket is moved to the next phase of change management (see below). If the line manager feels the request is inappropriate, they can turn it down, which moves the change request ticket status to rejected. This is a dead end state – tickets cannot be brought back from it, and if it is later decided that the change is needed, a new change request will have to be submitted.

Now approving and rejecting are normal RT Approvals options. Where things get interesting is that we have a third option for manager approval: requesting some reworking. This effectively says that the manager thinks the change request is worthwhile, but some aspect of the request needs redrafting before it can be approved. When the manager does this, the change request ticket status is changed to rework_requested and it is passed back to the requestor/owner to make edits on. When this is done, they can alter the change request ticket status to submitted again, and the approval process is restarted. It is possible to go round this loop several times if needs be.

This requires a local modification to /opt/rt4/local/html/Approvals/index.html that cancels the pending approval ticket(s) and then resets the change request ticket’s status to rework_requested. In this file we also take any comments from the manager that go into the approval ticket and copy them into the change request ticket that it is linked to, so that management comments are visible directly to the change requestor/owner. There are also scrips in place that email out these comments from approvers, so that requestors/owners are aware of any conditions or instructions being provided.
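
For the curious, the heart of that override does something along these lines (a heavily simplified sketch: the real Mason component also has to deal with form handling, permissions and notifications, and $approval_id and $manager_comments here stand in for values pulled from the submitted form):

# When an approver picks "request rework" on an approval ticket...
my $approval = RT::Ticket->new( $session{'CurrentUser'} );
$approval->Load($approval_id);

# Find the change request ticket(s) that depend on this approval.
for my $link ( @{ $approval->DependedOnBy->ItemsArrayRef } ) {
    my $change = $link->BaseObj;
    # Copy the manager's comments across so the requestor/owner can see them.
    $change->Comment( Content => $manager_comments ) if $manager_comments;
    $change->SetStatus('rework_requested');
}

# Cancel the pending approval ticket itself.
$approval->SetStatus('deleted');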

Once the line manager has approved a change request ticket, it is moved to our Change Advisory Board (CAB) queue where the Change Manager can review it. This might involve them handling the change request decision themselves, or scheduling it on the weekly CAB meeting agenda for more in-depth debate and decision making. As with the line manager, the Change Manager can opt to approve, reject or request rework on the change request, with approval meaning that the change request ticket status changes to open so that it can be worked on. If they request reworking, the change request ticket status is changed to rework_requested and it goes back to the start of this procedure, so once re-submitted it will go to the line manager for a new approval. The reasoning here is that the Change Manager may know that a change clashes with other changes, business events, etc, but they don’t necessarily know what resourcing the line manager has in the team at any point in time, so the line manager has to OK the altered change request again.

The last type of change request is the “emergency” change. These are things that need to be done quickly, so there isn’t time to get line manager and CAB/Change Manager approval. An “emergency” change looks like a “normal” change but it goes straight to our Senior Leadership Team’s group for approval. Typically this is done outside of the system via face to face meetings, texts or phone calls, and the change management system is just recording that the decision has been made, so although the SLT group can theoretically reject or ask for reworking, some “emergency” changes will have already happened (or at least be underway) by the time the change request ticket is submitted.

No matter which of the three change request types are chosen, once the change request ticket has the open status it can be worked on. We have a scrip on the CAB queue that detects the status change and emails the requestor/owner of the change request ticket to let them know this. Work might not take place immediately, as all changes have a “change window” which is recorded in the RT Starts and Due times. Some large, multipart change requests may also be moved to the status of stalled whilst they wait for other things to happen (such as other inter-related changes taking place, or feedback from users on the result of the change). In fact the change request ticket status is free to bounce back and forth between open and stalled as required. The people working on the change can also interact with the ticket as normal at this point, adding comments, etc.

The last phase of any change is its completion. We have four statuses that a change request ticket can take at this point. Firstly it could be closed_complete which means that the change has been successfully made and everything worked. If some bits of the change didn’t work out, but other parts did and a partial result is deemed acceptable, the status can be set to closed_incomplete. If things went seriously wrong and the change had to be rolled back to its previous position and then abandoned, the status is set to closed_rolled_back. Lastly some change requests are obsoleted by other events before they can be undertaken, so these can be abandoned by setting the status of the change request ticket to cancelled.

Change queues and supporting groups

In our department we have a large number of teams, each with a team manager or leader. Many of these teams may be submitting changes, so we made a new change related queue for each team, using the change life cycle described above. The change request tickets only live in these queues whilst being drafted, reworked or awaiting line manager approval. Once a line manager has approved a “normal” change, it is moved by a scrip to the CAB queue for change manager approval. “Pre-approved” and “emergency” change request tickets are immediately moved to the CAB queue once they change status to submitted.

The team change queues are relatively private – only team members and the Change Manager can see the details in them whilst drafting. However once a change request ticket is moved to the CAB queue it is more widely visible to all staff in the department. This lets “more eyes” look at the changes being requested and worked upon, which is helpful in pointing out issues and avoiding clashes.

The team queues usually have at least two groups associated with them (some have more!) which are the team members and their line manager(s). The scrip attached to all the team change queues that watches for change request ticket status changes knows to look for a particular management group based on the change queue that the ticket was created in. This group is used to generate the first line approvals.
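
The lookup itself is nothing clever; something along these lines does the job (the “&lt;queue name&gt; Managers” group naming convention is invented purely for this example):

# Work out which management group should give first line approval for this queue.
my $queueName = $self->TicketObj->QueueObj->Name;
my $managers  = RT::Group->new( RT->SystemUser );
$managers->LoadUserDefinedGroup( $queueName . ' Managers' );
RT->Logger->warning("No management group found for queue $queueName") unless $managers->Id;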

We also have groups for our Senior Leadership Team, to whom the Emergency change requests are sent for approval, and the Change Manager, so that we can cover change approval processes during holidays, etc.

Change request user interface

To make the process easier for staff to submit change requests, we produced a set of custom user interfaces and a new menu in RT (see Figure 2). The menu provides staff with links to a simple interface for drafting/revising change requests before submission, and also shows them what change requests they have in the various stages of drafting, approval and revision, as well as those currently active. It also has links to another web site that hosts the documentation for both the change request process and the “quick guide” for handling changes in our RT setup.

Figure 2: the change request menu and “My Changes” page

The change drafting/revision user interface helps the user fill in all the custom fields that are required for the particular type of change they are creating. Technically these change requests are just RT tickets, so users are still free to use the existing RT ticket creation/editing tools even at this stage, but the custom user interface makes it clearer that certain fields are required for certain change types. This cuts down on the to-and-fro of change requests being sent back because they are not complete enough for line or change management to approve.

The “normal” and “emergency” change request types look very similar (Figure 3), as they require the same basic information and just differ in the approval flows. They require a change subject (which is the RT ticket subject) and a change window start/end date/time (which becomes the RT ticket’s Starts and Due dates). Whilst drafting the rest of the fields are optional, but before submission the custom fields for risk level, change description, risk description, user impact, communications plan, change plan, change testing, backout plan and resource plan must be filled in. Optionally the submitted change can also have attachments added and/or links made to other tickets in our system (including other change requests, which means that larger programmes of work can have overarching change requests for the whole programme across teams, with individual change requests for particular aspects, all linked together).

Figure 3: The “normal” change request drafting user interface. The “emergency” change request user interface is identical to this.

The “pre-approved” change user interface looks slightly simpler (Figure 4). It still requires a change subject, start and end dates and a change plan, but the rest of the custom fields detailed above are replaced by a drop-down that selects an article detailing the required process. These articles are normal RT articles in a separate category that the Change Manager can create and edit when “pre-approved” change processes are decided upon or revised.

Figure 4: The Pre-Approved change request user interface.

Auto-reminders for change due dates

Once a change request is approved, a scrip automatically sets up a reminder for the change owner based on the due date of the change window in the request. This not only reminds people of changes they need to work on soon (as some changes are approved weeks or months in advance) but also reminds them to close the change request appropriately once the work has been done.

When a change is completed or cancelled, we have a scrip that also removes any pending reminders automatically. This stops people being reminded about tickets that have been completed and closed.
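
A cut-down sketch of what those two scrips do is below (assuming the stock RT::Reminders API; the real versions set more useful subjects and cope with missing dates):

# On approval: add a reminder for the change owner, due when the change window closes.
my $ticket = $self->TicketObj;
$ticket->Reminders->Add(
    Subject => 'Change window closes: ' . $ticket->Subject,
    Owner   => $ticket->Owner,
    Due     => $ticket->DueObj->ISO,
);

# On completion/cancellation: resolve any reminders still outstanding on the ticket.
my $reminders = $ticket->Reminders->Collection;
while ( my $reminder = $reminders->Next ) {
    $ticket->Reminders->Resolve($reminder);
}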

Linking to an Office 365 change calendar

We had a pre-existing Microsoft Office 365 shared calendar which was intended to record change requests, so that people could easily see visually how they related to each other and other work, meetings, etc they were involved with. This was not being utilised before we moved the change requests to RT, and the Change Manager asked if we could get RT to create and manage the events in this calendar automatically.

To do this, we made a new custom action called RT::Action::ChangeCalendar and an extension called RT::Extension::Office365::Calendars. The custom action just makes it easier to create scrips that use this action when change request tickets are updated. It makes use of the extension to do the actual work, which in turn uses the Microsoft GraphAPI to talk to the Office 365 system.

The RT::Extension::Office365::Calendars extension offers the following methods:

  • CreateEventForTicket – creates a new Office 365 calendar event for a given ticket
  • UpdateEventForTicket – updates an existing Office 365 calendar event for a given ticket
  • DeleteEvent – removes an event from an Office 365 calendar
  • FindEvents – gets a list of events in an Office 365 calendar
  • CalendarExists – checks if a named Office 365 calendar exists for a given Office 365 user Id
  • FindUserId – finds the Office 365 user Id for a given user principal name
  • GetAccessToken – gets the GraphAPI access token to use the API

The Office 365 tenant UUID, client Id and client secret used to gain access to the GraphAPI are held in our RT_SiteConfig setup so they are easy to alter without adjusting the code. This separation between action and extension also means we can reuse the extension for other Office 365 calendar access in the future (for example putting RT reminders into an individual’s default calendar).
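
In practice that just means a few Set() lines in RT_SiteConfig.pm along these lines (the option names are our own invention for the extension, and the values are obviously placeholders):

# Office 365 / GraphAPI credentials used by RT::Extension::Office365::Calendars.
Set( $Office365TenantId,     '00000000-0000-0000-0000-000000000000' );
Set( $Office365ClientId,     '11111111-1111-1111-1111-111111111111' );
Set( $Office365ClientSecret, 'not-the-real-secret' );
Set( $Office365CalendarUser, 'change-calendar@lboro.ac.uk' );   # mailbox that owns the shared change calendar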

The RT::Action::ChangeCalendar action and its associated scrips allow change requests that are being drafted, reworked or going through the approval process to be shown in Office 365 as “tentative” events, whilst approved change requests are shown as “busy”. This gives staff a quick way to check if other people are working on changes that might impinge on their own work or planned changes.

Change reporting

The Change Manager and the Senior Leadership Team were keen on having some reporting to show how well the new RT based change system was working. Luckily RT’s existing SearchBuilder functionality makes most of this reporting quite easy to set up. To start with we set up a dashboard called “Change Reporting” to which we attached graphs based on searches. These included total changes per month, a breakdown of change types by month (to allow seasonal trends to be spotted), the change closure statuses and currently active change requests broken down by requestor. See Figure 5 for an example of part of this dashboard. These may be expanded upon over time, but these initial reports give the department’s management a nice “heads up” on how the change request process is being utilised.

Figure 5: Change reporting graphs

Programmatically inserting Internet Calendar subscriptions into Outlook Web Access via Exchange Web Services

As part of the work required for moving all of our students from Google Apps for Education to Microsoft Office365, we’ve got to somehow replicate our timetabling information calendars.  We did these in Google a year or so ago, and they’ve been pretty successful and popular in production use, both with the students and the folk who run the timetabling. Losing the functionality they provide would be unpopular, and make the Office365 migration somewhat less compelling for the University as a whole.

We’d naively hoped that Microsoft’s new Graph API would let us do this using Office365 groups, but it has limitations when you need to do things as overnight batch jobs running on behalf of thousands of users.  We did hack up a script that used a combination of Graph and Exchange Web Services (EWS) (the older, SOAP based API from Microsoft) to create Office365 group calendars, but we found that we’d need a large number of fake “system” users that would own these groups/calendars due to per user calendar/group creation and ownership limits.  We put this to the project team, but it wasn’t a popular idea (despite it being similar to how we do things in Google land – but then again, Google Apps for Education users are free, so we don’t care how many we create).

As an alternative it was suggested that we look at generating iCalendar (ICS) format files that the students could subscribe to. We’d already done something similar for staff who are already on Office365, so creating these from our intermediate JSON data files that we currently use to feed into Google was relatively easy. It should be noted that this is subscribing to constant periodic updates from an Internet delivered ICS file, not just loading a single instance of an ICS file into a user’s default calendar – the same file format is used for both operations!
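
Generating the ICS is little more than walking that JSON and emitting events. A minimal sketch using Data::ICal (the field names in our intermediate JSON are assumptions here):

#!/usr/bin/perl
use strict;
use warnings;
use JSON;
use Data::ICal;
use Data::ICal::Entry::Event;

# Load the intermediate JSON feed we already generate for the Google calendars.
open( my $fh, '<', 'timetable.json' ) or die "Cannot open timetable.json: $!";
my $events = decode_json( do { local $/; <$fh> } );
close $fh;

my $calendar = Data::ICal->new;
foreach my $e (@$events) {
    my $vevent = Data::ICal::Entry::Event->new;
    $vevent->add_properties(
        summary  => $e->{title},
        location => $e->{room},
        dtstart  => $e->{start},                 # e.g. 20170925T090000Z
        dtend    => $e->{end},
        uid      => $e->{id} . '@lboro.ac.uk',
    );
    $calendar->add_entry($vevent);
}
print $calendar->as_string;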

It was then suggested as part of the project that we investigate how we could force calendar subscriptions to these ICS files in Office365.  The existing staff subscriptions are optional and each individual staff user has to subscribe themselves.  Not many do to be honest, and this option would potentially create a lot of work for the service desk and IT support folk when students are migrated, and then at the start of each year.  If the students could have the subscription to the pre-generated ICS file made for them, and kept up to date (so that if they accidentally delete it, it gets reinstalled automatically within 24 hours), we’d have something that would look very much like the existing Google calendars solution.

A quick search (via Google!) showed that quite a few people have asked how to create subscriptions automatically to Internet calendars before… and the usual answer that comes back from Microsoft is that you can’t.  Indeed we’d asked initially if Graph API could do this, but got told “not yet supported”, although it was a “great scenario”.  This is rather surprising when you consider that Microsoft are selling Office365 partly on its calendaring abilities. We know the underlying Exchange system underneath Office365 can do it as you can set it up from Outlook Web Access (which appears to use its own proprietary API that looks like a JSON encapsulation of EWS with some extra bits tacked in!).

Intrigued (and tasked with “investigate” by the project), we attempted to hunt down how Office365 sets up and stores Internet calendar subscriptions.  This was a long and tortuous path, involving sifting through large amounts of EWS and Exchange protocol documentation, using MFCMAPI to look at the “hidden” parts of Exchange accounts, quite a lot of trial and error, and an awful lot of bad language! 😉

It transpires that subscriptions to Internet shared calendars generate what appears to be a normal calendar folder under the “calendars” distinguished folder ID.  This calendar folder has a “Folder Associated Item” attached to it, with a class of “IPM.Sharing.Binding.In”.  We’re not sure what that associated item is for yet, but it doesn’t appear to contain the URL pointing to the remote ICS file.  It’s most likely metadata used by the internal system for keeping track of the last access, etc.

The Internet calendar file URL itself is actually stored in a completely different item, in a different folder.  There is a folder called “Sharing” in the root folder (note this is the user’s top level root folder, above the “msgrootfolder” that contains the mailbox, calendars, etc) and this contains items for each internet calendar subscription, including ones that have been deleted from the OWA front end.  These items are in the class “IPM.PublishingSubscription” and, just to make something that is hard even harder, the ICS URL is hidden in some “Extended Properties”. MFCMAPI is a handy tool in the Exchange hacker’s toolbox!

Extended Properties are effectively places that the underlying MAPI objects provide to applications to store custom, non-standard data. You can access and manipulate them using EWS, but only if you know something about them.  The GetItem method in EWS lets you ask for an ItemShape that returns all properties, but it transpires that Microsoft’s idea of “all” means “all the standard properties, but none of the extended ones”. Luckily the MFCMAPI application uses MAPI rather than EWS and so exposes all of these Extended Properties.

The Extended Properties that appear to contain Internet calendar subscription information live in the property set with the GUID ‘F52A8693-C34D-4980-9E20-9D4C1EABB6A7’.  Thankfully they are named in the property set, so we can guess what they are:

Property Tag   Property Name                          Example Contents
0x8801         ExternalSharingSubscriptionTypeFlags   0
0x80A0         ExternalSharingUrl                     https://my-test-server.lboro.ac.uk/timetables/jon.ics
0x80A1         ExternalSharingLocalFolderId           LgAAAACp0ZZjbUDnRqwc4WX[…]
0x80A2         ExternalSharingDataType                text/calendar
0x80A6         ExternalSharingRemoteFolderName        Timetable for j.p.knight@lboro.ac.uk
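
To actually get these back through EWS you have to ask for each one explicitly in the GetItem (or FindItem) request’s ItemShape, since they are never included in “AllProperties”. The fragment looks something like this (property names as per the table above):

<m:ItemShape>
  <t:BaseShape>AllProperties</t:BaseShape>
  <t:AdditionalProperties>
    <t:ExtendedFieldURI PropertySetId="F52A8693-C34D-4980-9E20-9D4C1EABB6A7"
                        PropertyName="ExternalSharingUrl"
                        PropertyType="String" />
    <t:ExtendedFieldURI PropertySetId="F52A8693-C34D-4980-9E20-9D4C1EABB6A7"
                        PropertyName="ExternalSharingDataType"
                        PropertyType="String" />
  </t:AdditionalProperties>
</m:ItemShape>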

We guessed that the ExternalSharingLocalFolderId Extended Property would point to the normal calendar folder.  It does, but there’s a twist still: EWS returns this ID in a different format to all the others.  Normally, EWS returns folder and item IDs as Base64 encoded binary strings in a format called “EwsId”.  Whilst ExternalSharingLocalFolderId is indeed a Base64 encoded binary string, it is in a different format called “OwaId”. If you feed an “OwaId” format identifier that you’ve got from a FindItem or GetItem call in EWS back into another call in EWS, you’ll get an error back.  You have to take the Base64 encoded “OwaId”, pass it through an EWS ConvertId call to get a Base64 encoded “EwsId” and then use that.  No, it doesn’t make any sense to us either!
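
The ConvertId call itself is mercifully short; something like this does it (the Id is the “OwaId” pulled from the extended property, and the Mailbox is the user who owns the folder):

<m:ConvertId DestinationFormat="EwsId">
  <m:SourceIds>
    <t:AlternateId Format="OwaId"
                   Id="LgAAAACp0ZZjbUDnRqwc4WX[…]"
                   Mailbox="j.p.knight@lboro.ac.uk" />
  </m:SourceIds>
</m:ConvertId>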

So that lets us get the ICS calendar data back out for calendar subscriptions.  We could theoretically then generate forced subscriptions by creating these folders/items and filling in the extended properties with suitable data.  However, from our point of view of providing a supportable, production service, we decided that we’d gone too far down the Exchange rabbit hole. We would be relying on internal details of Exchange that Microsoft haven’t documented, and indeed have said aren’t supported. At the moment we’re deciding what to do instead. The default fall back is to just give the students the ICS calendar file URL and explain to them how to set up subscriptions using the OWA web interface. This will give a poorer user experience than the current Google calendars do, but there’s not really much else we can do unfortunately.

Getting an X.509 certificate into an Office365 Azure AD app manifest

We’ve been playing with Office365 apps as part of the preparation for a move from Google Apps for Education to Office365 for our students (not my idea!). I’ve been trying to use the new REST based Microsoft Graph API to talk to Office365 from a daemon process so that we can update timetable information in calendars nightly (which is what we do at the moment with Google’s calendar API, and it works well). Unfortunately Graph API is relatively new and isn’t really ready for prime time: for one thing it doesn’t support a daemon process using the confidential client OAuth2 authentication flow creating/updating calendar entries on Unified Groups (even though it does support the same daemon creating and deleting the Unified Groups themselves and adding/removing members. No, I’ve no idea why either… I assume it’s just because it is a work in progress!).

So the next trick to try is to use Exchange Web Services (EWS) via SOAP to see if that can do what Graph API can’t yet. EWS can use OAuth style bearer tokens, so I hoped I could use the nicer Graph API where possible and just have to put the nasty SOAP stuff in a dark corner of the code somewhere. Unfortunately, the SOAP OAuth didn’t like the access tokens that Graph API was giving back: it complained that they weren’t strong enough.

It turns out that this is because I was using a client ID and secret with the Graph API’s OAuth2 code to get the access token, but EWS SOAP calls require the use of X.509 certificates. And this is where, once again, developing against Office 365 gets “interesting” (or to put it another way, “massively painful”). The Azure AD management portal offers a nice interface for managing client IDs/secrets but no user interface for doing the same with X.509 certificates. So how do you link an X.509 certificate to an Azure AD app?

The answer is via the “app manifest”. This is a JSON format file that you can download from the Azure AD management portal’s page for the app. It’s also very lightly documented if you don’t happen to be sitting at a desk in the bowels of Microsoft. Luckily there are very helpful Microsoft folk on Stack Overflow who came to my rescue as to where I should stick my certificate information to get it uploaded successfully.  My certificates were just self-signed ones generated with openssl:

openssl req -x509 -days 3650 -newkey rsa:2048 -keyout key.pem -out cert.pem

The key information I was missing was that the X.509 information goes into its own section in the app manifest JSON file – an array of hashes called “keyCredentials”.  In the original app manifest I’d downloaded, this had been empty, so I’d not noticed it. The structure looks like this:

"keyCredentials": [
{
"customKeyIdentifier": "++51h1Mw2xVZZMeWITLR1gbRpnI=",
"startDate": "2016-02-24T10:25:37Z",
"endDate": "2026-02-21T10:25:37Z",
"keyId": "<GUID>",
"type": "AsymmetricX509Cert",
"usage": "Verify",
"value": 3d4dc0b3-caaf-41d2-86f4-a89dbbc45dbb"MIIDvTCCAqWgAwIBAgIJAJ+177jyjNVfMA0GCSqGSJWTIb3DQEBBQUAMHUxCzAJBgN\
VBAYTAlVLMRcwFQYDVQQIDA5MZWljZXN0ZXJzaGlyZTEVMBMGA1UEBwwMTG91Z2hib3JvdWdoMSAwHg\
YDVQQKDBdMb3VnaGJvcm91Z2ggVW5pdmVyc2l0eTEUMBIGA1UECwwLSVQgU2VydmljZXMwHhcNMTYwM\
<Lots more Base64 encoded PEM certificate data here...>
AiVR9lpDEdrcird0ZQHSjQAsIXqNZ5xYyEyeygX37A+jbneMIpW9nPyyaf7wP2sEO4tc1yM5pwWabn/\
KD9WQ4K8XISjRHOV0NnU4sLni4rAVIcxpNWNPixXg85PDqi6qtL1IW5g7WlSBLPBZJ+u9Y9DORYKka2\
Y/yOFB6YPufJ+sdZaGxQ8CvAWi2CvDcskQ=="
}
],

The keyId is any old GUID you can generate and has nothing to do with the cryptography. The customKeyIdentifier is the fingerprint of the X.509 certificate, and the value field contains the Base64 encoded PEM data for the certificate without its ASCII armour. The startDate and endDate fields have to match up with the corresponding lifetime validity timestamps minted into the certificate (the Azure AD management portal will barf on the upload if they don’t).
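
If you want to conjure those two values up by hand, openssl will oblige (the fingerprint pipeline is the same one the Perl script below shells out to):

# customKeyIdentifier: the Base64 encoding of the certificate's raw SHA-1 fingerprint
openssl x509 -in cert.pem -fingerprint -noout | sed 's/SHA1 Fingerprint=//' | sed 's/://g' | xxd -r -ps | base64

# value: the Base64 certificate body, i.e. the PEM file minus its BEGIN/END armour lines
grep -v 'CERTIFICATE' cert.pem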

One nice feature is that a single app can have multiple authentication methods available. Not only can you have more than one client ID/secret pair (which you need to have as they have a lifespan of at most 2 years, so every year you’ll need to generate a new one before the old one dies), but it appears that having the X.509 certificates in place does not stop the easier client ID/secret OAuth2 access from working.

OK, so let’s have a bit of Perl that can get us a suitable Bearer access token once we’ve got the X.509 certificate stuffed in place inside the Azure app manifest:


#!/usr/bin/perl

# Script to test getting an OAuth2 access token using X.509 certificates.

use strict;
use warnings;
use LWP::UserAgent;
use Data::Dumper;
use JSON;
use JSON::WebToken;
$| = 1;

my $tenantUuid    = '<Our Azure Tenant ID>';
my $tokenEndpoint = "https://login.microsoftonline.com/$tenantUuid/oauth2/token";

# Slurp in the PEM encoded certificate and private key.
open( my $certFh, '<', 'cert.pem' ) or die "Cannot open cert.pem: $!";
my $cert = do { local $/; <$certFh> };
close $certFh;

open( my $keyFh, '<', 'key.pem' ) or die "Cannot open key.pem: $!";
my $key = do { local $/; <$keyFh> };
close $keyFh;

# The x5t JWT header wants the Base64 encoded raw SHA-1 fingerprint of the certificate.
my $fingercmd   = "(openssl x509 -in cert.pem -fingerprint -noout) | sed 's/SHA1 Fingerprint=//g' | sed 's/://g' | xxd -r -ps | base64";
my $fingerprint = `$fingercmd`;
chomp $fingerprint;

my $now        = time();
my $expiryTime = $now + 3600;
my $claims     = {
    'aud' => $tokenEndpoint,
    'iss' => '< Our client ID for the app >',
    'sub' => '< Our client ID for the app >',
    'jti' => '< Our client ID for the app >',
    'nbf' => $now,
    'exp' => $expiryTime,
};

# Sign the claims with the private key to produce the client assertion JWT.
my $signedJWT = JSON::WebToken->encode( $claims, $key, 'RS256', { 'x5t' => "$fingerprint" } );

my $authorizeForm = {
    grant_type            => 'client_credentials',
    client_id             => '< Our client ID for the app >',
    resource              => 'https://outlook.office365.com',
    scope                 => 'Group.ReadWrite.All',
    client_assertion_type => 'urn:ietf:params:oauth:client-assertion-type:jwt-bearer',
    client_assertion      => $signedJWT,
};

my $ua = LWP::UserAgent->new;
$ua->timeout(10);
my $response = $ua->post( $tokenEndpoint, $authorizeForm );
if ( !$response->is_success ) {
    warn "Failed to get access token: " . $response->status_line . "\n";
    die "Response dump:\n\n" . Dumper($response) . "\n";
}

my $json        = JSON->new->allow_nonref;
my $oauth2_info = $json->decode( $response->decoded_content );

print Dumper($oauth2_info);
The $oauth2_info structure contains the required access token, plus its expiry information and a few other bits and bobs. The access token is used with SOAP requests by adding an HTTP Authorization header containing the word “Bearer”, a space and then the access token as returned (the access token will be a Base64 encoded JWT itself).  If you need the WSDL file for the SOAP requests, you can get it from https://outlook.com/ews/services.wsdl (though note you’ll need to authenticate to get it!).

One last nuance: for Office365 you seem to need to tell it what domain it’s talking to. I don’t know why – once it has decoded and verified the authorization bearer access token you’d think it should know what client ID you’re using, and thus what Azure AD domain it is in. It doesn’t.  So, you need to add another HTTP header called “X-AnchorMailbox” that contains an email address of a person/resource in the domain you’re playing with.  Once that’s there alongside the Authorization header with the Bearer access token, you’re good to go with some SOAPy EWS requests.
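
Putting all of that together, firing a SOAP request at EWS from the same script ends up looking roughly like this (the envelope body and the anchor mailbox address are yours to fill in):

my $ews          = LWP::UserAgent->new;
my $soapResponse = $ews->post(
    'https://outlook.office365.com/EWS/Exchange.asmx',
    'Content-Type'    => 'text/xml; charset=utf-8',
    'Authorization'   => 'Bearer ' . $oauth2_info->{'access_token'},
    'X-AnchorMailbox' => 'some.user@lboro.ac.uk',
    Content           => $soapEnvelope,    # your EWS SOAP XML request body goes here
);
die "EWS call failed: " . $soapResponse->status_line unless $soapResponse->is_success;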