
Copyright (C) 2019-2025 The Open Library Foundation

This software is distributed under the terms of the Apache License, Version 2.0. See the file "LICENSE" for more information.

MOD-INVENTORY-UPDATE (MIU) is a module for creating, updating, and deleting instances, holdings records, and items in Inventory Storage from XML or JSON source files.

The module has two distinct sets of APIs. One is a group of import APIs, with which a client can configure, execute, and monitor import jobs that transform and import collections of XML records of arbitrary format into Inventory Storage. The other is a handful of so-called "upsert" APIs that a client can use to push JSON files of a specified structure to MIU. MIU will insert or update instances, holdings records, and items in Inventory Storage from this composite JSON structure.

Depending on which of these two use cases is relevant, different parts of this readme apply.

If you have XML records for import, say files with collections of MARC XML records, read the paragraphs in PART I, How to import XML files, which explain how to set up import channels with XSLT transformation pipelines and what data structure the MARC XML records must be transformed into in order to be imported into Inventory. To understand the specifics of how the module takes the transformed structure and imports it into Inventory, read the paragraphs in PART II, How MIU works with the inventory record set.

If you want to import JSON files, you don't need the import channels or transformations. You can post the ready-made JSON directly to the upsert API. However, this requires the JSON to comply up front with the schema for the so-called "inventory record set"; there is no transformation phase massaging the data into the inventory record sets needed by the module's CRUD engine. The schema for the JSON is documented in the upsert APIs' OpenAPI specification. Again, to understand the specifics of how the module takes that JSON and imports it into Inventory, read the paragraphs in PART II, How MIU works with the inventory record set.

Diagram of MIU APIs

PART I. How to import XML files

The importing component of MIU consists of so-called import "channels". Each channel has a file queue that source files can be uploaded to, and a processing pipeline that can perform the importing of source files from the queue to inventory storage.

The channel design is meant to allow for multiple input formats, for example binary MARC or JSON files. However, the current pipeline implementation only supports XML source files.

To use the XML importing API, the channels must be configured with a transformation pipeline. The transformation pipeline is a chain of zero or more XSLT transformation steps that restructure XML records of arbitrary format into "inventory record sets" suitable for running updates on instances, holdings records, and items. The pipeline can be created using the provided configuration APIs.

The main elements of the importing component are thus:

  • a channel with an associated file queue
  • a processing pipeline, called a "transformation"; the transformation has an ordered set of "transformation steps" with XSLT style sheets, where the last step is a generic (system-supplied) crosswalk from XML to JSON
  • the outcome of this process is sent to the CRUD component, which batches and persists the results to Inventory Storage

This diagram shows the configuration data structure for a channel with a transformation pipeline.

Diagram of MIUs import configuration tables

The task of the XSLT transformation steps is to take the incoming XML and transform it to the inventory record set structure.

The high-level structure of an inventory record set is:

 - Inventory instance {hrid and other properties}
 - holdings records (optional)
   - Inventory holdings record {hrid and other properties}
     - items (optional)
       - Inventory item {hrid and other properties}
       - item
       - ...
   - Holdings record
     - items
       - item
       - item
       - ...
   - ...
 - instance relations (optional)
 - processing (optional)

The details of this structure are described in the API doc Upsert APIs, which documents the JSON schema for it, and in PART II, How MIU works with the inventory record set, which explains how MIU uses the structure, especially with regard to the processing instructions element.
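For orientation, the outline above can be sketched as JSON built in Python. This is an illustrative minimal example only; the nested property names (holdingsRecords, items) are assumptions here, and the authoritative schema is the one in the Upsert APIs documentation.

```python
import json

# Illustrative only: a minimal inventory record set following the
# outline above. The property names "holdingsRecords" and "items"
# are assumptions; consult the Upsert APIs schema for the real names.
record_set = {
    "instance": {
        "hrid": "73209622",
        "source": "DEMO",
        "title": "Demo title",
        "instanceTypeId": "6312d172-f0cf-40f6-b27d-9fa8feaf332f"
    },
    "holdingsRecords": [
        {
            "hrid": "73209622-1",
            "items": [
                {"hrid": "73209622-1-1"}
            ]
        }
    ]
}

print(json.dumps(record_set, indent=2))
```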

The next paragraphs explain how to populate the configuration structure to create an import channel with a transformation pipeline, and then how to create a transformation that brings the incoming XML into the inventory record set structure. The example focuses on the instance element to keep it simple: no holdings records or items, no instance relations, and no processing options.

Creating a transformation pipeline and assigning it to a channel

Say we have this collection of one record, a tiny MARC record with basically just a title and some contributors.

<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns:marc="http://www.loc.gov/MARC21/slim">
    <marc:record>
        <marc:leader>00942nam a22002531a 4504</marc:leader>
        <marc:controlfield tag="001">73209622</marc:controlfield>
        <marc:controlfield tag="003">DLC</marc:controlfield>
        <marc:controlfield tag="005">19820325000000.0</marc:controlfield>
        <marc:controlfield tag="008">780306m19009999ohu 00000 grc</marc:controlfield>
        <marc:datafield tag="245" ind1="0" ind2="4">
            <marc:subfield code="a">Demo title</marc:subfield>
            <marc:subfield code="c">J. Arthur Hanson, David Paterson, editors.</marc:subfield>
        </marc:datafield>
        <marc:datafield tag="700" ind1="1" ind2="0">
            <marc:subfield code="a">Hanson, J. Arthur</marc:subfield>
            <marc:subfield code="q">(Joseph Arthur)</marc:subfield>
        </marc:datafield>
        <marc:datafield tag="700" ind1="1" ind2="0">
            <marc:subfield code="a">Paterson, David,</marc:subfield>
            <marc:subfield code="d">1952-</marc:subfield>
        </marc:datafield>
        <marc:datafield tag="710" ind1="2" ind2="0">
            <marc:subfield code="a">Research Associates.</marc:subfield>
        </marc:datafield>
    </marc:record>
</collection>

We will set up a channel to import this MARC record as a FOLIO instance. We will ignore holdings and items to keep it short.

For this we need a transformation pipeline, which we will construct with a single XSLT transformation step.

We can start by posting a step that contains a dummy XSLT, a script that simply copies the XML. We will then replace that XSLT with a useful script.

POST to inventory-import/steps: 

{
  "id": "c2f39026-b8bc-430c-a79d-473f00159563",
  "inputFormat": "XML",
  "name": "marc-to-instance",
  "enabled": true,
  "outputFormat": "XML",
  "script": "<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n    <xsl:template match=\"@*|node()\">\n        <xsl:copy>\n            <xsl:apply-templates select=\"@*|node()\"/>\n        </xsl:copy>\n    </xsl:template>\n</xsl:stylesheet>\n",
  "type": "XmlTransformStep"
}

The next box of code shows an XSLT that extracts some mandatory properties for an Inventory instance, title and instanceTypeId, and provides a hard-coded value for the likewise mandatory source property. All records to be updated in Inventory Storage must have an HRID based on a persistent client-side identifier for the record, and the XSLT extracts the control number for that.

Although not strictly needed for this example, the XSLT also forwards a copy of the original MARC record to be used by any subsequent transformation steps. For example, there might be an additional step for creating holdings records and items from the original MARC record, in which case the MARC record must be forwarded from step to step.

Also notice the transformation logic for contributors. Since contributors is an array in the Inventory Storage instance, the XSLT must construct an element named arr and add each contributor to it as an element named i. MIU uses this structure to construct a JSON array. The example shows it for contributors, but the same logic and naming convention apply to any other array, such as identifiers, subjects, and notes, and even to the arrays of holdings records and items embedded in the inventory record set.
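The arr/i convention can be sketched in a few lines of Python. This is an illustrative reimplementation of the crosswalk idea, not MIU's actual converter:

```python
import xml.etree.ElementTree as ET

def element_to_json(elem):
    """Convert MIU-style XML to a JSON-ready structure: an <arr> child
    means the parent property is an array, and each <i> inside it is
    one array element."""
    arr = elem.find("arr")
    if arr is not None:
        return [element_to_json(i) for i in arr.findall("i")]
    children = list(elem)
    if not children:
        return elem.text  # leaf element becomes a string value
    return {child.tag: element_to_json(child) for child in children}

xml = """
<instance>
  <title>Demo title</title>
  <contributors>
    <arr>
      <i><name>Hanson, J. Arthur, (Joseph Arthur)</name></i>
      <i><name>Paterson, David, 1952</name></i>
    </arr>
  </contributors>
</instance>
"""
instance = element_to_json(ET.fromstring(xml))
print(instance["contributors"][0]["name"])  # Hanson, J. Arthur, (Joseph Arthur)
```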

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
        version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:marc="http://www.loc.gov/MARC21/slim">


    <xsl:output indent="yes" method="xml" version="1.0" encoding="UTF-8"/>

    <xsl:template match="/">
        <collection>
            <xsl:apply-templates/>
        </collection>
    </xsl:template>

    <!-- MARC meta data -->
    <xsl:template match="marc:record">
        <record>
            <!-- Bibliographic record for FOLIO inventory -->
            <instance>
                <source>DEMO</source>
                <hrid><xsl:value-of select="marc:controlfield[@tag='001']"/></hrid>

                <!-- title -->
                <title>
                    <xsl:variable name="concat-title">
                        <xsl:for-each
                                select="marc:datafield[@tag='245'][1]/marc:subfield[@code='a' or @code='b' or @code='h' or @code='n' or @code='p']">
                            <xsl:value-of select="."/>
                            <xsl:if test="position() != last()">
                                <xsl:text> </xsl:text>
                            </xsl:if>
                        </xsl:for-each>
                    </xsl:variable>
                    <xsl:call-template name="remove-characters-last">
                        <xsl:with-param name="input" select="$concat-title"/>
                        <xsl:with-param name="characters">,-./ :;</xsl:with-param>
                    </xsl:call-template>
                </title>

                <!-- Instance type ID (resource type) -->
                <instanceTypeId>
                    <!-- UUIDs for resource types -->
                    <xsl:choose>
                        <xsl:when test="substring(marc:leader,7,1)='a'">6312d172-f0cf-40f6-b27d-9fa8feaf332f</xsl:when>
                        <!-- language material : text -->
                        <xsl:when test="substring(marc:leader,7,1)='c'">497b5090-3da2-486c-b57f-de5bb3c2e26d</xsl:when>
                        <!-- notated music : notated music -->
                        <xsl:when test="substring(marc:leader,7,1)='d'">497b5090-3da2-486c-b57f-de5bb3c2e26d</xsl:when>
                        <!-- manuscript notated music : notated music -> notated music -->
                        <xsl:when test="substring(marc:leader,7,1)='e'">526aa04d-9289-4511-8866-349299592c18</xsl:when>
                        <!-- cartographic material : cartographic image -->
                        <xsl:when test="substring(marc:leader,7,1)='f'">a2c91e87-6bab-44d6-8adb-1fd02481fc4f</xsl:when>
                        <!-- other --> <!-- manuscript cartographic material : ? -->
                        <xsl:when test="substring(marc:leader,7,1)='g'">535e3160-763a-42f9-b0c0-d8ed7df6e2a2</xsl:when>
                        <!-- projected image : still image -->
                        <xsl:when test="substring(marc:leader,7,1)='i'">9bce18bd-45bf-4949-8fa8-63163e4b7d7f</xsl:when>
                        <!-- nonmusical sound recording : sounds -->
                        <xsl:when test="substring(marc:leader,7,1)='j'">3be24c14-3551-4180-9292-26a786649c8b</xsl:when>
                        <!-- musical sound recording : performed music -->
                        <xsl:when test="substring(marc:leader,7,1)='k'">a2c91e87-6bab-44d6-8adb-1fd02481fc4f</xsl:when>
                        <!-- other --> <!-- two-dimensional nonprojectable graphic : ?-->
                        <xsl:when test="substring(marc:leader,7,1)='m'">df5dddff-9c30-4507-8b82-119ff972d4d7</xsl:when>
                        <!-- computer file : computer dataset -->
                        <xsl:when test="substring(marc:leader,7,1)='o'">a2c91e87-6bab-44d6-8adb-1fd02481fc4f</xsl:when>
                        <!-- kit : other -->
                        <xsl:when test="substring(marc:leader,7,1)='p'">a2c91e87-6bab-44d6-8adb-1fd02481fc4f</xsl:when>
                        <!-- mixed material : other -->
                        <xsl:when test="substring(marc:leader,7,1)='r'">c1e95c2b-4efc-48cf-9e71-edb622cf0c22</xsl:when>
                        <!-- three-dimensional artifact or naturally occurring object : three-dimensional form -->
                        <xsl:when test="substring(marc:leader,7,1)='t'">6312d172-f0cf-40f6-b27d-9fa8feaf332f</xsl:when>
                        <!-- manuscript language material : text -->
                        <xsl:otherwise>a2c91e87-6bab-44d6-8adb-1fd02481fc4f</xsl:otherwise>
                        <!--  : other -->
                    </xsl:choose></instanceTypeId>

                <!-- Contributors -->
                <xsl:if test="marc:datafield[@tag='100' or @tag='110' or @tag='111' or @tag='700' or @tag='710' or @tag='711']">
                    <contributors>
                        <arr>
                            <xsl:for-each
                                    select="marc:datafield[@tag='100' or @tag='110' or @tag='111' or @tag='700' or @tag='710' or @tag='711']">
                                <i>
                                    <name>
                                        <xsl:for-each
                                                select="marc:subfield[@code='a' or @code='b' or @code='c' or @code='d' or @code='f' or @code='g' or @code='j' or @code='k' or @code='l' or @code='n' or @code='p' or @code='q' or @code='t' or @code='u']">
                                            <xsl:if test="position() > 1">
                                                <xsl:text>, </xsl:text>
                                            </xsl:if>
                                            <xsl:call-template name="remove-characters-last">
                                                <xsl:with-param name="input" select="."/>
                                                <xsl:with-param name="characters">,-.</xsl:with-param>
                                            </xsl:call-template>
                                        </xsl:for-each>
                                    </name>
                                    <xsl:choose>
                                        <xsl:when test="@tag='100' or @tag='700'">
                                            <contributorNameTypeId>2b94c631-fca9-4892-a730-03ee529ffe2a</contributorNameTypeId>
                                            <!-- personal name -->
                                            <xsl:if test="@tag='100'">
                                                <primary>true</primary>
                                            </xsl:if>
                                        </xsl:when>
                                        <xsl:when test="@tag='110' or @tag='710'">
                                            <contributorNameTypeId>2e48e713-17f3-4c13-a9f8-23845bb210aa</contributorNameTypeId>
                                            <!-- corporate name -->
                                        </xsl:when>
                                        <xsl:when test="@tag='111' or @tag='711'">
                                            <contributorNameTypeId>e8b311a6-3b21-43f2-a269-dd9310cb2d0a</contributorNameTypeId>
                                            <!-- meeting name -->
                                        </xsl:when>
                                        <xsl:otherwise>
                                            <contributorNameTypeId>2b94c631-fca9-4892-a730-03ee529ffe2a</contributorNameTypeId>
                                            <!-- personal name -->
                                        </xsl:otherwise>
                                    </xsl:choose>
                                    <xsl:if test="marc:subfield[@code='e' or @code='4']">
                                        <contributorTypeId>
                                        </contributorTypeId>
                                    </xsl:if>
                                </i>
                            </xsl:for-each>
                        </arr></contributors>
                </xsl:if>
            </instance>
            <original>
                <xsl:copy>
                    <xsl:copy-of select="@*"/>
                    <xsl:copy-of select="*"/>
                </xsl:copy>
            </original>
        </record>
    </xsl:template>

    <xsl:template match="text()"/>
    <xsl:template name="remove-characters-last">
        <xsl:param name="input"/>
        <xsl:param name="characters"/>
        <xsl:variable name="lastcharacter" select="substring($input,string-length($input))"/>
        <xsl:choose>
            <xsl:when test="$characters and $lastcharacter and contains($characters, $lastcharacter)">
                <xsl:call-template name="remove-characters-last">
                    <xsl:with-param name="input" select="substring($input,1, string-length($input)-1)"/>
                    <xsl:with-param name="characters" select="$characters"/>
                </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
                <xsl:value-of select="$input"/>
            </xsl:otherwise>
        </xsl:choose>
    </xsl:template>

</xsl:stylesheet>

If we write this XSLT in our favorite XSLT editor and save it as marc-to-instance.xslt, we can push it to the step with:

PUT inventory-import/steps/c2f39026-b8bc-430c-a79d-473f00159563/script -f marc-to-instance.xslt

Now we can create a transformation pipeline consisting of just this one step with:

POST to inventory-import/transformations:

{
  "id": "9a57d0e2-08d9-4eaf-a30b-97d3ad0f43bc",
  "name": "MARC XML transformation",
  "description": "Sample transformation from MARC XML to FOLIO Inventory instance",
  "steps": [
    {
      "id": "c2f39026-b8bc-430c-a79d-473f00159563"
    }
  ]
}

And finally we can create a channel that uses this pipeline:

POST to inventory-import/channels:

{
  "id": "ab9c3cbf-1ca0-4184-8f6a-083d1a644ce1",
  "name": "MARC XML Channel",
  "tag": "demo",
  "type": "XML",
  "enabled": true,
  "listening": true,
  "transformationId": "9a57d0e2-08d9-4eaf-a30b-97d3ad0f43bc"
}

MIU has an API for trying out the transformation without actually importing anything. It returns the result of transforming the first record in the collection. The response can be returned as the transformed XML or as the result converted to JSON (the default).

If we put the tiny MARC in a file named marc.xml, we can try it out as shown in the next box. The example illustrates the use of the channel tag demo as an alternative identifier that can be put in the API path instead of the channel's UUID. Notice also the resulting XML encoding of contributors, with arr to indicate an array and i to envelop each element. This is an inventory record set in XML format.

 POST to inventory-import/channels/demo/try-transformation?output=xml -f marc.xml

 Response as XML:
 
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns:marc="http://www.loc.gov/MARC21/slim">
   <record>
      <instance>
         <source>DEMO</source>
         <hrid>73209622</hrid>
         <title>Demo title</title>
         <instanceTypeId>6312d172-f0cf-40f6-b27d-9fa8feaf332f</instanceTypeId>
         <contributors>
            <arr>
               <i>
                  <name>Hanson, J. Arthur, (Joseph Arthur)</name>
                  <contributorNameTypeId>2b94c631-fca9-4892-a730-03ee529ffe2a</contributorNameTypeId>
               </i>
               <i>
                  <name>Paterson, David, 1952</name>
                  <contributorNameTypeId>2b94c631-fca9-4892-a730-03ee529ffe2a</contributorNameTypeId>
               </i>
               <i>
                  <name>Research Associates</name>
                  <contributorNameTypeId>2e48e713-17f3-4c13-a9f8-23845bb210aa</contributorNameTypeId>
               </i>
            </arr>
         </contributors>
      </instance>
      <original>
         <marc:record>
            <marc:leader>00942nam a22002531a 4504</marc:leader>
            <marc:controlfield tag="001">73209622</marc:controlfield>
            <marc:controlfield tag="003">DLC</marc:controlfield>
            <marc:controlfield tag="005">19820325000000.0</marc:controlfield>
            <marc:controlfield tag="008">780306m19009999ohu 00000 grc</marc:controlfield>
            <marc:datafield tag="245" ind1="0" ind2="4">
               <marc:subfield code="a">Demo title</marc:subfield>
               <marc:subfield code="c">J. Arthur Hanson, David Paterson, editors.</marc:subfield>
            </marc:datafield>
            <marc:datafield tag="700" ind1="1" ind2="0">
               <marc:subfield code="a">Hanson, J. Arthur</marc:subfield>
               <marc:subfield code="q">(Joseph Arthur)</marc:subfield>
            </marc:datafield>
            <marc:datafield tag="700" ind1="1" ind2="0">
               <marc:subfield code="a">Paterson, David,</marc:subfield>
               <marc:subfield code="d">1952-</marc:subfield>
            </marc:datafield>
            <marc:datafield tag="710" ind1="2" ind2="0">
               <marc:subfield code="a">Research Associates.</marc:subfield>
            </marc:datafield>
         </marc:record>
      </original>
   </record>
</collection>

At the end of the transformation pipeline, the XML will be transformed to JSON, and the result can be inspected with this command:

 POST to inventory-import/channels/demo/try-transformation -f marc.xml

 Response, the XML converted to JSON:
 
 {
  "instance" : {
    "source" : "DEMO",
    "hrid" : "73209622",
    "title" : "Demo title",
    "instanceTypeId" : "6312d172-f0cf-40f6-b27d-9fa8feaf332f",
    "contributors" : [ {
      "name" : "Hanson, J. Arthur, (Joseph Arthur)",
      "contributorNameTypeId" : "2b94c631-fca9-4892-a730-03ee529ffe2a"
    }, {
      "name" : "Paterson, David, 1952",
      "contributorNameTypeId" : "2b94c631-fca9-4892-a730-03ee529ffe2a"
    }, {
      "name" : "Research Associates",
      "contributorNameTypeId" : "2e48e713-17f3-4c13-a9f8-23845bb210aa"
    } ]
  }
}

That concludes the practical walk-through. With this channel in place the MARC record can be imported with:

  POST inventory-import/channels/demo/upload -f marc.xml

See the OpenAPI specification, XML importing APIs, for further details.

Managing channels

The configurable parts of the channel itself are:

  • A name for the channel.
  • A shorthand tag for the channel that can be used instead of its UUID in certain API requests.
  • A reference to the "transformation" that processes the incoming source files.
  • Two boolean state properties for marking the channel 'enabled' and 'listening':
    • enabled means that a background process is running (or should be running) for the channel and that it has a file queue to which files can be uploaded. Files cannot be uploaded to a channel that is not enabled.
    • listening means that the channel, if enabled, will actively pick up files, if any, from the queue and process/import them. Files can be uploaded to the channel even if it is not listening (as long as it is enabled).

Finally, there is a derived, non-persistent (read-only) property named commissioned that indicates whether an enabled channel actually has a running process. A channel automatically gets a running background process when it is marked enabled (updated with enabled set to true), but if the module is shut down, the channel will no longer have that process running once the module is brought up again. In that scenario the channel will be enabled (supposed to run) but not commissioned (not actually running). However, if a file is uploaded to such a channel, MIU will detect that it should be running and automatically commission it again. Only if the module was restarted while there were files in the queue, AND there are no additional files coming, AND you want to process the existing queue now, is it necessary to explicitly commission the channel.
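The interplay between enabled and commissioned can be summarized as a small decision table. The following function is an illustration of the rules just described, not MIU code; the parameter names are invented for the sketch:

```python
def action_needed(enabled: bool, commissioned: bool,
                  files_in_queue: bool, more_uploads_expected: bool) -> str:
    """Illustrative decision table for a channel after a module restart.

    An upload to an enabled-but-not-commissioned channel re-commissions
    it automatically; only a stranded queue with no further uploads
    coming requires an explicit commission request.
    """
    if not enabled:
        return "enable the channel first"
    if commissioned:
        return "nothing to do; channel process is running"
    if more_uploads_expected:
        return "just upload; MIU will auto-commission the channel"
    if files_in_queue:
        return "explicitly commission the channel to process the stranded queue"
    return "nothing to do until files arrive"

print(action_needed(True, False, True, False))
```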

When a channel gets enabled/commissioned it will also have a file queue that source files can be uploaded to. If the channel is also listening it will pick files from the queue. The channel API has some extra properties displaying the status of the channel's file queue.

Import jobs

When the first file of a queue is picked up by the channel's import process, it triggers the creation of an import job record in the database. The import job record registers information about the processing of a continuous set of files in the queue. When the last file in the queue is processed, the import job record is marked finished. If more files are uploaded later, a new import job is created for them, and so on; the import job is basically a demarcation of processing logs. Also attached to the importJob are records containing more fine-grained information about the number of records processed per file, with performance statistics per file and for the continuous queue as a whole. If any records encountered errors during import to Inventory, error report records named failedRecord will likewise be attached to the importJob.

The import job will continue despite errors in individual instances, holdings records or items, for example errors due to missing required properties, invalid reference data references, etc. But in case of a "fatal" error, the job will be paused. An error is "fatal" if it is not possible or does not make sense to continue the import.

When a job is paused it effectively means that the channel is paused too, since there can at most be one current job in the channel.

Error handling

If a fatal error occurs, the import job will be marked paused so that processing can potentially be resumed once the problem is resolved. When the job is resumed, any processing statistics and failed records will then count towards that same import job until the last file in the queue is processed.

There are two main categories of errors that should halt the processing. One would be a fatal problem with the import file itself, for example that it contains invalid XML and thus cannot be meaningfully processed from the point in the file where the error is encountered. The other would be some external problem like MIU losing access to Inventory Storage.

In the first case (the XML is bad), it will not help to resume the job from the bad file, since it will immediately be halted again. MIU's API therefore has an option to skip the current file when resuming. In the second case (an external problem), the file could be fine, and if the external problem can be resolved, the job should be resumed from the same file where the error was first encountered.
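The resume decision amounts to a simple rule, sketched here for illustration. The error_kind label is invented for the sketch; only the skipCurrentFile parameter comes from MIU's API:

```python
def resume_request(error_kind: str) -> dict:
    """Choose resume-job parameters for a paused import job.

    error_kind is an illustrative label, not an MIU field:
      "bad-file" - the source file itself is invalid (e.g. broken XML)
      "external" - a transient external problem (e.g. storage unreachable)
    """
    if error_kind == "bad-file":
        # Resuming the same file would halt again immediately; skip it.
        return {"path": "resume-job", "params": {"skipCurrentFile": "true"}}
    # The file is fine; retry it once the external problem is resolved.
    return {"path": "resume-job", "params": {}}

print(resume_request("bad-file"))
```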

Note on possible enhancements
- Resume from a given record in the file? Currently, the entire file will be resumed,
  but a number of records in the file may already have been imported correctly, for example
  all records up until the point of a bad XML construct.
  Usually it does not hurt to repeat upserts, since the end result will be the same, but
  processing counts will be affected in ways that might make them look a bit off.
- Replace the current file? Currently, one can opt to skip the current file if it is invalid.
  However, it might be desirable to have an option to correct the file locally and upload
  it again in place of the current file, as opposed to simply having it uploaded to the end
  of the file queue. This would ensure the order of processing where that is important.

HTTP requests for managing channels

API Feature
POST /inventory-import/channels
Properties:
name
tag
transformationId
enabled
listening
Create a channel. A transformation pipeline must be created, that the channel can reference, before the channel can be created.
The tag can be used instead of the channels UUID in some API requests.
If enabled is true the channel will accept uploads.
If listening is true, the channel will process uploaded files in order of arrival.
PUT /inventory-import/channels/<channel uuid> Update properties of a channel.
If the channel was disabled before, and enabled is then set to true, a channel process will be launched, a new file system queue will be created, and the channel will allow uploading of files to it.
If the channel was enabled, setting enabled to false, will stop the channel process, remove the file queue and prevent upload of files to the queue.
If listening is being set to true the (enabled, unpaused) channel will pick up any current of future files from the queue and import them to Inventory.
If listening is being set to false, the (enabled) channel will still allow uploading of files to the queue but will not pick them up for processing.
POST /inventory-import/channels/<channel id>/commission
Optional parameters:
retainQueue
listening
Launch a channel process, create a filesystem queue, allow uploads, and mark the channel enabled.
The command takes the parameter ?retainQueue=[true,false] (default false). When true, any filesystem queue that was left behind from a previous channel process will be kept.
If the paramter listening is set, the setting will override/update the listening property as saved in the channel object.
If not setting the parameter retainQueue to true and not using the parameter listening, then the commission command has the same effect as PUTting a channel object with the property enabled set to true.
POST /inventory-import/channels/<channel id>/decommission
Optional parameter: retainQueue
Stop the channel process, remove the file system queue, prevent uploads, and mark the channel disabled.
The command takes the parameter ?retainQueue=[true,false] (default false). When set to true, the file system queue for this channel is kept and potentially available if the channel process is launched again.
Unless the parameter retainQueue is set to true, this command has the same effect as PUTting a channel object with the property enabled set to false.
POST /inventory-import/channels/<channel id>/listen Listen for source files in queue, same effect as setting channel.listening=true.
POST /inventory-import/channels/<channel id>/no-listen Ignore source files in queue, same effect as setting channel.listening=false.
POST /inventory-import/channels/<channel id>/init-queue Delete all the source files in a queue (or re-establish an empty queue structure, in case the previous queue was deleted directly in the file system for example).
DELETE /inventory-import/channels/<channel uuid> Delete the channel configuration, including the file queue but not the channel's job history.
POST /inventory-import/channels/<channel id>/upload
Optional parameter: filename
Push a source file to the channel. MIU is currently set to accept files up to a size of 100 MB.
The parameter filename can be used to tell MIU what name should be used for the uploaded file. For example: ?filename=marc-records.xml. The name will appear in log statements about the processing of the file, including in error records to aid potential troubleshooting or resending of the source files. If no filename is provided, MIU will assign a UUID-based name to it.
POST /inventory-import/channels/<channel id>/pause-job Halt processing in order to potentially resume it again with processing logs assigned to the same job.
POST /inventory-import/channels/<channel id>/resume-job
Optional parameter: skipCurrentFile
Resume a paused job, counting subsequent files in the queue as part of the existing import job.
When skipCurrentFile is set to true the job will discard the current file and resume from the next.
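
As a sketch of how a client might address the upload command above, the helper below composes the request URL with the optional filename parameter. This is an illustration only; the base URL and the channel tag daily-marc are hypothetical.

```python
from urllib.parse import quote

def upload_url(base_url, channel, filename=None):
    # Compose the upload path for a channel; "channel" may be the channel
    # UUID or its tag. The optional filename query parameter names the
    # uploaded file in logs and error records.
    url = f"{base_url}/inventory-import/channels/{channel}/upload"
    if filename:
        url += "?filename=" + quote(filename)
    return url

upload_url("https://folio.example.org", "daily-marc", "marc-records.xml")
# -> "https://folio.example.org/inventory-import/channels/daily-marc/upload?filename=marc-records.xml"
```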
Requests operating on multiple channels
POST /inventory-import/recover-interrupted-channels
Optional parameters:
retainQueue
listening
Deploy ("commission") all channels that are marked enabled but are not actually running. This is a scenario that would presumably only occur if the module was stopped and started while channels were enabled.
The command takes a parameter listening that makes it possible to enable the channels without kicking off actual importing just yet.
Posting a source file to a channel that is enabled but not deployed (not "commissioned") will implicitly commission the channel. Explicitly recovering channels with this command will ensure that already existing files in the queue are processed without posting new files.

Using tag for channel ID

The <channel id> in the paths can either be the UUID of the channel record (channel.id) or the value of the property channel.tag. The tag is an optional, unique string of at most 24 characters without spaces. If it is set on a channel, that channel can be referenced by the tag in the various channel commands. The basic REST requests (GET, PUT, DELETE channel) use the UUID, like standard FOLIO APIs.

PART II. How MIU works with the inventory record set

This part explains how MIU uses the inventory record set JSON to control and perform the persistence of instances, holdings records and items in Inventory Storage.

When using the synchronous, JSON based upsert APIs of MIU, this walkthrough explains what happens when the client pushes a valid inventory record set JSON to MIU.

This part applies even to clients uploading XML files to MIU. The outcome of the XSLT transformation described in PART I must likewise be a valid inventory record set, which goes through the same processing as if it had been pushed to the upsert API directly.

All updates to Inventory are HRID based

The upsert APIs will update an Instance as well as its associated holdings and items based on incoming HRIDs on all three record types. If an instance with the incoming HRID does not exist in storage already, the new Instance is inserted, otherwise the existing instance is updated - thus the term 'upsert'.

This means that HRIDs are required to be present in records from the client for all three record types. As a consequence, these client side identifiers must furthermore be consistent over time. MIU will not be able to find existing records in Inventory, and update them accordingly, if the identifiers are changed on the client side. Whenever new identifiers occur in a feed, MIU will treat them as new records to be created, rather than updated, in Inventory.
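
The upsert decision described above can be sketched in a couple of lines. This is a simplified illustration, not the module's actual implementation.

```python
def upsert_action(incoming_hrid, existing_hrids):
    # HRID-based upsert: create the record if its HRID is unknown in
    # storage, otherwise update the existing record in place.
    return "UPDATE" if incoming_hrid in existing_hrids else "CREATE"

upsert_action("in001", {"in001", "in002"})  # existing HRID -> "UPDATE"
upsert_action("in003", {"in001", "in002"})  # new HRID -> "CREATE"
```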

Detect if holdings records or items should be deleted

The API will detect if holdings and/or items have disappeared from the Instance since the last update and, in that case, remove them from storage.

Note, however, that there is a distinction between a request with no holdingsRecords property and a request with an empty holdingsRecords property. If existing holdings and items should not be touched, for example because holdings and items are maintained manually in Inventory, then no holdingsRecords property should appear in the request JSON, and existing records will be ignored. Providing an empty holdingsRecords property, on the other hand, will cause all existing holdings and items to be deleted.

The API will also detect if new holdings or items on the Instance already existed on a different Instance in storage and then move them over to the incoming Instance. The IDs (UUIDs) of any pre-existing Instances, holdings records, and items are preserved in this process, thus avoiding breaking any external UUID-based references to these records.
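
The three cases for the holdingsRecords property -- missing, empty, and populated -- can be sketched as follows. This is a simplified illustration of the deletion rule only; the real processing also creates, updates, and moves records.

```python
def surviving_holdings(request_json, existing_holdings):
    # No holdingsRecords property: existing records are left untouched.
    if "holdingsRecords" not in request_json:
        return existing_holdings
    # A present (possibly empty) holdingsRecords array: existing records
    # that are not listed in the request are deleted.
    incoming_hrids = {h["hrid"] for h in request_json["holdingsRecords"]}
    return [h for h in existing_holdings if h["hrid"] in incoming_hrids]

existing = [{"hrid": "hold001"}, {"hrid": "hold002"}]
surviving_holdings({}, existing)                       # both records kept
surviving_holdings({"holdingsRecords": []}, existing)  # both records deleted
```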

The Inventory Record Set that is PUT to the end point may contain relations to other Instances, for example the kind of relationships that tie multipart monographs together, or relations pointing to preceding or succeeding titles. Based on a comparison with the set of relationships already registered for the Instance in storage, relationships will be created and/or deleted (relationships are never updated in place).

Control record overlay on updates

The default behaviour of MIU on updates is simply to replace the entire record, for example the entire holdingsRecord, with the input JSON it receives from the client, except for the ID (UUID) and version.

The default behaviour can be changed per request using structures in the processing element.

Prevent MIU from overriding existing values

MIU can be instructed to leave certain properties in place when updating Instances, holdings records, and Items.

For example, to retain all existing Item properties that are not included in the request body to MIU, use the retainExistingValues instruction:

"processing": {
   "item": {
     "retainExistingValues": {
       "forOmittedProperties": true
     }
   }
}

This is a way to have MIU only update the set of properties it should be concerned with and let other properties be the responsibility of other processes if required.

If MIU sets certain properties on insert but should not touch them in subsequent updates -- even though they are provided in the request body to MIU -- those properties can be explicitly excluded from updates:

"processing": {
   "item": {
     "retainExistingValues": {
       "forTheseProperties": [ "a", "b", "c" ]
     }
   }
}

The two settings can be combined so that neither omitted properties nor the explicitly listed properties are touched.

If forOmittedProperties is used, the client must distinguish between sending an empty property and not sending the property at all. Say an Instance had contributors before, but they have now been removed in the source catalogue. If this is communicated to MIU by an empty contributors property, then the property will become empty in Inventory Storage too; but if the property is simply left out of the request body altogether, then the existing value of contributors will be retained in storage when forOmittedProperties is set to true.
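
How the two retainExistingValues settings interact can be illustrated with a small sketch. The helper and the property names in the sample records are hypothetical; this is not the module's code.

```python
def merge_for_update(existing, incoming, for_these=(), for_omitted=False):
    # Start from the incoming record, then copy back properties that are
    # protected explicitly (forTheseProperties) or that were left out of
    # the request entirely (forOmittedProperties).
    result = dict(incoming)
    for key, value in existing.items():
        if key in for_these or (for_omitted and key not in incoming):
            result[key] = value
    return result

existing = {"barcode": "123", "copyNumber": "c.2", "notes": ["x"]}
incoming = {"barcode": "456", "notes": []}
merge_for_update(existing, incoming, for_these=["barcode"], for_omitted=True)
# barcode is kept from storage, notes becomes empty as requested,
# and the omitted copyNumber is retained.
```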

Instruct MIU to leave the item status unmodified under certain circumstances

In case the Item status is being managed outside the MIU update process, likely by FOLIO Circulation, MIU can be instructed to only touch the status under certain circumstances.

For example, to only overwrite a status of "On order" and retain any other statuses, do:

"processing": {
   "item": {
     "status": {
       "policy": "overwrite",
       "ifStatusWas": [
         {"name": "On order"}
       ]
     }
   }
}

The default behaviour is to overwrite all statuses.
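
The status policy can be sketched like this (a simplified illustration of the rule described above, not the module's code):

```python
def should_overwrite_status(current_status, if_status_was=None):
    # Default policy: every status is overwritten. With an ifStatusWas
    # list, only the listed statuses are replaced; any other status on
    # the existing item is retained.
    if if_status_was is None:
        return True
    return current_status in if_status_was

should_overwrite_status("On order", if_status_was=["On order"])     # True
should_overwrite_status("Checked out", if_status_was=["On order"])  # False
```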

Instruct MIU to avoid deleting items even though they are missing from the input

When MIU receives an Instance update, it will look for existing items on the holdings record that are not present in the update and then delete them.

With this instruction, deletion can be prevented based on a regular expression matched against a specified property. This could be used to preserve items created from other sources, provided that a regular expression can be written that will identify such items without at the same time matching items of the current data feed.

For example, to retain items that have HRIDs starting with non-digit characters:

"processing": {
   "item": {
     "retainOmittedRecord": {
       "ifField": "hrid",
       "matchesPattern": "\\D+.*"
     }
   }
}
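
The pattern match above can be sketched in a few lines, assuming the pattern is matched from the start of the field value, as the non-digit-prefix example suggests. The helper name and sample HRIDs are hypothetical.

```python
import re

def retain_omitted_record(record, if_field, matches_pattern):
    # Keep an otherwise-deletable record when the named field matches
    # the configured regular expression.
    value = record.get(if_field, "")
    return re.match(matches_pattern, value) is not None

retain_omitted_record({"hrid": "MAN-0001"}, "hrid", r"\D+.*")  # retained
retain_omitted_record({"hrid": "1234567"}, "hrid", r"\D+.*")   # deletable
```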
Retain omitted items if the status indicates that they are still circulating

Usually items that are omitted from the holdings record in the upsert will be removed from storage by MIU. The exceptions are if they are delete protected as described in the previous section, or if they have a status that indicates they might still be circulating.

MIU will avoid deleting items with the following item statuses:

  • Awaiting delivery
  • Awaiting pickup
  • Checked out
  • Aged to lost
  • Claimed returned
  • Declared lost
  • Paged
  • In transit
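
A sketch of the status check, using the status names listed above (the helper itself is hypothetical):

```python
CIRCULATING_STATUSES = {
    "Awaiting delivery", "Awaiting pickup", "Checked out", "Aged to lost",
    "Claimed returned", "Declared lost", "Paged", "In transit",
}

def omitted_item_deletable(item):
    # An omitted item may only be deleted when its status does not
    # indicate that it could still be circulating.
    return item.get("status", {}).get("name") not in CIRCULATING_STATUSES

omitted_item_deletable({"status": {"name": "Available"}})    # True
omitted_item_deletable({"status": {"name": "Checked out"}})  # False
```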

Provisional Instance created when related Instance doesn't exist yet

If an upsert request comes in with a relation to an Instance that doesn't exist in storage yet, a provisional Instance will be created, provided that the request contains the data required for creating the provisional Instance -- such as any mandatory Instance properties.

Deletion of Instance-to-Instance relations

Only existing relationships that are explicitly omitted in the request will be deleted. In FOLIO Inventory, a relation will appear on both Instances of the relation, say, one Instance will have a parent relation, and the other will have a child relation.

This may not be the case in the source system where, perhaps, the child record may declare its parent, but the parent will not mention its child records.

To support deletion of relations in these scenarios, without implicitly and unintentionally deleting too many, the following rules apply:

Including an empty array of child instances will tell the API that if the Instance has any existing child relations, they should be deleted.

"instanceRelations": {
  "childInstances": []
}

Leaving out any reference to child instances -- or as in this sample, any references to any related Instances at all -- means that any existing relationships will be left untouched by the update request.

"instanceRelations": {
}
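
The empty-array versus omitted-property rule can be sketched as follows. This is simplified: real relations are compared by the related Instance, not by simple equality.

```python
def relations_to_delete(existing, instance_relations, kind):
    # Omitted property: leave existing relations of this kind untouched.
    if kind not in instance_relations:
        return []
    # Present (possibly empty) array: delete every existing relation
    # of this kind that is not listed in the request.
    incoming = instance_relations[kind]
    return [rel for rel in existing if rel not in incoming]

existing_children = ["child-hrid-1", "child-hrid-2"]
relations_to_delete(existing_children, {}, "childInstances")                      # nothing deleted
relations_to_delete(existing_children, {"childInstances": []}, "childInstances")  # both deleted
```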

Instance DELETE requests

The API supports DELETE requests, which would delete the Instance with all of its associated holdings records and items and any relations it might have to other Instances.

To delete an instance record with all its holdings and items, send a DELETE request to /inventory-upsert-hrid with a payload like this:

{
  "hrid": "001"
}

Note that deleting any relations that the Instance had to other instances only cuts those links between them but does not otherwise affect those other instances.

Protecting certain items or holdings records from deletion in DELETE requests

Delete requests can be extended with a processing instruction that blocks deletion of holdings and/or items based on pattern matching in select properties.

This can be used to avoid deletion of items that are created outside the MIU pipeline (for example through the Inventory UI) provided that there is a pattern that can be applied to a property value of those records to identify which items to protect.

For example, to protect items that have HRIDs starting with non-digit characters, the following delete body for deletion of the Instance with HRID "1234567" could be used:

{
    "hrid": "1234567",
    "processing": {
       "item": {
         "blockDeletion": {
           "ifField": "hrid",
           "matchesPattern": "\\D+.*"
         }
       }
    }
}

When a delete request is sent for an Instance that has protected Items, the deletion of the Instance, as well as of the holdings record for the Item, will also be blocked. Other holdings records or Items that do not match the block criteria will be deleted.

Deletion will likewise be blocked when one or more items under the instance have one of the statuses also listed in "Retain omitted items if the status indicates that they are still circulating":

  • Awaiting delivery
  • Awaiting pickup
  • Checked out
  • Aged to lost
  • Claimed returned
  • Declared lost
  • Paged
  • In transit

Statistical coding of delete protection events

When records are prevented from being deleted, the delete or upsert request can be configured to set specified statistical codes on the records that were up for deletion; MIU will then update those existing records with the codes. This does not count as a record update, and if the update fails -- for example due to the use of invalid UUIDs -- an error is written to the module log but the overall request does not fail. If non-existing statistical codes are specified, the storage module will silently skip them.

Here are some examples of statistical coding of skipped deletes, first in delete requests.

Set all available codes on all record types. This means not just setting the code on the item that was not deleted, but also marking the instance that, as a consequence, could not be deleted either:

{
  "hrid": "in001",
  "processing": {
    "item": {
      "blockDeletion": {
        "ifField": "hrid",
        "matchesPattern": "it.*"
      },
      "statisticalCoding": [
        {"if": "deleteSkipped", "becauseOf":  "ITEM_STATUS", "setCode": "1ce0a775-286f-45ae-8446-e26ba0687b61"},
        {"if": "deleteSkipped", "becauseOf":  "ITEM_PATTERN_MATCH", "setCode": "42b735fa-eb6f-4c53-b5d5-2d98500868c5"}
      ]
    },
    "holdingsRecord": {
      "blockDeletion": {
        "ifField": "hrid",
        "matchesPattern": "ho.*"
      },
      "statisticalCoding": [
        {"if": "deleteSkipped", "becauseOf":  "HOLDINGS_RECORD_PATTERN_MATCH", "setCode": "d11fd9d8-b234-4159-b7b1-61b531bb1405"},
        {"if": "deleteSkipped", "becauseOf":  "ITEM_STATUS", "setCode": "1ce0a775-286f-45ae-8446-e26ba0687b61"},
        {"if": "deleteSkipped", "becauseOf":  "ITEM_PATTERN_MATCH", "setCode": "42b735fa-eb6f-4c53-b5d5-2d98500868c5"}
      ]
    },
    "instance": {
      "statisticalCoding": [
        {"if": "deleteSkipped", "becauseOf":  "PO_LINE_REFERENCE", "setCode": "98993c92-c5c9-414b-b12c-a82836b0dbf6"},
        {"if": "deleteSkipped", "becauseOf":  "HOLDINGS_RECORD_PATTERN_MATCH", "setCode": "d11fd9d8-b234-4159-b7b1-61b531bb1405"},
        {"if": "deleteSkipped", "becauseOf":  "ITEM_STATUS", "setCode": "1ce0a775-286f-45ae-8446-e26ba0687b61"},
        {"if": "deleteSkipped", "becauseOf":  "ITEM_PATTERN_MATCH", "setCode": "42b735fa-eb6f-4c53-b5d5-2d98500868c5"}
      ]
    }
  }
}

Lift all codes up to the instance level, even if the primarily protected record was a holdings record or an item:

{
  "hrid": "in001",
  "processing": {
    "item": {
      "blockDeletion": {
        "ifField": "hrid",
        "matchesPattern": "it.*"
      }
    },
    "holdingsRecord": {
      "blockDeletion": {
        "ifField": "hrid",
        "matchesPattern": "ho.*"
      }
    },
    "instance": {
      "statisticalCoding": [
        {"if":  "deleteSkipped", "becauseOf":  "PO_LINE_REFERENCE", "setCode": "98993c92-c5c9-414b-b12c-a82836b0dbf6"},
        {"if":  "deleteSkipped", "becauseOf":  "HOLDINGS_RECORD_PATTERN_MATCH", "setCode": "d11fd9d8-b234-4159-b7b1-61b531bb1405"},
        {"if":  "deleteSkipped", "becauseOf":  "ITEM_STATUS", "setCode": "1ce0a775-286f-45ae-8446-e26ba0687b61"},
        {"if":  "deleteSkipped", "becauseOf":  "ITEM_PATTERN_MATCH", "setCode": "42b735fa-eb6f-4c53-b5d5-2d98500868c5"}
      ]
    }
  }
}

Only set the code on the record that was directly protected from deletion (not on the indirectly delete-protected records):

{
  "hrid": "in001",
  "processing": {
    "item": {
      "blockDeletion": {
        "ifField": "hrid",
        "matchesPattern": "it.*"
      },
      "statisticalCoding": [
        {"if": "deleteSkipped", "becauseOf": "ITEM_STATUS", "setCode": "1ce0a775-286f-45ae-8446-e26ba0687b61"},
        {"if": "deleteSkipped", "becauseOf": "ITEM_PATTERN_MATCH", "setCode": "42b735fa-eb6f-4c53-b5d5-2d98500868c5"}
      ]
    },
    "holdingsRecord": {
      "blockDeletion": {
        "ifField": "hrid",
        "matchesPattern": "ho.*"
      },
      "statisticalCoding": [
        {"if": "deleteSkipped", "becauseOf": "HOLDINGS_RECORD_PATTERN_MATCH", "setCode": "d11fd9d8-b234-4159-b7b1-61b531bb1405"}
      ]
    },
    "instance": {
      "statisticalCoding": [
        {"if": "deleteSkipped", "becauseOf": "PO_LINE_REFERENCE", "setCode": "98993c92-c5c9-414b-b12c-a82836b0dbf6"}
      ]
    }
  }
}

Statistical codes can be set on holdings and items in upserts (deleting the instance is not an option in the upsert, and preventing delete due to PO_LINE_REFERENCE does not apply).

For example:

"processing": {
  "item": {
    "retainOmittedRecord": {
      "ifField": "hrid",
      "matchesPattern": "it.*"
    },
    "statisticalCoding": [
      { "if": "deleteSkipped", "becauseOf": "ITEM_STATUS", "setCode": "2b750461-5368-4a4e-9484-4e1eea2bc384" },
      { "if": "deleteSkipped", "becauseOf": "ITEM_PATTERN_MATCH", "setCode": "6f143d4c-75fe-4987-ae3e-3d7c2a4ccca2" }
    ]
  },
  "holdingsRecord": {
    "retainOmittedRecord": {
      "ifField": "hrid",
      "matchesPattern": "ho.*"
    },
    "statisticalCoding": [
      { "if": "deleteSkipped", "becauseOf": "ITEM_STATUS", "setCode": "b2452cc2-7024-41fa-a5c7-b1736280d781" },
      { "if": "deleteSkipped", "becauseOf": "ITEM_PATTERN_MATCH", "setCode": "a8e70d5e-2861-4a89-93cc-88679c74e592" },
      { "if": "deleteSkipped", "becauseOf": "HOLDINGS_RECORD_PATTERN_MATCH", "setCode": "e24f4a4e-8d8f-4ca2-ab05-dd497a8379e3" }
    ]
  },
  "instance": {
  }
}

Additional info about the batch version of upserts /inventory-batch-upsert-hrid

A client can send arrays of inventory record sets to the batch APIs with a significant improvement in overall throughput compared to the single record APIs.

These APIs utilise Inventory Storage's batch upsert APIs for Instances, holdings records and Items.

In case of data problems in either type of entity (referential constraint errors, missing mandatory properties, etc.), the entire batch of the given entity will fail; for example, if one Item fails, all Items fail. At this point all the Instances and holdings records are presumably persisted. The module aims to recover from such inconsistent states by switching from batch processing to record-by-record updates when errors occur so that, in this example, all the good Items can still be persisted. The response to a request with one or more errors that didn't prevent the entire request from being processed will be a 207 Multi-Status, and the errors found will be returned in the response JSON together with the summarised update statistics ("metrics").

In a feed with many errors the throughput will be close to that of the single record APIs since many batches will be processed record-by-record.

If the client needs to pair up error reports in the response with the original records it holds itself, it can set an identifier in the inventory record set property "processing". The name and content of that property are entirely up to the client; MIU simply returns the data it gets. It could, for example, be a simple sequence number for the batch, like this:

{
  "inventoryRecordSets": [
    {
      "instance": ...
      "holdingsRecords": ...
      "processing": {
        ...
        "batchIndex": 1
      }
    },
    {
      "instance": ...
      "holdingsRecords": ...
      "processing": {
        ...
        "batchIndex": 2
      }
    }
  ]
}

In the response to a batch upsert of 100 Instances where the 50th failed, the error message is tagged with the client's batch index:

{
    "metrics": {
        "INSTANCE": {
            "CREATE": {
                "COMPLETED": 99,
                "FAILED": 1,
                "SKIPPED": 0,
                "PENDING": 0
            },
            "UPDATE": {
                "COMPLETED": 0,
                "FAILED": 0,
                "SKIPPED": 0,
                "PENDING": 0
            },
            "DELETE": {
                "COMPLETED": 0,
                "FAILED": 0,
                "SKIPPED": 0,
                "PENDING": 0
            }
        },
        "HOLDINGS_RECORD": {
            "CREATE": {
                "COMPLETED": 0,
                "FAILED": 0,
                "SKIPPED": 0,
                "PENDING": 0
            },
            "UPDATE": {
                "COMPLETED": 0,
                "FAILED": 0,
                "SKIPPED": 0,
                "PENDING": 0
            },
            "DELETE": {
                "COMPLETED": 0,
                "FAILED": 0,
                "SKIPPED": 0,
                "PENDING": 0
            }
        },
        "ITEM": {
            "CREATE": {
                "COMPLETED": 0,
                ...
                ...
                ...
            }
        }
    },
    "errors": [
        {
            "category": "STORAGE",
            "message": {
                "message": {
                    "errors": [
                        {
                            "message": "must not be null",
                            "type": "1",
                            "code": "javax.validation.constraints.NotNull.message",
                            "parameters": [
                                {
                                    "key": "source",
                                    "value": "null"
                                }
                            ]
                        }
                    ]
                }
            },
            "shortMessage": "One or more errors occurred updating Inventory records",
            "entityType": "INSTANCE",
            "entity": {
                "title": "New title 50",
                "instanceTypeId": "12345",
                "matchKey": "new_title_50__________________________________________________________0000_____________________________________________________________________________________________p",
                "id": "782d5015-147d-433d-beaf-06a47bde6be5"
            },
            "statusCode": 422,
            "requestJson": {
                "instance": {
                    "title": "New title 50",
                    "instanceTypeId": "12345",
                    "matchKey": "new_title_50__________________________________________________________0000_____________________________________________________________________________________________p"
                },
                "processing": {
                    "batchIndex": 50
                }
            },
            "details": {

            }
        }
    ]
}

Note that any given batch cannot touch the same records twice, since that would require a certain order of processing, something that batching will not be able to guarantee. For the upsert by HRID for example, if any HRID appears twice in the batch, the module will fall back to record-by-record updates and process the record sets in the order it receives them. Same for the upsert by match-key, if any match-key appears twice in the batch.
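
The duplicate check that triggers the fallback to record-by-record processing can be sketched as follows (an illustration; the function name is hypothetical):

```python
def must_fall_back(record_sets, key):
    # A batch cannot touch the same record twice, so any repeated
    # instance key (HRID or match-key) forces record-by-record updates
    # in the order the record sets were received.
    seen = set()
    for record_set in record_sets:
        value = record_set["instance"][key]
        if value in seen:
            return True
        seen.add(value)
    return False

sets = [{"instance": {"hrid": "in001"}}, {"instance": {"hrid": "in001"}}]
must_fall_back(sets, "hrid")  # True: "in001" appears twice
```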

Miscellaneous

/shared-inventory-upsert-matchkey

Currently obsolete.

Inserts or updates an Instance based on whether an Instance with the same matchKey exists in storage already. The matchKey is typically generated from a combination of metadata in the bibliographic record, and the API has logic for that, but if an Instance comes in with a ready-made matchKey, the end-point will use that instead.

APIs for fetching an Inventory record set

There are two REST paths for retrieving single Inventory record sets by ID: /inventory-upsert-hrid/fetch/{id} and /shared-inventory-upsert-matchkey/fetch/{id}. Both APIs will return a record set with an Instance record, potentially an array of holdings records, each holdings-record potentially with an array of Item records, and finally a set of arrays of external relations that the Instance has with other Instances.

Fetching an Inventory record set from inventory-upsert-hrid/fetch

The ID provided on the API path is the Instance HRID. A request like GET /inventory-upsert-hrid/fetch/inst000000000017 would give a response like this (shortened):

{
  "instance" : {
    "_version" : 1,
    "hrid" : "inst000000000017",
    "source" : "FOLIO",
    "title" : "Interesting Times",
    "identifiers" : [ {
      "value" : "0552142352",
      "identifierTypeId" : "8261054f-be78-422d-bd51-4ed9f33c3422"
    } ],
    "contributors" : [ {
      "name" : "Pratchett, Terry",
      "contributorNameTypeId" : "2b94c631-fca9-4892-a730-03ee529ffe2a"
    } ],
    "subjects" : [ ],
    ... etc
    "statusUpdatedDate" : "2021-11-01T23:31:36.026+0100",
    "metadata" : {
      "createdDate" : "2021-11-01T22:31:36.025+00:00",
      "updatedDate" : "2021-11-01T22:31:36.025+00:00"
    },
  },
  "holdingsRecords" : [ {
    "_version" : 1,
    "hrid" : "hold000000000007",
    "permanentLocationId" : "f34d27c6-a8eb-461b-acd6-5dea81771e70",
    ... etc
    "metadata" : {
      "createdDate" : "2021-11-01T22:31:38.030+00:00",
      "updatedDate" : "2021-11-01T22:31:38.030+00:00"
    },
    "items" : [ {
      "_version" : 1,
      "hrid" : "item000000000012",
      "barcode" : "326547658598",
      ... etc
      "status" : {
        "name" : "Checked out",
        "date" : "2021-11-01T22:31:38.587+00:00"
      },
      "materialTypeId" : "1a54b431-2e4f-452d-9cae-9cee66c9a892",
      "metadata" : {
        "createdDate" : "2021-11-01T22:31:38.587+00:00",
        "updatedDate" : "2021-11-01T22:31:38.587+00:00"
      }
    } ]
  } ],
  "instanceRelations" : {
    "parentInstances" : [ ],
    "childInstances" : [ ],
    "precedingTitles" : [ ],
    "succeedingTitles" : [ ]
  }
}
(Note: it's possible to use the Instance UUID instead of the HRID in the GET request)

It's possible to take the response from the /inventory-upsert-hrid/fetch and PUT it back to the /inventory-upsert-hrid API.

There may not be obvious use cases for it but for what it's worth, the response JSON can be edited by, say, setting "editions" to ["First edition"] or adding one more Item, and the record set JSON can then be PUT back to /inventory-upsert-hrid to perform the updates.

The response JSON above contains none of the primary key fields (id) or referential fields (instanceId and holdingsRecordId) for the three main entities of the Inventory record set. This is because the inventory-upsert-hrid API is entirely HRID based, at least when viewed from the outside (internally the module of course deals with the UUIDs).

The client of the API is responsible for knowing what the HRIDs for the records are and for ensuring that the provided IDs are indeed unique.

The _version fields and optimistic locking

The _version fields for optimistic locking can be seen in the output above. These values would have no effect in a PUT to the upsert API. As the service receives the record set JSON in a PUT request, it will pull new versions of the entities from storage and get the latest version numbers from that anyway.


Prerequisites

  • Java 21 JDK
  • Maven 3.3.9

Building

Run mvn install from the root directory.

Additional information

Other documentation

Other modules are described, with further FOLIO Developer documentation, at dev.folio.org.

Code of Conduct

Refer to the Wiki FOLIO Code of Conduct.

Issue tracker

See project MODINVUP at the FOLIO issue tracker.

ModuleDescriptor

See the ModuleDescriptor for the interfaces that this module requires and provides, the permissions, and the additional module metadata.

API documentation

Generated API documentation.

Code analysis

SonarQube analysis.

Download and configuration

The built artefacts for this module are available. See configuration for repository access, and the Docker image.