
Commit 34963a2

Merge pull request #67 from HYPERNETS/metadata_db

update docs

2 parents 5ea8756 + 35cc4bc

File tree

5 files changed, +127 -61 lines changed

docs/sphinx/content/users/use_field.rst

Lines changed: 16 additions & 3 deletions
@@ -5,7 +5,20 @@
 .. _use_field:
 
-Using hypernets_processor in the Field
-======================================
+Field Processing User Guide
+===========================
 
-TBC
+Installation
+------------
+
+First clone the project repository from GitHub::
+
+   $ git clone https://github.com/HYPERNETS/hypernets_processor.git
+
+Then install the module with pip::
+
+   $ pip install hypernets_processor/
+
+This should automatically install the dependencies.
+
+If you are installing the module to contribute to its development, it is recommended that you follow the install instructions on the :ref:`developers` page.
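A quick way to confirm the install worked is to import the package from the command line. This is an illustrative check only; it assumes the installed package is importable under the name `hypernets_processor`, matching the repository name::

   $ python -c "import hypernets_processor"

If no error is raised, the module and its dependencies are available.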

docs/sphinx/content/users/use_processing.rst

Lines changed: 0 additions & 11 deletions
This file was deleted.
Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@

.. use_processing - description of running the processor in an automated manner
   Author: seh2
   Email: sam.hunt@npl.co.uk
   Created: 22/10/20

.. _user_processor:

Automated Processing User Guide
===============================

This section provides a user guide for running the `hypernets_processor` module as an automated processor of incoming field data. In this scenario, a set of field HYPSTAR systems regularly sync raw data to a server. Running on this server, the `hypernets_processor` processes the data and adds it to an archive that can be accessed through a user portal.

This section covers installing and setting up the processor, setting up specific jobs (e.g. per field site), and running the automated job scheduler.
Server Installation
-------------------

First clone the project repository from GitHub::

   $ git clone https://github.com/HYPERNETS/hypernets_processor.git

To facilitate proper version control of the processor configuration, create a new branch for your installed code::

   $ git checkout -b <installation_name>_operational

Then install the module with setuptools, including the option to set up the processor::

   $ python setup.py develop --setup-processor

This automatically installs the processor and its dependencies, followed by running a processor setup routine (see :ref:`user_processor-processor_setup` for more details).

Finally, commit any changes made to the module during setup and push::

   $ git add -A
   $ git commit -m "initial setup on server"
   $ git push

Any future changes to the processor configuration should be committed, to ensure appropriate version control. Updates to the processor are then made by merging release branches onto the operational branch (see :ref:`user_processor-updates`).

.. _user_processor-processor_setup:

Processor Configuration
-----------------------

To set the processor configuration, a setup routine is run upon installation. It can be rerun at any time with::

   $ hypernets_processor_setup

This sets up the processor configuration so that it correctly points to the appropriate log file, directories and databases, creating any as necessary. By default, any created log files or databases are added to the defined processor working directory.

For further configuration, one can directly edit the processor configuration file, e.g.::

   $ vim <installation_directory>/hypernets_processor/etc/processor.config
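As an illustration only, a configuration file in this style might contain entries such as the following; every section and option name here is hypothetical, so consult the installed file for the real fields::

   [Processor]
   working_directory = /home/hypernets/processing

   [Logging]
   log_path = /home/hypernets/processing/processor.log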

.. _user_processor-job_setup:

Job Setup
---------

In the context of the `hypernets_processor`, processing a particular data stream from a given field site is defined as a job.

To initialise a new job to run in the processor, run the following::

   $ hypernets_processor_job_init -n <job_name> -w <job_working_directory> -i <raw_data_directory> --add-to-scheduler

where:

* `job_name` - the name of the job within the context of the hypernets processor (could, for example, be set as the site name).
* `job_working_directory` - the working directory of the job. A job configuration file, called `<job_name>.config`, is created in this directory.
* `raw_data_directory` - the directory the field data is synced to.
* `--add-to-scheduler` - option to add the job to the list of scheduled jobs; it should be set.

As well as defining required job configuration information, the job configuration file can also be used to override any processor configuration defaults (e.g. the chosen calibration function, or which file levels to write), except the set of protected processor configuration defaults (e.g. the processor version number). To see which configuration values may be set, review the processor configuration file.
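The layering of job settings over processor defaults can be sketched with Python's standard `configparser`, assuming the `.config` files use INI syntax. Note that the section and option names below are hypothetical illustrations, not the actual `hypernets_processor` configuration schema:

```python
# Sketch of job config overriding processor defaults.
# ASSUMPTION: .config files are INI-style; all names below are invented.
import configparser

processor_defaults = """
[Processing]
calibration_function = standard
write_l1 = True
"""

job_overrides = """
[Processing]
calibration_function = alternative
"""

config = configparser.ConfigParser()
config.read_string(processor_defaults)  # processor-wide defaults first
config.read_string(job_overrides)       # job file overrides matching options

print(config["Processing"]["calibration_function"])  # -> alternative
print(config["Processing"]["write_l1"])              # -> True (not overridden)
```

Options present in the job file replace the processor defaults, while unset options fall through to the defaults.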
For all jobs, it is important that the relevant metadata be added to the metadata database, so that it can be included in the data products.

.. _user_processor-scheduler:

Run Scheduler
-------------

Once set up, the automated processing scheduler can be started with::

   $ hypernets_processor_scheduler

To see the available options, try::

   $ hypernets_processor_scheduler --help

All jobs are run regularly, processing any new data synced to the server from the field since the last run. The run schedule is defined in the scheduler configuration file, which may be edited as::

   $ vim <installation_directory>/hypernets_processor/etc/scheduler.config

Processed products are added to the data archive and listed in the archive database. Any anomalies are added to the anomaly database. More detailed job-related log information is added to the job log file. Summary log information for all jobs is added to the processor log file.

To amend the list of scheduled jobs, edit the list of job configuration files in the processor jobs file as::

   $ vim <installation_directory>/hypernets_processor/etc/jobs.txt
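As a sketch, the jobs file simply lists one job configuration file per line; the paths below are invented examples::

   /home/hypernets/jobs/site_a/site_a.config
   /home/hypernets/jobs/site_b/site_b.config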

.. _user_processor-updates:

Updates
-------

Updates to the processor are made by merging release branches onto the operational branch.
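For example, assuming a release branch named `v0.x_release` exists on the remote (the branch name here is illustrative), an update could look like::

   $ git fetch origin
   $ git checkout <installation_name>_operational
   $ git merge origin/v0.x_release
   $ git push

Committing configuration changes as they are made (as recommended above) keeps such merges clean.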

docs/sphinx/content/users/users.rst

Lines changed: 9 additions & 4 deletions
@@ -5,13 +5,18 @@
 .. _users:
 
-Users
-=====
+User Guide
+==========
+
+Usage
+-----
+
+There are two main use cases for the `hypernets_processor` package. The primary function of the software is the automated preparation of data retrieved from network sites for distribution to users. Additionally, the software may also be used for ad-hoc processing of particular field acquisitions, for example for testing instrument operation in the field. For information on each of these use cases, click on one of the following links:
 
 .. toctree::
    :maxdepth: 2
 
-   users_getting_started
    use_field
-   use_processing
+   user_processor
    atbd

docs/sphinx/content/users/users_getting_started.rst

Lines changed: 0 additions & 43 deletions
This file was deleted.
