589 changes: 204 additions & 385 deletions docs/html/Contents.html
1,093 changes: 545 additions & 548 deletions docs/html/CoreComponents.html
2,613 changes: 1,578 additions & 1,035 deletions docs/html/DataConversionComponents.html
1,182 changes: 607 additions & 575 deletions docs/html/FinancialModule.html
717 changes: 325 additions & 392 deletions docs/html/Introduction.html
629 changes: 237 additions & 392 deletions docs/html/MultiPeril.html
1,390 changes: 1,390 additions & 0 deletions docs/html/ORDOutputComponents.html
1,413 changes: 729 additions & 684 deletions docs/html/OutputComponents.html
623 changes: 231 additions & 392 deletions docs/html/Overview.html
806 changes: 351 additions & 455 deletions docs/html/README.html
679 changes: 262 additions & 417 deletions docs/html/RandomNumbers.html
709 changes: 296 additions & 413 deletions docs/html/ReferenceModelOverview.html
1,051 changes: 504 additions & 547 deletions docs/html/Specification.html
1,000 changes: 436 additions & 564 deletions docs/html/StreamConversionComponents.html
687 changes: 262 additions & 425 deletions docs/html/ValidationComponents.html
744 changes: 303 additions & 441 deletions docs/html/Workflows.html
1,550 changes: 780 additions & 770 deletions docs/html/fmprofiles.html

(Large diffs in docs/html are not rendered by default.)
186 changes: 186 additions & 0 deletions docs/md/DataConversionComponents.md
The following components convert input data in csv format to the binary format required by the calculation components in the reference model:

**Static data**
* **[aggregatevulnerabilitytobin](#aggregatevulnerability)** converts the aggregate vulnerability data.
* **[damagebintobin](#damagebins)** converts the damage bin dictionary.
* **[footprinttobin](#footprint)** converts the event footprint.
* **[lossfactorstobin](#lossfactors)** converts the lossfactors data.
* **[randtobin](#rand)** converts a list of random numbers.
* **[vulnerabilitytobin](#vulnerability)** converts the vulnerability data.
* **[weightstobin](#weights)** converts the weights data.

A reference [intensity bin dictionary](#intensitybins) csv should also exist, although there is no conversion component for this file because it is not needed for calculation purposes.

**Input data**
* **[amplificationtobin](#amplifications)** converts the amplifications data.
* **[coveragetobin](#coverages)** converts the coverages data.
* **[ensembletobin](#ensemble)** converts the ensemble data.
* **[evetobin](#events)** converts a list of event_ids.
* **[itemtobin](#items)** converts the items data.
* **[gulsummaryxreftobin](#gulsummaryxref)** converts the gul summary xref data.
* **[occurrencetobin](#occurrence)** converts the event occurrence data.
* **[returnperiodtobin](#returnperiod)** converts a list of return periods.
* **[periodstobin](#periods)** converts a list of weighted periods (optional).
* **[quantiletobin](#quantile)** converts a list of quantiles (optional).

These components are intended to allow users to generate the required input binaries from csv independently of the original data store and technical environment. All that is needed is to first generate the csv files from the data store (a SQL Server database, for example).

The following components convert the binary input data required by the calculation components in the reference model into csv format:

**Static data**
* **[aggregatevulnerabilitytocsv](#aggregatevulnerability)** converts the aggregate vulnerability data.
* **[damagebintocsv](#damagebins)** converts the damage bin dictionary.
* **[footprinttocsv](#footprint)** converts the event footprint.
* **[lossfactorstocsv](#lossfactors)** converts the lossfactors data.
* **[randtocsv](#rand)** converts a list of random numbers.
* **[vulnerabilitytocsv](#vulnerability)** converts the vulnerability data.
* **[weightstocsv](#weights)** converts the weights data.

**Input data**
* **[amplificationtocsv](#amplifications)** converts the amplifications data.
* **[coveragetocsv](#coverages)** converts the coverages data.
* **[ensembletocsv](#ensemble)** converts the ensemble data.
* **[evetocsv](#events)** converts a list of event_ids.
* **[itemtocsv](#items)** converts the items data.
* **[gulsummaryxreftocsv](#gulsummaryxref)** converts the gul summary xref data.
* **[occurrencetocsv](#occurrence)** converts the event occurrence data.
* **[returnperiodtocsv](#returnperiod)** converts a list of return periods.
* **[periodstocsv](#periods)** converts a list of weighted periods (optional).
* **[quantiletocsv](#quantile)** converts a list of quantiles (optional).

These components are provided for the convenience of viewing the data and debugging.

## Static data

<a id="aggregatevulnerability"></a>
### aggregate vulnerability
***
The aggregate vulnerability file is used by the gulmc component. It maps each aggregate vulnerability_id to the individual vulnerability_ids that make it up. This file must have the following location and filename:

* static/aggregate_vulnerability.bin

##### File format

The csv file should contain the following fields and include a header row.


| Name | Type | Bytes | Description | Example |
|:-------------------------------|--------|--------| :---------------------------------------------|------------:|
| aggregate_vulnerability_id | int | 4 | Oasis aggregate vulnerability_id | 45 |
| vulnerability_id | int | 4 | Oasis vulnerability_id | 45 |

If this file is present, the weights.bin or weights.csv file must also be present. The data must not contain nulls.
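
To illustrate the layout, the conversion amounts to packing each csv row as a pair of 4-byte integers. The following is a minimal Python sketch, not the shipped tool; it assumes little-endian byte order and no file header beyond the records themselves.

```python
import csv
import struct

# Sketch of the aggregate vulnerability csv-to-binary conversion.
# Assumption: each record is two little-endian int32s, with no header.
def aggregate_vulnerability_to_bin(csv_in, bin_out):
    for row in csv.DictReader(csv_in):
        bin_out.write(struct.pack("<ii",
                                  int(row["aggregate_vulnerability_id"]),
                                  int(row["vulnerability_id"])))

if __name__ == "__main__":
    with open("aggregate_vulnerability.csv") as f_in, \
         open("aggregate_vulnerability.bin", "wb") as f_out:
        aggregate_vulnerability_to_bin(f_in, f_out)
```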

##### aggregatevulnerabilitytobin
```
$ aggregatevulnerabilitytobin < aggregate_vulnerability.csv > aggregate_vulnerability.bin
```

##### aggregatevulnerabilitytocsv
```
$ aggregatevulnerabilitytocsv < aggregate_vulnerability.bin > aggregate_vulnerability.csv
```

[Return to top](#dataconversioncomponents)

<a id="damagebins"></a>
### damage bin dictionary
***
…

```
$ footprinttocsv -z > footprint.csv
```

[Return to top](#dataconversioncomponents)

<a id="lossfactors"></a>
### Loss Factors
***
The lossfactors binary maps event_id/amplification_id pairs to post loss amplification factors, and is supplied by the model provider. The first 4 bytes are reserved for future use and the data format is as follows. It is required by the Post Loss Amplification (PLA) workflow and must have the following location and filename:

* static/lossfactors.bin

#### File format
The csv file should contain the following fields and include a header row.

| Name | Type | Bytes | Description | Example |
|:------------------|--------|--------| :---------------------------------------------------------|------------:|
| event_id | int | 4 | Event ID | 1 |
| count | int | 4 | Number of amplification IDs associated with the event ID | 1 |
| amplification_id | int | 4 | Amplification ID | 1 |
| factor | float | 4 | The uplift factor | 1.01 |

No field may contain null values. The csv file does not include the count field; the conversion tools add it when writing the binary and strip it when converting back to csv.
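
To make the count handling concrete, here is a hedged Python sketch of the csv-to-binary direction: it writes the reserved leading 4 bytes (zeroed here, as an assumption), groups consecutive csv rows by event_id, and emits the event_id and count once per group before the (amplification_id, factor) pairs. Little-endian packing and rows pre-sorted by event_id are also assumptions.

```python
import csv
import struct
from itertools import groupby

# Sketch of the lossfactors csv-to-binary conversion. The csv carries
# (event_id, amplification_id, factor); the binary adds a per-event count.
# Assumptions: reserved first 4 bytes written as zero; csv rows are
# grouped consecutively by event_id; little-endian layout.
def lossfactors_to_bin(csv_in, bin_out):
    bin_out.write(struct.pack("<i", 0))  # first 4 bytes reserved for future use
    reader = csv.DictReader(csv_in)
    for event_id, group in groupby(reader, key=lambda r: int(r["event_id"])):
        rows = list(group)
        bin_out.write(struct.pack("<ii", event_id, len(rows)))  # event_id, count
        for r in rows:
            bin_out.write(struct.pack("<if",
                                      int(r["amplification_id"]),
                                      float(r["factor"])))

if __name__ == "__main__":
    with open("lossfactors.csv") as f_in, open("lossfactors.bin", "wb") as f_out:
        lossfactors_to_bin(f_in, f_out)
```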

##### lossfactorstobin
```
$ lossfactorstobin < lossfactors.csv > lossfactors.bin
```

##### lossfactorstocsv
```
$ lossfactorstocsv < lossfactors.bin > lossfactors.csv
```

[Return to top](#dataconversioncomponents)

<a id="rand"></a>
### Random numbers
***
…

```
$ vulnerabilitytocsv -z > vulnerability.csv
```
[Return to top](#dataconversioncomponents)

<a id="weights"></a>
### Weights
***
The vulnerability weights binary contains the weighting of each vulnerability function within each areaperil ID. The data format is as follows. It is required by gulmc when the aggregate_vulnerability file is used, and must have the following location and filename:

* static/weights.bin

#### File format
The csv file should contain the following fields and include a header row.

| Name | Type | Bytes | Description | Example |
|:------------------|--------|--------| :---------------------------------------------------------|------------:|
| areaperil_id | int | 4 | Areaperil ID | 1 |
| vulnerability_id | int | 4 | Vulnerability ID | 1 |
| weight | float | 4 | The weighting factor | 1.0 |

No field may contain null values.
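
As a sketch of the reverse direction, each 12-byte record unpacks to two int32s and one float32. Again this is only illustrative, assuming little-endian records packed back-to-back with no header.

```python
import struct

# Sketch of the weights binary-to-csv conversion: fixed 12-byte records
# of (int32 areaperil_id, int32 vulnerability_id, float32 weight).
# Assumptions: little-endian, no leading header.
RECORD = struct.Struct("<iif")

def weights_to_csv(bin_in, csv_out):
    csv_out.write("areaperil_id,vulnerability_id,weight\n")
    while True:
        chunk = bin_in.read(RECORD.size)
        if len(chunk) < RECORD.size:
            break
        areaperil_id, vulnerability_id, weight = RECORD.unpack(chunk)
        csv_out.write(f"{areaperil_id},{vulnerability_id},{weight:g}\n")

if __name__ == "__main__":
    with open("weights.bin", "rb") as f_in, open("weights.csv", "w") as f_out:
        weights_to_csv(f_in, f_out)
```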

##### weightstobin
```
$ weightstobin < weights.csv > weights.bin
```

##### weightstocsv
```
$ weightstocsv < weights.bin > weights.csv
```

[Return to top](#dataconversioncomponents)

## Input data

<a id="amplifications"></a>
### Amplifications
***
The amplifications binary contains the list of item IDs mapped to amplification IDs. The data format is as follows. It is required by the Post Loss Amplification (PLA) workflow and must have the following location and filename:

* input/amplifications.bin

#### File format
The csv file should contain the following fields and include a header row.

| Name | Type | Bytes | Description | Example |
|:------------------|--------|--------| :---------------------------------------------|------------:|
| item_id | int | 4 | Item ID | 1 |
| amplification_id | int | 4 | Amplification ID | 1 |

The item_ids must start from 1, be contiguous and contain no null values. The binary file contains only the amplification IDs; the item_ids are implicit, starting from 1 and increasing by 1 for each record.
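
Because the item_ids are implicit, the conversion only needs to validate contiguity and write the amplification IDs in order. A hedged Python sketch (little-endian int32s and no leading header are assumptions):

```python
import csv
import struct

# Sketch of the amplifications csv-to-binary conversion. item_ids must
# start at 1 and be contiguous, so only the amplification IDs are stored,
# in item_id order. Assumptions: little-endian int32s, no header.
def amplifications_to_bin(csv_in, bin_out):
    expected_item_id = 1
    for row in csv.DictReader(csv_in):
        if int(row["item_id"]) != expected_item_id:
            raise ValueError(f"item_ids must be contiguous from 1: "
                             f"got {row['item_id']}, expected {expected_item_id}")
        bin_out.write(struct.pack("<i", int(row["amplification_id"])))
        expected_item_id += 1

if __name__ == "__main__":
    with open("amplifications.csv") as f_in, \
         open("amplifications.bin", "wb") as f_out:
        amplifications_to_bin(f_in, f_out)
```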

##### amplificationtobin
```
$ amplificationtobin < amplifications.csv > amplifications.bin
```

##### amplificationtocsv
```
$ amplificationtocsv < amplifications.bin > amplifications.csv
```

[Return to top](#dataconversioncomponents)

<a id="coverages"></a>
### Coverages
***
…

```
$ coveragetocsv < coverages.bin > coverages.csv
```

[Return to top](#dataconversioncomponents)

<a id="ensemble"></a>
### ensemble
***
The ensemble file is used for ensemble modelling (multiple views); it maps sample IDs to particular ensemble ID groups. It is an optional file for use with AAL and LEC outputs. If used, it must have the following location and filename:
* input/ensemble.bin

##### File format
The csv file should contain the following fields and include a header row.

| Name | Type | Bytes | Description | Example |
|:------------------|--------|--------| :-------------------|------------:|
| sidx | int | 4 | Sample ID | 1 |
| ensemble_id | int | 4 | Ensemble ID | 1 |

##### ensembletobin
```
$ ensembletobin < ensemble.csv > ensemble.bin
```

##### ensembletocsv
```
$ ensembletocsv < ensemble.bin > ensemble.csv
```
[Return to top](#dataconversioncomponents)

<a id="events"></a>
### events
***
…

```
$ periodstocsv < periods.bin > periods.csv
```

[Return to top](#dataconversioncomponents)

<a id="quantile"></a>
### Quantile
***
The quantile binary file contains a list of user-specified quantiles. The data format is as follows. It is optionally used by the Quantile Event/Period Loss tables and must have the following location and filename:

* input/quantile.bin

#### File format
The csv file should contain the following fields and include a header row.

| Name | Type | Bytes | Description | Example |
|:------------------|--------|--------| :---------------------------------------------------------|------------:|
| quantile           | float  | 4      | Quantile value between 0 and 1                             | 0.1         |

The quantile field must not contain null values.
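
A sketch of reading the binary back, again under assumptions (little-endian, no header): each record is a single 4-byte float, so a csv value such as 0.1 round-trips through its nearest single-precision representation.

```python
import struct

# Sketch of the quantile binary-to-csv conversion: one float32 per record.
# Assumptions: little-endian, no header. Values are stored in single
# precision, so 0.1 is stored as its nearest float32 representation.
def quantile_to_csv(bin_in, csv_out):
    csv_out.write("quantile\n")
    while True:
        chunk = bin_in.read(4)
        if len(chunk) < 4:
            break
        (q,) = struct.unpack("<f", chunk)
        csv_out.write(f"{q:g}\n")

if __name__ == "__main__":
    with open("quantile.bin", "rb") as f_in, open("quantile.csv", "w") as f_out:
        quantile_to_csv(f_in, f_out)
```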

##### quantiletobin
```
$ quantiletobin < quantile.csv > quantile.bin
```

##### quantiletocsv
```
$ quantiletocsv < quantile.bin > quantile.csv
```

[Return to top](#dataconversioncomponents)

[Go to 4.5 Stream conversion components section](StreamConversionComponents.md)

[Back to Contents](Contents.md)
Binary file added docs/pdf/ktools.pdf