
Releases: codema-dev/drem

Publish v0.4.0

03 Nov 16:12
9891797


Added

  • Roughly estimate (not yet in a prefect flow...) & map Small Area energy demand for:

    • Residential, using SEAI BER archetypes and CSO 2016 Census household statistics
    • Commercial, using VO floor areas and CIBSE 2009 / Dublin LA derived benchmarks
    • Mapping, using CSO 2016 Census Small Areas and Shane McGuinness postcodes
  • Clean CSO 2019 postcode network gas consumption (residential & non-residential) and link these demands to postcode household statistics (CSO 2016 Census) and to postcode geometries. These demands can be used as a 'ground truth' for district-level heating demand.

  • Roughly adapt BER household fabric info, CSO Small Area stats & GeoDirectory household stats via scripting for input to CityEnergyAnalyst

  • Download VO data via their API, one Local Authority (LA) at a time

  • Add generic dask_dataframe_tasks to wrap generic dask dataframe methods as prefect tasks (see the first sketch after this list)

  • Add filepath_tasks so that functions can be used to find ROOT_DIR, which can then be easily mocked out in the prefect pipeline dummy flows...

  • Add immutable dicts to utilities.filepaths to store filepaths across files (WIP; see the second sketch after this list)

  • Add a flow visualization mixin that transform tasks can inherit to visualize each task as a flow chart, so that non-programmers can follow the transform pipeline at a glance...
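
A minimal sketch of the dask_dataframe_tasks idea above — the task names shown (read_parquet, to_parquet) are illustrative assumptions, not the module's exact contents:

```python
import dask.dataframe as dd
from prefect import task


@task
def read_parquet(filepath: str) -> dd.DataFrame:
    """Lazily read a parquet file into a dask DataFrame."""
    return dd.read_parquet(filepath)


@task
def to_parquet(ddf: dd.DataFrame, filepath: str) -> None:
    """Write a dask DataFrame back to parquet as a flow checkpoint."""
    ddf.to_parquet(filepath)
```

Wrapping the raw dask methods like this keeps flows declarative: the same generic task can appear at several points in a pipeline without rewriting any IO logic.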
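And a sketch of how the immutable filepath dicts in utilities.filepaths might be built with the standard library; the keys and paths are placeholders:

```python
from pathlib import Path
from types import MappingProxyType

ROOT_DIR = Path(__file__).resolve().parents[1]
DATA_DIR = ROOT_DIR / "data"

# MappingProxyType yields a read-only view, so filepaths shared
# across modules cannot be mutated accidentally.
RAW_FILEPATHS = MappingProxyType(
    {
        "ber": DATA_DIR / "raw" / "ber.parquet",
        "vo": DATA_DIR / "raw" / "vo.parquet",
    }
)
```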

Changed

  • Rename commercial benchmark filenames to remove irrelevant strings (such as 'Table X.X') and move benchmarks that were unmatched to VO uses into the appropriate corresponding benchmark

  • Skip the BER download if the file already exists

  • Refactor all residential & commercial etl flow tasks to read/write files instead of passing DataFrames & GeoDataFrames. Consequently, each step of the pipeline is now checkpointed once a flow runs. This enables running each transform file independently of flow context. It also enables Excel, Tableau or QGIS users to visualize the intermediate steps, provided that the data is checkpointed in a compatible file format.

  • Refactor testing of the residential etl to mock out DATA_DIR rather than replacing it via a prefect flow parameter (see the first sketch after this list). Consequently the prefect flow visualisation for this etl is much cleaner, as DATA_DIR is no longer split into tens of div bubbles. It is also now possible to use prefect's built-in file checkpointing in place of explicit read/write calls in each etl task, as the DATA_DIR variable can be passed outside of flow context during the Task instantiation step at the top of the etl file. It is still possible to run transform tasks independently of a flow, provided that a default read-from-filepath argument is set for each data file.

  • Refactor drem.transform.dublin_postcodes into a prefect flow & dissolve all Co Dublin postcodes into one multipolygon geometry using geopandas dissolve (see the second sketch after this list).

  • Pull generalisable pandas and geopandas prefect tasks into drem.utilities.pandas_tasks and drem.utilities.geopandas_tasks respectively. These are only unit tested where their functionality differs from the underlying pandas or geopandas implementation...
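
A minimal sketch of the DATA_DIR mocking described above, assuming pytest's monkeypatch fixture and a hypothetical drem.etl.residential module layout:

```python
import pandas as pd

from drem.etl import residential  # hypothetical module path


def test_residential_etl_runs_on_sample_data(tmp_path, monkeypatch):
    """Point the etl at a temporary directory of sample files."""
    sample = pd.DataFrame({"small_area": ["A"], "period_built": ["1971 - 1980"]})
    sample.to_parquet(tmp_path / "sa_statistics.parquet")

    # Swap the module-level DATA_DIR so every task reads & writes
    # inside tmp_path instead of the real data directory.
    monkeypatch.setattr(residential, "DATA_DIR", tmp_path)

    residential.flow.run()  # assumes the flow object is module-level
```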
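And a sketch of the geopandas dissolve step; the filepath, column name and postcode labels are assumptions:

```python
import geopandas as gpd

postcodes = gpd.read_file("dublin_postcodes.shp")  # illustrative path

# Assumption: city rows are labelled 'Dublin 1', 'Dublin 2', ... and
# everything else is a County Dublin postcode. Relabel the county rows
# with one key, then merge their geometries into a single multipolygon.
mask = ~postcodes["postcode"].str.startswith("Dublin")
postcodes.loc[mask, "postcode"] = "Co Dublin"
dissolved = postcodes.dissolve(by="postcode").reset_index()
```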

Publish v0.3.1 (#123)

08 Oct 08:24
eacb96a


Added

  • Create a generic Download Task class that can be called to create download tasks that fetch data directly from any url. Unit test this generic module using responses to mock http requests (see the first sketch after this list).

  • Clean the electricity demands in the closed-access CRU Smart Meter Trials data in drem.transform.cru_electricity into a usable tabular format via dask.dataframe (see the second sketch after this list)

  • Create an electricity diversity curve from the closed-access CRU Smart Meter Trials electricity data
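
A minimal sketch of what such a generic Download task might look like in prefect 0.x, together with a responses-mocked unit test; the class name and test layout here are assumptions:

```python
import requests
import responses
from prefect import Task


class Download(Task):
    """Generic prefect task: fetch a url and save the body to a file."""

    def __init__(self, url: str, **kwargs):
        self.url = url
        super().__init__(**kwargs)

    def run(self, filepath: str) -> None:
        response = requests.get(self.url)
        response.raise_for_status()
        with open(filepath, "wb") as file:
            file.write(response.content)


@responses.activate
def test_download_writes_response_body(tmp_path):
    """responses intercepts the http call, so no network is needed."""
    responses.add(responses.GET, "http://example.com/data.csv", body="a,b\n1,2")
    Download(url="http://example.com/data.csv").run(filepath=str(tmp_path / "data.csv"))
    assert (tmp_path / "data.csv").read_bytes() == b"a,b\n1,2"
```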
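And a sketch of the kind of dask.dataframe ingestion used for the CRU files; because the data is closed-access, the raw file format shown here is an assumption:

```python
import dask.dataframe as dd

# Assumed raw format: one whitespace-separated reading per line,
# e.g. "1392 19503 0.14" = meter id, day/half-hour code, demand.
raw = dd.read_csv(
    "SM_electricity/*.txt",  # illustrative glob over the trial files
    sep=" ",
    names=["meter_id", "timeid", "demand"],
)
raw.to_parquet("cru_electricity.parquet")  # tabular, query-friendly output
```

dask reads the many trial files lazily and in parallel, which is the point of reaching for it over plain pandas here.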

Changed

  • Replace drem.extract with separate download, unzip and convert tasks so that each task does one thing and one thing only (more modular code) & read in files of raw data in transform:

    • Use great_expectations at each stage of the process to ensure expectations are met, such as downloaded data being as expected (column names, missing values etc.) and each transform task cleaning in the specified manner (see the first sketch after this list). This replaces the previous implementation of fragile functional tests on each transform task, which would break upon a simple column name change. great_expectations are also quicker to whip up, as they are automated...

    • Rewrite drem.utilities.ftest_data so that, instead of creating sample parquet files of raw data to be run through the functional flow test (which skips download, unzip & convert), it creates data in the same file format as the actual raw data used in non-test flows (so now only download tasks are skipped in the functional/end-to-end tests)

    • Create a module drem.utilities.convert to transform any file format (xlsx, csv, shp) into parquet in the etl flow (previously this was done inside catch-all extract tasks; see the second sketch after this list)

    • Create a module drem.utilities.zip to unzip zipped folders prior to conversion to parquet.

    • Refactor drem.extract.ber into drem.download.ber, which merely logs into the BER Public search and downloads the data (leaving unzipping & conversion to parquet to other tasks...)

    • Rewrite the entire drem.etl.residential flow using generic Download tasks (defining the url for each during task initialisation at the top of the module), unzip zipped folders, convert to parquet & call transform tasks with a filepath input (rather than passing a DataFrame).

  • Pull generic pandas tasks into drem.utilities.pandas_tasks and run import drem.utilities.pandas_tasks as pdt to call them within any flow (see the final sketch after this list)

  • Refactor drem.transform.ber into drem.estimate.ber_archetypes and drem.transform.ber, with transform only performing cleaning/filtering operations. This enables generic command line operations on clean Dublin BER data with all 204 columns, as archetype generation is no longer a barrier...

  • Refactor all transform tasks into prefect sub-flows so that:

    • transform tasks log their progress step-by-step, making it obvious at which step a transform task fails and why

    • prefect flow visualization can be used for transform tasks as well as etl flows, so that non-programmers can easily see what steps are being performed on the data.

  • Refactor Small Area Statistics so that any table in the glossary excel file can be queried by copying and pasting the table name from the Table Within Themes column into the target argument of drem.transform.sa_statistics._extract_rows_from_glossary. Previously, this function was hard-coded to extract only the period built table from the glossary.
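
A minimal sketch of such a great_expectations check (classic pandas-dataset API; the filepath and column names are placeholders):

```python
import great_expectations as ge
import pandas as pd

df = pd.read_parquet("data/processed/sa_statistics.parquet")  # illustrative
gdf = ge.from_pandas(df)

# Fail fast if an upstream download or transform drifts.
gdf.expect_column_to_exist("small_area")
gdf.expect_column_values_to_not_be_null("period_built")

result = gdf.validate()
assert result.success
```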
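And a simplified sketch of a drem.utilities.convert-style dispatcher; this is an assumption of the real module's shape, not its exact contents:

```python
from pathlib import Path

import geopandas as gpd
import pandas as pd


def to_parquet(filepath: str) -> None:
    """Convert an xlsx, csv or shp file to a sibling parquet file."""
    path = Path(filepath)
    if path.suffix == ".xlsx":
        df = pd.read_excel(path)
    elif path.suffix == ".csv":
        df = pd.read_csv(path)
    elif path.suffix == ".shp":
        df = gpd.read_file(path)
    else:
        raise ValueError(f"No parquet converter implemented for {path.suffix}")
    df.to_parquet(path.with_suffix(".parquet"))
```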
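Finally, a sketch of the pdt alias in use inside a flow; get_rows is an assumed wrapper name (around DataFrame.query), not necessarily the real task:

```python
import pandas as pd
from prefect import Flow, task

import drem.utilities.pandas_tasks as pdt  # as described above


@task
def load_census() -> pd.DataFrame:
    return pd.read_parquet("data/census.parquet")  # illustrative path


with Flow("residential_etl") as flow:
    census = load_census()
    dublin = pdt.get_rows(census, "county == 'Dublin'")  # assumed task name
```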

Removed

  • Remove all drem.* namespace tasks and instead import tasks directly, via from drem.transform.ber import transform_ber

  • Delete all non-etl functional tests (previously there was a functional test for each transform task), as they are all being replaced with great_expectations tasks. The previous implementation was too fragile, as it broke every time a single column was changed. Expectations, on the other hand, are designed to be dynamic and easily updatable...

  • Remove all tdda-related code, as it has been replaced by great_expectations for functional tests and pytest for individual task tests... This includes removing all unit test data from source control...

v0.1.2

20 Aug 23:23
24f88ea


Pre-release

Resolves a dependency conflict bug in v0.1.1

First Release

20 Aug 16:23
0a4d817


Pre-release
v0.1.1

Add PyPI publish on GitHub Release & PyPI badge (#32)