diff --git a/docs/INDEX.md b/docs/INDEX.md index 76c7c62cff..446dc29812 100644 --- a/docs/INDEX.md +++ b/docs/INDEX.md @@ -1,8 +1,11 @@ -# eCLM Documentation +# Welcome to eCLM! -```{important} -**Welcome!** You are viewing the first version of the documentation for eCLM. This is a living document, which means it will be continuously updated and improved. Please check back regularly for the latest information and updates. +```{warning} +TODO ``` eCLM is based on version 5.0 of the Community Land Model ([CLM5](https://www.cesm.ucar.edu/models/clm)) with simplified infrastructure for build and namelist generation. The build system is handled entirely by Cmake and namelists are generated through a small set of Python scripts. Similar to CLM5, eCLM is forced with meteorological data and uses numerous input streams on soil properties, land cover and land use, as well as complex parameter sets on crop phenology, and plant hydraulics for simulations. +An overview of eCLM is given in this [poster](https://virtual.oxfordabstracts.com/event/75166/submission/35), presented at [RSECon'25](https://rsecon25.society-rse.org). 
+ +![eCLM poster](users_guide/images/rsecon25_eclm_poster.jpg) diff --git a/docs/_toc.yml b/docs/_toc.yml index 7dd640808d..2c0a2aaccf 100644 --- a/docs/_toc.yml +++ b/docs/_toc.yml @@ -5,15 +5,10 @@ root: INDEX parts: - caption: Introduction chapters: - - file: users_guide/installation/README - title: Installing eCLM - file: users_guide/introduction/introduction title: Scientific Background - - file: users_guide/introduction/perturbation - title: Atmospheric forcing noise - - file: users_guide/introduction/soil_hydraulic_parameters - title: Soil hydraulic properties from surface file - + - file: users_guide/introduction/first_tutorial.md + title: First Tutorial - caption: User's Guide chapters: - file: users_guide/case_examples/README @@ -29,15 +24,16 @@ parts: - file: users_guide/analyzing_model_output title: Analyzing model output - - file: users_guide/case_creation/README + - url: https://hpscterrsys.github.io/eCLM_static-file-generator/INDEX.html title: Creating a custom case + + - file: users_guide/misc_tutorials/README + title: Miscellaneous tutorials sections: - - file: users_guide/case_creation/1_create_grid_file - - file: users_guide/case_creation/2_create_mapping_file - - file: users_guide/case_creation/3_create_domain_file - - file: users_guide/case_creation/4_create_surface_file - - file: users_guide/case_creation/5_modifications_surface_domain_file - - file: users_guide/case_creation/6_create_atm_forcings + - file: users_guide/misc_tutorials/perturbation + title: Atmospheric forcing noise + - file: users_guide/misc_tutorials/soil_hydraulic_parameters + title: Soil hydraulic properties from surface file - caption: Developer's Guide chapters: @@ -51,7 +47,3 @@ parts: - file: reference/history_fields - url: https://escomp.github.io/CTSM/release-clm5.0/tech_note/index.html title: CLM5 Technical Note - - url: https://github.com/HPSCTerrSys/eCLM_static-file-generator/blob/main/README.md) - title: eCLM static file generator - - url: 
https://hpscterrsys.github.io/TSMP2_workflow-engine - title: TSMP2 Workflow Engine diff --git a/docs/users_guide/case_creation/1_create_grid_file.md b/docs/users_guide/case_creation/1_create_grid_file.md deleted file mode 100644 index 1a859968c7..0000000000 --- a/docs/users_guide/case_creation/1_create_grid_file.md +++ /dev/null @@ -1,55 +0,0 @@ -# Create SCRIP grid file - -The first step in creating your input data is to define your model domain and the grid resolution you want to model in. There are several options to create the SCRIP grid file that holds this information: -1. Using the `mkscripgrid.py` script to create a regular latitude longitude grid. -2. Using the `produce_scrip_from_griddata.ncl` script to convert an existing netCDF file that holds the latidude and longitude centers of your grid in 2D (This allows you to create a curvilinear grid). -3. Similar to the first option but using the `scrip_mesh.py` script to create the SCRIP grid file. - -To start the SCRIP grid file creation navigate into the `mkmapgrids` directory where you will find the above mentioned scripts. - -```sh -cd mkmapgrids -``` - -## 1. Create SCRIP grid file with `mkscripgrid.py` - -To use `mkscripgrid.py`, first open the script (for example using vim text editor) and adapt the variables that describe your grid. These include your grid name, the four corner points of your model domain as well as the resolution (lines 42-50 of the script). Then you can execute the script: - -```sh -python mkscripgrid.py -``` - -```{attention} -The `mkscripgrid.py` script requires numpy and netCDF4 python libraries to be installed (use pip install to do that if not already installed). -``` - -The output will be a SCRIP grid netCDF file containing the grid dimension and the center and corners for each grid point. It will have the format `SCRIPgrid_"Your grid name"_nomask_c"yymmdd".nc` - -## 2. 
Create SCRIP grid file from griddata with `produce_scrip_from_griddata.ncl` - -Unfortunately, NCL is not maintained anymore in the new software stages. Therefore, in order to use it you first need to load an older Stage and the required software modules: - -```sh -module load Stages/2020 -module load Intel/2020.2.254-GCC-9.3.0 -module load ParaStationMPI/5.4.7-1 -module load NCL -``` - -Next, adapt the input in `produce_scrip_from_griddata.ncl` to your gridfile.This includes choosing a name for your output file "OutFileName", adjusting the filename of your netcdf file in line 9 and the variable names for longitude/latitude in lines 10-11. Then execute: - -```sh -ncl produce_scrip_from_griddata.ncl -``` - -## 3. Create SCRIP grid file from griddata using `scrip_mesh.py` - -Alternatively to the first option, you can use the python script `scrip_mesh.py`. Like the ncl script it can create SCRIP files including the calculation of corners. It takes command line arguments like this: - -```sh -python3 scrip_mesh.py --ifile NC_FILE.nc --ofile OUTPUT_SCRIP.nc --oformat SCRIP # replace NC_FILE.nc with your netcdf file and choose a name for your output SCRIP grid file for OUTPUT_SCRIP.nc -``` - ---- - -**Congratulations!** You successfully created your SCRIP grid file and can now move on to the next step. \ No newline at end of file diff --git a/docs/users_guide/case_creation/2_create_mapping_file.md b/docs/users_guide/case_creation/2_create_mapping_file.md deleted file mode 100644 index 1e23c821a8..0000000000 --- a/docs/users_guide/case_creation/2_create_mapping_file.md +++ /dev/null @@ -1,39 +0,0 @@ -# Create mapping files - -To start the mapping file creation navigate into the `mkmapdata` directory where you will find the script needed for this step. - -```sh -cd ../mkmapdata -``` - -Before you run `runscript_mkmapdata.sh` you need to adapt some environment variables in lines 23-25 of the script. 
For this open the script (for example using vim text editor) and enter the name of your grid under `GRIDNAME` (same as what you used for the SCRIP grid file). For `CDATE`, use the date that your SCRIP grid file was created (per default the script uses the current date, if you created the SCRIPgrid file at some other point, you find the date of creation at the end of your SCRIPgrid file or in the file information). Lastly, provide the full path and name of your SCRIP grid file under `GRIDFILE`. Save and close the script. - -To create your mapping files, you need a set of rawdata. If you are a JSC user you can simply refer to the common data repository by adapting the "rawpath" path in line 29 of the script. - -``` -rawpath="/p/scratch/cslts/shared_data/rlmod_eCLM/inputdata/surfdata/lnd/clm2/mappingdata/grids" -``` - -For non JSC users, download the data and adapt "rawpath" to their new location. To download the data to the directory use: - -```sh -wget --no-check-certificate -i clm_mappingfiles.txt -``` - -Now you can execute the script: - -```sh -sbatch runscript_mkmapdata.sh -``` - -The output will be a `map_*.nc` file for each of the rawdata files. These files are the input for the surface parameter creation weighted to your grid specifications. - -To generate the domain file in the next step a mapfile is needed. This can be any of the generated `map_*.nc` files. So, set the environment variable `MAPFILE` for later use: - -```sh -export MAPFILE="path to your mapfiles"/"name of one of your map files" -``` - ---- - -**Congratulations!** You successfully created your mapping files and can now move on to the next step. 
\ No newline at end of file diff --git a/docs/users_guide/case_creation/3_create_domain_file.md b/docs/users_guide/case_creation/3_create_domain_file.md deleted file mode 100644 index 9ea49b9b2c..0000000000 --- a/docs/users_guide/case_creation/3_create_domain_file.md +++ /dev/null @@ -1,37 +0,0 @@ -# Create domain file - -In this step you will create the domain file for your case using `gen_domain`. First, you need to navigate into the `gen_domain_files/src/` directory and compile it with the loaded modules ifort, imkl, netCDF and netCDF-Fortran. - -```sh -cd ../gen_domain_files/src/ - -# Compile the script -ifort -o ../gen_domain gen_domain.F90 -mkl -lnetcdff -lnetcdf -``` -```{attention} -If you get a message saying "ifort: command line remark #10412: option '-mkl' is deprecated and will be removed in a future release. Please use the replacement option '-qmkl'" or the compiling fails, replace `-mkl` with `-qmkl`. -``` - -Before running the script you need to export the environment variable `GRIDNAME` (same as what you used for the SCRIP grid file and in the `runscript_mkmapdata.sh` script). - -```sh -export GRIDNAME="your gridname" -``` -Then you can run the script: -```sh -cd ../ -./gen_domain -m $MAPFILE -o $GRIDNAME -l $GRIDNAME -``` - -The output of this will be two netCDF files `domain.lnd.*.nc` and `domain.ocn.*.nc` that define the land and ocean mask respectively. The land mask will inform the atmosphere and land inputs of eCLM when running a case. - -However, `gen_domain` defaults the use of the variables `mask` and `frac` on these files to be for ocean models, i.e. 0 for land and 1 for ocean. So to use them you have to either manipulate the `domain.lnd.*.nc` file to have mask and frac set to 1 instead of 0 (WARNING: some netCDF script languages have `mask` as a reserved keyword e.g. NCO, use single quotation marks as workaround). 
-Or simply swap/rename the `domain.lnd.*.nc` and `domain.ocn.*.nc` file: - -```sh -mv domain.lnd."your gridname"_"your gridname"."yymmdd".nc temp.nc -mv domain.ocn."your gridname"_"your gridname"."yymmdd".nc domain.lnd."your gridname"_"your gridname"."yymmdd".nc -mv temp.nc domain.ocn."your gridname"_"your gridname"."yymmdd".nc -``` - -**Congratulations!** You successfully created your domain files and can now move on to the final next step to create your surface data. diff --git a/docs/users_guide/case_creation/4_create_surface_file.md b/docs/users_guide/case_creation/4_create_surface_file.md deleted file mode 100644 index fe1d654c70..0000000000 --- a/docs/users_guide/case_creation/4_create_surface_file.md +++ /dev/null @@ -1,43 +0,0 @@ -# Create surface file - -In this step you will create the surface data file using the `mksurfdata.pl` script. -First, we will compile the script with `make` in the `mksurfdata/src` directory. - - -```sh -cd ../mksurfdata/src - -# Compile the script -make -``` - -The script needs a few environment variables such as `GRIDNAME` (exported in the previous step), `CDATE` (date of creation of the mapping files which can be found at the end of each `map_*` file before the file extension) and `CSMDATA` (the path where the raw data for the surface file creation is stored) before executing the script. - -```sh -export CDATE=`date +%y%m%d` -export CSMDATA="/p/scratch/cslts/shared_data/rlmod_eCLM/inputdata/" # this works for JSC users only, for non JSC users see below - -# generate surfdata -./mksurfdata.pl -r usrspec -usr_gname $GRIDNAME -usr_gdate $CDATE -l $CSMDATA -allownofile -y 2000 -crop -``` - -```{tip} -The `-crop` option used in `./mksurfdata.pl` will create a surface file for BGC mode with all crops active. If you want to use SP mode, you should run without this option. - -Use `./mksurfdata.pl -help` to display all options possible for this script. 
-For example: -- `hirespft` - If you want to use the high-resolution pft dataset rather than the default lower resolution dataset (low resolution is at half-degree, high resolution at 3minute), hires is only available for present-day (2000) -``` - -**For non JSC users**: -Non JSC users can download the raw data from HSC datapub using this link or from the official rawdata repository using `wget` before submitting the script. - -```sh -wget "RAWDATA_LINK"/"NAME_OF_RAWDATA_FILE" --no-check-certificate # repeat this for every rawdata file -``` - -You will see a "Successfully created fsurdat files" message displayed at the end of the script if it ran through. - -The output will be a netCDF file similar to `surfdata_"your grid name"_hist_78pfts_CMIP6_simyr2000_c"yymmdd".nc`. - -**Congratulations!** You successfully created your surface data file! In the next step you will learn how to create your own atmospheric forcings. \ No newline at end of file diff --git a/docs/users_guide/case_creation/5_modifications_surface_domain_file.md b/docs/users_guide/case_creation/5_modifications_surface_domain_file.md deleted file mode 100644 index 17f78027a6..0000000000 --- a/docs/users_guide/case_creation/5_modifications_surface_domain_file.md +++ /dev/null @@ -1,39 +0,0 @@ -# Modification of the surface and domain file - - -## Handling negative longitudes and the landmask - -eCLM does not accept negative longitudes for the surface and domain file. In case you used a grid file to create your SCRIP grid file which used negative longitudes (instead of creating it through the `mkscripgrid.py` script), these need to be converted into the 0 to 360 degree system used by eCLM. You can use the `mod_domain.sh` script in the main directory `eCLM_static_file_workflow` to do this. 
- -Before executing the script adapt the paths to your [surface file](https://hpscterrsys.github.io/eCLM/users_guide/case_creation/4_create_surface_file.html#create-surface-file) and [domain file](https://hpscterrsys.github.io/eCLM/users_guide/case_creation/3_create_domain_file.html#create-domain-file). - -`mod_domain.sh` also replaces the `mask` and `frac` variables of your domain file with the information from a `landmask_file` (this `landmask.nc` file should contain the 2D variables `mask` and `frac` that contain your landmask (value 1 for land and 0 for ocean)). This step should not be necessary as you already swapped the `domain.lnd.*.nc` and `domain.ocn.*.nc` file when creating them. However, for some domains (e.g. the ICON grid) the mask from the rawdata may not correctly represent your landmask. Additionally, if you want to replace the surface parameters with higher resolution data (see below), you may need to update the landmask as well to match your surface parameters (e.g. coast lines may have changed). - -## Modifying surface parameters - -You may want to modify the default soil, landuse or other land surface data on the surface file if you have measurements or a different data source of higher resolution or similar available. -You can do this by accessing the relevant variables on the surface file. - -Variables you want to modify may include (non-exhaustive list): - -Soil: -- `PCT_SAND`: percentage sand at soil levels (10 levels are considered) -- `PCT_CLAY`: percentage clay at soil levels (10 levels are considered) -- `ORGANIC`: organic matter density at soil levels (10 levels are considered) - -Landuse at the landunit level ([Fig. 
2](https://hpscterrsys.github.io/eCLM/users_guide/introduction_to_eCLM/introduction.html#fig2)): -- `PCT_NATVEG`: total percentage of natural vegetation landunit -- `PCT_CROP`: total percentage of crop landunit -- `PCT_URBAN`: total percentage of urban landunit -- `PCT_LAKE`: total percentage of lake landunit -- `PCT_GLACIER`: total percentage of glacier landunit -- `PCT_WETLAND`: total percentage of wetland landunit - -Types of crop and natural vegetation at the patch level ([Fig. 2](https://hpscterrsys.github.io/eCLM/users_guide/introduction_to_eCLM/introduction.html#fig2)): - -- `PCT_NAT_PFT`: percent plant functional type (PFT) on the natural veg landunit (% of landunit) (15 PFTs are considered see here for a list of PFTs) -- `PCT_CFT`: percent crop functional type (CFT) on the crop landunit (% of landunit) (2 CFTs are considered in SP mode, 64 CFTs are considered in BGC mode, see here for a list of CFTs) - -Land fraction: -- `LANDFRAC_PFT`: land fraction from PFT dataset -- `PFTDATA_MASK`: land mask from pft dataset, indicative of real/fake points \ No newline at end of file diff --git a/docs/users_guide/case_creation/6_create_atm_forcings.md b/docs/users_guide/case_creation/6_create_atm_forcings.md deleted file mode 100644 index 539fea3413..0000000000 --- a/docs/users_guide/case_creation/6_create_atm_forcings.md +++ /dev/null @@ -1,95 +0,0 @@ -# Create atmospheric forcing files - -There exist a few global standard forcing data sets that can be downloaded together with their domain file from the official data repository via this link. For beginners, it is easiest to start with these existing data files. - -- GSWP3 NCEP forcing dataset -- CRUNCEP dataset -- Qian dataset - - -To run with your own atmospheric forcing data, you need to set them up in NetCDF format that can be read by the atmospheric data model `DATM`. - -There is a list of eight variables that are expected to be on the input files. 
The names and units can be found in the table below (in the table TDEW and SHUM are optional fields that can be used in place of RH). The table also lists which of the fields are required and if not required what the code will do to replace them. If the names of the fields are different or the list is changed from the standard list of eight fields: FLDS, FSDS, PRECTmms, PSRF, RH, TBOT, WIND, and ZBOT, the resulting streams file will need to be modified to take this into account. - -```{list-table} Atmospheric forcing fields adapted from CESM1.2.0 User's Guide Documentation. -:header-rows: 1 -:name: tab1 - -* - Short-name - - Description - - Unit - - Required? - - If NOT required how replaced -* - FLDS - - incident longwave - - W/m2 - - No - - calculates based on Temperature, Pressure and Humidity (NOTE: The CRUNCEP data includes LW down, but by default we do NOT use it -- we use the calculated values) -* - FSDS - - incident solar - - W/m2 - - Yes - - / -* - FSDSdif - - incident solar diffuse - - W/m2 - - No - - based on FSDS -* - FSDSdir - - incident solar direct - - W/m2 - - No - - based on FSDS -* - PRECTmms - - precipitation - - mm/s - - Yes - - / -* - PSRF - - pressure at the lowest atm level - - Pa - - No - - assumes standard-pressure -* - RH - - relative humidity at the lowest atm level - - \% - - No - - can be replaced with SHUM or TDEW -* - SHUM - - specific humidity at the lowest atm level - - kg/kg - - Optional in place of RH - - can be replaced with RH or TDEW -* - TBOT - - temperature at the lowest atm level - - K (or can be C) - - Yes - - / -* - TDEW - - dew point temperature - - K (or can be C) - - Optional in place of RH - - can be replaced with RH or SHUM -* - WIND - - wind at the lowest atm level - - m/s - - Yes - - / -* - ZBOT - - observational height - - m - - No - - assumes 30 meters -``` - -All of the variables should be dimensioned: time, lat, lon, with time units in the form of "days since yyyy-mm-d hh:mm:ss" and a calendar attribute that can 
be "noleap" or "gregorian". There should be separate files for each month called `YYYY-MM.nc` where YYYY-MM corresponds to the four digit year and two digit month with a dash in-between. - -For single point cases where the atmospheric data has hourly or half-hourly temporal resolution, all data can be in the same monthly files (`YYYY-MM.nc`). For regional cases and if the data is at coarser temporal resolution, different time interpolation algorithms will be used for solar radiation, precipitation and the remaining input data so the data needs to be split into three files and placed into three different folders (`Precip/`, `Solar/`, `TPHWL/`). You also need a domain file to go with your atmospheric data which can be the same as the land domain file created in the previous workflow if the spatial resolution of your atmospheric data is the same as of your specified domain. - -For JSC users, an example python script to create forcings for a single-point case based on hourly observations can be found under `/p/scratch/cslts/shared_data/rlmod_eCLM`. - -Simply copy the script to your directory and adapt it to your own data. - -```sh -cp /p/scratch/cslts/shared_data/rlmod_eCLM/createnetCDF_forc_hourly_input.py $HOME -``` \ No newline at end of file diff --git a/docs/users_guide/case_creation/README.md b/docs/users_guide/case_creation/README.md deleted file mode 100644 index 30300e6881..0000000000 --- a/docs/users_guide/case_creation/README.md +++ /dev/null @@ -1,34 +0,0 @@ -# Creating a custom case - -This workflow will guide you through creating your own input datasets at a resolution of your choice for eCLM simulations. - -Throughout this process, you will use a range of different scripts to create the necessary files. - -```{figure} ../images/Build_custom_input.png -:height: 500px -:name: fig5 - -Overview of the work flow for the creation of custom surface datasets adapted from the CLM5.0 User's Guide. -``` -

- -This workflow is based on the following Github repository that contains all the necessary tools: https://github.com/HPSCTerrSys/eCLM_static_file_workflow. It follows the official CLM-workflow but makes a few adaptations. The basis is the clm5.0 release but there might be newer developments in the CTSM and CIME Github repositories. - -To get started, log into the JSC system and clone the repository for instance into your folder in `project1` that you created during the build of eCLM. - -```sh -cd /p/project1/projectID/user1 # replace projectID with your compute project and user1 with your username - -git clone https://github.com/HPSCTerrSys/eCLM_static_file_workflow.git -``` - -Sourcing the environment file that is contained in the repository will load all the required software modules. - -```sh -cd eCLM_static_file_workflow/ -source jsc.2023_Intel.sh -``` -You are now ready to start with the workflow. - -```{tableofcontents} -``` \ No newline at end of file diff --git a/docs/users_guide/case_examples/README.md b/docs/users_guide/case_examples/README.md index 3c98c12a5a..ea43cd6a82 100644 --- a/docs/users_guide/case_examples/README.md +++ b/docs/users_guide/case_examples/README.md @@ -1,12 +1,4 @@ # Running example cases -Always load the eCLM environment before creating a case. This only needs to be done once per terminal session. - -```sh -source $HOME/load-eclm-variables.sh -``` - -## Cases - ```{tableofcontents} -``` \ No newline at end of file +``` diff --git a/docs/users_guide/case_examples/Wuestebach.md b/docs/users_guide/case_examples/Wuestebach.md index 8b26533b08..ae9272c3ba 100644 --- a/docs/users_guide/case_examples/Wuestebach.md +++ b/docs/users_guide/case_examples/Wuestebach.md @@ -1,5 +1,8 @@ # Single-Point Wuestebach +```{warning} TODO +``` + This test case at point scale covers the Wuestebach test site that is part of the TERENO network. Wuestebach is a forest site located in the Eifel region in Germany. 
To set up eCLM and run this test case, follow the instructions below. ```{figure} ../images/wtb_bogena.png @@ -9,34 +12,13 @@ This test case at point scale covers the Wuestebach test site that is part of th Location of the Wüstebach test site within the TERENO Rur/Lower Rhine Valley observatory. Adapted from Bogena et al (2010). ``` -## 1. Copy the namelist files -For JSC users, all required namelist and input files to run this case are in the shared directory `/p/scratch/cslts/shared_data/rlmod_eCLM` - -```sh -mkdir test_cases -cp -r /p/scratch/cslts/shared_data/rlmod_eCLM/example_cases/wtb_1x1 test_cases/ -cd test_cases/wtb_1x1 -``` - -## 1. Download and extract data files (**For non JSC users**) +## 1. Download Wuestebach data files -You can download all required files through the JSC datahub. ```sh -mkdir -p test_cases/1x1_wuestebach -wget https://datapub.fz-juelich.de/slts/eclm/1x1_wuestebach.tar.gz -tar xf 1x1_wuestebach.tar.gz -C test_cases/1x1_wuestebach -``` -The repository contains two directories. The `common` directory contains some general input files necessary to run eCLM cases. The `wtb_1x1` directory contains the case specific domain and surface files as well as atmospheric forcing data and a script for namelist generation. - -```sh -# Generate namelists -cd test_cases/1x1_wuestebach -export ECLM_SHARED_DATA=$(pwd) -cd wtb_1x1 -clm5nl-gen wtb_1x1.toml - -# Validate namelists -clm5nl-check . +git clone https://icg4geo.icg.kfa-juelich.de/ExternalReposPublic/tsmp2-static-files/extpar_eclm_wuestebach_sp.git +cd extpar_eclm_wuestebach_sp/static.resources +generate_wtb_namelists.sh 1x1_wuestebach +cd 1x1_wuestebach ``` ## 2. Check the case setup @@ -55,58 +37,8 @@ cat drv_in ## 3. Run the test case -Customize the copied job script `run-eclm-job.sh` as desired. In this example, it is already customized to this test case, you should just adapt the SBATCH parameters `--account` to your compute project and `--partition` to your system. 
As Wuestebach is a single-column case, the number of processors should be set to 1 (SBATCH parameter `--ntasks-per-node=1`). - -**For non JSC users**: Create a job script in your case directory with: - -```sh -cat << EOF > run-eclm-job.sh -``` -Adapt the SBATCH parameters and then copy the following in the shell file: - -```sh -#!/usr/bin/env bash -#SBATCH --job-name=wtb_1x1 -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=1 -#SBATCH --account=jibg36 -#SBATCH --partition=batch -#SBATCH --time=1:00:00 -#SBATCH --output=logs/%j.eclm.out -#SBATCH --error=logs/%j.eclm.err - -ECLM_EXE=${eCLM_ROOT}/install/bin/eclm.exe -if [[ ! -f $ECLM_EXE || -z "$ECLM_EXE" ]]; then - echo "ERROR: eCLM executable '$ECLM_EXE' does not exist." - exit 1 -fi - -# Set PIO log files -if [[ -z $SLURM_JOB_ID || "$SLURM_JOB_ID" == " " ]]; then - LOGID=$(date +%Y-%m-%d_%H.%M.%S) -else - LOGID=$SLURM_JOB_ID -fi -mkdir -p logs timing/checkpoints -LOGDIR=$(realpath logs) -comps=(atm cpl esp glc ice lnd ocn rof wav) -for comp in ${comps[*]}; do - LOGFILE="$LOGID.comp_${comp}.log" - sed -i "s#diro.*#diro = \"$LOGDIR\"#" ${comp}_modelio.nml - sed -i "s#logfile.*#logfile = \"$LOGFILE\"#" ${comp}_modelio.nml -done - -# Run model -srun $ECLM_EXE -EOF -``` - -Then you can submit your job: - -```sh -sbatch run-eclm-job.sh +```bash +mpirun -np 1 eclm.exe ``` -To check the job status, run `sacct`. - The model run is successful if the history files (`wtb_1x1.clm2.h0.*.nc`) have been generated. 
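The success criterion above (presence of `wtb_1x1.clm2.h0.*.nc` history files) can also be checked with a small helper. This is my own sketch and not part of the case files; it only relies on the `<case>.clm2.h0.*.nc` history filename convention stated above:

```shell
# Hypothetical helper (not shipped with eCLM): count the history files a
# case run has produced, following the <case>.clm2.h0.*.nc naming convention.
history_count() {
  # If no file matches, ls fails silently and wc reports 0.
  ls "$1".clm2.h0.*.nc 2>/dev/null | wc -l
}
```

For example, `history_count wtb_1x1` run inside the case directory prints the number of history files; a value of 0 means the model run did not complete.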
diff --git a/docs/users_guide/images/Putty_X11.png b/docs/users_guide/images/Putty_X11.png deleted file mode 100644 index 138a72b495..0000000000 Binary files a/docs/users_guide/images/Putty_X11.png and /dev/null differ diff --git a/docs/users_guide/images/load_env.png b/docs/users_guide/images/load_env.png deleted file mode 100644 index fea39337de..0000000000 Binary files a/docs/users_guide/images/load_env.png and /dev/null differ diff --git a/docs/users_guide/images/winSCP.png b/docs/users_guide/images/winSCP.png deleted file mode 100644 index 36d9e5f590..0000000000 Binary files a/docs/users_guide/images/winSCP.png and /dev/null differ diff --git a/docs/users_guide/installation/README.md b/docs/users_guide/installation/README.md deleted file mode 100644 index 5599edd686..0000000000 --- a/docs/users_guide/installation/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# Installing eCLM - -The easiest way to install eCLM is through [TSMP2 build system](https://github.com/HPSCTerrSys/TSMP2). - -```sh -# Download TSMP2 -git clone https://github.com/HPSCTerrSys/TSMP2.git -cd TSMP2 - -# Build and install eCLM -./build_tsmp2.sh --eCLM -``` - diff --git a/docs/users_guide/introduction/first_tutorial.md b/docs/users_guide/introduction/first_tutorial.md new file mode 100644 index 0000000000..b8c2f8e1c2 --- /dev/null +++ b/docs/users_guide/introduction/first_tutorial.md @@ -0,0 +1,90 @@ +# First Tutorial + +```{warning} +TODO!!! +``` + +Welcome! This guide will teach you how to set up and run eCLM for the first time. Normally, eCLM is run +on an HPC cluster, and thus eCLM user guides typically rely on steps that only work on a particular HPC +environment. Not in this tutorial though: it is aimed at general users with only a personal laptop/computer. +The most important thing to learn is the basic workflow of running an eCLM simulation for the first time: + +1. Install eCLM dependencies +2. Build eCLM +3. Set up a simulation experiment +4. 
Run eCLM + +Steps 1 and 2 are the most time-consuming part. But once set up, you only need to do steps 3 and 4. + +## Prerequisites + +**This guide has been written to work on an Ubuntu system**. For Windows/Mac users, please set up a virtual +[Ubuntu 24.04 LTS] OS first through a container app (*e.g.* [Podman] or [Docker]). All steps in this guide +assume an Ubuntu system. + +**Users are also expected to be familiar with using command-line interfaces (CLI).** For GUI users, unfortunately the CLI is +usually the only way of working in an HPC environment. Consider this as preparation for using HPC! You don't have to be +a CLI wizard; for starters you just need to know how to run your local terminal/console app and what the basic commands +such as `cd`, `ls`, `pwd`, and `cat` do. If you want a refresher, check out the [beginner-friendly shell tutorial by MIT]. +It will arm you with more than enough info to go through this tutorial. + +## 1. Install eCLM dependencies + +eCLM requires CMake, a Fortran compiler, an MPI library, and NetCDF libraries. On an HPC cluster these packages +are typically installed already. For our case, we need to install them: + +```sh +# Install basic utilities
sudo apt-get install libxml2-utils cmake + +# Install Fortran and MPI compilers +sudo apt-get install gfortran openmpi-bin libopenmpi-dev + +# Install NetCDF libraries +sudo apt-get install netcdf-bin libnetcdf-dev libnetcdff-dev libpnetcdf-dev +``` + +## 2. Build eCLM + +```{hint} +**Building** in this context means the transformation of source code (e.g. the eCLM Fortran source code) +into an application binary (e.g. `eclm.exe`) which users can run. +``` + +```sh +# You can modify the eCLM install directory, or simply use the provided default. +eCLM_INSTALL_DIR=${HOME}/eCLM +mkdir -p ${eCLM_INSTALL_DIR} + +# eCLM can be easily built via the TSMP2 build system. The following step will download TSMP2. 
+git clone https://github.com/HPSCTerrSys/TSMP2.git +cd TSMP2 + +# Start the eCLM build process (takes less than 10 minutes). +export SYSTEMNAME="UBUNTU" +./build_tsmp2.sh eCLM --install-dir=${eCLM_INSTALL_DIR} +``` + +## 3. Set up a simulation experiment + +```sh +git clone https://icg4geo.icg.kfa-juelich.de/ExternalReposPublic/tsmp2-static-files/extpar_eclm_wuestebach_sp.git +cd extpar_eclm_wuestebach_sp/static.resources +generate_wtb_namelists.sh 1x1_wuestebach +``` + +## 4. Run eCLM + +```sh +cd 1x1_wuestebach +mpirun -np 1 ${eCLM_INSTALL_DIR}/bin/eclm.exe +``` + +## Next Steps + +[Podman]: https://docs.podman.io/en/latest/Tutorials.html +[Docker]: https://docs.docker.com/get-started +[VirtualBox]: https://www.virtualbox.org +[UTM]: https://mac.getutm.app +[Ubuntu 24.04 LTS]: https://hub.docker.com/_/ubuntu +[beginner-friendly shell tutorial by MIT]: https://missing.csail.mit.edu/2020/course-shell diff --git a/docs/users_guide/misc_tutorials/README.md b/docs/users_guide/misc_tutorials/README.md new file mode 100644 index 0000000000..95ccf907b3 --- /dev/null +++ b/docs/users_guide/misc_tutorials/README.md @@ -0,0 +1,7 @@ +# Miscellaneous tutorials + +Special use cases of eCLM go into this section. + + +```{tableofcontents} +``` diff --git a/docs/users_guide/introduction/perturbation.md b/docs/users_guide/misc_tutorials/perturbation.md similarity index 100% rename from docs/users_guide/introduction/perturbation.md rename to docs/users_guide/misc_tutorials/perturbation.md diff --git a/docs/users_guide/introduction/soil_hydraulic_parameters.md b/docs/users_guide/misc_tutorials/soil_hydraulic_parameters.md similarity index 100% rename from docs/users_guide/introduction/soil_hydraulic_parameters.md rename to docs/users_guide/misc_tutorials/soil_hydraulic_parameters.md
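As a small companion to step 1 of the tutorial above, a pre-flight check can confirm the required build tools are on `PATH` before `build_tsmp2.sh` is run. This is my own sketch, not part of the TSMP2 tooling; it assumes `nc-config` and `nf-config` are provided by the `libnetcdf-dev` and `libnetcdff-dev` packages installed in step 1:

```shell
# Hypothetical pre-flight check (not part of TSMP2): report any of the given
# tools that cannot be found on PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      missing="$missing $tool"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing -- rerun the apt-get commands from step 1."
    return 1
  fi
  echo "All build dependencies found."
}
```

For example, `check_tools cmake gfortran mpirun nc-config nf-config` lists any tool that still needs installing before the build is attempted.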