
Rofi spread #13

Open
anton-seaice wants to merge 8 commits into cmeps1.1.35-x from rofi_spread

Conversation

@anton-seaice
Collaborator

@anton-seaice anton-seaice commented Mar 1, 2026

This change allows for spreading of the iceberg melt field according to a climatological pattern.

A new configuration parameter, rof2ocn_ice_spread, is added: the filename of a netCDF file containing a 12-month climatological spreading pattern. If this parameter is not set, there is no change to existing behaviour (see repro test).

When a file is provided, the incoming Forr_rofi field is spread according to the supplied monthly climatological pattern.

e.g., for January:

image

and for June:

image

Water volume is conserved: the total coming from DROF matches the ficeberg input to MOM:

image

See example OM3 100km configuration:

ACCESS-NRI/access-om3-configs@3169a38
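The normalization and conservative redistribution described above can be sketched in NumPy (the PR's actual implementation is Fortran in the mediator; all array names and values below are invented for illustration):

```python
import numpy as np

# Illustrative per-cell arrays (values and names invented for this sketch)
areas   = np.array([1.0, 2.0, 1.5, 0.5])        # cell areas
lats    = np.array([-60.0, -70.0, 65.0, 75.0])  # cell latitudes
pattern = np.array([0.2, 0.8, 0.5, 0.5])        # raw spreading weights for one month
rofi_in = np.array([0.0, 3.0, 1.0, 0.0])        # incoming iceberg-melt flux (Forr_rofi)

sh = lats < 0.0  # southern-hemisphere mask

# Normalize the pattern so that the area-weighted sum is 1 in each hemisphere
for hemi in (sh, ~sh):
    pattern[hemi] /= np.sum(areas[hemi] * pattern[hemi])

# Redistribute each hemisphere's total incoming mass according to the pattern
rofi_out = np.empty_like(rofi_in)
for hemi in (sh, ~sh):
    total_mass = np.sum(areas[hemi] * rofi_in[hemi])
    rofi_out[hemi] = pattern[hemi] * total_mass

# Water volume is conserved within each hemisphere
assert np.isclose(np.sum(areas[sh] * rofi_out[sh]), np.sum(areas[sh] * rofi_in[sh]))
```

Because the pattern is normalized per hemisphere, mass is conserved per hemisphere as well as globally, which is why the review code below sums the SH and NH contributions separately.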

@anton-seaice anton-seaice self-assigned this Mar 1, 2026
@anton-seaice anton-seaice marked this pull request as ready for review March 4, 2026 03:05
@anton-seaice
Collaborator Author

Mediator timestamps are a bit strange, hence the minor difference between the mediator and MOM runoff plots:

image

@aekiss

aekiss commented Mar 10, 2026

Thanks @anton-seaice - do you think this is ready for the next 25km IAF test run?

@anton-seaice
Collaborator Author

Ready from my perspective.

  1. Do we want to separate the impact of changing the land mask at the poles (to support B-grid advection) from this change?
  2. Needs review and deployment - would take a few days at least.

Comment thread mediator/med_io_mod.F90
Collaborator Author


There isn't much flexibility in the mediator IO. I modified this so that it can read a 3D variable (x, y, time) from a netCDF file. For whatever reason, 3D information in restart files is saved as separate 2D variables, so this functionality didn't exist.
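The two netCDF layouts being contrasted here can be illustrated with plain NumPy dicts standing in for file variables (variable names below are invented, not what the mediator actually writes):

```python
import numpy as np

# Layout read by the new code path: a single 3-D variable with a time dimension.
nx, ny, nt = 4, 3, 12
data3d = np.arange(nx * ny * nt, dtype=float).reshape(nx, ny, nt)
file_3d = {"spread_pattern": data3d}

# Restart-file convention: the third dimension is split into separate 2-D
# variables with the index appended to the name.
file_2d = {f"spread_pattern{m + 1}": data3d[:, :, m] for m in range(nt)}

# Reading month m from either layout yields the same field
m = 5
field_from_3d = file_3d["spread_pattern"][:, :, m]
field_from_2d = file_2d[f"spread_pattern{m + 1}"]
assert np.array_equal(field_from_3d, field_from_2d)
```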

Comment thread mediator/med_io_mod.F90 Outdated
@anton-seaice
Collaborator Author

I did some cleanup, but I've finished with that for now.


@dougiesquire dougiesquire left a comment


Nice work @anton-seaice. Sorry for taking so long to get to this.

My main worry is the lack of time interpolation and the resulting step changes in the pattern between months. But, as we've discussed, we probably can't address that here.
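If time interpolation were added later, one common approach is linear interpolation between mid-month climatology values rather than holding each month's value constant. A sketch in NumPy (the day conventions and names are illustrative, not part of this PR):

```python
import numpy as np

# Hypothetical monthly climatology for a single cell (one value per month)
monthly = np.arange(12, dtype=float)

# Month edges and mid-month times in days since Jan 1 (365-day year)
edges = np.array([0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365], dtype=float)
mid = 0.5 * (edges[:-1] + edges[1:])

def step_value(day):
    """Current behaviour: each month uses one value, so the field jumps at month boundaries."""
    m = np.searchsorted(edges, day, side="right") - 1
    return monthly[m]

def interp_value(day):
    """Linear interpolation between mid-month values, wrapping Dec back to Jan."""
    t = np.concatenate(([mid[-1] - 365.0], mid, [mid[0] + 365.0]))
    v = np.concatenate(([monthly[-1]], monthly, [monthly[0]]))
    return float(np.interp(day, t, v))
```

With `step_value` the pattern jumps by a full month's increment at each month boundary; `interp_value` is continuous across the boundary (at the cost of no longer exactly reproducing the monthly means).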

Comment thread mediator/med_io_mod.F90
!-------------------------------------------------------------------------------
#endif

rc = ESMF_SUCCESS


Why is this needed? Is this just convention (e.g. in case someone adds an early return path or something)?

Comment on lines +544 to +568
runoff_flux => rof2ocn_spread(:,month)
! calculate sum of spreading
local_sum = 0.0_r8
do i = 1, size(runoff_flux)
if (lats(i) < 0.0_r8) then
local_sum(1) = local_sum(1) + areas(i) * runoff_flux(i)
else
local_sum(2) = local_sum(2) + areas(i) * runoff_flux(i)
end if
end do

call ESMF_VMAllreduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return

! adjust correction so that it sums to 1 in each hemisphere
do i = 1, size(runoff_flux)
if (lats(i) < 0.0_r8) then
runoff_flux(i) = runoff_flux(i) / global_sum(1)
else
runoff_flux(i) = runoff_flux(i) / global_sum(2)
end if
end do

rof2ocn_spread(:,month) = runoff_flux


runoff_flux is a pointer, so the final assignment here is not necessary.

Personal preference, but I think the use of a pointer here just adds indirection.

Suggested change
runoff_flux => rof2ocn_spread(:,month)
! calculate sum of spreading
local_sum = 0.0_r8
do i = 1, size(runoff_flux)
if (lats(i) < 0.0_r8) then
local_sum(1) = local_sum(1) + areas(i) * runoff_flux(i)
else
local_sum(2) = local_sum(2) + areas(i) * runoff_flux(i)
end if
end do
call ESMF_VMAllreduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
! adjust correction so that it sums to 1 in each hemisphere
do i = 1, size(runoff_flux)
if (lats(i) < 0.0_r8) then
runoff_flux(i) = runoff_flux(i) / global_sum(1)
else
runoff_flux(i) = runoff_flux(i) / global_sum(2)
end if
end do
rof2ocn_spread(:,month) = runoff_flux
! calculate sum of spreading
local_sum = 0.0_r8
do i = 1, size(areas)
if (lats(i) < 0.0_r8) then
local_sum(1) = local_sum(1) + areas(i) * rof2ocn_spread(i,month)
else
local_sum(2) = local_sum(2) + areas(i) * rof2ocn_spread(i,month)
end if
end do
call ESMF_VMAllreduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
! adjust correction so that it sums to 1 in each hemisphere
do i = 1, size(areas)
if (lats(i) < 0.0_r8) then
rof2ocn_spread(i,month) = rof2ocn_spread(i,month) / global_sum(1)
else
rof2ocn_spread(i,month) = rof2ocn_spread(i,month) / global_sum(2)
end if
end do

Comment on lines +662 to +682
if (maintask .and. dbug_flag > dbug_threshold) then

! calculate the new global sum (after correction), should be equal to 0
local_sum = 0.0_r8
do n = 1, size(runoff_flux)
if (lats(n) < 0.0_r8) then
local_sum(1) = local_sum(1) + areas(n) * runoff_flux(n)
else
local_sum(2) = local_sum(2) + areas(n) * runoff_flux(n)
end if
end do

call ESMF_GridCompGet(gcomp, vm=vm, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
call ESMF_VMReduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rootPet = 0, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
write(logunit,'(a)') subname//' Before correction: '//trim(field_name)
write(logunit,'(a,e27.17)') subname//' global_sh = ', global_sum(1)
write(logunit,'(a,e27.17)') subname//' global_nh = ', global_sum(2)
end if


Should this really all be inside if (maintask)?

Also, is maintask always PE 0? If it isn't, you may need to use ESMF_VMAllreduce. If it is, you could do:

Suggested change
if (maintask .and. dbug_flag > dbug_threshold) then
! calculate the new global sum (after correction), should be equal to 0
local_sum = 0.0_r8
do n = 1, size(runoff_flux)
if (lats(n) < 0.0_r8) then
local_sum(1) = local_sum(1) + areas(n) * runoff_flux(n)
else
local_sum(2) = local_sum(2) + areas(n) * runoff_flux(n)
end if
end do
call ESMF_GridCompGet(gcomp, vm=vm, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
call ESMF_VMReduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rootPet = 0, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
write(logunit,'(a)') subname//' Before correction: '//trim(field_name)
write(logunit,'(a,e27.17)') subname//' global_sh = ', global_sum(1)
write(logunit,'(a,e27.17)') subname//' global_nh = ', global_sum(2)
end if
if (dbug_flag > dbug_threshold) then
! calculate the new global sum (after correction), difference should be equal to 0
local_sum = 0.0_r8
do n = 1, size(runoff_flux)
if (lats(n) < 0.0_r8) then
local_sum(1) = local_sum(1) + areas(n) * runoff_flux(n)
else
local_sum(2) = local_sum(2) + areas(n) * runoff_flux(n)
end if
end do
call ESMF_GridCompGet(gcomp, vm=vm, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
call ESMF_VMReduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rootPet=0, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
if (maintask) then
write(logunit,'(a)') subname//' After correction: '//trim(field_name)
write(logunit,'(a,e27.17)') subname//' global_sh = ', global_sum(1)
write(logunit,'(a,e27.17)') subname//' global_nh = ', global_sum(2)
end if
end if

call ESMF_VMReduce(vm, senddata=local_sum, recvdata=global_sum, count=2, &
reduceflag=ESMF_REDUCE_SUM, rootPet = 0, rc=rc)
if (ChkErr(rc,__LINE__,u_FILE_u)) return
write(logunit,'(a)') subname//' Before correction: '//trim(field_name)


Suggested change
write(logunit,'(a)') subname//' Before correction: '//trim(field_name)
write(logunit,'(a)') subname//' After correction: '//trim(field_name)


if (maintask .and. dbug_flag > dbug_threshold) then

! calculate the new global sum (after correction), should be equal to 0


Suggested change
! calculate the new global sum (after correction), should be equal to 0
! calculate the new global sum (after correction), difference should be equal to 0

Comment thread mediator/med_io_mod.F90
rcode = pio_inq_dimlen(pioid, dimid(1), lnx)
if (rcode /= pio_noerr) then
ierr = pio_strerror(rcode, tmpstr)
call shr_sys_abort(trim(subname)//' ERROR: '//trim(tmpstr), rc=rc)


How come you call shr_sys_abort here, but shr_log_error for lny and lni below?

Comment thread mediator/med_io_mod.F90
subroutine med_io_read_init_iodesc(field, name1, pioid, iodesc, ungridded_nc, rc)

use ESMF , only : ESMF_LogWrite, ESMF_LOGMSG_INFO, ESMF_SUCCESS
use ESMF , only : ESMF_FieldBundleIsCreated, ESMF_FieldBundle, ESMF_Mesh, ESMF_DistGrid


ESMF_FieldBundle is no longer used (same for a number of other imports, but this one you removed)

Suggested change
use ESMF , only : ESMF_FieldBundleIsCreated, ESMF_FieldBundle, ESMF_Mesh, ESMF_DistGrid
use ESMF , only : ESMF_FieldBundleIsCreated, ESMF_Mesh, ESMF_DistGrid

Comment thread mediator/med_io_mod.F90
integer :: rcode, ierr
integer :: nf
integer :: k,n,l
integer :: k,n,l,m


Unused variable

Suggested change
integer :: k,n,l,m
integer :: k,n,l

Comment thread mediator/med_io_mod.F90
character(len=*) ,optional ,intent(in) :: pre ! prefix to variable name
logical ,optional ,intent(in) :: ungridded_nc
! if true : ungridded dim in fields is dimension in netcdf file,
! if false: ungridded_dim is provided in seperate variables with the index appended


Suggested change
! if false: ungridded_dim is provided in seperate variables with the index appended
! if false: ungridded_dim is provided in separate variables with the index appended

Comment thread mediator/med_io_mod.F90
Comment on lines +1669 to +1670
call ESMF_LogWrite(trim(subname)//trim(tmpstr), ESMF_LOGMSG_INFO, rc=rc)
fldptr2 = 0.0_r8


How come we don't abort here?
