3. User Guide

3.1. Numerical implementation

CICE is written in FORTRAN90 and runs on platforms using UNIX, LINUX, and other operating systems. The code is parallelized via grid decomposition with MPI or OpenMP threads and includes some optimizations for vector architectures.

A second, “external” layer of parallelization involves message passing between CICE and the flux coupler, which may be running on different processors in a distributed system. The parallelization scheme for CICE was designed so that MPI can be used for the coupling while the model itself uses MPI, OpenMP, or no parallelization internally. The internal parallelization method is set at compile time with the NTASK and THRD definitions in the compile script. Message passing between the ice model and the CESM flux coupler is accomplished with MPI, regardless of the type of internal parallelization used for CICE, although the ice model may be coupled to another system without using MPI.

3.1.1. Directory structure

The present code distribution includes make files, several scripts and some input files. The main directory is cice/, and a run directory (rundir/) is created upon initial execution of the script comp_ice. One year of atmospheric forcing data is also available from the code distribution web site (see the README file for details).

basic information

bld/ makefiles

Macros.\(\langle\)OS\(\rangle\).\(\langle\)SITE\(\rangle\).\(\langle\)machine\(\rangle\)
macro definitions for the given operating system, used by Makefile.\(\langle\)OS\(\rangle\)
Makefile.\(\langle\)OS\(\rangle\)
primary makefile for the given operating system (\(\langle\)std\(\rangle\) works for most systems)
makedep.c
C program that determines module dependencies

comp_ice
script that sets up the run directory and compiles the code

modules based on “shared” code in CESM

shr_orb_mod.F90
orbital parameterizations

documentation

cicedoc.pdf
this document
PDF/
PDF documents of numerous publications related to CICE

institution-specific modules

cice/

official driver for CICE v5 (LANL)

CICE.F90
main program
CICE_FinalMod.F90
routines for finishing and exiting a run
CICE_InitMod.F90
routines for initializing a run
CICE_RunMod.F90
main driver routines for time stepping
CICE_RunMod.F90_debug
debugging version of CICE_RunMod.F90
ice_constants.F90
physical and numerical constants and parameters

sample diagnostic output files

input_templates/

input files that may be modified for other CICE configurations

col/

column configuration files

ice_in
namelist input data (data paths depend on particular system)
gx1/

\(1^\circ\) displaced pole grid files

global_gx1.grid
\(1^\circ\) displaced pole grid (binary)
global_gx1.kmt
\(1^\circ\) land mask (binary)
ice.restart_file
pointer for restart file name
ice_in
namelist input data (data paths depend on particular system)
ice_in_v4.1
namelist input data for default CICE v4.1 configuration
iced_gx1_v5.nc
restart file used for initial condition
gx3/

\(3^\circ\) displaced pole grid files

global_gx3.grid
\(3^\circ\) displaced pole grid (binary)
global_gx3.kmt
\(3^\circ\) land mask (binary)
global_gx3.grid.nc
\(3^\circ\) displaced pole grid (netCDF)
global_gx3.kmt.nc
\(3^\circ\) land mask (netCDF)
ice.restart_file
pointer for restart file name
ice_in
namelist input data (data paths depend on particular system)
iced_gx3_v5.nc
restart file used for initial condition
convert_restarts.f90
Fortran code to convert restart files from v4.1 to v5 (4 ice layers)
run_ice.\(\langle\)OS\(\rangle\).\(\langle\)SITE\(\rangle\).\(\langle\)machine\(\rangle\)
sample script for running on the given operating system

io_binary/

binary history and restart modules

ice_history_write.F90
subroutines with binary output
ice_restart.F90
read/write binary restart files

io_netcdf/

netCDF history and restart modules

ice_history_write.F90
subroutines with netCDF output
ice_restart.F90
read/write netCDF restart files

io_pio/

parallel I/O history and restart modules

ice_history_write.F90
subroutines with netCDF output using PIO
ice_pio.F90
subroutines specific to PIO
ice_restart.F90
read/write netCDF restart files using PIO

mpi/

modules that require MPI calls

ice_boundary.F90
boundary conditions
ice_broadcast.F90
routines for broadcasting data across processors
ice_communicate.F90
routines for communicating between processors
ice_exit.F90
aborts or exits the run
ice_gather_scatter.F90
gathers/scatters data to/from one processor from/to all processors
ice_global_reductions.F90
global sums, minvals, maxvals, etc., across processors
ice_timers.F90
timing routines

serial/

same modules as in mpi/ but without MPI calls

source/

general CICE source code

handles most work associated with the aerosol tracers

handles most work associated with the age tracer

skeletal layer biogeochemistry

stability-based parameterization for calculation of turbulent ice–atmosphere fluxes

for decomposing global domain into blocks

evolves the brine height tracer

keeps track of what time it is

miscellaneous diagnostic and debugging routines

for distributing blocks across processors

decompositions, distributions and related parallel processing info

domain and block sizes

elastic-anisotropic-plastic dynamics component

elastic-viscous-plastic dynamics component

code shared by EVP and EAP dynamics

unit numbers for I/O

handles most work associated with the first-year ice area tracer

fluxes needed/produced by the model

routines to read and interpolate forcing data for stand-alone ice model runs

grid and land masks

initialization and accumulation of history output variables

history output of biogeochemistry variables

history output of form drag variables

history output of ridging variables

history output of melt pond variables

code shared by all history modules

namelist and initializations

utilities for managing ice thickness distribution

basic definitions of reals, integers, etc.

handles most work associated with the level ice area and volume tracers

mechanical redistribution component (ridging)

CESM melt pond parameterization

level-ice melt pond parameterization

topo melt pond parameterization

mixed layer ocean model

orbital parameters for Delta-Eddington shortwave parameterization

utilities for reading and writing files

driver for reading/writing restart files

code shared by all restart options

basic restoring for open boundary conditions

shortwave and albedo parameterizations

space-filling-curves distribution method

essential arrays to describe the state of the ice

routines for time stepping the major code components

zero-layer thermodynamics of [64]

multilayer thermodynamics of [7]

thermodynamic changes mostly related to ice thickness distribution

mushy-theory thermodynamics of [THB13]

code shared by all thermodynamics parameterizations

vertical growth rates and fluxes

driver for horizontal advection

horizontal advection via incremental remapping

driver for ice biogeochemistry and brine tracer motion

parameters and shared code for biogeochemistry and brine height

rundir/

execution or “run” directory created when the code is compiled using the comp_ice script (gx3)

cice
code executable
compile/
directory containing object files, etc.
grid
horizontal grid file from cice/input_templates/gx3/
ice.log.[ID]
diagnostic output file
ice_in
namelist input data from cice/input_templates/gx3/
history/iceh.[timeID].nc
output history file
kmt
land mask file from cice/input_templates/gx3/
restart/

restart directory

iced_gx3_v5.nc
initial condition from cice/input_templates/gx3/
ice.restart_file
restart pointer from cice/input_templates/gx3/
run_ice
batch run script file from cice/input_templates/

3.1.2. Grid, boundary conditions and masks

The spatial discretization is specialized for a generalized orthogonal B-grid as in [54] or [67]. The ice and snow area, volume and energy are given at the center of the cell, velocity is defined at the corners, and the internal ice stress tensor takes four different values within a grid cell; bilinear approximations are used for the stress tensor and the ice velocity across the cell, as described in [34]. This tends to avoid the grid decoupling problems associated with the B-grid. EVP is available on the C-grid through the MITgcm code distribution, http://mitgcm.org/viewvc/MITgcm/MITgcm/pkg/seaice/.

Since ice thickness and thermodynamic variables such as temperature are given in the center of each cell, the grid cells are referred to as “T cells.” We also occasionally refer to “U cells,” which are centered on the northeast corner of the corresponding T cells and have velocity in the center of each. The velocity components are aligned along grid lines.

The user has several choices of grid routines: popgrid reads grid lengths and other parameters for a nonuniform grid (including tripole and regional grids), and rectgrid creates a regular rectangular grid, including that used for the column configuration. The input files global_gx3.grid and global_gx3.kmt contain the \(3^\circ\) POP grid and land mask; global_gx1.grid and global_gx1.kmt contain the \(1^\circ\) grid and land mask. These are binary unformatted, direct access files produced on an SGI (Big Endian). If you are using an incompatible (Little Endian) architecture, choose rectangular instead of displaced_pole in ice_in, or follow procedures as for conejo (\(\langle\)OS\(\rangle\).\(\langle\)SITE\(\rangle\).\(\langle\)machine\(\rangle\) = Linux.LANL.conejo). There are netCDF versions of the gx3 grid files available.

In CESM, the sea ice model may exchange coupling fluxes using a different grid than the computational grid. This functionality is activated using the namelist variable gridcpl_file.

3.1.2.1. Grid domains and blocks

In general, the global gridded domain is nx_global \(\times\)ny_global, while the subdomains used in the block distribution are nx_block \(\times\)ny_block. The physical portion of a subdomain is indexed as [ilo:ihi, jlo:jhi], with nghost “ghost” or “halo” cells outside the domain used for boundary conditions. These parameters are illustrated in Figure 8 in one dimension. The routines global_scatter and global_gather distribute information from the global domain to the local domains and back, respectively. If MPI is not being used for grid decomposition in the ice model, these routines simply adjust the indexing on the global domain to the single, local domain index coordinates. Although we recommend that the user choose the local domains so that the global domain is evenly divided, if this is not possible then the furthest east and/or north blocks will contain nonphysical points (“padding”). These points are excluded from the computation domain and have little effect on model performance.

Figure 8 : Grid parameters for a sample one-dimensional, 20-cell global domain decomposed into four local subdomains. Each local domain has one ghost (halo) cell on each side, and the physical portion of each local domain is labeled ilo:ihi. The parameter nx_block is the total number of cells in the local domain, including ghost cells, and the same numbering system is applied to each of the four subdomains.
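
A minimal Fortran sketch of these index relations, using the one-dimensional example of Figure 8 (five physical cells per block plus one ghost cell on each side); the numbers are illustrative, not CICE defaults:

    ! Sketch only (not CICE source): index bounds of the physical portion
    ! of a block whose total size nx_block includes the ghost cells.
    program block_bounds
      implicit none
      integer, parameter :: nghost   = 1   ! halo width, as in Figure 8
      integer, parameter :: nx_block = 7   ! 5 physical cells + 2 ghost cells
      integer :: ilo, ihi
      ilo = nghost + 1          ! first physical cell
      ihi = nx_block - nghost   ! last physical cell
      print *, 'physical cells: ', ilo, ' to ', ihi   ! prints 2 to 6
    end program block_bounds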

The user chooses a block size BLCKX \(\times\)BLCKY and the number of processors NTASK in comp_ice. Parameters in the domain_nml namelist in ice_in determine how the blocks are distributed across the processors, and how the processors are distributed across the grid domain. Recommended combinations of these parameters for best performance are given in Section Performance. The script comp_ice computes the maximum number of blocks on each processor for typical Cartesian distributions, but for non-Cartesian cases MXBLCKS may need to be set in the script. The code will print this information to the log file before aborting, and the user will need to adjust MXBLCKS in comp_ice and recompile. The code will also print a warning if the maximum number of blocks is too large. Although this is not fatal, it does require excess memory.
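
As a worked example, consider the gx1 decomposition shown in Figure 9: the \(320\times 384\) global grid with BLCKX = 20 and BLCKY = 24 gives \(16\times 16 = 256\) blocks, and with NTASK = 16 a Cartesian distribution places \(256/16 = 16\) blocks on each processor, so MXBLCKS must be at least 16.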

A loop at the end of routine create_blocks in module ice_blocks.F90 will print the locations for all of the blocks on the global grid if dbug is set to be true. Likewise, a similar loop at the end of routine create_local_block_ids in module ice_distribution.F90 will print the processor and local block number for each block. With this information, the grid decomposition into processors and blocks can be ascertained. The dbug flag must be manually set in the code in each case (independently of the dbug flag in ice_in), as there may be hundreds or thousands of blocks to print and this information should be needed only rarely. This information is much easier to look at using a debugger such as Totalview.

Alternatively, a new variable is provided in the history files, blkmask, which labels the blocks in the grid decomposition according to blkmask = my_task + iblk/100.
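
For example, the second local block (iblk = 2) on processor my_task = 3 is labeled blkmask = 3.02.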

3.1.2.2. Tripole grids

The tripole grid is a device for constructing a global grid with a normal south pole and southern boundary condition, which avoids placing a physical boundary or grid singularity in the Arctic Ocean. Instead of a single north pole, it has two “poles” in the north, both located on land, with a line of grid points between them. This line of points is called the “fold,” and it is the “top row” of the physical grid. One pole is at the left-hand end of the top row, and the other is in the middle of the row. The grid is constructed by “folding” the top row, so that the left-hand half and the right-hand half of it coincide. Two choices for constructing the tripole grid are available. The one first introduced to CICE is called “U-fold”, which means that the poles and the grid cells between them are U cells on the grid. Alternatively the poles and the cells between them can be grid T cells, making a “T-fold.” Both of these options are also supported by the OPA/NEMO ocean model, which calls the U-fold an “f-fold” (because it uses the Arakawa C-grid in which U cells are on T-rows). The choice of tripole grid is given by the namelist variable ns_boundary_type, ‘tripole’ for the U-fold and ‘tripoleT’ for the T-fold grid.

In the U-fold tripole grid, the poles have U-index \({\tt nx\_global}/2\) and nx_global on the top U-row of the physical grid, and points with U-index i and \({\tt nx\_global-i}\) are coincident. Let the fold have U-row index \(n\) on the global grid; this will also be the T-row index of the T-row to the south of the fold. There are ghost (halo) T- and U-rows to the north, beyond the fold, on the logical grid. The point with index i along the ghost T-row of index \(n+1\) physically coincides with point \({\tt nx\_global}-{\tt i}+1\) on the T-row of index \(n\). The ghost U-row of index \(n+1\) physically coincides with the U-row of index \(n-1\).
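
As a small worked example, if \({\tt nx\_global}=8\), the poles lie at U-indices 4 and 8, U-points 3 and 5 along the fold coincide, and point i = 2 on the ghost T-row of index \(n+1\) coincides with point \(8-2+1=7\) on the T-row of index \(n\).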

In the T-fold tripole grid, the poles have T-index 1 and \({\tt nx\_global}/2+1\) on the top T-row of the physical grid, and points with T-index i and \({\tt nx\_global}-{\tt i}+2\) are coincident. Let the fold have T-row index \(n\) on the global grid. It is usual for the northernmost row of the physical domain to be a U-row, but in the case of the T-fold, the U-row of index \(n\) is “beyond” the fold; although it is not a ghost row, it is not physically independent, because it coincides with U-row \(n-1\), and it therefore has to be treated like a ghost row. Point i on U-row \(n\) coincides with point \({\tt nx\_global}-{\tt i}+1\) on U-row \(n-1\). There are still ghost T- and U-rows \(n+1\) to the north of U-row \(n\). Ghost T-row \(n+1\) coincides with T-row \(n-1\), and ghost U-row \(n+1\) coincides with U-row \(n-2\).

The tripole grid thus requires two special kinds of treatment for certain rows, arranged by the halo-update routines. First, within rows along the fold, coincident points must always have the same value. This is achieved by averaging them in pairs. Second, values for ghost rows and the “quasi-ghost” U-row on the T-fold grid are reflected copies of the coincident physical rows. Both operations involve the tripole buffer, which is used to assemble the data for the affected rows. Special treatment is also required in the scattering routine, and when computing global sums one of each pair of coincident points has to be excluded.

3.1.2.3. Bio-grid

The bio-grid is a vertical grid used for solving the brine height variable \(h_b\). In the future, it will also be used for discretizing the vertical transport equations of biogeochemical tracers. The bio-grid is a non-dimensional vertical grid which takes the value zero at \(h_b\) and one at the ice–ocean interface. The number of grid levels is specified during compilation in comp_ice by setting the variable NBGCLYR equal to an integer (\(n_b\)).

Ice tracers and microstructural properties defined on the bio-grid are referenced in two ways: as bgrid \(=n_b+2\) points and as igrid \(=n_b+1\) points. For both bgrid and igrid, the first and last points reference \(h_b\) and the ice–ocean interface, respectively, and so take the values \(0\) and \(1\), respectively. For bgrid, the interior points \([2, n_b+1]\) are spaced at \(1/n_b\) intervals beginning with bgrid(2) \(= 1/(2n_b)\). The igrid interior points \([2, n_b]\) are also equidistant with the same spacing, but physically coincide with points midway between those of bgrid.
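
For example, setting NBGCLYR = 3 (\(n_b=3\)) gives 5 bgrid points at 0, 1/6, 1/2, 5/6 and 1, and 4 igrid points at 0, 1/3, 2/3 and 1.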

3.1.2.4. Column configuration

A column modeling capability is available. Because of the boundary conditions and other spatial assumptions in the model, this is not a single column, but a small array of columns (minimum grid size is 5x5). However, the code is set up so that only the single, central column is used (all other columns are designated as land). The column is located near Barrow (71.35N, 156.5W). Options for choosing the column configuration are given in comp_ice (choose RES col) and in the namelist file, input_templates/col/ice_in. Here, istep0 and the initial conditions are set such that the run begins September 1 with no ice. The grid type is rectangular, dynamics are turned off (kdyn = 0) and one processor is used.
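
For reference, the key settings implied by this configuration, written as they would appear in ice_in (a sketch of the relevant lines only; see input_templates/col/ice_in for the complete namelist):

    grid_type = 'rectangular'
    kdyn      = 0
    nprocs    = 1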

History variables available for column output are ice and snow temperature, Tinz and Tsnz. These variables also include thickness category as a fourth dimension.

3.1.2.5. Boundary conditions

Much of the infrastructure used in CICE, including the boundary routines, is adopted from POP. The boundary routines perform boundary communications among processors when MPI is in use and among blocks whenever there is more than one block per processor.

Open/cyclic boundary conditions are the default in CICE; the physical domain can still be closed using the land mask. In our bipolar, displaced-pole grids, one row of grid cells along the north and south boundaries is located on land, and along east/west domain boundaries not masked by land, periodic conditions wrap the domain around the globe. CICE can be run on regional grids with open boundary conditions; except for variables describing grid lengths, non-land halo cells along the grid edge must be filled by restoring them to specified values. The namelist variable restore_ice turns this functionality on and off; the restoring timescale trestore may be used (it is also used for restoring ocean sea surface temperature in stand-alone ice runs). This implementation is only intended to provide the “hooks” for a more sophisticated treatment; the rectangular grid option can be used to test this configuration. The ‘displaced_pole’ grid option should not be used unless the regional grid contains land all along the north and south boundaries. The current form of the boundary condition routines does not allow Neumann boundary conditions, which must be set explicitly. This has been done in an unreleased branch of the code; contact Elizabeth for more information.

For exact restarts using restoring, set restart_ext = true in namelist to use the extended-grid subroutines.

On tripole grids, the order of operations used for calculating elements of the stress tensor can differ on either side of the fold, leading to round-off differences. Although restarts using the extended grid routines are exact for a given run, the solution will differ from another run in which restarts are written at different times. For this reason, explicit halo updates of the stress tensor are implemented for the tripole grid, both within the dynamics calculation and for restarts. This has not been implemented yet for tripoleT grids, pending further testing.

3.1.2.6. Masks

A land mask hm (\(M_h\)) is specified in the cell centers, with 0 representing land and 1 representing ocean cells. A corresponding mask uvm (\(M_u\)) for velocity and other corner quantities is given by

\[M_u(i,j)=\min\{M_h(l),\,l=(i,j),\,(i+1,j),\,(i,j+1),\,(i+1,j+1)\}.\]
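
In code form this is simply a minimum over the four surrounding cell-center values; a self-contained sketch (not the CICE source), with hypothetical array sizes and a single land cell for illustration:

    ! Sketch (not CICE source): U-cell mask as a minimum over the
    ! four surrounding T-cell land-mask values.
    program uvm_sketch
      implicit none
      integer, parameter :: nx = 4, ny = 4
      real :: hm(nx,ny), uvm(nx-1,ny-1)
      integer :: i, j
      hm = 1.0            ! all ocean
      hm(2,3) = 0.0       ! one land cell
      do j = 1, ny-1
        do i = 1, nx-1
          uvm(i,j) = min(hm(i,j), hm(i+1,j), hm(i,j+1), hm(i+1,j+1))
        end do
      end do
      print *, uvm
    end program uvm_sketch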

The logical masks tmask and umask (which correspond to the real masks hm and uvm, respectively) are useful in conditional statements.

In addition to the land masks, two other masks are implemented in evp_prep in order to reduce the dynamics component’s work on a global grid. At each time step the logical masks ice_tmask and ice_umask are determined from the current ice extent, such that they have the value “true” wherever ice exists. They also include a border of cells around the ice pack for numerical purposes. These masks are used in the dynamics component to prevent unnecessary calculations on grid points where there is no ice. They are not used in the thermodynamics component, so that ice may form in previously ice-free cells. Like the land masks hm and uvm, the ice extent masks ice_tmask and ice_umask are for T cells and U cells, respectively.

Improved parallel performance may result from utilizing halo masks for boundary updates of the full ice state, incremental remapping transport, or for EVP or EAP dynamics. These options are accessed through the logical namelist flags maskhalo_bound, maskhalo_remap, and maskhalo_dyn, respectively. Only the halo cells containing needed information are communicated.

Two additional masks are created for the user’s convenience: lmask_n and lmask_s can be used to compute or write data only for the northern or southern hemispheres, respectively. Special constants (spval and spval_dbl, each equal to \(10^{30}\)) are used to indicate land points in the history files and diagnostics.

3.1.3. Test configurations

3.1.4. Initialization and coupling

The ice model’s parameters and variables are initialized in several steps. Many constants and physical parameters are set in ice_constants.F90. Namelist variables (Table of namelist options), whose values can be altered at run time, are handled in input_data and other initialization routines. These variables are given default values in the code, which may then be changed when the input file ice_in is read. Other physical constants, numerical parameters, and variables are first set in initialization routines for each ice model component or module. Then, if the ice model is being restarted from a previous run, core variables are read and reinitialized in restartfile, while tracer variables needed for specific configurations are read in separate restart routines associated with each tracer or specialized parameterization. Finally, albedo and other quantities dependent on the initial ice state are set. Some of these parameters will be described in more detail in Table of namelist options.

The restart files supplied with the code release include the core variables on the default configuration, that is, with seven vertical layers and the ice thickness distribution defined by kcatbound = 0. Restart information for some tracers is also included in the netCDF restart files.

Three namelist variables control model initialization, ice_ic, runtype, and restart, as described in Table 4. It is possible to do an initial run from a file filename in two ways: (1) set runtype = ‘initial’, restart = true and ice_ic = filename, or (2) runtype = ‘continue’ and pointer_file = ./restart/ice.restart_file where ./restart/ice.restart_file contains the line “./restart/[filename]”. The first option is convenient when repeatedly starting from a given file when subsequent restart files have been written. With this arrangement, the tracer restart flags can be set to true or false, depending on whether the tracer restart data exist. With the second option, tracer restart flags are set to ‘continue’ for all active tracers.

An additional namelist option, restart_ext, specifies whether halo cells are included in the restart files. This option is useful for tripole and regional grids, but cannot be used with PIO.

MPI is initialized in init_communicate for both coupled and stand-alone MPI runs. The ice component communicates with a flux coupler or other climate components via external routines that handle the variables listed in Table 1. For stand-alone runs, routines in ice_forcing.F90 read and interpolate data from files, and are intended merely to provide guidance for the user to write his or her own routines. Whether the code is to be run in stand-alone or coupled mode is determined at compile time, as described below.

Table 4 : Ice initial state resulting from combinations of ice_ic, runtype and restart. \(^a\)If false, restart is reset to true. \(^b\)restart is reset to false. \(^c\)ice_ic is reset to ‘none.’

ice_ic \ runtype/restart | initial/false | initial/true | continue/true (or false\(^a\))
none | no ice | no ice\(^b\) | restart using pointer_file
default | SST/latitude dependent | SST/latitude dependent\(^b\) | restart using pointer_file
filename | no ice\(^c\) | start from filename | restart using pointer_file
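
For example, option (1) above would appear in ice_in roughly as follows (a sketch; the file shown is the gx3 initial condition supplied with the distribution):

    runtype = 'initial'
    restart = .true.
    ice_ic  = './restart/iced_gx3_v5.nc'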

3.1.5. Choosing an appropriate time step

The time step is chosen based on stability of the transport component (both horizontal and in thickness space) and on resolution of the physical forcing. CICE allows the dynamics, advection and ridging portion of the code to be run with a shorter timestep, \(\Delta t_{dyn}\) (dt_dyn), than the thermodynamics timestep \(\Delta t\) (dt). In this case, dt and the integer ndtd are specified, and dt_dyn = dt/ndtd.

A conservative estimate of the horizontal transport time step bound, or CFL condition, under remapping yields

\[\Delta t_{dyn} < {\min\left(\Delta x, \Delta y\right)\over 2\max\left(u, v\right)}.\]

Numerical estimates for this bound for several POP grids, assuming \(\max(u, v)=0.5\) m/s, are as follows:

grid label | N pole singularity | dimensions | min \(\sqrt{\Delta x\cdot\Delta y}\) | max \(\Delta t_{dyn}\)
gx3 | Greenland | \(100\times 116\) | \(39\times 10^3\) m | 10.8 hr
gx1 | Greenland | \(320\times 384\) | \(18\times 10^3\) m | 5.0 hr
p4 | Canada | \(900\times 600\) | \(6.5\times 10^3\) m | 1.8 hr
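
The gx3 entry, for example, follows directly from the bound above: \(\Delta t_{dyn} < 39\times 10^3\ {\rm m}\,/\,(2\times 0.5\ {\rm m/s}) = 3.9\times 10^4\) s \(\approx 10.8\) hr.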

As discussed in section Mechanical redistribution and [47], the maximum time step in practice is usually determined by the time scale for large changes in the ice strength (which depends in part on wind strength). Using the strength parameterization of [61], as in Equation (101), limits the time step to \(\sim\)30 minutes for the old ridging scheme (krdg_partic = 0), and to \(\sim\)2 hours for the new scheme (krdg_partic = 1), assuming \(\Delta x\) = 10 km. Practical limits may be somewhat less, depending on the strength of the atmospheric winds.

Transport in thickness space imposes a similar restraint on the time step, given by the ice growth/melt rate and the smallest range of thickness among the categories, \(\Delta t<\min(\Delta H)/2\max(f)\), where \(\Delta H\) is the distance between category boundaries and \(f\) is the thermodynamic growth rate. For the 5-category ice thickness distribution used as the default in this distribution, this is not a stringent limitation: \(\Delta t < 19.4\) hr, assuming \(\max(f) = 40\) cm/day.

In the classic EVP or EAP approach (kdyn = 1 or 2, revised_evp = false), the dynamics component is subcycled ndte (\(N\)) times per dynamics time step so that the elastic waves essentially disappear before the next time step. The subcycling time step (\(\Delta t_e\)) is thus

\[dte = dt\_dyn/ndte.\]

A second parameter, \(E_\circ\) (eyc), defines the elastic wave damping timescale \(T\), described in Section Dynamics, as eyc* dt_dyn. The forcing terms are not updated during the subcycling. Given the small step (dte) at which the EVP dynamics model is subcycled, the elastic parameter \(E\) is also limited by stability constraints, as discussed in [33]. Linear stability analysis for the dynamics component shows that the numerical method is stable as long as the subcycling time step \(\Delta t_e\) sufficiently resolves the damping timescale \(T\). For the stability analysis we had to make several simplifications of the problem; hence the location of the boundary between stable and unstable regions is merely an estimate. In practice, the ratio \(\Delta t_e ~:~ T ~:~ \Delta t\)  = 1 : 40 : 120 provides both stability and acceptable efficiency for time steps (\(\Delta t\)) on the order of 1 hour.
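
As a worked example, with \(\Delta t = 3600\) s, ndtd = 1 and ndte = 120, the subcycling step is \(\Delta t_e = 3600/120 = 30\) s, and the recommended ratio gives a damping timescale \(T = 40\,\Delta t_e = 1200\) s, corresponding to eyc \(= T/\Delta t_{dyn} = 1/3\).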

For the revised EVP approach (kdyn = 1, revised_evp = true), the relaxation parameter arlx1i effectively sets the damping timescale in the problem, and brlx represents the effective subcycling [8]. In practice the parameters \(S_e>0.5\) and \(\xi<1\) are set, along with an estimate of the ice strength per unit mass, and the damping and subcycling parameters are then calculated. With the addition of the revised EVP approach to CICE, the code now uses these parameters internally for both classic and revised EVP configurations (see Section Revised approach).

Note that only \(T\) and \(\Delta t_e\) figure into the stability of the dynamics component; \(\Delta t\) does not. Although the time step may not be tightly limited by stability considerations, large time steps (e.g., \(\Delta t=1\) day, given daily forcing) do not produce accurate results in the dynamics component. The reasons for this error are discussed in [33]; see [37] for its practical effects. The thermodynamics component is stable for any time step, as long as the surface temperature \(T_{sfc}\) is computed internally. The numerical constraint on the thermodynamics time step is associated with the transport scheme rather than the thermodynamic solver.

3.1.6. Model output

3.1.6.1. History files

Model output data is averaged over the period(s) given by histfreq and histfreq_n, and written to binary or netCDF files prepended by history_file in ice_in. That is, if history_file = ‘iceh’ then the filenames will have the form iceh.[timeID].nc or iceh.[timeID].da, depending on the output file format chosen in comp_ice (set IO_TYPE). The netCDF history files are CF-compliant; header information for data contained in the netCDF files is displayed with the command ncdump -h filename.nc. Parallel netCDF output is available using the PIO library; the attribute io_flavor distinguishes output files written with PIO from those written with standard netCDF. With binary files, a separate header file is written with equivalent information. Standard fields are output according to settings in the icefields_nml namelist in ice_in. The user may add (or subtract) variables not already available in the namelist by following the instructions in section History fields.

With this release, the history module has been divided into several modules based on the desired formatting and on the variables themselves. Parameters, variables and routines needed by multiple modules are in ice_history_shared.F90, while the primary routines for initializing and accumulating all of the history variables are in ice_history.F90. These routines call format-specific code in the io_binary, io_netcdf and io_pio directories. History variables specific to certain components or parameterizations are collected in their own history modules (ice_history_bgc.F90, ice_history_drag.F90, ice_history_mechred.F90, ice_history_pond.F90).

The history modules allow output at different frequencies. Five output frequencies (1, h, d, m, y) are available simultaneously during a run. The same variable can be output at different frequencies (say daily and monthly) via its namelist flag, f_\(\langle\)var\(\rangle\), which is now a character string corresponding to histfreq or ‘x’ for none. (Grid variable flags are still logicals, since they are written to all files, no matter what the frequency is.) If there are no namelist flags with a given histfreq value, or if an element of histfreq_n is 0, then no file will be written at that frequency. The output period can be discerned from the filenames.

For example, in namelist:

`histfreq` = ’1’, ’h’, ’d’, ’m’, ’y’
`histfreq_n` = 1, 6, 0, 1, 1
`f_hi` = ’1’
`f_hs` = ’h’
`f_Tsfc` = ’d’
`f_aice` = ’m’
`f_meltb` = ’mh’
`f_iage` = ’x’

Here, hi will be written to a file on every timestep, hs will be written once every 6 hours, aice once a month, meltb once a month AND once every 6 hours, and Tsfc and iage will not be written.

From an efficiency standpoint, it is best to set unused frequencies in histfreq to ‘x’. Having output at all 5 frequencies takes nearly 5 times as long as for a single frequency. If you only want monthly output, the most efficient setting is histfreq = ’m’,’x’,’x’,’x’,’x’. The code counts the number of desired streams (nstreams) based on histfreq.

The history variable names must be unique for netcdf, so in cases where a variable is written at more than one frequency, the variable name is appended with the frequency in files after the first one. In the example above, meltb is called meltb in the monthly file (for backward compatibility with the default configuration) and meltb_h in the 6-hourly file.

Using the same frequency twice in histfreq will have unexpected consequences and currently will cause the code to abort. It is not possible at the moment to output averages once a month and also once every 3 months, for example.

If write_ic is set to true in ice_in, a snapshot of the same set of history fields at the start of the run will be written to the history directory in iceh_ic.[timeID].nc(da). Several history variables are hard-coded for instantaneous output regardless of the averaging flag, at the frequency given by their namelist flag.

The normalized principal components of internal ice stress are computed in principal_stress and written to the history file. This calculation is not necessary for the simulation; principal stresses are merely computed for diagnostic purposes and included here for the user’s convenience.

Several history variables are available in two forms, a value representing an average over the sea ice fraction of the grid cell, and another that is multiplied by \(a_i\), representing an average over the grid cell area. Our naming convention attaches the suffix “_ai” to the grid-cell-mean variable names.

3.1.6.2. Diagnostic files

Like histfreq, the parameter diagfreq can be used to regulate how often output is written to a log file. The log file unit to which diagnostic output is written is set in ice_fileunits.F90. If diag_type = ‘stdout’, then it is written to standard out (or to ice.log.[ID] if you redirect standard out as in run_ice); otherwise it is written to the file given by diag_file. In addition to the standard diagnostic output (maximum area-averaged thickness, velocity, average albedo, total ice area, and total ice and snow volumes), the namelist options print_points and print_global cause additional diagnostic information to be computed and written. print_global outputs global sums that are useful for checking global conservation of mass and energy. print_points writes data for two specific grid points. Currently, one point is near the North Pole and the other is in the Weddell Sea; these may be changed in ice_in.

Timers are declared and initialized in ice_timers.F90, and the code to be timed is wrapped with calls to ice_timer_start and ice_timer_stop. Finally, ice_timer_print writes the results to the log file. The optional “stats” argument (true/false) prints additional statistics. Calling ice_timer_print_all prints all of the timings at once, rather than having to call each individually. Currently, the timers are set up as in Table 5. Section Timers contains instructions for adding timers.

The timings provided by these timers are not mutually exclusive. For example, the column timer (5) includes the timings from 6–10, and subroutine bound (timer 15) is called from many different places in the code, including the dynamics and advection routines.

The timers use MPI_WTIME for parallel runs and the F90 intrinsic system_clock for single-processor runs.

Table 5 : CICE timers

Index | Label | Description
1 | Total | the entire run
2 | Step | total minus initialization and exit
3 | Dynamics | EVP
4 | Advection | horizontal transport
5 | Column | all vertical (column) processes
6 | Thermo | vertical thermodynamics
7 | Shortwave | SW radiation and albedo
8 | Meltponds | melt ponds
9 | Ridging | mechanical redistribution
10 | Cat Conv | transport in thickness space
11 | Coupling | sending/receiving coupler messages
12 | ReadWrite | reading/writing files
13 | Diags | diagnostics (log file)
14 | History | history output
15 | Bound | boundary conditions and subdomain communications
16 | BGC | biogeochemistry

3.1.6.3. Restart files

CICE now provides restart data in binary unformatted or netCDF formats, via the IO_TYPE flag in comp_ice and namelist variable restart_format. Restart and history files must use the same format. As with the history output, there is also an option for writing parallel restart files using PIO.

The restart files created by CICE contain all of the variables needed for a full, exact restart. The filename begins with the character string ‘iced.’, and the restart dump frequency is given by the namelist variables dumpfreq and dumpfreq_n. The pointer to the filename from which the restart data is to be read for a continuation run is set in pointer_file. The code assumes that auxiliary binary tracer restart files will be identified using the same pointer and file name prefix, but with an additional character string in the file name that is associated with each tracer set. All variables are included in netCDF restart files.

Additional namelist flags provide further control of restart behavior. dump_last = true causes a set of restart files to be written at the end of a run when it is otherwise not scheduled to occur. The flag use_restart_time enables the user to choose to use the model date provided in the restart files. If use_restart_time = false then the initial model date stamp is determined from the namelist parameters. lcdf64 = true sets 64-bit netCDF output, allowing larger file sizes with netCDF version 3.
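
A sketch of how these flags might appear together in ice_in (the values shown are purely illustrative):

    dump_last        = .true.
    use_restart_time = .false.
    lcdf64           = .true.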

Routines for gathering, scattering and (unformatted) reading and writing of the “extended” global grid, including the physical domain and ghost (halo) cells around the outer edges, allow exact restarts on regional grids with open boundary conditions, and they will also simplify restarts on the various tripole grids. They are accessed by setting restart_ext = true in namelist. Extended grid restarts are not available when using PIO; in this case extra halo update calls fill ghost cells for tripole grids (do not use PIO for regional grids).

Two restart files are included with the CICE v5 code distribution, for the gx3 and gx1 grids. They were created using the default model configuration (settings as in comp_ice and ice_in), but initialized with no ice. The gx3 case was run for 1 year using the 1997 forcing data provided with the code. The gx1 case was run for 20 years, so that the date of restart in the file is 1978-01-01. Note that the restart dates provided in the restart files can be overridden using the namelist variables use_restart_time, year_init and istep0. The forcing time can also be overridden using fyear_init.

Several changes in CICE v5 have made restarting from v4.1 restart files difficult. First, the ice and snow enthalpy state variables are now carried as tracers instead of separate arrays, and salinity has been added as a necessary restart field. Second, the default number of ice layers has been increased from 4 to 7. Third, netCDF format is now used for all I/O; it is no longer possible to have history output as netCDF and restart output in binary format. However, some facilities are included with CICE v5 for converting v4.1 restart files to the new file structure and format, provided that the same number of ice layers and basic physics packages will be used for the new runs. See Section Restarts for details.

3.2. Execution procedures

To compile and execute the code: in the source directory,

  1. Download the forcing data used for testing from the CICE-Consortium github page, https://github.com/CICE-Consortium .

  2. Create Macros.* and run_ice.* files for your particular platform, if they do not already exist (type ‘uname -s’ at the prompt to get \(\langle\)OS\(\rangle\)).

  3. Alter directories in the script comp_ice.

  4. Run comp_ice to set up the run directory and make the executable ‘cice’.

  5. To clean the compile directory and start fresh, simply execute ‘/bin/rm -rf compile’ from the run directory.

In the run directory,

  1. Alter atm_data_dir and ocn_data_dir in the namelist file ice_in.
  2. Alter the script run_ice for your system.
  3. Execute run_ice.

If this fails, see Section Initial setup.

This procedure creates the output log file ice.log.[ID], and if npt is long enough compared with dumpfreq and histfreq, dump files iced.[timeID] and netCDF (or binary) history output files iceh_[timeID].nc (.da). Using the \(3^\circ\) grid, the log file should be similar to ice.log.\(\langle\)OS\(\rangle\), provided for the user’s convenience. These log files were created using MPI on 4 processors on the \(3^\circ\) grid.

Several options are available in comp_ice for configuring the run, shown in Table 6. If NTASK = 1, then the serial/ code is used, otherwise the code in mpi/ is used. Loops over blocks have been threaded throughout the code, so that their work will be divided among OMP_NUM_THREADS if THRD is ‘yes.’ Note that the value of NTASK in comp_ice must equal the value of nprocs in ice_in. Generally the value of MXBLCKS computed by comp_ice is sufficient, but sometimes it will need to be set explicitly, as discussed in Section Performance. To conserve memory, match the tracer requests in comp_ice with those in ice_in. CESM uses 3 aerosol tracers; the number given in comp_ice must be less than or equal to the maximum allowed in ice_domain_size.F90.

The scripts define a number of environment variables, mostly as directories that you will need to edit for your own environment. $SYSTEM_USERDIR, which on machines at Oak Ridge National Laboratory points automatically to scratch space, is intended to be a disk where the run directory resides. SHRDIR is a path to the CESM shared code.

Table 6 : Configuration options available in comp_ice.

variable | options | description
RES | col, gx3, gx1 | grid resolution
NTASK | (integer) | total number of processors
BLCKX | (integer) | number of grid cells on each block in the x-direction \(^\dagger\)
BLCKY | (integer) | number of grid cells on each block in the y-direction \(^\dagger\)
MXBLCKS | (integer) | maximum number of blocks per processor
NICELYR | (integer) | number of vertical layers in the ice
NSNWLYR | (integer) | number of vertical layers in the snow
NICECAT | (integer) | number of ice thickness categories
TRAGE | 0 or 1 | set to 1 for ice age tracer
TRFY | 0 or 1 | set to 1 for first-year ice age tracer
TRLVL | 0 or 1 | set to 1 for level and deformed ice tracers
TRPND | 0 or 1 | set to 1 for melt pond tracers
NTRAERO | (integer) | number of aerosol tracers
TRBRINE | 0 or 1 | set to 1 for brine height tracer
NBGCLYR | (integer) | number of vertical layers for biogeochemical transport
IO_TYPE | none/netcdf/pio | use ‘none’ if the netCDF library is unavailable, ‘pio’ for PIO
DITTO | yes/no | for reproducible diagnostics
BARRIERS | yes/no | flushes MPI buffers during global scatters and gathers
THRD | yes/no | set to yes for OpenMP threaded parallelism
OMP_NUM_THREADS | (integer) | the number of OpenMP threads requested
NUMIN | (integer) | smallest unit number assigned to CICE files
NUMAX | (integer) | largest unit number assigned to CICE files

The ‘reproducible’ option (DITTO) makes diagnostics bit-for-bit when varying the number of processors. (The simulation results are bit-for-bit regardless, because they do not require global sums or max/mins as do the diagnostics.) This was done mainly by increasing the precision for the global reduction calculations, except for regular double-precision (r8) calculations involving MPI; MPI can not handle MPI_REAL16 on some architectures. Instead, these cases perform sums or max/min calculations across the global block structure, so that the results are bit-for-bit as long as the block distribution is the same (the number of processors can be different).

A more flexible option is available for double-precision MPI calculations, using the namelist variable bfbflag. When true, this flag produces bit-for-bit identical diagnostics with different tasks, threads, blocks and grid decompositions.

CICE namelist variables available for changes after compile time appear in ice.log.* with values read from the file ice_in; their definitions are given in Section Index of primary variables and parameters. For example, to run for a different length of time, say three days, set npt = 72 in ice_in. At present, the user supplies the time step dt, the number of dynamics/advection/ridging subcycles ndtd, and for classic EVP, the number of EVP subcycles ndte; dte is then calculated in subroutine init_evp. The primary reason for doing it this way is to ensure that ndte is an integer. (This is done differently for revised_evp = true; see Section Dynamics.)

To restart from a previous run, set restart = true in ice_in. There are two ways of restarting from a given file. The restart pointer file ice.restart_file (created by the previous run) contains the name of the last written data file (iced.[timeID]). Alternatively, a filename can be assigned to ice_ic in ice_in. Consult Section Initialization and coupling for more details. Restarts are exact for MPI or single processor runs.

3.2.1. Scripts

3.2.2. Directories

3.2.3. Local modifications

3.2.4. Forcing data

3.3. Performance

Namelist options (domain_nml) provide considerable flexibility for finding the most efficient processor and block configuration. Some of these choices are illustrated in Figure 9. processor_shape chooses between tall, thin processor domains (slenderX1 or slenderX2, often better for sea ice simulations on global grids where nearly all of the work is at the top and bottom of the grid with little to do in between) and close-to-square domains, which maximize the volume-to-surface ratio (and therefore the ratio of on-processor computation to message passing, if there were ice in every grid cell). In cases where the number of processors is not a perfect square (4, 9, 16…), the processor_shape namelist variable allows the user to choose how the processors are arranged. Here again, it is better in the sea ice model to have more processors in x than in y, for example, 8 processors arranged 4x2 (square-ice) rather than 2x4 (square-pop). The latter option is offered for direct-communication compatibility with POP, in which this is the default.

The user provides the total number of processors and the block dimensions in the setup script (comp_ice). When moving toward smaller, more numerous blocks, there is a point where the code becomes less efficient; blocks should not have fewer than about 20 grid cells in each direction. Squarish blocks optimize the volume-to-surface ratio for communications.

Figure 9 : Distribution of 256 blocks across 16 processors, represented by colors, on the gx1 grid: (a) cartesian, slenderX1, (b) cartesian, slenderX2, (c) cartesian, square-ice (square-pop is equivalent here), (d) rake with block weighting, (e) rake with latitude weighting, (f) spacecurve. Each block consists of 20x24 grid cells, and white blocks consist entirely of land cells.

The distribution_type options allow standard Cartesian distribution of blocks, redistribution via a ‘rake’ algorithm for improved load balancing across processors, and redistribution based on space-filling curves. There are also three additional distribution types (‘roundrobin,’ ‘sectrobin,’ ‘sectcart’) that improve land-block elimination rates and also allow more flexibility in the number of processors used. The rake and space-filling curve algorithms are primarily helpful when using squarish processor domains where some processors (located near the equator) would otherwise have little work to do. Processor domains need not be rectangular, however.

distribution_wght chooses how the work-per-block estimates are weighted. The ‘block’ option is the default in POP, which uses a lot of array syntax requiring calculations over entire blocks (whether or not land is present), and is provided here for direct-communication compatibility with POP. The ‘latitude’ option weights the blocks based on latitude and the number of ocean grid cells they contain.

The rake distribution type is initialized as a standard, Cartesian distribution. Using the work-per-block estimates, blocks are “raked” onto neighboring processors as needed to improve load balancing characteristics among processors, first in the x direction and then in y.

Space-filling curves reduce a multi-dimensional space (2D, in our case) to one dimension. The curve is composed of a string of blocks that is snipped into sections, again based on the work per processor, and each piece is placed on a processor for optimal load balancing. This option requires that the block size be chosen such that the number of blocks in the x direction equals the number of blocks in the y direction, and that number must be factorable as \(2^n 3^m 5^p\) where \(n, m, p\) are integers. For example, a 16x16 array of blocks, each containing 20x24 grid cells, fills the gx1 grid (\(n=4, m=p=0\)). If either of these conditions is not met, a Cartesian distribution is used instead.

While the Cartesian distribution groups sets of blocks by processor, the ‘roundrobin’ distribution loops through the blocks and processors together, putting one block on each processor until the blocks are gone. This provides good load balancing but poor communication characteristics due to the number of neighbors and the amount of data needed to communicate. The ‘sectrobin’ and ‘sectcart’ algorithms loop similarly, but put groups of blocks on each processor to improve the communication characteristics. In the ‘sectcart’ case, the domain is divided into two (east-west) halves and the loops are done over each, sequentially. Figure 10 provides an overview of the pros and cons for the distribution types.

Figure 10 : Scorecard for block distribution choices in CICE, courtesy T. Craig. For more information, see http://www.cesm.ucar.edu/events/ws.2012/Presentations/SEWG2/craig.pdf

The maskhalo options in the namelist improve performance by removing unnecessary halo communications where there is no ice. There is some overhead in setting up the halo masks, which is done during the timestepping procedure as the ice area changes, but this option usually improves timings even for relatively small processor counts. T. Craig has found that performance improved by more than 20% for combinations of updated decompositions and masked haloes, in CESM’s version of CICE. A practical guide for choosing a CICE grid decomposition, based on experience in CESM, is available: http://oceans11.lanl.gov/drupal/CICE/DecompositionGuide

Throughout the code, (i, j) loops have been combined into a single loop, often over just ocean cells or those containing sea ice. This was done to reduce unnecessary operations and to improve vector performance.

Figure 11 illustrates the computational expense of various options, relative to the total time (excluding initialization) of a 7-layer configuration using BL99 thermodynamics, EVP dynamics, and the ‘ccsm3’ shortwave parameterization on the gx1 grid, run for one year from a no-ice initial condition. The block distribution consisted of 20 \(\times\) 192 blocks spread over 32 processors (‘slenderX2’) with no threads and -O2 optimization. Timings varied by about \(\pm3\)% in identically configured runs due to machine load. Extra time required for tracers has two components, that needed to carry the tracer itself (advection, category conversions) and that needed for the calculations associated with the particular tracer. The age tracers (FY and iage) require very little extra calculation, so their timings represent essentially the time needed just to carry an extra tracer. The topo melt pond scheme is slightly faster than the others because it calculates pond area and volume once per grid cell, while the others calculate it for each thickness category.

Figure 11 : Change in ‘TimeLoop’ timings from the 7-layer configuration using BL99 thermodynamics and EVP dynamics. Timings were made on a nondedicated machine, with variations of about \(\pm3\)% in identically configured runs (light grey). Darker grey indicates the time needed for extra required options; the Delta-Eddington radiation scheme is required for all melt pond schemes and the aerosol tracers, and the level-ice pond parameterization additionally requires the level-ice tracers.

3.4. Adding things

3.4.1. Timers

Timing any section of code, or multiple sections, consists of defining the timer and then wrapping the code with start and stop commands for that timer. Printing of the timer output is done simultaneously for all timers. To add a timer, first declare it (timer_[tmr]) at the top of ice_timers.F90 (we recommend doing this in both the mpi/ and serial/ directories), then add a call to get_ice_timer in the subroutine init_ice_timers. In the module containing the code to be timed, call ice_timer_start(timer_[tmr]) at the beginning of the section to be timed, and a similar call to ice_timer_stop at the end. A use ice_timers statement may need to be added to the subroutine being modified. Be careful not to have one command outside of a loop and the other command inside. Timers can be run for individual blocks, if desired, by including the block ID in the timer calls.
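
In outline, the start/stop wrapping looks like the following sketch (timer_xyz is a hypothetical name; it must first be declared in ice_timers.F90 and registered through get_ice_timer in init_ice_timers, with the argument list copied from the existing calls there):

    use ice_timers      ! provides ice_timer_start, ice_timer_stop and the timer indices
    ...
    call ice_timer_start(timer_xyz)   ! beginning of the section to be timed
    ! ... code to be timed ...
    call ice_timer_stop(timer_xyz)    ! end of the section to be timed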

3.4.2. History fields

To add a variable to be printed in the history output, search for ‘example’ in ice_history_shared.F90:

  1. add a frequency flag for the new field
  2. add the flag to the namelist (here and also in ice_in)
  3. add an index number

and in ice_history.F90:

  1. broadcast the flag
  2. add a call to define_hist_field
  3. add a call to accum_hist_field

The example is for a standard, two-dimensional (horizontal) field; for other array sizes, choose another history variable with a similar shape as an example. Some history variables, especially tracers, are grouped in other files according to their purpose (bgc, melt ponds, etc.).

To add an output frequency for an existing variable, see section History files.

3.4.3. Tracers

Each optional tracer has its own module, ice_[tracer].F90, which also contains as much of the additional tracer code as possible, and for backward compatibility of binary restart files, each new tracer has its own binary restart file. We recommend that the logical namelist variable tr_[tracer] be used for all calls involving the new tracer outside of ice_[tracer].F90, in case other users do not want to use that tracer.

A number of optional tracers are available in the code, including ice age, first-year ice area, melt pond area and volume, brine height, aerosols, and level ice area and volume (from which ridged ice quantities are derived). Salinity, enthalpies, age, aerosols, level-ice volume, brine height and most melt pond quantities are volume-weighted tracers, while first-year area, pond area, level-ice area and all of the biogeochemistry tracers in this release are area-weighted tracers. In the absence of sources and sinks, the total mass of a volume-weighted tracer such as aerosol (kg) is conserved under transport in horizontal and thickness space (the mass in a given grid cell will change), whereas the aerosol concentration (kg/m) is unchanged following the motion, and in particular, the concentration is unchanged when there is surface or basal melting. The proper units for a volume-weighted mass tracer in the tracer array are kg/m.

In several places in the code, tracer computations must be performed on the conserved “tracer volume” rather than the tracer itself; for example, the conserved quantity is \(h_{pnd}a_{pnd}a_{lvl}a_{i}\), not \(h_{pnd}\). Conserved quantities are thus computed according to the tracer dependencies, and code must be included to account for new dependencies (e.g., \(a_{lvl}\) and \(a_{pnd}\) in ice_itd.F90 and ice_mechred.F90).

To add a tracer, follow these steps using one of the existing tracers as a pattern.

  1. ice_domain_size.F90: increase max_ntrcr (can also add option to comp_ice and bld/Macros.*)
  2. ice_state.F90: declare nt_[tracer] and tr_[tracer]
  3. ice_[tracer].F90: create initialization, physics, restart routines
  4. ice_fileunits.F90: add new dump and restart file units
  5. ice_init.F90: (some of this may be done in ice_[tracer].F90 instead)
    • add new module and tr_[tracer] to list of used modules and variables
    • add logical namelist variable tr_[tracer]
    • initialize namelist variable
    • broadcast namelist variable
    • print namelist variable to diagnostic output file
    • increment number of tracers in use based on namelist input (ntrcr)
    • define tracer types (trcr_depend = 0 for ice area tracers, 1 for ice volume, 2 for snow volume, 2+nt_[tracer] for dependence on other tracers)
  6. ice_itd.F90, ice_mechred.F90: Account for new dependencies if needed.
  7. CICE_InitMod.F90: initialize tracer (includes reading restart file)
  8. CICE_RunMod.F90, ice_step_mod.F90:
    • call routine to write tracer restart data
    • call physics routines in ice_[tracer].F90 (often called from ice_step_mod.F90)
  9. ice_restart.F90: define restart variables (for binary, netCDF and PIO)
  10. ice_history_[tracer].F90: add history variables (Section History fields)
  11. ice_in: add namelist variables to tracer_nml and icefields_nml
  12. If strict conservation is necessary, add diagnostics as noted for topo ponds in Section Melt ponds.
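
The fragments below sketch steps 2 and 5 for a hypothetical tracer ‘foo’; names such as nt_foo and tr_foo are placeholders, and the surrounding code should follow one of the existing tracers.

! ice_state.F90 (step 2): tracer index and logical flag
integer (kind=int_kind), public :: nt_foo    ! hypothetical tracer index
logical (kind=log_kind), public :: tr_foo    ! namelist flag

! ice_init.F90 (step 5): add tr_foo to tracer_nml, set a default,
! broadcast it, count the tracer, and define its dependency
tr_foo = .false.                     ! default value
call broadcast_scalar(tr_foo, master_task)
if (tr_foo) then
   ntrcr = ntrcr + 1
   nt_foo = ntrcr
   trcr_depend(nt_foo) = 1           ! 0 = ice area, 1 = ice volume,
                                     ! 2 = snow volume, 2+nt_x = other tracer
endif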

3.5. Troubleshooting

Check the FAQ: http://oceans11.lanl.gov/drupal/CICE/FAQ.

3.5.1. Initial setup

The script comp_ice is configured so that the files grid, kmt, ice_in, run_ice, iced_gx3_v5.0 and ice.restart_file are NOT overwritten after the first setup. If you wish to make changes to the original files in input_templates/ rather than those in the run directory, either remove the files from the run directory before executing comp_ice or edit the script.

The code may abort during the setup phase for any number of reasons, and often the buffer containing the diagnostic output fails to print before the executable exits. The quickest way to get the diagnostic information is to run the code in an interactive shell with just the command cice for serial runs or “mpirun -np N cice” for MPI runs, where N is the appropriate number of processors (or a command appropriate for your computer’s software).

If the code fails to compile or run, or if the model configuration is changed, try the following:

  • create Macros.*, Makefile.* and run_ice.* files for your particular platform, if they do not already exist (type ‘uname -s’ at the prompt and compare the result with the file suffixes; we rename UNICOS/mp as UNICOS for simplicity).
  • modify the INCLUDE directory path and other settings for your system in the scripts, Macros.* and Makefile.* files.
  • alter directory paths, file names and the execution command as needed in run_ice and ice_in.
  • ensure that nprocs in ice_in is equal to NTASK in comp_ice.
  • ensure that the block size NXBLOCK, NYBLOCK in comp_ice is compatible with the processor_shape and other domain options in ice_in
  • if using the rake or space-filling curve algorithms for block distribution (distribution_type in ice_in) the code will abort if MXBLCKS is not large enough. The correct value is provided in the diagnostic output.
  • if starting from a restart file, ensure that kcatbound is the same as that used to create the file (kcatbound = 0 for the files included in this code distribution). Other configuration parameters, such as NICELYR, must also be consistent between runs.
  • for stand-alone runs, check that -Dcoupled is not set in the Macros.* file.
  • for coupled runs, check that -Dcoupled and other coupled-model-specific (e.g., CESM, popcice or hadgem) preprocessing options are set in the Macros.* file.
  • edit the grid size and other parameters in comp_ice.
  • remove the compile/ directory completely and recompile.

3.5.2. Restarts

CICE version 5 introduces a new model configuration that makes restarting from older simulations difficult. In particular, the number of ice categories, the category boundaries, and the number of vertical layers within each category must be the same in the restart file and in the run restarting from that file. Moreover, significant differences in the physics, such as the salinity profile, may cause the code to fail upon restart. Therefore, new model configurations may need to be started using runtype = ‘initial’. Binary restart files that were provided with CICE v4.1 were made using the BL99 thermodynamics with 4 layers and 5 thickness categories (kcatbound = 0) and therefore cannot be used for the default CICE v5 configuration (7 layers). In addition, CICE’s default restart file format is now netCDF instead of binary.

Restarting a run using runtype = ‘continue’ requires restart data for all tracers used in the new run. If tracer restart data is not available, use runtype = ‘initial’, setting ice_ic to the name of the core restart file and setting to true the namelist restart flags for each tracer that is available. The unavailable tracers will be initialized to their default settings.

On tripole grids, use restart_ext = true when using either binary or regular (non-PIO) netcdf.

Provided that the same number of ice layers (default: 4) will be used for the new runs, it is possible to convert v4.1 restart files to the new file structure and then to netCDF format. If the same physical parameterizations are used, the code should be able to execute from these files. However, if different physics is used (for instance, mushy thermo instead of BL99), the code may still fail. To convert a v4.1 restart file:

  1. Edit the code input_templates/convert_restarts.f90 for your model configuration and path names. Compile and run this code to create a binary restart file that can be read using v5. Copy the resulting file to the restart/ subdirectory in your working directory.
  2. In your working directory, turn off all tracer restart flags in ice_in and set the following (see the namelist sketch after this list):
    • runtype = ‘initial’
    • ice_ic = ‘./restart/[your binary file name]’
    • restart = .true.
    • use_restart_time = .true.
  3. In CICE_InitMod.F90, comment out the call to restartfile(ice_ic) and uncomment the call to restartfile_v4(ice_ic) immediately below it. This will read the v4.1 binary file and write a v5 netCDF file containing the same information.
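
For reference, step 2 corresponds to setup_nml entries like the following (a minimal sketch showing only the relevant lines; the restart file name is a placeholder, and the tracer restart flags are turned off separately in tracer_nml):

&setup_nml
  runtype          = 'initial'
  ice_ic           = './restart/[your binary file name]'
  restart          = .true.
  use_restart_time = .true.
/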

If restart files are taking a long time to be written serially (i.e., not using PIO), see the next section.

3.5.3. Slow execution

On some architectures, underflows (\(10^{-300}\) for example) are not flushed to zero automatically. Usually a compiler flag is available to do this, but if not, try uncommenting the block of code at the end of subroutine stress in ice_dyn_evp.F90 or ice_dyn_eap.F90. You will take a hit for the extra computations, but it will not be as bad as running with the underflows.

In some configurations, multiple calls to scatter or gather global variables may overfill MPI’s buffers, causing the code to slow down (particularly when writing large output files such as restarts). To remedy this problem, set BARRIERS yes in comp_ice. This synchronizes MPI messages, keeping the buffers in check.

3.5.4. Debugging hints

Several utilities are available that can be helpful when debugging the code. Not all of these will work everywhere in the code, due to possible conflicts in module dependencies.

debug_ice (CICE.F90)
A wrapper for print_state that is easily called from numerous points during the timestepping loop (see CICE_RunMod.F90_debug, which can be substituted for CICE_RunMod.F90).
print_state (ice_diagnostics.F90)
Print the ice state and forcing fields for a given grid cell.
dbug = true (ice_in)
Print numerous diagnostic quantities.
print_global (ice_in)
If true, compute and print numerous global sums for energy and mass balance analysis. This option can significantly degrade code efficiency.
print_points (ice_in)
If true, print numerous diagnostic quantities for two grid cells, one near the north pole and one in the Weddell Sea. This utility also provides the local grid indices and block and processor numbers (ip, jp, iblkp, mtask) for these points, which can be used in conjunction with check_step, to call print_state. These flags are set in ice_diagnostics.F90. This option can be fairly slow, due to gathering data from processors.
global_minval, global_maxval, global_sum (ice_global_reductions.F90)
Compute and print the minimum and maximum values for an individual real array, or its global sum.

3.5.5. Known bugs

  1. Fluxes sent to the CESM coupler may have incorrect values in grid cells that change from an ice-free state to having ice during the given time step, or vice versa, due to scaling by the ice area. The authors of the CESM flux coupler insist on the area scaling so that the ice and land models are treated consistently in the coupler (but note that the land area does not suddenly become zero in a grid cell, as does the ice area).
  2. With the old CCSM radiative scheme (shortwave = ‘default’ or ‘ccsm3’), a sizable fraction (more than 10%) of the total shortwave radiation is absorbed at the surface but should be penetrating into the ice interior instead. This is due to use of the aggregated, effective albedo rather than the bare ice albedo when snowpatch \(< 1\).
  3. The date-of-onset diagnostic variables, melt_onset and frz_onset, are not included in the core restart file, and therefore may be incorrect for the current year if the run is restarted after Jan 1. Also, these variables were implemented with the Arctic in mind and may be incorrect for the Antarctic.
  4. The single-processor system_clock time may give erratic results on some architectures.
  5. History files that contain time averaged data (hist_avg = true in ice_in) will be incorrect if restarting from midway through an averaging period.
  6. In stand-alone runs, restarts from the end of ycycle will not be exact.
  7. Using the same frequency twice in histfreq will have unexpected consequences and causes the code to abort.
  8. Latitude and longitude fields in the history output may be wrong when using padding.

3.5.6. Interpretation of albedos

The snow-and-ice albedo, albsni, and diagnostic albedos albice, albsno, and albpnd are merged over categories but not scaled (divided) by the total ice area. (This is a change from CICE v4.1 for albsni.) The latter three history variables represent completely bare or completely snow- or melt-pond-covered ice; that is, they do not take into account the snow or melt pond fraction (albsni does, as does the code itself during thermodynamic computations). This is to facilitate comparison with typical values in measurements or other albedo parameterizations. The melt pond albedo albpnd is only computed for the Delta-Eddington shortwave case.

With the Delta-Eddington parameterization, the albedo depends on the cosine of the zenith angle (\(\cos\varphi\), coszen) and is zero if the sun is below the horizon (\(\cos\varphi < 0\)). Therefore time-averaged albedo fields would be low if a diurnal solar cycle is used, because zero values would be included in the average for half of each 24-hour period. To rectify this, a separate counter is used for the averaging that is incremented only when \(\cos\varphi > 0\). The albedos will still be zero in the dark, polar winter hemisphere.
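
Schematically, the daylight-only averaging looks like the fragment below (illustrative variable names; the actual accumulation is done in the history code):

! Accumulate albedo and a separate daylight counter only when the sun
! is above the horizon.
if (coszen(i,j,iblk) > c0) then
   albsni_sum(i,j,iblk) = albsni_sum(i,j,iblk) + albsni(i,j,iblk)
   albcnt(i,j,iblk)     = albcnt(i,j,iblk) + c1
endif

! At output time, divide by the daylight counter instead of the number
! of time steps in the averaging interval.
if (albcnt(i,j,iblk) > puny) then
   albsni_avg(i,j,iblk) = albsni_sum(i,j,iblk) / albcnt(i,j,iblk)
endif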

3.5.7. Proliferating subprocess parameterizations

With the addition of several alternative parameterizations for sea ice processes, a number of subprocesses now appear in multiple parts of the code with differing descriptions. For instance, sea ice porosity and permeability, along with associated flushing and flooding, are calculated separately for mushy thermodynamics, topo and level-ice melt ponds, and for the brine height tracer, each employing its own equations. Likewise, the BL99 and mushy thermodynamics compute freeboard and snow–ice formation differently, and the topo and level-ice melt pond schemes both allow fresh ice to grow atop melt ponds, using slightly different formulations for Stefan freezing. These various process parameterizations will be compared and their subprocess descriptions possibly unified in the future.

3.6. Testing CICE

Version 6, August 2017. This section documents how to use the testing features developed for the CICE Consortium CICE sea ice model.

3.6.1. Individual tests and test suites

The CICE scripts support setup of both individual tests and test suites. Individual tests are run from the command line like

> create.case -t smoke -m wolf -g gx3 -p 8x2 -s diag1,run5day -testid myid

where -m designates a specific machine. Test suites are multiple tests that are specified in an input file and are started on the command line like

> create.case -ts base_suite -m wolf -testid myid

create.case with -t or -ts requires a testid to uniquely name test directories. The case directory name for a test always has the format ${machine}_${test}_${grid}_${pes}_${soptions}.${testid}

To build and run a test, the process is the same as for a case:

  • cd into the test directory
  • run cice.build
  • run cice.submit

The test results will be generated in a local file called “test_output”.

When running a test suite, the create.case command line automatically generates all the tests under a directory named ${test_suite}.${testid}. It then automatically builds and submits all tests. When the tests are complete, run the results.csh script to see the results from all the tests.

Tests are defined under configuration/scripts/tests. The tests currently supported are:
smoke
Runs the model for the default length. The length and options can be set with the -s command line option. The test passes if the model completes successfully.
restart
Runs the model for 10 days, writing a restart file at day 5 and again at day 10. It then runs the model a second time, starting from the day 5 restart and writing another restart at day 10. The test passes if both runs complete and if the restart files at day 10 from the two runs are bit-for-bit identical.

Please run ‘./create.case -h’ for additional details.

3.6.2. Additional testing options

There are several additional options on the create.case command line for testing that provide the ability to regression test and compare tests to each other.

-bd defines a baseline directory where tests can be stored for regression testing

-bg defines a version name under which the current tests can be saved for regression testing

-bc defines a version name that the current tests should be compared to for regression testing

-td provides a way to compare tests with each other

To use -bg,
> create.case -ts base_suite -m wolf -testid v1 -bg version1 -bd $SCRATCH/CICE_BASELINES
will copy all the results from the test suite to $SCRATCH/CICE_BASELINES/version1.
To use -bc,
> create.case -ts base_suite -m wolf -testid v2 -bc version1 -bd $SCRATCH/CICE_BASELINES
will compare all the results from this test suite to results saved before in $SCRATCH/CICE_BASELINES/version1.
-bc and -bg can be combined,
> create.case -ts base_suite -m wolf -testid v2 -bg version2 -bc version1 -bd $SCRATCH/CICE_BASELINES
will save the current results to $SCRATCH/CICE_BASELINES/version2 and compare the current results to results saved previously in $SCRATCH/CICE_BASELINES/version1.

-bg, -bc, and -bd are used for regression testing. There is a default -bd on each machine.

-td allows a user to compare one test result to another. For instance,
> create.case -t smoke -m wolf -g gx3 -p 8x2 -s run5day -testid t01
> create.case -t smoke -m wolf -g gx3 -p 4x2 -s run5day -testid t01 -td smoke_gx3_8x2_run5day

An additional check will be done for the second test (because of the -td argument): it will compare the output from the first test, “smoke_gx3_8x2_run5day”, to the output from its own test, “smoke_gx3_4x2_run5day”, and generate a result for that comparison. It is important that the first test completes before the second test runs. Also, the -td option works only if the testid and the machine are the same for the baseline run and the current run.

3.6.3. Test suite format

The format for the test suite file is relatively simple. It is a text file with white-space-delimited columns, like:

Table 7
#Test Grid PEs Sets BFB-compare
smoke gx3 8x2 diag1,run5day  
smoke gx3 8x2 diag24,run1year,medium  
smoke gx3 4x1 debug,diag1,run5day  
smoke gx3 8x2 debug,diag1,run5day  
smoke gx3 4x2 diag1,run5day smoke_gx3_8x2_diag1_run5day
smoke gx3 4x1 diag1,run5day,thread smoke_gx3_8x2_diag1_run5day
smoke gx3 4x1 diag1,run5day smoke_gx3_4x1_diag1_run5day_thread
restart gx3 8x1    
restart gx3 4x2 debug  

The first column is the test name, the second the grid, the third the pe count, the fourth column is the -s options and the fifth column is the -td argument. The fourth and fifth columns are optional. The argument to -ts defines which filename to choose and that argument can contain a path. create.case will also look for the filename in configuration/scripts/tests where some preset test suites are defined.

3.6.4. Example Tests (Quickstart)

3.6.4.1. To generate a baseline dataset for a test case

./create.case -t smoke -m wolf -bg cicev6.0.0 -testid t00

cd wolf_smoke_gx3_4x1.t00

./cice.build

./cice.submit

# After job finishes, check output

cat test_output

3.6.4.2. To run a test case and compare to a baseline dataset

./create.case -t smoke -m wolf -bc cicev6.0.0 -testid t01

cd wolf_smoke_gx3_4x1.t01

./cice.build

./cice.submit

# After job finishes, check output

cat test_output

3.6.4.3. To run a test suite to generate baseline data

./create.case -m wolf -ts base_suite -testid t02 -bg cicev6.0.0bs

cd base_suite.t02

# Once all jobs finish, concatenate all output

./results.csh # All test results will be stored in results.log

# To plot a timeseries of “total ice extent”, “total ice area”, and “total ice volume”

./timeseries.csh <directory>

ls *.png

3.6.4.4. To run a test suite to compare to baseline data

./create.case -m wolf -ts base_suite -testid t03 -bc cicev6.0.0bs

cd base_suite.t03

# Once all jobs finish, concatenate all output

./results.csh # All test results will be stored in results.log

# To plot a timeseries of “total ice extent”, “total ice area”, and “total ice volume”

./timeseries.csh <directory>

ls *.png

3.6.4.5. To compare to another test

First:

./create.case -m wolf -t smoke -testid t01 -p 8x2

cd wolf_smoke_gx3_8x2.t01

./cice.build

./cice.submit

# After job finishes, check output

cat test_output

Then, do the comparison:

./create.case -m wolf -t smoke -testid t01 -td smoke_gx3_8x2 -s thread -p 4x1

cd wolf_smoke_gx3_4x1_thread.t01

./cice.build

./cice.submit

# After job finishes, check output

cat test_output

3.6.4.6. Additional Details

  • In general, the baseline generation, baseline compare, and test diff are independent.
  • Use the ‘-bd’ flag to specify the location where you want the baseline dataset to be written. Without specifying ‘-bd’, the baseline dataset will be written to the default baseline directory found in the env.<machine> file (ICE_MACHINE_BASELINE).
  • If ‘-bd’ is not passed, the scripts will look for baseline datasets in the default baseline directory found in the env.<machine> file (ICE_MACHINE_BASELINE). If the ‘-bd’ option is passed, the scripts will look for baseline datasets in the location passed to the -bd argument.
  • To generate a baseline dataset for a specific version (for regression testing), use ‘-bg <version_name>’. The scripts will then place the baseline dataset in $ICE_MACHINE_BASELINE/<version_name>/.
  • The ‘-testid’ flag allows users to specify a testing id that will be added to the end of the case directory. For example, “./create.case -m wolf -t smoke -testid t12 -p 4x1” creates the directory wolf_smoke_gx3_4x1.t12. This flag is REQUIRED if using -t or -ts.

3.6.5. Code Compliance Test

A core tenet of CICE dycore and Icepack innovations is that they must not change the physics and biogeochemistry of existing model configurations, notwithstanding obsolete model components. Therefore, alterations to existing CICE Consortium code must only fix demonstrable numerical or scientific inaccuracies or bugs, or be necessary to introduce new science into the code. New physics and biogeochemistry introduced into the model must not change model answers when switched off, and in that case CICEcore and Icepack must reproduce answers bit-for-bit as compared to previous simulations with the same namelist configurations. This bit-for-bit requirement is common in Earth System Modeling projects, but often cannot be achieved in practice because model additions may require changes to existing code. In this circumstance, bit-for-bit reproducibility using one compiler may not be achievable on a different computing platform with a different compiler. Therefore, tools for scientific testing of CICE code changes have been developed to accompany bit-for-bit testing. These tools exploit the statistical properties of simulated sea ice thickness to confirm or reject the null hypothesis, which is that new additions to the CICE dycore and Icepack have not significantly altered simulated ice volume using previous model configurations. Here we describe the CICE testing tools, which are applied to output from five-year gx-1 simulations that use the standard CICE atmospheric forcing. A scientific justification of the testing is provided in [38].

3.6.5.1. Two-Stage Paired Thickness Test

The first quality check aims to confirm the null hypotheses \(H_0\!:\!\mu_d{=}0\) at every model grid point, given the mean thickness difference \(\mu_d\) between paired CICE simulations ‘\(a\)’ and ‘\(b\)’ that should be identical. \(\mu_d\) is approximated as \(\bar{h}_{d}=\tfrac{1}{n}\sum_{i=1}^n (h_{ai}{-}h_{bi})\) for \(n\) paired samples of ice thickness \(h_{ai}\) and \(h_{bi}\) in each grid cell of the gx-1 mesh. Following [84], the associated \(t\)-statistic expects a zero mean, and is therefore

(1)\[t=\frac{\bar{h}_{d}}{\sigma_d/\sqrt{n_{eff}}}\]

given variance \(\sigma_d^{\;2}=\frac{1}{n-1}\sum_{i=1}^{n}(h_{di}-\bar{h}_d)^2\) of \(h_{di}{=}(h_{ai}{-}h_{bi})\) and effective sample size

(2)\[n_{eff}{=}n\frac{({1-r_1})}{({1+r_1})}\]

for lag-1 autocorrelation:

(3)\[r_1=\frac{\sum\limits_{i=1}^{n-1}\big[(h_{di}-\bar{h}_{d1:n-1})(h_{di+1}-\bar{h}_{d2:n})\big]}{\sqrt{\sum\limits_{i=1}^{n-1} (h_{di}-\bar{h}_{d1:n-1})^2 \sum\limits_{i=2}^{n} (h_{di}-\bar{h}_{d2:n})^2 }}.\]

Here, \(\bar{h}_{d1:n-1}\) is the mean of all samples except the last, \(\bar{h}_{d2:n}\) is the mean of all samples except the first, and both differ from the overall mean \(\bar{h}_d\) in equation (1). That is:

(4)\[\bar{h}_{d1:n-1}=\frac{1}{n{-}1} \sum \limits_{i=1}^{n-1} h_{di},\quad \bar{h}_{d2:n}=\frac{1}{n{-}1} \sum \limits_{i=2}^{n} h_{di},\quad \bar{h}_d=\frac{1}{n} \sum \limits_{i=1}^{n} {h}_{di}\]

Following [85], the effective sample size is limited to \(n_{eff}\in[2,n]\). This definition of \(n_{eff}\) assumes ice thickness evolves as an AR(1) process [80], which can be justified by analyzing the spectral density of daily samples of ice thickness from 5-year records in CICE Consortium member models [38]. The AR(1) approximation is inadmissible for paired velocity samples, because ice drift possesses periodicity from inertia and tides [29][43][59]. Conversely, tests of paired ice concentration samples may be less sensitive to ice drift than ice thickness. In short, ice thickness is the best variable for CICE Consortium quality control (QC), and for the test of the mean in particular.

Care is required in analyzing mean sea ice thickness changes using (1) with \(N{=}n_{eff}{-}1\) degrees of freedom. [85] demonstrate that the \(t\)-test in (1) becomes conservative when \(n_{eff} < 30\), meaning that \(H_0\) may be erroneously confirmed for highly auto-correlated series. Strong autocorrelation frequently occurs in modeled sea ice thickness, and \(r_1>0.99\) is possible in parts of the gx-1 domain for the five-year QC simulations. In the event that \(H_0\) is confirmed but \(2\leq n_{eff}<30\), the \(t\)-test progresses to the ‘Table Lookup Test’ of [85], to check that the first-stage test using (1) was not conservative. The Table Lookup Test chooses critical \(t\) values \(|t|<t_{crit}({1{-}\alpha/2},N)\) at the \(\alpha\) significance level based on \(r_1\). It uses the conventional \(t={\bar{h}_{d} \sqrt{n}}/{\sigma_d}\) statistic with degrees of freedom \(N{=}n{-}1\), but with \(t_{crit}\) values generated using the Monte Carlo technique described in [85], and summarized in Table 1 for 5-year QC simulations (\(N=1824\)) at the two-sided 80% confidence interval (\(\alpha=0.2\)). We choose this interval to limit Type II errors, whereby a QC test erroneously confirms \(H_0\).
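
The first stage of the test reduces to a few lines per grid cell, sketched here in Fortran for clarity (the released QC test is implemented in the Python script cice.t-test.py; variable names below are illustrative, with kinds as in ice_kinds_mod):

! hd(1:n) holds the paired daily thickness differences h_a - h_b.
real (kind=dbl_kind), dimension(n) :: hd
real (kind=dbl_kind) :: hbar, hbar1, hbar2, sigd, r1, neff, tstat

hbar  = sum(hd) / real(n,dbl_kind)                      ! overall mean difference
sigd  = sqrt(sum((hd - hbar)**2) / real(n-1,dbl_kind))  ! standard deviation
hbar1 = sum(hd(1:n-1)) / real(n-1,dbl_kind)             ! mean without last sample
hbar2 = sum(hd(2:n))   / real(n-1,dbl_kind)             ! mean without first sample
r1    = sum((hd(1:n-1)-hbar1)*(hd(2:n)-hbar2)) &        ! lag-1 autocorrelation
      / sqrt(sum((hd(1:n-1)-hbar1)**2) * sum((hd(2:n)-hbar2)**2))
neff  = max(2.0_dbl_kind, &                             ! effective sample size
        min(real(n,dbl_kind), real(n,dbl_kind)*(1.0_dbl_kind-r1)/(1.0_dbl_kind+r1)))
tstat = hbar / (sigd / sqrt(neff))
! H0 is confirmed where |tstat| < t_crit(1-alpha/2, neff-1); if it is
! confirmed with 2 <= neff < 30, the Table Lookup Test is applied.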

Table 1 : Summary of two-sided \(t_{crit}\) values for the Table Lookup Test of [85] at the 80% confidence interval generated for \(N=1824\) degrees of freedom and lag-1 autocorrelation \(r_1\).

Table 1
\(r_1\) -0.05 0.0 0.2 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.97 0.99
\(t_{crit}\) 1.32 1.32 1.54 2.02 2.29 2.46 3.17 3.99 5.59 8.44 10.85 20.44


3.6.5.2. Quadratic Skill Compliance Test

In addition to the two-stage test of mean sea ice thickness, we also check that paired simulations are highly correlated and have similar variance using a skill metric adapted from [72]. A general skill score applicable to Taylor diagrams takes the form

(5)\[S_m=\frac{4(1+R)^m}{({\hat{\sigma}_{f}+1/{\hat{\sigma}_{f}}})^2 (1+R_0)^m}\]

where \(m=1\) for variance-weighted skill, and \(m=4\) for correlation-weighted performance, as given in equations (4) and (5) of [72], respectively. We choose \(m=2\) to balance the importance of variance and correlation reproduction in QC tests, where \(\hat{\sigma}_{f}={\sigma_{b}}/{\sigma_{a}}\) is the ratio of the standard deviations of simulations ‘\(b\)’ and ‘\(a\)’, respectively, and simulation ‘\(a\)’ is the control. \(R_0\) is the maximum possible correlation between two series for correlation coefficient \(R\) calculated between respective thickness pairs \(h_{a}\) and \(h_{b}\). Bit-for-bit reproduction of previous CICE simulations means that perfect correlation is possible, and so \(R_0=1\), giving the quadratic skill of run ‘\(b\)’ relative to run ‘\(a\)’:

(6)\[S=\bigg[ \frac{(1+R) (\sigma_a \sigma_b)}{({\sigma_a}^2 + {\sigma_b}^2)} \bigg]^2\]

This provides a skill score between 0 and 1. We apply this \(S\) metric separately to the northern and southern hemispheres of the gx-1 grid by area-weighting the daily thickness samples discussed in the Two-Stage Paired Thickness QC Test. The hemispheric mean thickness over a 5-year simulation for run ‘\(a\)’ is:

(7)\[\bar{h}_{a}=\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{J} \ W_{j} \; h_{{a}_{i,j}}\]

at time sample \(i\) and grid point index \(j\), with an equivalent equation for simulation ‘\(b\)’. \(n\) is the total number of time samples (nominally \(n=1825\)) and \(J\) is the total number of grid points on the gx-1 grid. \(W_j\) is the weight attributed to each grid point according to its area \(A_{j}\), given as

(8)\[W_{j}=\frac{ A_{j} }{\sum_{j=1}^{J} A_{j}}\]

for all grid points within each hemisphere with one or more non-zero thicknesses in one or both sets of samples \(h_{{a}_{i,j}}\) or \(h_{{b}_{i,j}}\). The area-weighted variance for simulation ‘\(a\)’ is:

(9)\[\sigma_a^{\;2}=\frac{\hat{J}}{(n\,\hat{J}-1)} \sum_{i=1}^{n} \sum_{j=1}^{J} W_{j} \, (h_{{a}_{i,j}}-\bar{h}_{a})^2\]

where \(\hat{J}\) is the number of non-zero \(W_j\) weights, and \(\sigma_b\) is calculated equivalently for run ‘\(b\)’. In this context, \(R\) becomes a weighted correlation coefficient, calculated as

(10)\[R=\frac{\textrm{cov}(h_{a},h_{b})}{\sigma_a \; \sigma_b}\]

given the weighted covariance

(11)\[\textrm{cov}(h_{a},h_{b})=\frac{\hat{J}}{(n\,\hat{J}-1)} \sum_{i=1}^{n} \sum_{j=1}^{J} W_{j} \, (h_{{a}_{i,j}}-\bar{h}_{a}) (h_{{b}_{i,j}}-\bar{h}_{b}).\]

Using equations (6) to (11), the skill score \(S\) is calculated separately for the northern and southern hemispheres, and must exceed a critical value nominally set to \(S_{crit}=0.99\) to pass the test. Practical illustrations of this test and the Two-Stage test described in the previous section are provided in [38].
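
Given the area-weighted statistics defined above, the skill metric itself is a single expression, sketched here (again, the released test is the Python script cice.t-test.py; names are illustrative):

! Quadratic skill of run 'b' relative to run 'a', equation (6),
! computed separately for each hemisphere.
real (kind=dbl_kind) :: S, R, sigma_a, sigma_b

S = ((1.0_dbl_kind + R) * sigma_a * sigma_b / (sigma_a**2 + sigma_b**2))**2
if (S < 0.99_dbl_kind) then
   write(*,*) 'Quadratic Skill Test failed: S < S_crit = 0.99'
endif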

3.6.5.3. Practical Testing Procedure

The CICE code compliance test is performed by running a python script (cice.t-test.py). In order to run the script, the following requirements must be met:

  • Python v2.7 or later
  • netCDF Python package
  • numpy Python package
  • matplotlib Python package (optional)
  • basemap Python package (optional)

In order to generate the files necessary for the compliance test, test cases should be created with the qc option (i.e., -s qc) when running create.case. This option results in daily, non-averaged history files being written for a 5 year simulation.

To install the necessary Python packages, the pip Python utility can be used.

pip install --user netCDF4
pip install --user numpy
pip install --user matplotlib

To run the compliance test:

cp configuration/scripts/tests/QC/cice.t-test.py .
./cice.t-test.py /path/to/baseline/history /path/to/test/history

The script will produce output similar to:

INFO:__main__:Number of files: 1825
INFO:__main__:Two-Stage Test Passed
INFO:__main__:Quadratic Skill Test Passed for Northern Hemisphere
INFO:__main__:Quadratic Skill Test Passed for Southern Hemisphere
INFO:__main__:
INFO:__main__:Quality Control Test PASSED

Additionally, the exit code from the test (echo $?) will be 0 if the test passed, and 1 if the test failed.

Implementation notes: 1) Provide a pass/fail on each of the confidence intervals, 2) Facilitate output of a bitmap for each test so that locations of failures can be identified.

3.6.6. CICE Test Reporting

The CICE testing scripts can post the test results to an online dashboard, located on CDash. There are two options for posting CICE results to CDash: the automated script and the manual method.

3.6.6.1. Automatic Script

To automatically run the CICE tests, and post the results to the CICE Cdash dashboard, users need to copy and run the run.suite script:

cp configuration/scripts/run.suite .
./run.suite -m <machine> -testid <test_id> -bc <baseline_to_compare> -bg <baseline_to_generate>

The run.suite script does the following:

  • Creates a fresh clone of the CICE-Consortium repository
  • cd to cloned repo
  • run create.case to generate the base_suite directories. The output is piped to log.suite
  • Running create.case submits each individual job to the queue.
  • run.suite monitors the queue manager to determine when all jobs have finished (pings the queue manager once every 5 minutes).
  • Once all jobs complete, cd to base_suite directory and run ./results.csh
  • Run ./run_ctest.csh in order to post the test results to the CDash dashboard

3.6.6.2. Manual Method

To manually run the CICE tests and post the results to the CICE CDash dashboard, users essentially just need to perform all steps available in run.suite, detailed below:

  • Pass the -report flag to create.case when running the base_suite test suite. The -report flag copies the required CTest / CDash scripts to the suite directory.
  • create.case compiles the CICE code, and submits all of the jobs to the queue manager.
  • After every job has been submitted and completed, cd to the suite directory.
  • Parse the results, by running ./results.csh.
  • Run the CTest / CDash script ./run_ctest.csh.

If the run_ctest.csh script is unable to post the testing results to the CDash server, a message will be printed to the screen detailing instructions on how to attempt to post the results from another server. If run_ctest.csh fails to submit the results, it will generate a tarball cice_ctest.tgz that contains the necessary files for submission. Copy this file to another server (CMake version 2.8+ required), extract the archive, and run ./run_ctest.csh -submit.

3.6.7. End-To-End Testing Procedure

Below is an example of a step-by-step procedure for testing a code change that results in non-bit-for-bit results:

# Create a baseline dataset (only necessary if no baseline exists on the system)
./create.case -m onyx -ts base_suite -testid base0 -bg cicev6.0.0 -a <account_number>

# Check out the updated code, or clone from a pull request

# Run the test with the new code
./create.case -m onyx -ts base_suite -testid test0 -bc cicev6.0.0 -a <account_number>

# Check the results
cd base_suite.test0
./results.csh

#### If the BFB tests fail, perform the compliance testing ####
# Create a QC baseline
./create.case -m onyx -t smoke -g gx1 -p 44x1 -testid qc_base -s qc,medium -a <account_number>
cd onyx_smoke_gx1_44x1_medium_qc.qc_base
./cice.build
./cice.submit

# Check out the updated code or clone from a pull request

# Create the t-test testing data
./create.case -m onyx -t smoke -g gx1 -p 44x1 -testid qc_test -s qc,medium -a <account_number>
cd onyx_smoke_gx1_44x1_medium_qc.qc_test
./cice.build
./cice.submit

# Wait for runs to finish

# Perform the QC test
cp configuration/scripts/tests/QC/cice.t-test.py .
./cice.t-test.py /p/work/turner/CICE_RUNS/onyx_smoke_gx1_44x1_medium_qc.qc_base \
                 /p/work/turner/CICE_RUNS/onyx_smoke_gx1_44x1_medium_qc.qc_test

# Example output:
INFO:__main__:Number of files: 1825
INFO:__main__:Two-Stage Test Passed
INFO:__main__:Quadratic Skill Test Passed for Northern Hemisphere
INFO:__main__:Quadratic Skill Test Passed for Southern Hemisphere
INFO:__main__:
INFO:__main__:Quality Control Test PASSED

3.7. Table of namelist options

Table 8
variable options/format description recommended value
setup_nml      
    Time, Diagnostics  
days_per_year 360 or 365 number of days in a model year 365
use_leap_years true/false if true, include leap days  
year_init yyyy the initial year, if not using restart  
istep0 integer initial time step number 0
dt seconds thermodynamics time step length
npt integer total number of time steps to take  
ndtd integer number of dynamics/advection/ridging steps per thermo timestep 1
    Initialization/Restarting  
runtype initial start from ice_ic  
  continue restart using pointer_file  
ice_ic default latitude and sst dependent default
  none no ice  
  path/file restart file name  
restart true/false initialize using restart file .true.
use_restart_time true/false set initial date using restart file .true.
restart_format nc read/write netCDF restart files (use with PIO)  
  bin read/write binary restart files  
lcdf64 true/false if true, use 64-bit netCDF format  
restart_dir path/ path to restart directory  
restart_ext true/false read/write halo cells in restart files  
restart_file filename prefix output file for restart dump ‘iced’
pointer_file pointer filename contains restart filename  
dumpfreq y write restart every dumpfreq_n years y
  m write restart every dumpfreq_n months  
  d write restart every dumpfreq_n days  
dumpfreq_n integer frequency restart data is written 1
dump_last true/false if true, write restart on last time step of simulation  
    Model Output  
bfbflag true/false for bit-for-bit diagnostic output  
diagfreq integer frequency of diagnostic output in dt 24
  e.g., 10 once every 10 time steps  
diag_type stdout write diagnostic output to stdout  
  file write diagnostic output to file  
diag_file filename diagnostic output file (script may reset)  
print_global true/false print diagnostic data, global sums .false.
print_points true/false print diagnostic data for two grid points .false.
latpnt real latitude of (2) diagnostic points  
lonpnt real longitude of (2) diagnostic points  
dbug true/false if true, write extra diagnostics .false.
histfreq string array defines output frequencies  
  y write history every histfreq_n years  
  m write history every histfreq_n months  
  d write history every histfreq_n days  
  h write history every histfreq_n hours  
  1 write history every time step  
  x unused frequency stream (not written)  
histfreq_n integer array frequency history output is written  
  0 do not write to history  
hist_avg true write time-averaged data .true.
  false write snapshots of data  
history_dir path/ path to history output directory  
history_file filename prefix output file for history ‘iceh’
write_ic true/false write initial condition  
incond_dir path/ path to initial condition directory  
incond_file filename prefix output file for initial condition ‘iceh’
runid string label for run (currently CESM only)  
       
grid_nml      
    Grid  
grid_format nc read netCDF grid and kmt files ‘bin’
  bin read direct access, binary file  
grid_type rectangular defined in rectgrid  
  displaced_pole read from file in popgrid  
  tripole read from file in popgrid  
  regional read from file in popgrid  
grid_file filename name of grid file to be read ‘grid’
kmt_file filename name of land mask file to be read ‘kmt’
gridcpl_file filename input file for coupling grid info  
kcatbound 0 original category boundary formula 0
  1 new formula with round numbers  
  2 WMO standard categories  
  -1 one category  
       
domain_nml      
    Domain  
nprocs integer number of processors to use  
processor_shape slenderX1 1 processor in the y direction (tall, thin)  
  slenderX2 2 processors in the y direction (thin)  
  square-ice more processors in x than y, \(\sim\) square  
  square-pop more processors in y than x, \(\sim\) square  
distribution_type cartesian distribute blocks in 2D Cartesian array  
  roundrobin 1 block per proc until blocks are used  
  sectcart blocks distributed to domain quadrants  
  sectrobin several blocks per proc until used  
  rake redistribute blocks among neighbors  
  spacecurve distribute blocks via space-filling curves  
distribution_weight block full block size sets work_per_block  
  latitude latitude/ocean sets work_per_block  
ew_boundary_type cyclic periodic boundary conditions in x-direction  
  open Dirichlet boundary conditions in x  
ns_boundary_type cyclic periodic boundary conditions in y-direction  
  open Dirichlet boundary conditions in y  
  tripole U-fold tripole boundary conditions in y  
  tripoleT T-fold tripole boundary conditions in y  
maskhalo_dyn true/false mask unused halo cells for dynamics  
maskhalo_remap true/false mask unused halo cells for transport  
maskhalo_bound true/false mask unused halo cells for boundary updates  
       
tracer_nml      
    Tracers  
tr_iage true/false ice age  
restart_age true/false restart tracer values from file  
tr_FY true/false first-year ice area  
restart_FY true/false restart tracer values from file  
tr_lvl true/false level ice area and volume  
restart_lvl true/false restart tracer values from file  
tr_pond_cesm true/false CESM melt ponds  
restart_pond_cesm true/false restart tracer values from file  
tr_pond_topo true/false topo melt ponds  
restart_pond_topo true/false restart tracer values from file  
tr_pond_lvl true/false level-ice melt ponds  
restart_pond_lvl true/false restart tracer values from file  
tr_aero true/false aerosols  
restart_aero true/false restart tracer values from file  
thermo_nml      
    Thermodynamics  
kitd 0 delta function ITD approximation 1
  1 linear remapping ITD approximation  
ktherm 0 zero-layer thermodynamic model  
  1 Bitz and Lipscomb thermodynamic model  
  2 mushy-layer thermodynamic model  
conduct MU71 conductivity [53]  
  bubbly conductivity [58]  
a_rapid_mode real brine channel diameter 0.5\(\times 10^{-3}\) m
Rac_rapid_mode real critical Rayleigh number 10
aspect_rapid_mode real brine convection aspect ratio 1
dSdt_slow_mode real drainage strength parameter -1.5\(\times 10^{-7}\) m/s/K
phi_c_slow_mode \(0<\phi_c < 1\) critical liquid fraction 0.05
phi_i_mushy \(0<\phi_i < 1\) solid fraction at lower boundary 0.85
       
dynamics_nml      
    Dynamics  
kdyn 0 dynamics OFF 1
  1 EVP dynamics  
  2 EAP dynamics  
revised_evp true/false use revised EVP formulation  
ndte integer number of EVP subcycles 120
advection remap linear remapping advection ‘remap’
  upwind donor cell advection  
kstrength 0 ice strength formulation [26] 1
  1 ice strength formulation [61]  
krdg_partic 0 old ridging participation function 1
  1 new ridging participation function  
krdg_redist 0 old ridging redistribution function 1
  1 new ridging redistribution function  
mu_rdg real e-folding scale of ridged ice  
Cf real ratio of ridging work to PE change in ridging
       
shortwave_nml      
    Shortwave  
shortwave default NCAR CCSM3 distribution method  
  dEdd Delta-Eddington method  
albedo_type default NCAR CCSM3 albedos ‘default’
  constant four constant albedos  
albicev \(0<\alpha <1\) visible ice albedo for thicker ice  
albicei \(0<\alpha <1\) near infrared ice albedo for thicker ice  
albsnowv \(0<\alpha <1\) visible, cold snow albedo  
albsnowi \(0<\alpha <1\) near infrared, cold snow albedo  
ahmax real albedo is constant above this thickness 0.3 m
R_ice real tuning parameter for sea ice albedo from Delta-Eddington shortwave  
R_pnd real … for ponded sea ice albedo …  
R_snw real … for snow (broadband albedo) …  
dT_mlt real \(\Delta\) temperature per \(\Delta\) snow grain radius  
rsnw_mlt real maximum melting snow grain radius  
kalg real absorption coefficient for algae  
       
ponds_nml      
    Melt Ponds  
hp1 real critical ice lid thickness for topo ponds 0.01 m
hs0 real snow depth of transition to bare sea ice 0.03 m
hs1 real snow depth of transition to pond ice 0.03 m
dpscale real time scale for flushing in permeable ice \(1\times 10^{-3}\)
frzpnd hlid Stefan refreezing with pond ice thickness ‘hlid’
  cesm CESM refreezing empirical formula  
rfracmin \(0 \le r_{min} \le 1\) minimum melt water added to ponds 0.15
rfracmax \(0 \le r_{max} \le 1\) maximum melt water added to ponds 1.0
pndaspect real aspect ratio of pond changes (depth:area) 0.8
       
zbgc_nml      
    Biogeochemistry  
tr_brine true/false brine height tracer  
tr_zaero true/false vertical aerosol tracers  
modal_aero true/false modal aerosols  
restore_bgc true/false restore bgc to data  
solve_zsal true/false update salinity tracer profile  
bgc_data_dir path/ data directory for bgc  
skl_bgc true/false biogeochemistry  
sil_data_type default default forcing value for silicate  
  clim silicate forcing from ocean climatology [23]  
nit_data_type default default forcing value for nitrate  
  clim nitrate forcing from ocean climatology [23]  
  sss nitrate forcing equals salinity  
fe_data_type default default forcing value for iron  
  clim iron forcing from ocean climatology  
bgc_flux_type Jin2006 ice–ocean flux velocity of [39]  
  constant constant ice–ocean flux velocity  
restart_bgc true/false restart tracer values from file  
tr_bgc_C_sk true/false algal carbon tracer  
tr_bgc_chl_sk true/false algal chlorophyll tracer  
tr_bgc_Am_sk true/false ammonium tracer  
tr_bgc_Sil_sk true/false silicate tracer  
tr_bgc_DMSPp_sk true/false particulate DMSP tracer  
tr_bgc_DMSPd_sk true/false dissolved DMSP tracer  
tr_bgc_DMS_sk true/false DMS tracer  
phi_snow real snow porosity for brine height tracer  
       
forcing_nml      
    Forcing  
formdrag true/false calculate form drag  
atmbndy default stability-based boundary layer ‘default’
  constant bulk transfer coefficients  
fyear_init yyyy first year of atmospheric forcing data  
ycycle integer number of years in forcing data cycle  
atm_data_format nc read netCDF atmo forcing files  
  bin read direct access, binary files  
atm_data_type default constant values defined in the code  
  LYq AOMIP/Large-Yeager forcing data  
  monthly monthly forcing data  
  ncar NCAR bulk forcing data  
  oned column forcing data  
atm_data_dir path/ path to atmospheric forcing data directory  
calc_strair true calculate wind stress and speed  
  false read wind stress and speed from files  
highfreq true/false high-frequency atmo coupling  
natmiter integer number of atmo boundary layer iterations  
calc_Tsfc true/false calculate surface temperature .true.
precip_units mks liquid precipitation data units  
  mm_per_month    
  mm_per_sec (same as MKS units)  
tfrz_option minus1p8 constant ocean freezing temperature (\(-1.8^{\circ} C\))  
  linear_salt linear function of salinity (ktherm=1)  
  mushy_layer matches mushy-layer thermo (ktherm=2)  
ustar_min real minimum value of ocean friction velocity 0.0005 m/s
fbot_xfer_type constant constant ocean heat transfer coefficient  
  Cdn_ocn variable ocean heat transfer coefficient  
update_ocn_f true include frazil water/salt fluxes in ocn fluxes  
  false do not include (when coupling with POP)  
l_mpond_fresh true retain (topo) pond water until ponds drain  
  false release (topo) pond water immediately to ocean  
oceanmixed_ice true/false active ocean mixed layer calculation .true. (if uncoupled)
ocn_data_format nc read netCDF ocean forcing files  
  bin read direct access, binary files  
sss_data_type default constant values defined in the code  
  clim climatological data  
  near POP ocean forcing data  
sst_data_type default constant values defined in the code  
  clim climatological data  
  ncar POP ocean forcing data  
ocn_data_dir path/ path to oceanic forcing data directory  
oceanmixed_file filename data file containing ocean forcing data  
restore_sst true/false restore sst to data  
trestore integer sst restoring time scale (days)  
restore_ice true/false restore ice state along lateral boundaries  
       
icefields_tracer_nml      
    History Fields  
f_<var> string frequency units for writing <var> to history  
  y write history every histfreq_n years  
  m write history every histfreq_n months  
  d write history every histfreq_n days  
  h write history every histfreq_n hours  
  1 write history every time step  
  x do not write <var> to history  
  md e.g., write both monthly and daily files  
f_<var>_ai   grid cell average of <var> (\(\times a_i\))