
Initial draft of internals.rst

lon0
Bill Ladwig 6 years ago
commit e1fa3ca747
doc/source/contrib.rst (61 lines changed)
doc/source/internals.rst (639 lines changed)

doc/source/contrib.rst

@@ -181,7 +181,8 @@ tedious at this time).
3. You should create your contribution in the WRF-Python source tree as if
you were one of the core developers of it. This means:
- - Your Fortran code (if applicable) should be placed in the fortran folder.
+ - Your Fortran code (if applicable) should be placed in the fortran
+   directory.
- Update the "ext1 = numpy.distutils.core.Extension" section of setup.py
to include your new Fortran source (if applicable).
@@ -193,20 +194,21 @@ tedious at this time).
- If the current function decorators do not cover your specific needs,
place your custom decorator in specialdec.py. Most of the decorators
- in this module are used for products that contain multiple outputs like
- cape_2d, but this
+ in specialdec.py are used for products that contain multiple outputs like
+ cape_2d, but you can use it for other purposes.
- If your function is pure python, you can create a new module for it,
or place it in another module with similar functionality. For example,
if your routine is a new interpolation routine, then it should go
in interp.py. Remember to apply the same type of decorators as
- done with Fortran extensions (checking args, leftmost indexings, etc).
+ done with Fortran extensions (checking args, leftmost dimension
+ indexing, etc).
- Create a 'getter' routine which is responsible for extracting the
required variables from a WRF file and calling your computational
routine. This is what will be called by :meth:`wrf.getvar`.
This function should be placed in a new python module with the prefix
- 'g_' (i.e. g_yourdiagnostic.py)
+ 'g\_' (i.e. g_yourdiagnostic.py).
- Decorate your getter routine with an appropriate metadata handling
decorator. If you need to make a custom decorator for the metadata,
@@ -236,10 +238,10 @@ Fixing Documentation Errors
2. Python docstrings follow `Google docstring <https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_ format.
- 2. Documentation can be found in the *doc* directory, along with the
+ 3. Documentation can be found in the *doc* directory, along with the
docstrings contained within the Python code.
- 3. For documentation fixes, you can just submit a pull request with the
+ 4. For documentation fixes, you can just submit a pull request with the
appropriate corrections already made.
@@ -258,6 +260,8 @@ Creating New Examples
go ahead and create a GitHub issue to discuss with the developers.
.. _dev_setup:
Setting Up Your Development Environment
---------------------------------------------
@@ -336,6 +340,8 @@ contributing is:
Windows:
.. code::
./win_msvc_mingw_omp.bat
- The previous step will build and install WRF-Python in to the 'develop'
@@ -387,34 +393,34 @@ A summary of style notes is below:
- Use 4 spaces for indentation, not tabs.
- Use all capital letters for Fortran key words (e.g. IF, DO, REAL, INTENT)
- Use all capital letters for Fortran intrinsics (e.g. MAX, MIN, SUM)
- - Use all capital letters for any PARAMETER constants.
+ - Use all capital letters for any constants declared as PARAMETER (e.g. RD).
- Use all lowercase letters for variables with '_' separating words
(snake case).
- - Use all lowercase letters for functions and subroutines with '_' separating
-   words (snake case).
+ - Use all lowercase letters for functions and subroutines using '_' to
+   separate words (snake case).
- Declare your REAL variables as REAL(KIND=8), unless you really need 4-byte
REALs for a specific reason.
- Do not allocate any memory in your Fortran routine (e.g work arrays). We
will use numpy arrays to manage all memory. Instead, declare your work
- array (or dynamic array) as an INOUT argument in your function
+ array (or dynamic array) as an INTENT(INOUT) argument in your function
signature.
- Avoid submitting code that uses global variables (other than for read only
constants). All Fortran contributions must be threadsafe and have no side
effects.
- Place any computational constants in the wrf_constants module found in
- wrf_constants.f90 and use "USE wrf_constants, ONLY : YOUR_CONSTANT"
+ wrf_constants.f90 and put a "USE wrf_constants, ONLY : YOUR_CONSTANT"
declaration in your function.
- Please do not redefine constants already declared in
wrf_constants.f90 (e.g. G, RD, RV, etc). Although the WRF model itself
does not adhere to this, we are trying to be consistent with the constants
- used throughout this project.
+ used throughout WRF-Python.
- Do not put any STOP statements in your code to deal with errors. STOP
statements will bring down the entire Python interpreter with it. Instead,
add *errstat* and *errmsg* arguments to your function signature to tell
Python about the error so it can throw an exception. See WETBULBCALC
in wrf_rip_phys_routines.f90 for how this is handled.
- - Don't worry about adding OpenMP directives to your code if you are
-   unfamiliar with OpenMP, but feel free to do so if you are already familiar.
+ - If you know how to use OpenMP directives, feel free to add them to your
+   routine, but this is not required.
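For the *errstat*/*errmsg* pattern above, the Python side of the wrapper is what raises the exception. A minimal sketch of that check (the helper name and buffer handling here are hypothetical, not WRF-Python's actual code):

```python
def check_fortran_error(errstat, errmsg):
    """Raise a Python exception for a Fortran-reported error.

    Hypothetical helper: *errstat* is the integer status returned by the
    Fortran routine (nonzero on failure) and *errmsg* is the fixed-width
    character buffer, which f2py hands back as a bytes-like object.
    """
    if int(errstat) != 0:
        # Strip the blank/NUL padding Fortran leaves in the buffer.
        msg = bytes(errmsg).decode("ascii", "replace").rstrip("\x00 ")
        raise RuntimeError("Fortran routine failed: " + msg)
```

This keeps the Fortran routine free of STOP statements while still surfacing a readable error to Python callers.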
Pull Requests
@@ -428,6 +434,10 @@ Following a pull request, automated continuous integration tools will be
run to ensure that your code follows the PEP 8 style guide, and verifies that
a basic suite of unit tests run.
If your pull request is for a bug fix to an existing computational routine,
then the automated unit tests will probably fail due to the new values. This
is not a problem, but be sure to indicate to the developers in your GitHub
issue that the unit tests will need to be updated.
.. _testing:
@@ -446,26 +456,27 @@ Sample Data
^^^^^^^^^^^^^^^^^^^
You can download sample data for Hurricane Katrina here: <insert link>
- This data has both moving nest and static nest version. You should test
- against this data set, unless you are unable to demonstrate the problem
- with it.
+ This data includes both a moving nest and a static nest version. You should
+ create your tests with this data set (both static and moving nests), unless
+ you are unable to reproduce a particular problem with it.
Supplying Data
^^^^^^^^^^^^^^^^^^^^^^
- If you need to supply us data for your test, please provide us a link to
- either a cloud storage service, by :ref:`submitting-files`, or some other
- means. Unless the data is very small, do not add it to the GitHub repository.
+ If you need to supply us data for your test (due to a bug), please provide us
+ a link to either a cloud storage service, by :ref:`submitting_files`, or some
+ other means. Unless the data is very small, do not add it to the GitHub
+ repository.
If you can demonstrate the problem/solution with a minimal set of hand created
- values, you can just put that in your test itself.
+ values, you can just use that in your test.
Guidelines
^^^^^^^^^^^^^^^^^^^
- The following are guidelines for testing your contributions. Obviously,
- different issues have different needs, so you can use the GitHub
+ The following are guidelines for testing your contributions. The developers
+ are aware that some issues have unique needs, so you can use the GitHub
issue related to your contribution to discuss with developers.
1. New computations must work for both moving nests and static nests.
@@ -484,7 +495,7 @@ issue related to your contribution to discuss with developers.
4. Place your test in the test/contrib directory.
5. For new contributions, images may be sufficient to show that your
- code is working. Discuss with the developers in your GitHub issue.
+ code is working. Please discuss with the developers in your GitHub issue.
6. For bug related issues, try to create a case that demonstrates the problem,
and demonstrates the fix. If your problem is a crash, then proving that

doc/source/internals.rst

@@ -13,8 +13,8 @@ A typical use case for a WRF-Python user is to:
1) Open a WRF data file (or sequence of files) using NetCDF4-python or PyNIO.
2) Compute a WRF diagnostic using :meth:`wrf.getvar`.
- 3) Performing other computations using methods outside of WRF-Python.
- 4) Creating a plot of the output using matplotlib (basemap or cartopy) or
+ 3) Perform other computations using methods outside of WRF-Python.
+ 4) Create a plot of the output using matplotlib (basemap or cartopy) or
PyNGL.
The purpose of this guide is to explain the internals of item 2 so that
@@ -27,23 +27,21 @@ Overview of a :meth:`wrf.getvar` Diagnostic Computation
A diagnostic computed using the :meth:`wrf.getvar` function consists of the
following steps:
- 1) Using the diagnostic string, call the appropriate 'get' function. This
+ 1) Using the diagnostic string, call the appropriate 'getter' function. This
   step occurs in the :meth:`wrf.getvar` routine in routines.py.
2) Extract the required variables from the NetCDF data file (or files).
3) Compute the diagnostic using a wrapped Fortran, C, or Python routine.
4) Convert to the desired units if applicable.
- 5) If desired, set the metadata and return the result as an
+ 5) Set the metadata (if desired) and return the result as an
:class:`xarray.DataArray`, or return a :class:`numpy.ndarray` if no
metadata is desired.
- In the source directory, the :meth:`wrf.getvar` 'get' routines have a
- "g_" prefix for the naming convention (the "g" stands for "get", but we didn't
- want to cause namespace conflicts with functions already named with "get" in
- the title).
+ In the source directory, the :meth:`wrf.getvar` 'getter' routines have a
+ "g_" prefix for the naming convention (the "g" stands for "get").
- The unit conversion is handled by a wrapt decorator that can be found in
- decorators.py. The setting of the metadata is handled using a wrapt decorator,
- which can be found in the metadecorators.py file.
+ The unit conversion is handled by a :mod:`wrapt` decorator that can be found
+ in decorators.py. The setting of the metadata is handled using a :mod:`wrapt`
+ decorator, which can be found in the metadecorators.py file.
Overview of Compiled Computational Routines
@@ -57,38 +55,629 @@ programming mindset (e.g. some use 1D arrays, 2D arrays, etc).
The raw Fortran routines are compiled in to the :mod:`wrf._wrffortran`
extension module, but are not particularly useful for applications in that
- raw form. These routines are imported in the extention.py module, where
+ raw form. These routines are imported in the extension.py module, where
additional functionality is added to make the routines more user friendly.
- The common behavior for a fully exported Fortran routine in extension.py
+ The typical behavior for a fully exported Fortran routine in extension.py
is:
- 1) Verify that the arguments passed in are valid in shape. While f2py does this
-    as well, the errors thrown by f2py are confusing to users, so this step
-    helps provide better error messages.
+ 1) Verify that the arguments passed in are valid in shape. Although f2py does
+    this as well, the errors thrown by f2py are confusing to users, so this
+    step helps provide better error messages.
2) Allocate the output array based on the output shape of the algorithm,
-    number of "leftmost" dimensions, and size of the data.
+    number of "leftmost" [1]_ dimensions, and size of the data.
- 3) Iterate over the leftmost dimensions and compute output for argument
+ 3) Iterate over the leftmost [1]_ dimensions and compute output for argument
data slices that are of the same dimensionality as the compiled algorithm.
For example, if the compiled algorithm is written for two dimensional data,
but your data is four dimensional, you have two leftmost dimensions.
- 4) Cast the argument arrays in to the type used in the
-    compiled routine (usually for WRF data, the conversion is from 4-byte float
-    to 8-byte double).
+ 4) Cast the argument arrays in to the dtype used in the
+    compiled routine (for WRF data the conversion is usually from a 4-byte
+    float to an 8-byte double).
5) Extract the argument arrays out of xarray in to numpy arrays
(if applicable) and transpose them in to Fortran ordering. Note that this
does not actually do any copying of the data, it simply reorders the shape
tuple for the data and sets the Fortran ordering flag. This allows data
- pointers from the output array to be directly passed through f2py so that
- copying is not required from the result in to the output array.
+ pointers from the output array slices to be directly passed through f2py
+ so that copying is not required from the result in to the output array.
The steps described above are handled in :mod:`wrapt` decorators that can be
found in decorators.py. For some routines that produce multiple outputs or have
atypical behavior, the special case decorators are located in specialdec.py.
.. [1] If the Fortran algorithm is written for a 2-dimensional array,
and a users passes in a 5-dimensional array, there are 3 "leftmost"
dimensions.
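As a concrete sketch of the bookkeeping behind the footnote (a hypothetical helper, not the actual decorator code), the leftmost dimensions and the allocated output shape for an algorithm whose output matches its input slice can be computed like this:

```python
import numpy as np


def leftmost_info(var, alg_ndims):
    """Split *var*'s shape into the "leftmost" dimensions and the
    rightmost dimensions that the compiled algorithm operates on."""
    extra_dims = var.shape[:var.ndim - alg_ndims]
    out_shape = extra_dims + var.shape[var.ndim - alg_ndims:]
    return extra_dims, out_shape


# A 2D algorithm applied to 5D data leaves 3 leftmost dimensions.
var = np.empty((4, 9, 5, 10, 12))
extra_dims, out_shape = leftmost_info(var, 2)
```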
An Example
----------------------------
The above overviews are better explained by an example. Although there are a
few exceptions (e.g. ll_to_xy), most of the routines in WRF-Python behave the
same way.
For this example, let's make a routine that adds a variable's base state
to its perturbation. This is the kind of thing that you'd normally use numpy
for (e.g. Ptot = P + PB), but you could do this if you wanted concurrency
for this operation via OpenMP rather than using dask (in a future release of
WRF-Python, both OpenMP and dask will be available).
Fortran Code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here's the Fortran 90 code, which will be written to a file called
example.f90.
.. code:: fortran

    SUBROUTINE pert_add(base, pert, total, nx, ny)

        !f2py threadsafe
        !f2py intent(in,out) :: total

        INTEGER, INTENT(IN) :: nx, ny
        REAL(KIND=8), INTENT(IN), DIMENSION(nx, ny) :: base, pert
        REAL(KIND=8), INTENT(OUT), DIMENSION(nx, ny) :: total

        INTEGER :: i, j

        !$OMP PARALLEL DO COLLAPSE(2) SCHEDULE(runtime)
        DO j = 1, ny
            DO i = 1, nx
                total(i, j) = base(i, j) + pert(i, j)
            END DO
        END DO
        !$OMP END PARALLEL DO

    END SUBROUTINE pert_add
This code simply adds the base and perturbation and stores the result for each
grid point. For this example, we're using a 2D array because most examples you
see will look like this, but it could have been written with a 1D array as
was done with DCOMPUTETK in wrf_user.f90.
At the top, there are these two f2py directives:
.. code::

    !f2py threadsafe
    !f2py intent(in,out) :: total
The *threadsafe* directive tells f2py to release Python's Global Interpreter
Lock (GIL) before calling the Fortran routine. The Fortran code no longer
uses Python variables, so you should release the GIL before running the
computation. This way, Python threads will continue to run, which may be
important if you are using this in a webserver or in some other
threaded environment like dask's threaded scheduler.
The *intent(in,out)* f2py directive is used because in most cases, you will
be supplying a slice of your output array to this routine and you don't want
to have to copy the result from Fortran back in to your result array. By
specifying intent(in,out), we're telling f2py to use the pointer to our
output array directly.
Finally, for the OpenMP directive, the scheduler is set to use runtime
scheduling via *SCHEDULE(runtime)*. By using runtime scheduling, users
can set the scheduling type within Python, but for most users the default will
be sufficient.
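Because the loop uses *SCHEDULE(runtime)*, the schedule comes from the OpenMP runtime. WRF-Python exposes setter functions for this (built from fortran/omp.f90), but a library-independent sketch is the standard ``OMP_SCHEDULE`` environment variable, which is honored if set before the OpenMP runtime initializes (i.e. before the compiled extension is imported):

```python
import os

# SCHEDULE(runtime) loops take their schedule from the OpenMP runtime,
# which honors OMP_SCHEDULE if it is set before the runtime starts up.
os.environ["OMP_SCHEDULE"] = "guided,4"

# import wrf  # the compiled extension would pick the schedule up here
```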
Building the Fortran Code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To build the Fortran code, the example.f90 source code should be placed in the
*fortran* directory of the source tree.
Next, update the numpy.distutils.core.Extension section of setup.py in the
root directory of the source tree.
.. code:: python

    ext1 = numpy.distutils.core.Extension(
        name="wrf._wrffortran",
        sources=["fortran/wrf_constants.f90",
                 "fortran/wrf_testfunc.f90",
                 "fortran/wrf_user.f90",
                 "fortran/rip_cape.f90",
                 "fortran/wrf_cloud_fracf.f90",
                 "fortran/wrf_fctt.f90",
                 "fortran/wrf_user_dbz.f90",
                 "fortran/wrf_relhl.f90",
                 "fortran/calc_uh.f90",
                 "fortran/wrf_user_latlon_routines.f90",
                 "fortran/wrf_pvo.f90",
                 "fortran/eqthecalc.f90",
                 "fortran/wrf_rip_phys_routines.f90",
                 "fortran/wrf_pw.f90",
                 "fortran/wrf_vinterp.f90",
                 "fortran/wrf_wind.f90",
                 "fortran/omp.f90",
                 "fortran/example.f90"  # New file added here
                 ]
    )
The easiest way to build your code is to use one of the build scripts located
in the *build_scripts* directory. These scripts contain variants for compiling
with or without OpenMP support. Unless you're debugging a problem, building
with OpenMP is recommended.
For this example, we're going to assume you already followed how to
:ref:`dev_setup`. Here are the instructions:
.. code::

    pip uninstall wrf-python  # if you already installed it
    cd build_scripts
    sh ./gnu_omp.sh
The above commands will build and install the new routine, along with the
other Fortran routines. If you receive errors, then your code failed to
build successfully. Otherwise, your new routine can be called as
wrf._wrffortran.pert_add.
Creating a Thin Python Wrapper
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The new Fortran pert_add routine will work fine as long as you are only
calling it for one 2D slice of data. If you want to extend the functionality
to work with any dimensional array, you'll need to add a thin wrapper
with some extra functionality added via :mod:`wrapt` decorators.
First, let's start by creating a very thin wrapper in Python in extension.py.
.. code:: python

    from wrf._wrffortran import pert_add

    .
    .
    .

    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
Despite being only a few lines of code, there is quite a bit going on in the
wrapper. The first thing to note is the arguments to the wrapper function. The
only arguments you will need for the wrapper are the inputs to the function
and an "outview" argument. At this point in the call chain, the arguments are
assumed to be Fortran-ordered, in that the Fortran ordering flag is set and
the shape is transposed from a usual C-ordered numpy array (the data itself
remains in the same order that it was created). By passing numpy
arrays with the Fortran order flag set, f2py will pass the pointer directly
through to the Fortran routine.
The outview argument is used during leftmost dimension indexing to send slices
of the output array to the Fortran routine to be filled. If there are no
leftmost dimensions (e.g. this routine is called on 2D data), then the outview
argument will be None and an outview variable will be created with the same
number of dimensions as the *base* argument. It should be created with Fortran
ordering so the pointer is directly passed to the Fortran routine.
When the actual *pert_add* Fortran routine is called, the nx and ny arguments
are omitted because f2py will supply them for you based on the shape of the
numpy arrays you are supplying as input arguments. F2py also likes to return
an array as a result, so even though you supplied outview as an array to
be filled by the Fortran routine, you will still get a result from the
function call that is pointing to the same thing as outview. (We could have
chosen to ignore the result and return outview instead.)
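This intent(in,out) behavior can be mimicked with a pure-Python stand-in for the compiled routine (a sketch for illustration; the real pert_add is the f2py-wrapped Fortran function):

```python
import numpy as np


def pert_add_standin(base, pert, total):
    """Fill *total* in place and, like an f2py intent(in,out) argument,
    hand the very same array back as the result."""
    total[...] = base + pert
    return total


base = np.asfortranarray(np.ones((3, 4)))
pert = np.asfortranarray(np.full((3, 4), 0.5))
outview = np.empty(base.shape, base.dtype, order="F")

# The returned result and outview refer to the same memory, so no copy
# back in to the output array is ever needed.
result = pert_add_standin(base, pert, outview)
```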
Extract and Transpose
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The arrays that are being passed to the _pert_add thin wrapper need to be
numpy arrays in Fortran ordering, but they won't come this way from
users. They will come in as either :class:`numpy.ndarray`
or :class:`xarray.DataArray` and will be C-ordered. So, we need to make
sure that a Fortran-ordered :class:`numpy.ndarray` is what is going to
the thin wrapper.
Since this type of operation is repeated many times, a decorator has been
written in *decorators.py* for this purpose. So let's decorate our thin
wrapper with this function.
.. code:: python

    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The :meth:`extract_and_transpose` decorator converts any arguments to
_pert_add that are of type :class:`xarray.DataArray` to
:class:`numpy.ndarray`, then gets the :attr:`numpy.ndarray.T` attribute, and
passes this on to the _pert_add wrapper.
Following the computation, we want the result to be returned back as the
same C-ordered array types that went in as arguments, so this decorator takes
the result of the computation and returns the :attr:`numpy.ndarray.T` from the
Fortran-ordered result. This result gets passed back up the decorator chain.
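The no-copy transpose described above is easy to verify with plain numpy:

```python
import numpy as np

c_arr = np.arange(12, dtype=np.float64).reshape(3, 4)  # C-ordered input

# .T only swaps the shape/stride bookkeeping; the buffer is untouched,
# so the view is Fortran-contiguous and shares memory with the original.
f_view = c_arr.T
```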
Cast Type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Fortran routine expects a specific type of data to operate on,
usually double precision numbers. WRF files typically store
their data as 4-byte floating point precision numbers to save
space. So, the arrays being passed to the extract_and_transpose decorator
need to be converted to the type used in the Fortran routine (e.g. double),
then converted back to the original type (e.g. float) after the computation
is finished. This is handled by the :meth:`cast_type` decorator function
in *decorators.py*.
.. code:: python

    @cast_type(arg_idxs=(0, 1))
    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The :meth:`cast_type` decorator function takes an *arg_idxs* argument to
specify which positional arguments need to be cast to the Fortran algorithm
type, in this case arguments 0 and 1 (base and pert).
Following the computation, the result will be cast back to the original type
for the input arguments (usually float), and passed back up the decorator
chain.
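The round-trip cast can be sketched in plain numpy (with a hypothetical stand-in computation; :meth:`cast_type` automates this bookkeeping):

```python
import numpy as np


def compute_f64(arr):
    """Stand-in for a compiled routine that requires 8-byte floats."""
    return arr + 1.0


orig = np.ones((2, 2), dtype=np.float32)         # WRF-style 4-byte data
result64 = compute_f64(orig.astype(np.float64))  # cast up for the routine
result = result64.astype(orig.dtype)             # cast back afterwards
```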
Leftmost Dimension Indexing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The WRF-Python algorithms written in Fortran are usually written for fixed
size arrays of 1, 2, or 3 dimensions. If your input arrays have more than
the number of dimensions written for the Fortran algorithm, then we need to
do the following:
1. Determine how many leftmost dimensions there are.
2. Create an output array that has a shape that contains the leftmost
dimensions concatenated with the shape of the result from the Fortran
algorithm.
3. Iterate over the leftmost dimensions and send slices of the input arrays
to the Fortran algorithm.
4. Along with the input arrays above, send a slice of the output array to be
filled by the Fortran algorithm.
5. Return the fully calculated output array.
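The five steps above can be sketched as a plain function for the common case of a 2D algorithm whose output shape matches its input slice (a simplification of what the real decorator does):

```python
import numpy as np


def left_iterate_2d(func, var):
    """Apply *func*, written for 2D arrays, across the leftmost
    dimensions of *var*."""
    extra_dims = var.shape[:-2]                             # step 1
    out = np.empty(extra_dims + var.shape[-2:], var.dtype)  # step 2
    for idx in np.ndindex(*extra_dims):                     # step 3
        out[idx] = func(var[idx])                           # step 4
    return out                                              # step 5


data = np.ones((2, 3, 4, 5))  # two leftmost dimensions for a 2D algorithm
doubled = left_iterate_2d(lambda sl: sl * 2.0, data)
```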
The :meth:`left_iteration` is a general purpose decorator contained in
*decorators.py* to handle most leftmost index iteration cases. Some products,
like cape_2d, return multiple products in the output and don't fall in to
this generic category, so those decorators can be found in *specialdec.py*.
Let's look at how this is used below.
.. code:: python

    @left_iteration(2, 2, ref_var_idx=0)
    @cast_type(arg_idxs=(0, 1))
    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The :meth:`wrf.left_iteration` decorator handles many different use cases
with its arguments, but this example is one of the more common cases. The
0th positional argument tells the decorator that the "reference" input
variable should provide at least two dimensions. This should be set to
the same number of dimensions as in the Fortran algorithm, which is two in this
case. Dimensions to the left of these two dimensions are considered "leftmost"
dimensions.
The next positional argument (value of 2) tells the decorator that the
newly created output variable should retain the shape of the reference
variable's right two dimensions. This only applies when your output has fewer
dimensions than the reference variable (e.g. sea level pressure uses
geopotential height for the reference but produces 2D output). Since we are
not reducing the output dimensions, it should be set to the same value as the
previous argument.
The final keyword argument of *ref_var_idx* tells the decorator to use
positional argument 0 (for the _pert_add function) as the reference
variable.
The result of this decorator will be the fully computed output array and it
is passed back up the chain.
Checking Argument Shapes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Before any computations can be performed, the argument shapes are checked to
make sure they are correct sizes. Although f2py will catch problems at the
entry point to the Fortran routine, the error thrown is confusing to
users.
The :meth:`wrf.check_args` decorator is used to verify that the arguments are
the correct size before proceeding.
Here is how it is used:
.. code:: python

    @check_args(0, 2, (2, 2))
    @left_iteration(2, 2, ref_var_idx=0)
    @cast_type(arg_idxs=(0, 1))
    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The 0th positional argument (value of 0), tells :meth:`wrf.check_args` that
the 0th positional argument of _pert_add is the reference variable.
The next positional argument (value of 2) tells :meth:`check_args` that it
should expect at least 2 dimensions for the reference variable. This should
be set to the number of dimensions used in the Fortran algorithm, which is two
in this case.
The final positional argument is a tuple with the number of dimensions that
are expected for each array argument. Again, this should be set to the same
number of dimensions expected in the Fortran routine for each positional
argument. If an argument to your wrapped function is not an array type, you
can use None in the tuple to ignore it, but that is not applicable for this
example.
Putting It All Together
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The previous sections showed how the decorator chain was built up from the
_pert_add function. However, when you actually make a call to _pert_add, the
decorators are called from top to bottom. This means check_args is called
first, then left_iteration, then cast_type, then extract_and_transpose,
and finally _pert_add. After _pert_add is finished, the result is passed
back up the chain and back to the user.
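This top-to-bottom call order is easy to demonstrate with trivial stand-in decorators that just record when they run (the names below only mirror the real decorators for illustration):

```python
calls = []


def record(name):
    """Decorator factory that logs *name* each time the wrapped
    function is invoked."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            calls.append(name)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@record("check_args")      # outermost decorator runs first
@record("left_iteration")
@record("cast_type")
@record("extract_and_transpose")
def _pert_add_sketch():
    calls.append("_pert_add")


_pert_add_sketch()
```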
So now that we have a fully wrapped compiled routine, how might we use this?
Let's make a new :meth:`wrf.getvar` product called 'total_pressure'. A
similar product already exists in WRF-Python, but this is just for
illustration of how to use our newly wrapped Fortran routine.
Make a 'getter' Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, we need a 'getter' routine that extracts the required input variables
from the WRF NetCDF file(s) to perform the computation. In this case, the
variables are P and PB.
The current naming convention in WRF-Python is to prefix the 'getter'
functions with a 'g\_', so let's call this file g_totalpres.py and make a
function get_total_pressure inside of it.
The contents of this file will be:
.. code:: python

    # g_totalpres.py

    from .extension import _pert_add
    from .metadecorators import copy_and_set_metadata
    from .util import extract_vars


    @copy_and_set_metadata(copy_varname="P", name="total_pressure",
                           description="total pressure",
                           units="Pa")
    def get_total_pressure(wrfin, timeidx=0, method="cat", squeeze=True,
                           cache=None, meta=True, _key=None):
        """Return total pressure.

        This function extracts the necessary variables from the NetCDF file
        object in order to perform the calculation.

        Args:

            wrfin (:class:`netCDF4.Dataset`, :class:`Nio.NioFile`, or an \
                iterable): WRF-ARW NetCDF
                data as a :class:`netCDF4.Dataset`, :class:`Nio.NioFile`
                or an iterable sequence of the aforementioned types.

            timeidx (:obj:`int` or :data:`wrf.ALL_TIMES`, optional): The
                desired time index. This value can be a positive integer,
                negative integer, or
                :data:`wrf.ALL_TIMES` (an alias for None) to return
                all times in the file or sequence. The default is 0.

            method (:obj:`str`, optional): The aggregation method to use for
                sequences. Must be either 'cat' or 'join'.
                'cat' combines the data along the Time dimension.
                'join' creates a new dimension for the file index.
                The default is 'cat'.

            squeeze (:obj:`bool`, optional): Set to False to prevent
                dimensions with a size of 1 from being automatically removed
                from the shape of the output. Default is True.

            cache (:obj:`dict`, optional): A dictionary of (varname, ndarray)
                that can be used to supply pre-extracted NetCDF variables to
                the computational routines. It is primarily used for internal
                purposes, but can also be used to improve performance by
                eliminating the need to repeatedly extract the same variables
                used in multiple diagnostics calculations, particularly when
                using large sequences of files.
                Default is None.

            meta (:obj:`bool`, optional): Set to False to disable metadata
                and return :class:`numpy.ndarray` instead of
                :class:`xarray.DataArray`. Default is True.

            _key (:obj:`int`, optional): A caching key. This is used for
                internal purposes only. Default is None.

        Returns:

            :class:`xarray.DataArray` or :class:`numpy.ndarray`: The total
            pressure. If xarray is
            enabled and the *meta* parameter is True, then the result will be
            a :class:`xarray.DataArray` object. Otherwise, the result will be
            a :class:`numpy.ndarray` object with no metadata.

        """
        # Get the base and perturbation pressures
        varnames = ("PB", "P")
        ncvars = extract_vars(wrfin, timeidx, varnames, method, squeeze,
                              cache, meta=False, _key=_key)

        pb = ncvars["PB"]
        p = ncvars["P"]

        total_pres = _pert_add(pb, p)

        return total_pres
This getter function extracts the PB and P (base and perturbation pressure)
variables, calls the _pert_add function, and returns the result. The
arguments *wrfin*, *timeidx*, *method*, *squeeze*, *cache*, *meta*, and
*_key* are used for every getter function and you can read what they are
used for in the docstring.
You should also notice that the getter function is decorated with a
:meth:`copy_and_set_metadata` decorator. This is a general purpose decorator
used for copying metadata from an input variable and applying it to the result.
In this case, the variable to copy is P. The *name* parameter specifies the
:attr:`xarray.DataArray.name` attribute for the variable (the name that
will be written to a NetCDF variable). The *description* is a brief
description for the variable that will be placed in the
:attr:`xarray.DataArray.attrs` dictionary along with the *units* parameter.
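In spirit, this metadata handling amounts to copying the source variable's attributes and overriding the identifying ones. A plain-dict sketch (hypothetical; the real decorator builds an :class:`xarray.DataArray`):

```python
def copy_and_override_meta(src_attrs, name, description, units):
    """Copy metadata from the source variable, then override the
    name, description, and units for the new diagnostic."""
    attrs = dict(src_attrs)  # copied from P in our example
    attrs["description"] = description
    attrs["units"] = units
    return {"name": name, "attrs": attrs}


p_attrs = {"MemoryOrder": "XYZ", "stagger": "", "units": "Pa"}
meta = copy_and_override_meta(p_attrs, "total_pressure",
                              "total pressure", "Pa")
```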
Make Your New Diagnostic Available in :meth:`wrf.getvar`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The final step is to make the new 'total_pressure' diagnostic available from
:meth:`wrf.getvar`. To do this, modifications need to be made to
routines.py.
First, import your new getter routine at the top of routines.py.
.. code:: python

    from __future__ import (absolute_import, division, print_function)

    from .util import (get_iterable, is_standard_wrf_var, extract_vars,
                       viewkeys, get_id)
    from .g_cape import (get_2dcape, get_3dcape, get_cape2d_only,
                         get_cin2d_only, get_lcl, get_lfc, get_3dcape_only,
                         get_3dcin_only)
    .
    .
    .
    from .g_cloudfrac import (get_cloudfrac, get_low_cloudfrac,
                              get_mid_cloudfrac, get_high_cloudfrac)
    from .g_totalpres import get_total_pressure
Next, update _FUNC_MAP to map your diagnostic label ('total_pressure')
to the getter routine (get_total_pressure).
.. code:: python

    _FUNC_MAP = {"cape2d": get_2dcape,
                 "cape3d": get_3dcape,
                 .
                 .
                 .
                 "high_cloudfrac": get_high_cloudfrac,
                 "total_pressure": get_total_pressure
                 }
Finally, update _VALID_KARGS to inform :meth:`wrf.getvar` of any additional
keyword argument names that this routine might use. The :meth:`wrf.getvar`
routine will check keyword arguments and throw an error when it gets any that
are not declared in this map.

In this case, there aren't any additional keyword arguments, so we'll just
supply an empty list.
.. code:: python

    _VALID_KARGS = {"cape2d": ["missing"],
                    "cape3d": ["missing"],
                    "dbz": ["do_variant", "do_liqskin"],
                    "maxdbz": ["do_variant", "do_liqskin"],
                    .
                    .
                    .
                    "high_cloudfrac": ["vert_type", "low_thresh",
                                       "mid_thresh", "high_thresh"],
                    "total_pressure": []
                    }
After this is complete, your new routine is now available for use from
:meth:`wrf.getvar`.
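The dispatch that :meth:`wrf.getvar` performs with these two maps can be sketched as follows (a hypothetical miniature; the real routine also handles files, time indexes, and metadata):

```python
def make_getvar(func_map, valid_kargs):
    """Look up the getter for a diagnostic label and reject keyword
    arguments that are not declared for it."""
    def getvar(varname, **kwargs):
        if varname not in func_map:
            raise ValueError("unknown diagnostic: " + varname)
        bad = set(kwargs) - set(valid_kargs[varname])
        if bad:
            raise ValueError(
                "invalid keyword(s): " + ", ".join(sorted(bad)))
        return func_map[varname](**kwargs)
    return getvar


# Stand-in getter so the sketch runs without WRF data.
_FUNC_MAP = {"total_pressure": lambda: "total pressure result"}
_VALID_KARGS = {"total_pressure": []}

getvar = make_getvar(_FUNC_MAP, _VALID_KARGS)
```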