
Updated documentation

Added sections for performance, what's new, FAQ, the citation. Fixed some typos.
Bill Ladwig committed 7 years ago
commit 497aeb508e
1. doc/source/basic_usage.rst (174)
2. doc/source/citation.rst (26)
3. doc/source/conf.py (17)
4. doc/source/faq.rst (7)
5. doc/source/index.rst (3)
6. doc/source/new.rst (42)
7. doc/source/tutorials/tutorial_03_2018.rst (11)
8. doc/source/user_api/index.rst (9)
9. src/wrf/extension.py (4)
10. src/wrf/g_cloudfrac.py (16)
11. test/mocktest.py (44)
12. test/test_filevars.py (27)
13. test/utests.py (85)

174
doc/source/basic_usage.rst

@@ -4,30 +4,32 @@ How To Use
Introduction
---------------
The API for wrf-python can be summarized as a variable extraction/computation
routine, several interpolation routines, and some plotting utilities.
The API for wrf-python can be summarized as a variable computation/extraction
routine, several interpolation routines, and a few plotting helper utilities.
The API is kept as simple as possible to help minimize the
programming burden on new users, students, and scientists. In the future, we
plan to extend xarray for programmers desiring a more object oriented API,
learning curve for new programmers, students, and scientists. In the future,
we plan to extend xarray for programmers desiring a more object oriented API,
but this remains a work in progress.
The five most commonly used routines can be summarized as:
- **wrf.getvar**: The routine that extracts WRF NetCDF variables or
computes diagnostic variables. This is the routine you will use most often.
- :meth:`wrf.getvar` - Extracts WRF-ARW NetCDF variables and
computes diagnostic variables that WRF does not compute (e.g. storm
relative helicity). This is the routine that you will use most often.
- **wrf.interplevel**: Interpolates a three-dimensional field to a horizontal
plane at a specified level using simple (fast) linear interpolation.
- :meth:`wrf.interplevel` - Interpolates a three-dimensional field to a
horizontal plane at a specified level using simple (fast) linear
interpolation (e.g. 850 hPa temperature).
- **wrf.vertcross**: Interpolates a three-dimensional field to a vertical plane
through a user-specified horizontal line (i.e. a cross section).
- :meth:`wrf.vertcross` - Interpolates a three-dimensional field to a vertical
plane through a user-specified horizontal line (i.e. a cross section).
- **wrf.interpline**: Interpolates a two-dimensional field to a user-specified
line.
- :meth:`wrf.interpline` - Interpolates a two-dimensional field to a
user-specified line.
- **wrf.vinterp**: Interpolates a three-dimensional field to user-specified
'surface' levels (e.g. theta-e levels). This is a smarter, but slower,
version of wrf.interplevel.
- :meth:`wrf.vinterp` - Interpolates a three-dimensional field to
user-specified 'surface' levels (e.g. theta-e levels). This is a smarter,
albeit slower, version of :meth:`wrf.interplevel`.
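As an illustrative sketch (the output file name below is hypothetical), the
first two of these routines are often combined to produce a field on a
pressure level:

.. code-block:: python

    from netCDF4 import Dataset
    from wrf import getvar, interplevel

    # Open a WRF-ARW output file (hypothetical path)
    ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

    # Extract pressure (hPa) and temperature (degC), then interpolate
    # the temperature to the 850 hPa level
    p = getvar(ncfile, "pressure")
    tc = getvar(ncfile, "tc")
    tc_850 = interplevel(tc, p, 850.)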
Basic Usage
----------------
@@ -38,8 +40,8 @@ Computing Diagnostic Variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The primary use for the :meth:`wrf.getvar` function is to return diagnostic
variables that require a calculation, since WRF does not produce these variables
natively. These diagnostics include CAPE, storm relative helicity,
variables that require a calculation, since WRF does not produce these
variables natively. These diagnostics include CAPE, storm relative helicity,
omega, sea level pressure, etc. A table of all available diagnostics can be
found here: :ref:`diagnostic-table`.
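A minimal sketch of computing one of these diagnostics (the file name is
hypothetical):

.. code-block:: python

    from netCDF4 import Dataset
    from wrf import getvar

    ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

    # Sea level pressure is not stored in the file; getvar computes it.
    # Many diagnostics also accept a units argument.
    slp = getvar(ncfile, "slp", units="hPa")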
@@ -383,9 +385,9 @@ Result:
pole_lon=0.0)
Note how the 'Time' dimension was replaced with the 'file' dimension, due to the
numpy's automatic squeezing of the single 'Time' dimension. To maintain the
'Time' dimension, set the *squeeze* parameter to False.
Note how the 'Time' dimension was replaced with the 'file' dimension, due to
numpy's automatic squeezing of the single element 'Time' dimension. To maintain
the 'Time' dimension, set the *squeeze* parameter to False.
.. code-block:: python
@@ -1642,6 +1644,7 @@ Result:
<Ngl.Resources instance at 0x11d318a70>
<Ngl.Resources instance at 0x11d318710>]
.. _using_omp:
Using OpenMP
-------------------------
@@ -1664,7 +1667,7 @@ scheduler should be sufficient.
Verifying that OpenMP is Enabled
*************************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To take advantage of the performance improvements offered by OpenMP, wrf-python
needs to have been compiled with OpenMP features enabled. The example below
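A minimal sketch of such a check using :meth:`wrf.omp_enabled` (added to the
user API in this release); the boolean return value is an assumption based on
the routine's name:

.. code-block:: python

    from wrf import omp_enabled

    # True if wrf-python was built with OpenMP support, False otherwise
    print(omp_enabled())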
@@ -1687,7 +1690,7 @@ Result:
Determining the Number of Available Processors
***************************************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The example below shows how you can get the maximum number of processors
that are available on your system.
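A sketch of how this might look, assuming the :meth:`wrf.omp_get_num_procs`
wrapper mirrors the OpenMP runtime routine of the same name:

.. code-block:: python

    from wrf import omp_get_num_procs

    # Number of processors available to the OpenMP runtime
    print(omp_get_num_procs())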
@@ -1709,19 +1712,19 @@ Result:
Specifying the Number of Threads
*************************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To enable multicore support via OpenMP, specifying the maximum number
of OpenMP threads (i.e. CPU cores) is the only step that you need to take.
In the example below, :meth:`wrf.omp_set_num_threads` is used to set the
maximum number of threads to use, and :meth:`wrf.omp_get_max_threads` is
get to get (and print) the maximum number of threads used.
maximum number of threads to use, and :meth:`wrf.omp_get_max_threads` is used
to retrieve (and print) the maximum number of threads used.
.. note::
Although there is an OpenMP library named :meth:`wrf.omp_get_num_threads`,
this method will always returns 1 when called from the sequential part of
Although there is an OpenMP routine named :meth:`wrf.omp_get_num_threads`,
this routine will always return 1 when called from the sequential part of
the program. Use :meth:`wrf.omp_get_max_threads` to return the value set by
:meth:`wrf.omp_set_num_threads`.
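A condensed sketch of the pattern described above:

.. code-block:: python

    from wrf import omp_set_num_threads, omp_get_max_threads

    # Request up to 4 OpenMP threads for the computational routines
    omp_set_num_threads(4)

    # Confirm the setting (prints 4)
    print(omp_get_max_threads())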
@@ -1743,7 +1746,7 @@ Result:
4
Setting a Different Scheduler Type
**************************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When an OpenMP directive is encountered in the Fortran code, a scheduler is
used to determine how the work is divided among the threads. All of the
@@ -1811,40 +1814,27 @@ value of 1, which is different than the 0 that was supplied to the
:meth:`wrf.omp_set_schedule` routine. This is because the 0 tells OpenMP to use
its own default value for the scheduler, which is 1 for this type of scheduler.
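A brief sketch consistent with the description above, assuming the schedule
constants (e.g. OMP_SCHED_GUIDED) are importable directly from wrf, as they
are elsewhere in this commit:

.. code-block:: python

    from wrf import OMP_SCHED_GUIDED, omp_set_schedule, omp_get_schedule

    # A modifier (chunk size) of 0 tells OpenMP to use its own default
    omp_set_schedule(OMP_SCHED_GUIDED, 0)

    # Returns the schedule kind and the modifier actually in use
    print(omp_get_schedule())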
Performance Note
******************
If you have enabled multicore support with OpenMP, you may have noticed that
the routines do not scale linearly with the number of CPU cores added. One main
reason is that the computational routines are already fairly efficient and
vectorize well, so for many grid sizes, the time it takes to extract the
variables is on par with the time required to compute the diagnostic with a
single CPU core. Adding more CPU cores will decrease the time needed to do the
computation, but total performance will still be limited by the time it takes
to extract the variables from the NetCDF file. For local testing, diminishing
returns were seen after 4 CPU cores, but this will largely depend on the
hardware used and grid size for your WRF run.
.. _performance:
Performance Tips
--------------------
Memory Issues and :data:`wrf.ALL_TIMES`
******************************************
Memory Issues with :data:`wrf.ALL_TIMES`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The use of :data:`wrf.ALL_TIMES` for the timeidx to :meth:`wrf.getvar` is
convenient for computing diagnostic variables across multiple files/times, but
there is something that users should be aware of. When :data:`wrf.ALL_TIMES` is
set as the *timeidx* argument, all arrays used in the computation are extracted
for all times before the computation is started. This can cause serious memory
issues on smaller hardware systems like laptops.
The use of :data:`wrf.ALL_TIMES` for the *timeidx* parameter to
:meth:`wrf.getvar` is convenient for computing diagnostic variables across
multiple files/times, but there is something that users should be aware of.
When :data:`wrf.ALL_TIMES` is set as the *timeidx* argument, all arrays used
in the computation are extracted for all times before the computation
is started. This can cause serious memory issues on smaller hardware systems
like laptops.
In this example, the user wants to use a data set that is 289 x 39 x 300 x 300
and compute z for the entire data set. The user is using a laptop with
16 GB of memory.
8 GB of memory.
.. code:: python
.. code-block:: python
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES
@@ -1858,32 +1848,40 @@ In wrf-python, all of the computational routines use 8-byte REAL variables so
that both the 4-byte and 8-byte version of WRF output can be used. The
calculation for z extracts three variables (P, PHB, and HGT) and returns a
fourth array (RESULT). The RESULT will get cut in half to 4-byte REALs
after the computation, but need 8-byte REAL when the result is computed.
after the computation, but needs an 8-byte REAL when the result is computed.
Let's look at the approximate amount memory needed:
P: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)
PHB: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)
HGT: 289 x 300 x 300 x 8 = 208,080,000 (~208 MB)
RESULT: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)
**P**: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)
**PHB**: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)
**HGT**: 289 x 300 x 300 x 8 = 208,080,000 (~208 MB)
**RESULT**: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)
Yikes! So, in order to do this calculation using :data:`wrf.ALL_TIMES` as
the timeidx, over 24.2 GB are needed for this one calculation. When the laptop
runs out of memory, it begins using the hard drive for swap memory, which runs
hundreds of times slower than real memory.
the *timeidx*, over 24.2 GB are needed for this one calculation. When the
laptop runs out of memory, it begins using the hard drive for swap memory,
which runs hundreds of times slower than real memory.
To fix this situation, it is better to allocate the output array yourself and
run the calculation for each time step in a loop. The required memory
requirements change to:
run the calculation for each time step in a loop
("loop-and-fill"). The required memory requirements change to:
(Only need to store the result in a 4-byte REAL)
FINAL_RESULT: 289 x 39 x 300 x 300 x 4 = 4,057560,000 bytes (~4 GB)
(Note: only need to store the result in a 4-byte REAL)
(The numbers below are for each loop iteration)
P: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)
PHB: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)
HGT: 300 x 300 x 8 = 720,000 bytes (720 KB)
RESULT: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)
**FINAL_RESULT**: 289 x 39 x 300 x 300 x 4 = 4,057,560,000 bytes (~4 GB)
(Note: the numbers below are for each loop iteration)
**P**: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)
**PHB**: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)
**HGT**: 300 x 300 x 8 = 720,000 bytes (720 KB)
**RESULT**: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)
Since the memory for the computation is deleted after each
loop iteration, the total memory usage drops to approximately 4.1 GB.
@@ -1892,9 +1890,9 @@ The moral of the story is that you need to make sure that your system has
enough memory to extract everything it needs up front if you want to use
:data:`wrf.ALL_TIMES`, otherwise it is better to "loop-and-fill" yourself.
Here is an example of the loop-and-fill technique:
Here is an example of the "loop-and-fill" technique:
.. code:: python
.. code-block:: python
from __future__ import print_function, division
@@ -1904,7 +1902,7 @@ Here is an example of the loop-and-fill technique:
filename_list = ["/path/to/file1", "/path/to/file2",...]
# Result shape (hardcoded for this example)
# Result shape (hard coded for this example)
result_shape = (289, 39, 300, 300)
# Only need 4-byte floats
@@ -1913,7 +1911,7 @@ Here is an example of the loop-and-fill technique:
# Modify this number if using more than 1 time per file
times_per_file = 1
for timeidx in xrange(result_shape[0]):
for timeidx in range(result_shape[0]):
# Compute the file index and the time index inside the file
fileidx = timeidx // times_per_file
file_timeidx = timeidx % times_per_file
@@ -1926,7 +1924,7 @@ Here is an example of the loop-and-fill technique:
The *cache* Argument for :meth:`wrf.getvar`
*********************************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have read through the documentation, you may have noticed that the
:meth:`wrf.getvar` routine contains a *cache* argument. What is this for?
@@ -1940,10 +1938,10 @@ in a cache (dictionary) and passed on to the computational function.
What isn't widely known is that this cache argument can also be supplied by
end users wishing to speed up their application. This can be useful in
situations where numerous calculations are being performed on the same
data set. For many algorithms, the cost to extract the arrays from the
NetCDF file is on par with the time to perform the calculation. If you are
computing numerous diagnostics, extracting the variables up front allows you
to only pay this extraction penalty once, rather than inside of each call
data set. For many algorithms, the time needed to extract the arrays from the
NetCDF file is on par with the time needed to perform the calculation. If you
are computing numerous diagnostics, extracting the variables up front allows
you to only pay this extraction penalty once, rather than inside of each call
to :meth:`wrf.getvar`.
The cache is nothing more than a dictionary where each key is the variable
@@ -1961,12 +1959,12 @@ sequence of variables.
Some common variables that you can use to create an effective cache are: P, PB,
PH, PHB, T, QVAPOR, HGT, PSFC, U, V, W.
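A minimal sketch of building and reusing such a cache with
:meth:`wrf.extract_vars` (the file name is hypothetical):

.. code-block:: python

    from netCDF4 import Dataset
    from wrf import getvar, extract_vars

    ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

    # Extract the commonly used variables once, up front
    my_cache = extract_vars(ncfile, 0, ("P", "PB", "PH", "PHB", "T",
                                        "QVAPOR", "HGT", "PSFC",
                                        "U", "V", "W"))

    # Each call reuses the cached arrays rather than re-reading the file
    slp = getvar(ncfile, "slp", timeidx=0, cache=my_cache)
    td2 = getvar(ncfile, "td2", timeidx=0, cache=my_cache)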
Below is an example showing the same computations done with and without the
Below is an example showing the same computation done with and without the
cache. The execution time is printed. The hardware used is a 2.8 GHz Intel Core
i7, which contains 4 CPU cores with 2 hyper threads (8 total threads). This
will be interpreted as 8 CPUs for OpenMP.
.. code:: python
.. code-block:: python
from __future__ import print_function
@@ -2018,21 +2016,21 @@ will be interpreted as 8 CPUs for OpenMP.
Result:
.. code:: none
.. code-block:: none
Time taken to build cache: 0.28154706955 s
Time taken without variable cache: 11.0905270576 s
Time taken with variable cache: 8.25931215286 s
The cache decreased computation time by: 25.5282268378 %
By removing the repeated extraction of common variables in the getvar routine,
for the single threaded case, the computation time has been reduced by
25.5% in the particular example.
By removing the repeated extraction of common variables in the
:meth:`wrf.getvar` routine, for the single threaded case, the computation
time has been reduced by 25.5% in this particular example.
Things get more interesting when OpenMP is turned on, and set to use the
Things get more interesting when OpenMP is turned on and set to use the
maximum number of processors (in this case 8 threads are used).
.. code:: python
.. code-block:: python
from __future__ import print_function
@@ -2086,7 +2084,7 @@ maximum number of processors (in this case 8 threads are used).
Result:
.. code:: none
.. code-block:: none
Time taken to build cache: 0.2700548172 s
Time taken without variable cache: 6.02652812004 s

26
doc/source/citation.rst

@@ -0,0 +1,26 @@
.. _citation:
Citation
=================
WRF-Python has a Digital Object Identifier (DOI), which is a persistent
identifier for web-based resources. The wrf-python DOI, when used in URL form,
https://doi.org/10.5065/D6W094P1, provides a persistent link to the wrf-python
Github page. The benefit of DOIs is that they are widely accepted by academic
publishers as citable locators for scholarly objects.
If you author a paper that involves data analysis with wrf-python, or
visualizations created with wrf-python, we would like to ask you to please
cite wrf-python. This helps us better understand the impact of the software on
the scientific community, which in turn helps us maintain support for the
effort.
You can cite wrf-python using the following citation:
.. code-block:: none
Ladwig, W. (2017). wrf-python (Version x.x.x) [Software]. Boulder, Colorado: UCAR/NCAR. https://doi.org/10.5065/D6W094P1
.. note::
The version number x.x.x should be set to the version of wrf-python that you are using.

17
doc/source/conf.py

@@ -23,14 +23,24 @@ except ImportError:
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return Mock()
return MagicMock()
MOCK_MODULES = ["numpy", "numpy.ma", "xarray", "cartopy",
"pandas", "matplotlib", "netCDF4", "mpl_toolkits.basemap",
"wrf._wrffortran"]
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
consts = {"DEFAULT_FILL" : 9.9692099683868690E36}
consts = {"DEFAULT_FILL" : 9.9692099683868690E36,
"DEFAULT_FILL_INT8" : -127,
"DEFAULT_FILL_INT16" : -32767,
"DEFAULT_FILL_INT32" : -2147483647,
"DEFAULT_FILL_INT64" : -9223372036854775806,
"DEFAULT_FILL_FLOAT" : 9.9692099683868690E36,
"DEFAULT_FILL_DOUBLE" : 9.9692099683868690E36,
"fomp_sched_static" : 1,
"fomp_sched_dynamic" : 2,
"fomp_sched_guided" : 3,
"fomp_sched_auto" : 4}
class MockWrfConstants(object):
def __init__(self):
@@ -40,6 +50,8 @@ def mock_asscalar(val):
return float(val)
sys.modules["wrf._wrffortran"].wrf_constants = MockWrfConstants()
sys.modules["wrf._wrffortran"].omp_constants = MockWrfConstants()
sys.modules["numpy"].asscalar = mock_asscalar
@@ -106,6 +118,7 @@ author = u'Bill Ladwig'
# built documents.
#
# The short X.Y version.
import wrf
version = wrf.__version__
# The full version, including alpha/beta/rc tags.

7
doc/source/faq.rst

@@ -61,6 +61,13 @@ In a future release of wrf-python, direct support for Dataset objects will
be added and this will no longer be necessary.
Why is wrf-python taking hours to run?
---------------------------------------------
The most likely culprit is insufficient memory for the calculation you are
trying to perform.
See :ref:`performance` for more information.

3
doc/source/index.rst

@@ -45,8 +45,9 @@ Documentation
./api
./faq
./support
./citation
./license
./tutorial_03_2018
./tutorial
Indices and tables

42
doc/source/new.rst

@@ -4,6 +4,48 @@ What's New
Releases
-------------
v1.1.0
^^^^^^^^^^^^^^
- Release 1.1.0
- Computational routines now support multiple cores using OpenMP. See
:ref:`using_omp` for details on how to use this new feature.
- The CAPE routines should be noticeably faster, even in the single threaded
case (thank you supreethms1809!).
- :meth:`wrf.getvar` now works correctly with non-gridded NetCDF variables.
- The cloud fraction diagnostic has changed:
- Users can now select their own cloud threshold levels, and can choose
between a vertical coordinate defined as height (AGL), height (MSL), or
pressure.
- The default vertical coordinate type has been changed to be height (AGL).
This ensures that clouds appear over mountainous regions. If you need
the old behavior, set the *vert_type* argument to 'pressure'.
- Fixed a bug involving the cloud threshold search algorithm, where if the
surface was higher than the threshold for a cloud level, the algorithm
would use whatever was there before (uninitialized variable bug). This
caused some interesting visualization issues when plotted. Now, whenever
the surface is above a cloud level threshold, a fill value is used to
indicate that data is unavailable for that location.
- The cartopy object for LambertConformal should now work correctly in the
southern hemisphere.
- Fixed a bug with the PolarStereographic projection missing a geobounds
argument (thank you hanschen!).
- Renamed the modules containing the 'get_product' routines used
by :meth:`wrf.getvar` to avoid naming conflicts with the raw computational
routine names. Users should be using :meth:`wrf.getvar` instead of these
routines, but for those that imported the 'get_product' routines
directly, you will need to modify your code.
- Fixed a uniqueness issue with the internal coordinate cache that was causing
crashes when input data is changed to a different file in a jupyter notebook
cell.
- Added code to better support building wheels on Windows (thank you letmaik!)
- Improved support for scipy.io.netcdf objects.
- Added a new 'zstag' diagnostic that returns the height values for the
vertically staggered grid.
- A DOI is now available for wrf-python. Please cite wrf-python if you are
using it for your research. (See :ref:`citation`)
v1.0.5
^^^^^^^^^^^^^^

11
doc/source/tutorials/tutorial_03_2018.rst

@@ -1,5 +1,5 @@
WRF Tutorial 2018
=====================
WRF-Python Tutorial 2018
=========================
NCAR will be providing a four hour tutorial for wrf-python on Wednesday, March
7, 2018. The tutorial is free, but seating is limited to only 16 students, so
@@ -15,9 +15,9 @@ Boulder, Colorado
Overview
--------------
This tutorial provides an introduction to wrf-python. The tutorial is friendly
for new users of wrf-python, but this is not an introduction to the Python
programming language (see Prerequisites below). Due to limited seating, if you
This tutorial provides an introduction to wrf-python. The tutorial is beginner
friendly for new users of wrf-python, but this is not an introduction to the Python
programming language (see :ref:`prereq`). Due to limited seating, if you
do not have any previous experience with Python, please do not register
for this tutorial.
@@ -47,6 +47,7 @@ Registration consists of a brief survey, which will help give the instructors
a brief overview of your background and will help tailor the tutorial to
your expectations.
.. _prereq:
Prerequisites
---------------

9
doc/source/user_api/index.rst

@@ -257,6 +257,7 @@ in one place.
wrf.disable_pyngl
wrf.set_cache_size
wrf.get_cache_size
wrf.omp_enabled
Miscellaneous Routines
@@ -322,7 +323,7 @@ use a single point for an (x, y) or (lat, lon) location.
wrf.CoordPair
CoordPair Methods
~~~~~~~~~~~~~~~~~~~~~~~
************************
.. autosummary::
:nosignatures:
@@ -349,7 +350,7 @@ The classes below are used to hold the projection information in the
'projection' entry within a :attr:`xarray.DataArray.attrs` attribute.
Projection Base Class
~~~~~~~~~~~~~~~~~~~~~~~~
******************************
The base class for all map projection types.
@@ -360,7 +361,7 @@ The base class for all map projection types.
wrf.WrfProj
Projection Base Class Methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
***********************************
The class methods for all projection types.
@@ -378,7 +379,7 @@ The class methods for all projection types.
Projection Subclasses
~~~~~~~~~~~~~~~~~~~~~~~~
*****************************
See :class:`wrf.WrfProj` for methods and attributes.

4
src/wrf/extension.py

@@ -5,7 +5,7 @@ import numpy as np
from .constants import Constants, default_fill
from ._wrffortran import (dcomputetk, dinterp3dz, dinterp2dxy, dinterp1d,
from wrf._wrffortran import (dcomputetk, dinterp3dz, dinterp2dxy, dinterp1d,
dcomputeseaprs, dfilter2d, dcomputerh, dcomputeuvmet,
dcomputetd, dcapecalc2d, dcapecalc3d, dcloudfrac2,
wrfcttcalc, calcdbz, dcalrelhl, dcalcuh, dcomputepv,
@@ -40,7 +40,7 @@ from .specialdec import (uvmet_left_iter, cape_left_iter,
class DiagnosticError(Exception):
"""Raised when an error occurs in a diagnostic routine."""
def __init__(self, message=None):
"""Initialize a :class:`wrf.DiagnosticError` objection.
"""Initialize a :class:`wrf.DiagnosticError` object.
Args:

16
src/wrf/g_cloudfrac.py

@@ -32,7 +32,7 @@ def get_cloudfrac(wrfin, timeidx=0, method="cat", squeeze=True,
2000 m <= mid_cloud < 6000 m
6000 m <= high_cloud
For 'pres', the default cloud levels are defined as:
For 'pressure', the default cloud levels are defined as:
97000 Pa <= low_cloud < 80000 Pa
80000 Pa <= mid_cloud < 45000 Pa
@@ -40,16 +40,16 @@ def get_cloudfrac(wrfin, timeidx=0, method="cat", squeeze=True,
Note that the default low cloud levels are chosen to
exclude clouds near the surface (fog). If you want fog included, set
*low_thresh* to ~99500 Pa if *vert_type* is set to 'pres', or 15 m if using
'height_msl' or 'height_agl'. Keep in mind that the lowest mass grid points
are slightly above the ground, and in order to find clouds, the
*low_thresh* to ~99500 Pa if *vert_type* is set to 'pressure', or 15 m if
using 'height_msl' or 'height_agl'. Keep in mind that the lowest mass grid
points are slightly above the ground, and in order to find clouds, the
*low_thresh* needs to be set to values that are slightly greater than
(less than) the lowest height (pressure) values.
When using 'pres' or 'height_agl' for *vert_type*, there is a possibility
that the lowest WRF level will be higher than the low_cloud or mid_cloud
threshold, particularly for mountainous regions. When this happens, a
fill value will be used in the output.
When using 'pressure' or 'height_agl' for *vert_type*, there is a
possibility that the lowest WRF level will be higher than the low_cloud or
mid_cloud threshold, particularly for mountainous regions. When this
happens, a fill value will be used in the output.
This functions extracts the necessary variables from the NetCDF file
object in order to perform the calculation.
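A hedged sketch of the usage implied by this docstring, via :meth:`wrf.getvar`
(the file name is hypothetical, and the keyword names follow the parameters
described above):

.. code-block:: python

    from netCDF4 import Dataset
    from wrf import getvar

    ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

    # Default behavior: cloud thresholds defined as height (AGL)
    cfrac = getvar(ncfile, "cloudfrac")

    # Pressure-based thresholds, with the low threshold raised toward the
    # surface so that fog is included
    cfrac_pres = getvar(ncfile, "cloudfrac", vert_type="pressure",
                        low_thresh=99500.)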

44
test/mocktest.py

@@ -0,0 +1,44 @@
import sys
import os
try:
from unittest.mock import MagicMock
except ImportError:
from mock import Mock as MagicMock
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return Mock()
MOCK_MODULES = ["numpy", "numpy.ma", "xarray", "cartopy",
"pandas", "matplotlib", "netCDF4", "mpl_toolkits.basemap",
"wrf._wrffortran"]
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
consts = {"DEFAULT_FILL" : 9.9692099683868690E36,
"DEFAULT_FILL_INT8" : -127,
"DEFAULT_FILL_INT16" : -32767,
"DEFAULT_FILL_INT32" : -2147483647,
"DEFAULT_FILL_INT64" : -9223372036854775806,
"DEFAULT_FILL_FLOAT" : 9.9692099683868690E36,
"DEFAULT_FILL_DOUBLE" : 9.9692099683868690E36,
"fomp_sched_static" : 1,
"fomp_sched_dynamic" : 2,
"fomp_sched_guided" : 3,
"fomp_sched_auto" : 4}
class MockWrfConstants(object):
def __init__(self):
self.__dict__ = consts
def mock_asscalar(val):
return float(val)
sys.modules["wrf._wrffortran"].wrf_constants = MockWrfConstants()
sys.modules["wrf._wrffortran"].omp_constants = MockWrfConstants()
sys.modules["numpy"].asscalar = mock_asscalar
import wrf
print (wrf.get_coord_pairs.__doc__)

27
test/test_filevars.py

@@ -23,21 +23,46 @@ class WRFFileVarsTest(ut.TestCase):
def make_test(ncfiles, varname):
def test(self):
#import time
#very_start = time.time()
#start = time.time()
t1 = getvar(ncfiles, varname, 0)
#end = time.time()
#print ("t1: ", start-end)
#start = time.time()
t2 = getvar(ncfiles, varname, 0, meta=False)
#end = time.time()
#print ("t2: ", start-end)
#start = time.time()
t3 = getvar(ncfiles, varname, ALL_TIMES)
#end = time.time()
#print ("t3: ", start-end)
#start = time.time()
t4 = getvar(ncfiles, varname, ALL_TIMES, meta=False)
#end = time.time()
#print ("t4: ", start-end)
#start = time.time()
t5 = getvar(ncfiles, varname, ALL_TIMES, method="join")
#end = time.time()
#print ("t5: ", start-end)
#start = time.time()
t6 = getvar(ncfiles, varname, ALL_TIMES, method="join", meta=False)
#end = time.time()
#print ("t6: ", start-end)
#start = time.time()
#print ("Total Time: ", (end-start))
return test
if __name__ == "__main__":
from netCDF4 import Dataset
ncfiles = [Dataset(x) for x in TEST_FILES]
#import scipy.io
#ncfiles = [scipy.io.netcdf.netcdf_file(x) for x in TEST_FILES]
file_vars = ncfiles[0].variables.keys()
ignore_vars = []

85
test/utests.py

@@ -93,45 +93,48 @@ def make_test(varname, wrf_in, referent, multi=False, repeat=3, pynio=False):
multiproduct = varname in ("uvmet", "uvmet10", "cape_2d", "cape_3d",
"cfrac")
# These varnames don't have NCL functions to test against
ignore_referent = ("zstag", "geopt_stag")
if not multi:
ref_vals = refnc.variables[varname][:]
else:
data = refnc.variables[varname][:]
if (varname != "uvmet" and varname != "uvmet10"
and varname != "cape_2d" and varname != "cape_3d"):
new_dims = [repeat] + [x for x in data.shape]
elif (varname == "uvmet" or varname == "uvmet10"
or varname == "cape_3d"):
new_dims = [2] + [repeat] + [x for x in data.shape[1:]]
elif (varname == "cape_2d"):
new_dims = [4] + [repeat] + [x for x in data.shape[1:]]
elif (varname == "cfrac"):
new_dims = [3] + [repeat] + [x for x in data.shape[1:]]
masked=False
if (isinstance(data, ma.core.MaskedArray)):
masked=True
if not masked:
ref_vals = np.zeros(new_dims, data.dtype)
if varname not in ignore_referent:
if not multi:
ref_vals = refnc.variables[varname][:]
else:
ref_vals = ma.asarray(np.zeros(new_dims, data.dtype))
data = refnc.variables[varname][:]
if (varname != "uvmet" and varname != "uvmet10"
and varname != "cape_2d" and varname != "cape_3d"):
new_dims = [repeat] + [x for x in data.shape]
elif (varname == "uvmet" or varname == "uvmet10"
or varname == "cape_3d"):
new_dims = [2] + [repeat] + [x for x in data.shape[1:]]
elif (varname == "cape_2d"):
new_dims = [4] + [repeat] + [x for x in data.shape[1:]]
elif (varname == "cfrac"):
new_dims = [3] + [repeat] + [x for x in data.shape[1:]]
masked=False
if (isinstance(data, ma.core.MaskedArray)):
masked=True
if not masked:
ref_vals = np.zeros(new_dims, data.dtype)
else:
ref_vals = ma.asarray(np.zeros(new_dims, data.dtype))
for i in xrange(repeat):
if not multiproduct:
ref_vals[i,:] = data[:]
for i in xrange(repeat):
if not multiproduct:
ref_vals[i,:] = data[:]
if masked:
ref_vals.mask[i,:] = data.mask[:]
if masked:
ref_vals.mask[i,:] = data.mask[:]
else:
for prod in xrange(ref_vals.shape[0]):
ref_vals[prod,i,:] = data[prod,:]
else:
for prod in xrange(ref_vals.shape[0]):
ref_vals[prod,i,:] = data[prod,:]
if masked:
ref_vals.mask[prod,i,:] = data.mask[prod,:]
if masked:
ref_vals.mask[prod,i,:] = data.mask[prod,:]
if (varname == "tc"):
my_vals = getvar(in_wrfnc, "temp", timeidx=timeidx, units="c")
@@ -176,6 +179,10 @@ def make_test(varname, wrf_in, referent, multi=False, repeat=3, pynio=False):
#print np.amax(np.abs(to_np(cape_3d[0,:]) - ref_vals[0,:]))
nt.assert_allclose(to_np(cape_3d), ref_vals, tol, atol)
elif (varname == "zstag" or varname == "geopt_stag"):
v = getvar(in_wrfnc, varname, timeidx=timeidx)
# For now, only make sure it runs without crashing since no NCL
# to compare with yet.
else:
my_vals = getvar(in_wrfnc, varname, timeidx=timeidx)
tol = 2/100.
@@ -274,6 +281,8 @@ def make_interp_test(varname, wrf_in, referent, multi=False,
p = getvar(in_wrfnc, "pressure", timeidx=timeidx)
pivot_point = CoordPair(hts.shape[-1] / 2, hts.shape[-2] / 2)
#ht_cross = vertcross(to_np(hts), p, pivot_point=pivot_point,
# angle=90., latlon=True)
ht_cross = vertcross(hts, p, pivot_point=pivot_point, angle=90.)
# Note: Until the bug is fixed in NCL, the wrf-python cross
@@ -304,6 +313,8 @@ def make_interp_test(varname, wrf_in, referent, multi=False,
t2 = getvar(in_wrfnc, "T2", timeidx=timeidx)
pivot_point = CoordPair(t2.shape[-1] / 2, t2.shape[-2] / 2)
#t2_line1 = interpline(to_np(t2), pivot_point=pivot_point,
# angle=90.0, latlon=True)
t2_line1 = interpline(t2, pivot_point=pivot_point, angle=90.0)
# Note: After NCL is fixed, remove the slice.
@@ -619,9 +630,9 @@ class WRFLatLonTest(ut.TestCase):
if __name__ == "__main__":
from wrf import (omp_set_num_threads, omp_set_schedule, omp_get_schedule,
omp_set_dynamic, Constants)
omp_set_num_threads(6)
omp_set_schedule(Constants.OMP_SCHED_STATIC, 0)
omp_set_dynamic, OMP_SCHED_STATIC)
omp_set_num_threads(8)
omp_set_schedule(OMP_SCHED_STATIC, 0)
omp_set_dynamic(False)
ignore_vars = [] # Not testable yet
@@ -629,7 +640,7 @@ if __name__ == "__main__":
"geopt", "helicity", "lat", "lon", "omg", "p", "pressure",
"pvo", "pw", "rh2", "rh", "slp", "ter", "td2", "td", "tc",
"theta", "tk", "tv", "twb", "updraft_helicity", "ua", "va",
"wa", "uvmet10", "uvmet", "z", "cfrac"]
"wa", "uvmet10", "uvmet", "z", "cfrac", "zstag", "geopt_stag"]
interp_methods = ["interplevel", "vertcross", "interpline", "vinterp"]
latlon_tests = ["xy", "ll"]
