
Merge branch 'release/1.3.2.1'

main 1.3.2.1
Bill Ladwig 6 years ago
commit 80eb5b973c
  1. 23
      .circleci/config.yml
  2. 182
      CODE_OF_CONDUCT.md
  3. 490
      doc/source/contrib.rst
  4. 2
      doc/source/index.rst
  5. 692
      doc/source/internals.rst
  6. 1
      doc/source/support.rst
  7. 3
      fortran/contrib/readme
  8. 2
      fortran/rip_cape.f90
  9. 20
      linux.yml
  10. 20
      osx.yml
  11. 0
      src/wrf/contrib.py
  12. 2
      src/wrf/interp.py
  13. 10
      test/comp_utest.py
  14. 1
      test/contrib/readme
  15. 141
      test/ipynb/loop_and_fill.ipynb
  16. 259
      test/ipynb/reduce_files.ipynb
  17. 4
      test/utests.py
  18. 22
      testenv2.yml
  19. 22
      testenv3.yml
  20. 19
      win64.yml

23
.circleci/config.yml

@@ -4,6 +4,23 @@
#
version: 2
jobs:
  lint:
    docker:
      - image: circleci/python:3.6.1
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: install pycodestyle
          command: |
            sudo pip install pycodestyle
      - run:
          name: run pycodestyle
          command: |
            pycodestyle src test
  build:
    docker:
      # specify the version you desire here
@@ -74,3 +91,9 @@ jobs:
- store_artifacts:
path: test-reports
destination: test-reports
workflows:
  version: 2
  build_and_lint:
    jobs:
      - build
      - lint

182
CODE_OF_CONDUCT.md

@@ -0,0 +1,182 @@
# Contributor Code of Conduct
## Related Code of Conduct
Participant Code of Conduct
[https://www2.fin.ucar.edu/ethics/participant-code-conduct](https://www2.fin.ucar.edu/ethics/participant-code-conduct)
## Our Pledge
We, as contributors and maintainers (participants), of WRF-Python pledge to
make participation in our software project and community a safe, productive,
welcoming and inclusive experience for everyone. All participants are required
to abide by this Code of Conduct. This includes respectful treatment of
everyone regardless of age, body size, disability, ethnicity, gender identity
or expression, level of experience, nationality, political affiliation,
veteran status, pregnancy, genetic information, physical appearance, race,
religion, or sexual orientation, as well as any other characteristic protected
under applicable US federal or state law.
## Our Standards
Examples of behaviors that contribute to a positive environment include:
* Using welcoming and inclusive language
* Being respectful when offering, and gracefully accepting, constructive criticism
* Acknowledging the contributions of others
* Focusing on what is best for the community
* Showing empathy towards other community members
* Treating everyone with respect and consideration, valuing a diversity of
views and opinions
* Communicating openly with respect for others, critiquing ideas rather than
individuals
Examples of unacceptable behavior include, but are not limited to:
* Harassment, intimidation, or discrimination in any form
* Personal attacks directed toward other participants
* Unwelcome sexual attention or advances
* Inappropriate, negative, derogatory comments and/or attacks on personal
beliefs
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Refusing to use the pronouns that someone requests
* Alarming, intimidating, threatening, or hostile comments or conduct
* Physical or verbal abuse by anyone to anyone, including but not limited to a
participant, member of the public, guest, member of any institution or
sponsor
* Comments related to characteristics given in the pledge at the top
* Inappropriate use of nudity and/or sexual images
* Threatening or stalking other participants
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Scope
This Code of Conduct applies to all spaces managed by the Project whether it
be online or face-to-face. This includes project code, code repository,
associated web pages, documentation, mailing lists, project websites and
wiki pages, issue tracker, meetings, telecons, events, project social media
accounts, and any other forums created by the project team which the community
uses for communication. In addition, violations of this Code of Conduct
outside these spaces may affect a person's ability to participate within them.
Representation of a project may be further defined and clarified by project
maintainers.
## Community Responsibilities
Everyone in the community is empowered to respond to people who are showing
unacceptable behavior. They can talk to them privately or publicly. Anyone
requested to stop unacceptable behavior is expected to comply immediately.
If the behavior continues, concerns may be brought to the project
administrators or to any other party listed in the Reporting section below.
## Project Administrator Responsibilities
Project Administrators are responsible for clarifying the standards of
acceptable behavior and are encouraged to model appropriate behavior and
provide support when people in the community point out inappropriate behavior.
Project administrator(s) are normally the ones that would be tasked to carry
out the actions in the Consequences section below.
Project Administrators are also expected to keep this Code of Conduct updated
with the main one housed at UCAR as listed below in the Attribution section.
## Reporting
Instances of unacceptable behavior can be brought to the attention of the
project administrator(s) who may take any action as outlined in the
Consequences section below. However, making a report to a project
administrator is not considered an 'official report' to UCAR.
Instances of unacceptable behavior may also be reported directly to UCAR via
UCAR's Harassment Reporting and Complaint Procedure at [https://www2.fin.ucar.edu/procedures/hr/harassment-reporting-and-complaint-procedure](https://www2.fin.ucar.edu/procedures/hr/harassment-reporting-and-complaint-procedure),
or anonymously through UCAR's EthicsPoint Hotline at [https://www2.fin.ucar.edu/ethics/anonymous-reporting](https://www2.fin.ucar.edu/ethics/anonymous-reporting).
Complaints received by UCAR will be handled pursuant to the procedures
outlined in UCAR's Harassment Reporting and Complaint Procedure. Complaints
to UCAR will be held as confidential as practicable under the circumstances,
and retaliation against a person who initiates a complaint or an inquiry about
inappropriate behavior will not be tolerated.
Any Contributor can use these reporting methods even if they are not directly
affiliated with UCAR. The Frequently Asked Questions (FAQ) page for reporting
is here: [https://www2.fin.ucar.edu/procedures/hr/reporting-faqs](https://www2.fin.ucar.edu/procedures/hr/reporting-faqs).
## Consequences
Upon receipt of a complaint, the project administrator(s) may take any action
deemed necessary and appropriate under the circumstances. Such action can
include things such as: removing, editing, or rejecting comments, commits,
code, wiki edits, email, issues, and other contributions that are not aligned
to this Code of Conduct, or banning temporarily or permanently any contributor
for other behaviors that are deemed inappropriate, threatening, offensive, or
harmful. Project Administrators also have the right to report violations to
UCAR HR and/or UCAR's Office of Diversity, Equity and Inclusion (ODEI) as
well as a participant's home institution and/or law enforcement. In the event
an incident is reported to UCAR, UCAR will follow its Harassment Reporting
and Complaint Procedure.
## Process for Changes
All UCAR managed projects are required to adopt this Contributor Code of
Conduct. Adoption is assumed even if not expressly stated in the repository.
Projects should fill in sections where prompted with project-specific
information, including, project name, email addresses, adoption date, etc.
There is one section below marked "optional" that may not apply to a given
project.
Projects that adopt this Code of Conduct need to stay up to date with
UCAR's Contributor Code of Conduct, linked with a DOI in the "Attribution"
section below. Projects can make limited substantive changes to the Code of
Conduct, however, the changes must be limited in scope and may not contradict
the UCAR Contributor Code of Conduct.
## Attribution
This Code of Conduct was originally adapted from the Contributor Covenant,
version 1.4, available at [Contributor-Covenant](http://contributor-covenant.org/version/1/4).
We then aligned it with the UCAR Participant Code of Conduct, which also
borrows from the American Geophysical Union (AGU) Code of Conduct. The UCAR
Participant Code of Conduct applies to both UCAR employees as well as
participants in activities run by UCAR. We modified the "scope" section with
the django project description, and we added "Publication Ethics" from
the NGEET/FATES project. The original version of this for all software
projects that have strong management from UCAR or UCAR staff is available
on the UCAR website at [*Enter DOI link name*] (the date that it was adopted
by this project was [*Enter date adopted*]). When responding to complaints
UCAR HR and ODEI will do so based on the latest published version. Therefore,
any project-specific changes should follow the Process for Changes section
above.
## Publication Ethics (optional)
We aim to create an open development environment where developers can be
confident that all members of the community are publishing any research
on the project in an ethical manner. In particular, writing code is a form of
intellectual contribution, and one should expect that all such intellectual
contributions are respected and given credit in any resulting published work.
To support the community and avoid issues of misconduct related to the above
principle, please respect the following rules:
* Document the version of the code used in any publication, preferably by
either using a release tag (existing or newly created) if possible, or a
commit hash if not.
* Do not use code from anywhere other than the central project's development
repository main development branch without discussing with the author(s) of
the modified code your intentions for using the code and receiving their
permission to do so.
* When using project features that have recently been integrated into the
central Project development repository, be mindful of the contributions
of others and, where the novel features qualitatively affect the results,
involve the author(s) of these features in any resulting manuscripts.
Be particularly aware of the concerns of early career researchers, and
ensure they have sufficient time to lead publications using their
developments.
* When discussing results arising from older project features that have been
described in the literature or releases, accurately cite the publications
describing those features or releases.

490
doc/source/contrib.rst

@@ -3,12 +3,41 @@
Contributor Guide
=================================
Introduction
-----------------------------
Thank you for your interest in contributing to the WRF-Python project.
WRF-Python is maintained by a very small number of developers, who are tasked
with supporting more than one project, so we rely on outside contributions
to help keep the project moving forward.
The guidelines below help to ensure that the developers and outside
collaborators remain on the same page regarding contributions.
Source Code Location
------------------------------
The source code is available on GitHub:
https://github.com/NCAR/wrf-python
Git Flow
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This contributor guide is written for wrf-python v1.3.x. In the
not-too-distant future, wrf-python will undergo a significant refactoring
to remove the wrapt decorators (which don't serialize for dask), but the
concepts will remain the same as described below.
This project follows the GitFlow Workflow, which you can read about here if it
is new to you:
https://leanpub.com/git-flow/read
For external contributors, this isn't important, other than making you aware
that when you first clone the repository, you will be on the
**develop** branch, which is what you should use for your development.
Since you will be submitting pull requests for your contributions, you don't
really need to know much about GitFlow, other than making sure that you
are not developing off of the master branch.
Ways to Contribute
@@ -21,43 +50,458 @@ Users are encouraged to contribute in various ways. This includes:
- Submitting new Fortran computational routines
- Submitting new Python computational routines
- Submitting fully wrapped computational routines
- Fixing documentation errors
- Creating new examples in the documentation (e.g. plotting examples)
Ground Rules
------------------------------
Please follow the `Code of Conduct <https://github.com/NCAR/wrf-python/blob/develop/CODE_OF_CONDUCT.md>`_.
- Please create an issue on GitHub for any pull request you wish to submit,
except for documentation issues.
- Each pull request should be for a logical collection of changes. You can
submit multiple bug fixes in a single pull request if the bugs are related.
Otherwise, please submit separate pull requests.
- Do not commit changes to files that are unrelated to your bug fix
(e.g. .gitignore).
- The pull request and code review process is not immediate, so please be
patient.
Submitting Bug Reports
-----------------------------
Submitting bug reports is the easiest way to contribute. You will need to
create an account on GitHub to submit a report.
1. Go to the issues page here:
https://github.com/NCAR/wrf-python/issues
2. Check to see if an issue has already been created for the problem that
you are having.
3. If an issue already exists for your problem, feel free to add any
additional information to the issue conversation.
4. If there is not an issue created yet for your problem, use the
"New Issue" button to start your new issue.
5. Please provide as much information as you can for the issue. Please state
which version of WRF-Python you are using and which platform you are
using (e.g. conda-forge build on OSX). Supply a code snippet if you
are doing something more detailed than simply calling :meth:`wrf.getvar`
(a minimal example is sketched after this list).
6. If you are getting a crash (e.g. segmentation fault), we will most likely
need to see your data file if we cannot reproduce the problem here.
See :ref:`submitting_files`.
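If you do supply a snippet, a minimal, self-contained example similar to the
sketch below makes the problem much easier to reproduce (the file path here is
just a placeholder for your own data):

.. code:: python

    from netCDF4 import Dataset
    from wrf import getvar

    # Placeholder path: substitute the WRF output file that triggers the issue
    ncfile = Dataset("wrfout_d02_2005-08-28_00:00:00")

    # The smallest sequence of calls that reproduces the problem
    slp = getvar(ncfile, "slp")
    print(slp)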
Submitting Fortran Computational Routines
--------------------------------------------
If you have Fortran computational routines that you'd like to contribute,
but don't know how to wrap them in to Python, please follow the instructions
below.
1. Only Fortran 90 code will be accepted, so please port your F77 code to
F90.
2. Follow the :ref:`fortranstyle`.
3. Please only submit routines relevant to WRF-Python (e.g. diagnostics,
interpolation). General purpose climate/meteorology routines should go into
the SkyLab project (a project providing functionality similar to
NCL).
4. If you are unsure if you should contribute your Fortran code, make an
issue on GitHub and we can begin a discussion there.
5. Place your code in the fortran/contrib directory in the WRF-Python
source tree.
6. Document your code with a text file that has the same name as your Fortran
file, but ending in .rst. This file should be placed with your F90 code
in the fortran/contrib directory. Your documentation can use
restructured text formatting, or just plain text. This documentation
will be used for the docstring when Python wrappers are made.
7. If you are unable to provide any type of test for your routine, please
ensure that your documentation describes what your computation
should produce. You can submit auxiliary documentation and/or images for
this purpose if needed.
Submitting Python Computational Routines
---------------------------------------------
If you would like to submit a computational routine written in Python, but
don't know how to integrate it with the rest of WRF-Python's internals
(e.g. left indexing, arg checking, etc), feel free to
submit the pure Python routine. Below is the guide for submitting pure
Python routines.
1. These routines should be placed in src/wrf/contrib.py. These algorithms
will not be imported into WRF-Python's default namespace.
2. Follow the :ref:`pythonstyle`.
3. Write your computation to be as dimension unaware as possible. For example,
adding pressure and perturbation pressure is simply P + PB.
4. If dimensionality is needed, then write for the minimum dimensionality
required to make the computation for one time step (if applicable). For
example, if you're computing CAPE, then you should use three dimensions for
your algorithm, and we will handle the looping over all times.
5. Document your routine by creating a docstring that follows Google docstring
format (see `Sphinx Napoleon <https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#google-vs-numpy>`_).
6. If you are unable to provide a test for this function, please provide
additional documentation (or images) to show what this function should
produce.
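To make the above concrete, here is a sketch of what a contributed pure Python
routine might look like. The function name and variables are illustrative
only, not part of the existing codebase:

.. code:: python

    # Illustrative sketch for src/wrf/contrib.py
    def total_pressure(p, pb):
        """Return total pressure.

        Args:

            p (:class:`numpy.ndarray`): Perturbation pressure [Pa].

            pb (:class:`numpy.ndarray`): Base state pressure [Pa].

        Returns:

            :class:`numpy.ndarray`: Total pressure [Pa].

        """
        # Dimension unaware: this works for arrays of any shape
        return p + pb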
Submitting Fully Wrapped Computational Routines
---------------------------------------------------
Submitting a fully wrapped computational routine is the fastest way to get
your contribution released. However, it requires the most effort on your
part.
1. Read the :ref:`internals` guide. This will show you how to wrap your
routine.
2. Follow the :ref:`fortranstyle` and :ref:`pythonstyle`.
3. You should create your contribution in the WRF-Python source tree as if
you were one of the core developers of it. This means:
- Your Fortran code (if applicable) should be placed in the *fortran*
directory.
- Update the "ext1 = numpy.distutils.core.Extension" section of *setup.py*
to include your new Fortran source (if applicable).
- Update *extension.py* to create the Python wrapper that calls your
Fortran function. This must include the appropriate function decorators
for handling argument checking, leftmost dimension indexing, etc. as
described in :ref:`internals`.
- If the current function decorators do not cover your specific needs,
place your custom decorator in *specialdec.py*. Most of the decorators
in specialdec.py are used for products that contain multiple outputs like
cape_2d, but they can be used for other purposes.
- If your function is pure python, you can create a new module for it,
or place it in another module with similar functionality. For example,
if your routine is a new interpolation routine, then it should go
in interp.py. Remember to apply the same type of decorators as
done with Fortran extensions (checking args, leftmost dimension
indexing, etc).
- Create a 'getter' routine which is responsible for extracting the
required variables from a WRF file and calling your computational
routine. This is what will be called by :meth:`wrf.getvar`.
This function should be placed in a new python module with the prefix
'g\_' (i.e. g_yourdiagnostic.py).
- Decorate your getter routine with an appropriate metadata handling
decorator. If you need to make a custom decorator for the metadata,
place it in *metadecorators.py*.
- Update the mappings in *routines.py* to map your diagnostic name to your
function, and to declare any keyword arguments that your function
needs aside from the usual wrfin, varname, timeidx, method,
squeeze, cache, and meta.
- If you would like to make your routine available as a raw computation,
you will need to place an additional thin wrapper in *computation.py*.
This thin wrapper must be decorated with an appropriate metadata decorator
found in *metadecorators.py* (usually set_alg_metadata). If you need to
write your own custom metadata decorator, write it in *metadecorators.py*.
- You must provide a docstring for every function you create using
Google docstring format (see `Sphinx Napoleon <https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#google-vs-numpy>`_).
- You must provide a test for your function. See :ref:`testing`.
Fixing Documentation Errors
--------------------------------------
1. Documentation is made with Sphinx using restructured text.
2. Python docstrings follow `Google docstring <https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_ format.
3. Documentation can be found in the *doc* directory, along with the
docstrings contained within the Python code.
4. For documentation fixes, you can just submit a pull request with the
appropriate corrections already made.
Creating New Examples
--------------------------------------
1. Examples are made with Sphinx using restructured text.
2. Examples are currently found in the *doc* directory, mostly within the
*basic_usage.rst* and *plot.rst* files. Feel free to contribute more
examples to these files.
3. Unless you are drastically changing the documentation structure, you can
submit a pull request with your examples without creating a GitHub
issue. If you are making a large change, or are unsure about it, then
go ahead and create a GitHub issue to discuss with the developers.
.. _dev_setup:
Setting Up Your Development Environment
---------------------------------------------
We recommend using the `conda <https://conda.io/en/latest/>`_
package manager for your Python environments. Our recommended setup for
contributing is:
- Install `miniconda <https://docs.conda.io/en/latest/miniconda.html>`_
- Install git on your system if it is not already there (install XCode command
line tools on a Mac or git bash on Windows)
- Login to your GitHub account and make a fork of the
`WRF-Python <https://github.com/ncar/wrf-python>`_ repository by clicking
the **Fork** button.
- Clone your fork of the WRF-Python repository (in terminal on Mac/Linux or
git shell/ GUI on Windows) in the location you'd like to keep it.
.. code::
git clone https://github.com/your-user-name/wrf-python.git
- Navigate to that folder in the terminal or in Anaconda Prompt if you're
on Windows.
.. code::
cd wrf-python
- Connect your repository to the NCAR WRF-Python repository.
.. code::
git remote add ncar https://github.com/ncar/wrf-python.git
- To create the development environment, you'll need to run the appropriate
command below for your operating system.
OSX:
.. code::
conda env create -f osx.yml
Linux:
.. code::
conda env create -f linux.yml
Win64:
.. code::
conda env create -f win64.yml
Note: For Win64, you will also need VS2015 installed on your system.
- Activate your conda environment.
.. code::
conda activate develop
- cd to the *build_scripts* directory.
.. code::
cd build_scripts
- Build and install WRF-Python.
OSX/Linux:
.. code::
sh gnu_omp.sh
Windows:
.. code::
./win_msvc_mingw_omp.bat
- The previous step will build and install WRF-Python into the 'develop'
environment. If you make changes and want to rebuild, uninstall WRF-Python
by running:
.. code::
pip uninstall wrf-python
Now follow the previous step to rebuild.
Code Style
--------------------------
.. _pythonstyle:
Python Style Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^
The Python code in WRF-Python follows the
`PEP8 <https://www.python.org/dev/peps/pep-0008/>`_ coding standard. All
Python code submitted must pass the PEP8 checks performed by the
`pycodestyle <https://pycodestyle.readthedocs.io/en/latest/>`_ code
style checker. The checks must pass without any errors or warnings.
For a tool to help automate some of the mundane formatting corrections (e.g.
whitespace characters in blank lines, etc.), try the
`autopep8 <https://pypi.org/project/autopep8/0.8/>`_ utility.
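For example, from the root of the source tree you might run (the autopep8
flags shown are its common ones; adjust to taste):

.. code::

    pycodestyle src test
    autopep8 --in-place --recursive src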
.. _fortranstyle:
Fortran Style Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At this time, we are only accepting Fortran 90 contributions, so you must
convert any F77 code to F90 before contributing.
Although there is no formal style guide for Fortran contributions, Fortran
code should look similar to other Fortran code in the WRF-Python *fortran*
directory.
A summary of style notes is below:
- Fortran 90 only.
- Use 4 spaces for indentation, not tabs.
- Use all capital letters for Fortran key words (e.g. IF, DO, REAL, INTENT)
- Use all capital letters for Fortran intrinsics (e.g. MAX, MIN, SUM)
- Use all capital letters for any constants declared as PARAMETER (e.g. RD).
- Use all lowercase letters for variables with '_' separating words
(snake case).
- Use all lowercase letters for functions and subroutines using '_' to
separate words (snake case).
- Declare your REAL variables as REAL(KIND=8), unless you really need 4-byte
REALs for a specific reason.
- Do not allocate any memory in your Fortran routine (e.g. work arrays). We
will use numpy arrays to manage all memory. Instead, declare your work
array (or dynamic array) as an INTENT(INOUT) argument in your function
signature.
- Avoid submitting code that uses global variables (other than for read only
constants). All Fortran contributions must be threadsafe and have no side
effects.
- Place any computational constants in the wrf_constants module found in
*wrf_constants.f90* and put a "USE wrf_constants, ONLY : YOUR_CONSTANT,..."
declaration in your function.
- Please do not redefine constants already declared in
wrf_constants.f90 (e.g. G, RD, RV, etc). Although the WRF model itself
does not adhere to this, we are trying to be consistent with the constants
used throughout WRF-Python.
- Do not put any STOP statements in your code to handle errors. STOP
statements will bring the entire Python interpreter down with it. Instead,
use *errstat* and *errmsg* arguments to your function signature to tell
Python about the error so it can throw an exception. See WETBULBCALC
in *wrf_rip_phys_routines.f90* for how this is handled.
- If you know how to use OpenMP directives, feel free to add them to your
routine, but this is not required.
Pull Requests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------
In order to submit changes, you must use GitHub to issue a pull request. Your
pull requests should be made against the **develop** branch, since we are
following GitFlow for this project.
Following a pull request, automated continuous integration tools will be
run to ensure that your code follows the PEP 8 style guide, and to verify
that a basic suite of unit tests runs.
If your pull request is for a bug fix to an existing computational routine,
then the automated unit tests will probably fail due to the new values. This
is not a problem, but be sure to indicate to the developers in your GitHub
issue that the tests will need to be updated.
.. _testing:
Tests
---------------------------
Once you have submitted your contribution, we need a way to test your
code. Currently, most of WRF-Python's tests are written to ensure that
WRF-Python produces the same result as the NCAR Command Language (NCL), from
which the code was originally derived. However, this isn't applicable here:
there is nothing to test against for new contributions, and bug fixes might
change the numerical result. So, we have
some recommendations below for how you can create your own tests.
Sample Data
^^^^^^^^^^^^^^^^^^^
You can download sample data for Hurricane Katrina here: <contact developers>
This data includes both a moving nest and a static nest version. You should
create your tests with this data set (both static and moving nests), unless
you are unable to reproduce a particular problem with it.
Supplying Data
^^^^^^^^^^^^^^^^^^^^^^
If you need to supply us data for your test (due to a bug) please provide us a
link to the file or upload it using :ref:`submitting_files`.
Unless the data is very small, do not add it to the GitHub
repository.
If you can demonstrate the problem/solution with a minimal set of hand created
values, then you can use that in your test.
Guidelines
^^^^^^^^^^^^^^^^^^^
The following are guidelines for testing your contributions. The developers are
aware that some issues have unique needs, so you can use the GitHub
issue related to your contribution to discuss with developers.
1. New computations must work for both moving nests and static nests.
Generally this is not an issue unless your data makes use of lat/lon
information (e.g. cross sections with lat/lon line definitions).
2. WRF-Python's tests can be found in the *test* directory.
3. WRF-Python's tests were written using the standard *unittest* package,
along with numpy's test package for the assert functions. One
reason for this is that many of the tests are dynamically generated, and
other testing frameworks can't find the tests when generated this way.
If you need to use another test framework, that's fine, just let us know
in your GitHub issue.
4. Place your test in the test/contrib directory.
5. For new contributions, images may be sufficient to show that your
code is working. Please discuss with the developers in your GitHub issue.
6. For bug related issues, try to create a case that demonstrates the problem,
and demonstrates the fix. If your problem is a crash, then proving that
your code runs without crashing should be sufficient.
7. You might need some creativity here.
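As a starting point, a contributed test might look like the sketch below. The
routine under test and the expected values are placeholders:

.. code:: python

    # Illustrative sketch for test/contrib/
    import unittest

    import numpy as np
    import numpy.testing as npt


    class TestMyRoutine(unittest.TestCase):

        def test_simple_case(self):
            # Hand-created values that demonstrate the expected behavior
            base = np.full((3, 3), 100.0)
            pert = np.full((3, 3), 5.0)

            # Replace this line with a call to your contributed routine
            result = base + pert

            npt.assert_allclose(result, np.full((3, 3), 105.0))


    if __name__ == "__main__":
        unittest.main()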

2
doc/source/index.rst

@@ -53,6 +53,8 @@ Documentation
./api
./faq
./support
./contrib
./internals
./citation
./license
./tutorial

692
doc/source/internals.rst

@@ -0,0 +1,692 @@
.. _internals:
WRF-Python Internals
========================================
WRF-Python is a collection of diagnostic and interpolation routines for
WRF-ARW data. The API is kept to a minimal set of functions, since we've found
this to be the easiest to teach to new programmers, students, and scientists.
Future plans include adopting the Pangeo xarray/dask model, along with an
object oriented API, but these are not supported as of this version of the
user guide.
A typical use case for a WRF-Python user is to:
1) Open a WRF data file (or sequence of files) using NetCDF4-python or PyNIO.
2) Compute a WRF diagnostic using :meth:`wrf.getvar`.
3) Perform any additional computations using methods outside of WRF-Python.
4) Create a plot of the output using matplotlib (basemap or cartopy) or
PyNGL.
The purpose of this guide is to explain the internals of item (2) so that
users can help contribute or support the computational diagnostic
routines.
Overview of a :meth:`wrf.getvar` Diagnostic Computation
---------------------------------------------------------------
A diagnostic computed using the :meth:`wrf.getvar` function consists of the
following steps:
1) Call the appropriate 'getter' function based on the specified diagnostic
label. This step occurs in the :meth:`wrf.getvar` routine in routines.py.
2) Extract the required variables from the NetCDF data file (or files).
3) Compute the diagnostic using a wrapped Fortran, C, or Python routine.
4) Convert to the desired units (if applicable).
5) Set the metadata (if desired) and return the result as an
:class:`xarray.DataArray`, or return a :class:`numpy.ndarray` if no
metadata is required.
In the source directory, the :meth:`wrf.getvar` 'getter' routines have a
"\g\_" prefix for the naming convention (the "g" is for "get").
The unit conversion is handled by a :mod:`wrapt` decorator that can be found
in *decorators.py*. The setting of the metadata is handled using a :mod:`wrapt`
decorator, which can be found in the *metadecorators.py* file.
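The general :mod:`wrapt` pattern used by these decorators looks roughly like
the sketch below. This is a simplified illustration, not the actual WRF-Python
source:

.. code:: python

    import wrapt


    @wrapt.decorator
    def convert_units(wrapped, instance, args, kwargs):
        # Call the wrapped 'getter' routine first
        result = wrapped(*args, **kwargs)

        # ... unit conversion of 'result' would happen here ...

        return result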
Overview of Compiled Computational Routines
---------------------------------------------------------
Currently, the compiled computational routines are written in Fortran
90 and exposed to Python using f2py. The routines have been acquired over
decades, originating from NCL's Fortran77 codebase, the WRF model itself,
or other tools like RIP (Read Interpolate Plot), and do not necessarily
conform to a common programming mindset (e.g. 1D arrays, 2D arrays, etc.).
The raw Fortran routines are compiled into the :mod:`wrf._wrffortran`
extension module, but are not particularly useful for applications in their
raw form. These routines are imported in the *extension.py* module, where
additional functionality is added to make the routines more user friendly.
The typical behavior for a fully exported Fortran routine in *extension.py*
is:
1) Verify that the supplied arguments are valid in shape. Although f2py does
this as well, the errors thrown by f2py are confusing to users, so this
step helps provide better error messages.
2) Allocate an output array based on the output shape of the algorithm,
number of "leftmost" [1]_ dimensions, and size of the data.
3) Iterate over the leftmost [1]_ dimensions and compute output for argument
data slices that are of the same dimensionality as the compiled algorithm.
4) Cast the argument arrays (or array slices) into the dtype used in the
compiled routine. For WRF data, the conversion is usually from a 4-byte
float to an 8-byte double.
5) Extract the argument arrays out of xarray into numpy arrays
(if applicable) and transpose them into Fortran ordering. Note that this
does not actually do any copying of the data, it simply reorders the shape
tuple for the data and sets the Fortran ordering flag. This allows data
pointers from the output array slices to be passed directly to the
Fortran routine, which eliminates the need to copy the result to the output
array.
The steps described above are handled in :mod:`wrapt` decorators that can be
found in *decorators.py*. For some routines that produce multiple outputs or
have atypical behavior, the special case decorators are located in
*specialdec.py*.
.. [1] If the Fortran algorithm is written for a 2-dimensional array,
and a user passes in a 5-dimensional array, there are 3 "leftmost"
dimensions.
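For example, counting the leftmost dimensions for a 2D algorithm is just a
matter of slicing the shape tuple (a quick sketch):

.. code:: python

    import numpy as np

    # e.g. (Time, bottom_top, south_north, west_east)
    var = np.empty((4, 10, 50, 60))

    algorithm_ndims = 2  # the Fortran routine works on 2D slices
    leftmost_dims = var.shape[:-algorithm_ndims]
    print(leftmost_dims)  # (4, 10): two leftmost dimensions, 40 slices in all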
Example
----------------------------
The above overviews are better explained by an example. Although there are a
few exceptions (e.g. ll_to_xy), many of the routines in WRF-Python behave this
way.
For this example, let's make a routine that adds a variable's base state
to its perturbation. This is the kind of thing that you'd normally use numpy
for (e.g. Ptot = PB + P), but you could do this if you wanted concurrency
for this operation via OpenMP rather than using dask (in a future release of
WRF-Python, both OpenMP and dask will be available).
Fortran Code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Below is the Fortran 90 code, which will be written to a file called
example.f90.
.. code:: fortran

    SUBROUTINE pert_add(base, pert, total, nx, ny)

    !f2py threadsafe
    !f2py intent(in,out) :: total

        REAL(KIND=8), INTENT(IN), DIMENSION(nx, ny) :: base, pert
        REAL(KIND=8), INTENT(OUT), DIMENSION(nx, ny) :: total
        INTEGER, INTENT(IN) :: nx, ny

        INTEGER :: i, j

        !$OMP PARALLEL DO COLLAPSE(2) SCHEDULE(runtime)
        DO j = 1, ny
            DO i = 1, nx
                total(i, j) = base(i, j) + pert(i, j)
            END DO
        END DO
        !$OMP END PARALLEL DO

    END SUBROUTINE pert_add
This code adds the 2D base and perturbation variables and stores the result in
a 2D output array. (For this example, we're using a 2D array to help
illustrate leftmost indexing below, but it could have been written using
a 1D or 3D array).
At the top, there are these two f2py directives:
.. code::
!f2py threadsafe
!f2py intent(in,out) :: total
The *threadsafe* directive tells f2py to release Python's Global Interpreter
Lock (GIL) before calling the Fortran routine. The Fortran code no longer
uses Python variables, so you should release the GIL before running the
computation. This way, Python threads will continue to run, which may be
important if you are using this in a webserver or in some other
threaded environment like dask's threaded scheduler.
The *intent(in,out)* f2py directive is used because we will
be supplying a slice of the output array directly to this routine and don't
want to have to copy the result from Fortran back into the result array. By
specifying intent(in,out), we're telling f2py to use the pointer to our
output array directly.
Finally, for the OpenMP directive, the scheduler is set to use runtime
scheduling via *SCHEDULE(runtime)*. By using runtime scheduling, users
can set the scheduling type within Python, but for most users the default is
sufficient.
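For example, the scheduling type and thread count can be set from Python using
the OpenMP runtime wrappers that WRF-Python exports:

.. code:: python

    from wrf import omp_set_num_threads, omp_set_schedule, OMP_SCHED_STATIC

    omp_set_num_threads(4)
    omp_set_schedule(OMP_SCHED_STATIC, 0)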
Building the Fortran Code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To build the Fortran code, the *example.f90* source code should be placed in
the *fortran* directory of the source tree.
Next, we need to update the numpy.distutils.core.Extension section of
*setup.py* in the root directory of the source tree.
.. code:: python

    ext1 = numpy.distutils.core.Extension(
        name="wrf._wrffortran",
        sources=["fortran/wrf_constants.f90",
                 "fortran/wrf_testfunc.f90",
                 "fortran/wrf_user.f90",
                 "fortran/rip_cape.f90",
                 "fortran/wrf_cloud_fracf.f90",
                 "fortran/wrf_fctt.f90",
                 "fortran/wrf_user_dbz.f90",
                 "fortran/wrf_relhl.f90",
                 "fortran/calc_uh.f90",
                 "fortran/wrf_user_latlon_routines.f90",
                 "fortran/wrf_pvo.f90",
                 "fortran/eqthecalc.f90",
                 "fortran/wrf_rip_phys_routines.f90",
                 "fortran/wrf_pw.f90",
                 "fortran/wrf_vinterp.f90",
                 "fortran/wrf_wind.f90",
                 "fortran/omp.f90",
                 "fortran/example.f90"  # New file added here
                 ]
    )
The easiest way to build your code is to use one of the build scripts located
in the *build_scripts* directory of the source tree. These scripts contain
variants for compiling with or without OpenMP support. Unless you are
debugging a problem, building with OpenMP is recommended.
For this example, we're going to assume that you have already set up your
development environment as described in :ref:`dev_setup`. Below are the build
instructions for compiling with OpenMP enabled on GCC (Linux or Mac):
.. code::
pip uninstall wrf-python
cd build_scripts
sh ./gnu_omp.sh
The above command will build and install the new routine, along with the
other Fortran routines. If you receive errors, then your code failed to
build successfully. Otherwise, your new routine can be called as
:meth:`wrf._wrffortran.pert_add`.
Creating a Thin Python Wrapper
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The new Fortran pert_add routine will work well for a 2D slice of data.
However, if you want to extend the functionality
to work with any dimensional array, you'll need to add a thin wrapper
with some extra functionality that makes use of :mod:`wrapt` decorators.
First, let's start by creating a very thin wrapper in Python in *extension.py*.
.. code:: python

    from wrf._wrffortran import pert_add

    .
    .
    .

    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
Despite being only a few lines of code, there is quite a bit going on in the
wrapper. The first thing to note is the arguments to the wrapper function. The
only arguments that we need for the wrapper are the inputs to the function
and an "outview" keyword argument. At this point in the call chain, the
arguments are assumed to be Fortran-ordered, in that the Fortran ordering flag
is set and the shape is transposed from a usual C-ordered numpy array
(the data itself remains in the same order that it was created). By passing
numpy arrays with the Fortran order flag set, f2py will pass the pointer
directly through to the Fortran routine.
The *outview* keyword argument is used during leftmost dimension indexing to
send slices of the output array to the Fortran routine to be filled. If there
are no leftmost dimensions (e.g. this routine is called with 2D data), then the
outview argument will be None and an outview variable will be created with the
same number of dimensions as the *base* argument. It should be created with
Fortran ordering so that the pointer is directly passed to the Fortran routine.
When the actual :meth:`wrf._wrffortran.pert_add` Fortran routine is called,
the nx and ny arguments are omitted because f2py supplies them for us
based on the shape of the numpy arrays we are supplying as input arguments.
F2py also likes to return an array as a result, so even though we supplied
outview as an array to be filled by the Fortran routine, we will still get a
result from the function call that is pointing to the same thing as outview.
(We could have chosen to ignore the result and return outview instead).
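You can verify the pointer sharing yourself with a quick sketch like the one
below, assuming the example module has been built
(:meth:`numpy.shares_memory` reports whether two arrays overlap in memory):

.. code:: python

    import numpy as np
    from wrf.extension import _pert_add  # hypothetical, per this example

    base = np.ones((5, 5), np.float64, order="F")
    pert = np.ones((5, 5), np.float64, order="F")
    outview = np.empty((5, 5), np.float64, order="F")

    result = _pert_add(base, pert, outview=outview)
    print(np.shares_memory(result, outview))  # True: same underlying data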
Extract and Transpose
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The arrays that are being passed to the _pert_add thin wrapper need to be
numpy arrays in Fortran ordering, but they won't come this way from
users. They will come in as either :class:`numpy.ndarray`
or :class:`xarray.DataArray` and will be C-ordered. So, we need to make
sure that a Fortran-ordered :class:`numpy.ndarray` is what is passed to
the thin wrapper.
Since this type of operation is repeated for many diagnostic functions, a
decorator has been written in *decorators.py* for this purpose. Let's decorate
our thin wrapper with this function.
.. code:: python

    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The :meth:`extract_and_transpose` decorator converts any arguments to _pert_add
that are of type :class:`xarray.DataArray` to :class:`numpy.ndarray`, then
gets the :attr:`numpy.ndarray.T` attribute, and passes this on to the
_pert_add wrapper.
Following the computation, we want the result to be returned as the
same C-ordered array type that went in as an argument, so this decorator takes
the result of the computation and returns the :attr:`numpy.ndarray.T` of the
Fortran-ordered result. This result gets passed back up the decorator chain.
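Conceptually, the decorator behaves something like the following simplified
sketch (the real implementation in *decorators.py* handles more cases, such as
the *outview* keyword argument, which is omitted here for brevity):

.. code:: python

    import numpy as np
    import wrapt
    from xarray import DataArray


    @wrapt.decorator
    def extract_and_transpose_sketch(wrapped, instance, args, kwargs):
        def to_fortran(arg):
            if isinstance(arg, DataArray):
                arg = arg.values  # extract the underlying numpy array
            if isinstance(arg, np.ndarray):
                return arg.T  # Fortran-ordered view; no data is copied
            return arg

        result = wrapped(*(to_fortran(arg) for arg in args), **kwargs)

        return result.T  # back to C ordering for the caller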
Cast to Fortran Array Types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Fortran routine expects a specific data type for the arrays (usually
REAL(KIND=8)). WRF files typically store their data as 4-byte floating point
numbers to save space. The arrays being passed to the
:meth:`wrf.decorators.extract_and_transpose` decorator need to be converted
to the type used in the Fortran routine (e.g. double), then converted back to
the original type (e.g. float) after the computation is finished. This is
handled by the :meth:`wrf.decorators.cast_type` decorator function in
*decorators.py*.
.. code:: python

    @cast_type(arg_idxs=(0, 1))
    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The :meth:`wrf.decorators.cast_type` decorator function takes an
*arg_idxs* argument to specify which positional arguments need to be cast to
the Fortran algorithm type, in this case arguments 0 and 1 (base and pert).
Following the computation, the result will be cast back to the original type
for the input arguments (usually float), and passed back up the decorator
chain.
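In spirit, the casting works like this short sketch:

.. code:: python

    import numpy as np

    base32 = np.ones((5, 5), np.float32)  # typical WRF file dtype

    base64 = base32.astype(np.float64)  # cast for the Fortran routine
    # ... the computation runs in double precision ...
    result64 = base64 + base64

    result32 = result64.astype(np.float32)  # cast back for the caller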
Leftmost Dimension Indexing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The WRF-Python algorithms written in Fortran are usually written for fixed
size arrays of 1, 2, or 3 dimensions. If your input arrays have more than
the number of dimensions specified for the Fortran algorithm, then we need to
do the following:
1. Determine how many leftmost dimensions are used.
2. Create an output array that has a shape that contains the leftmost
dimensions concatenated with the shape of the result from the Fortran
algorithm.
3. Iterate over the leftmost dimensions and send slices of the input arrays
to the Fortran algorithm.
4. Along with the input arrays above, send a slice of the output array to be
filled by the Fortran algorithm.
5. Return the fully calculated output array.
The :meth:`wrf.decorators.left_iteration` is a general purpose decorator
contained in *decorators.py* that handles most leftmost index iteration cases.
(Note: some products, like cape_2d, return multiple outputs and don't fall
into this generic category, so their decorators can be found
in *specialdec.py*.)
Let's look at how this is used below.
.. code:: python

    @left_iteration(2, 2, ref_var_idx=0)
    @cast_type(arg_idxs=(0, 1))
    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The :meth:`wrf.decorators.left_iteration` decorator handles many different
use cases with its arguments, but this example is one of the more common cases.
The 0th positional argument tells the decorator that the "reference" input
variable should provide at least two dimensions. This should be set to
the same number of dimensions as in the Fortran algorithm, which is two in this
case. Dimensions to the left of these two dimensions are considered "leftmost"
dimensions.
The next positional argument (value of 2) tells the decorator that the
newly created output variable should retain the shape of the reference
variable's right two dimensions. This only applies when your output has fewer
dimensions than the reference variable (e.g. sea level pressure uses
geopotential height for the reference but produces 2D output). Since we are
not reducing the output dimensions, it should be set to the same value as the
previous argument.
The final keyword argument of *ref_var_idx* tells the decorator to use
positional argument 0 (for the _pert_add function) as the reference
variable.
The result of this decorator will be the fully computed output array, which
gets passed back up the chain.
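Stripped of its generality (and ignoring the casting and ordering details
handled by the other decorators), the iteration performed by this decorator
looks something like:

.. code:: python

    import numpy as np


    def left_iteration_sketch(algorithm, base, pert):
        # Output keeps the leftmost dims plus the algorithm's 2D shape
        out = np.empty(base.shape, base.dtype)

        for idx in np.ndindex(*base.shape[:-2]):
            # Each slice has the 2D shape the Fortran algorithm expects
            out[idx] = algorithm(base[idx], pert[idx])

        return out


    # e.g. with (Time, bottom_top, south_north, west_east) inputs:
    # total = left_iteration_sketch(np.add, p_base, p_pert)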
Checking Argument Shapes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Before any computations can be performed, the argument shapes are checked to
verify their sizes. Although f2py will catch problems at the
entry point to the Fortran routine, the error thrown is confusing to
users.
The :meth:`wrf.decorators.check_args` decorator is used to verify that the
arguments are the correct size before proceeding.
Here is how it is used:
.. code:: python

    @check_args(0, 2, (2, 2))
    @left_iteration(2, 2, ref_var_idx=0)
    @cast_type(arg_idxs=(0, 1))
    @extract_and_transpose()
    def _pert_add(base, pert, outview=None):
        """Wrapper for pert_add.

        Located in example.f90.

        """
        if outview is None:
            outview = np.empty(base.shape[0:2], base.dtype, order="F")

        result = pert_add(base,
                          pert,
                          outview)

        return result
The 0th positional argument (value of 0) tells
:meth:`wrf.decorators.check_args` that the 0th positional argument of
_pert_add is the reference variable.
The next positional argument (value of 2) tells
:meth:`wrf.decorators.check_args` that it should expect at least 2 dimensions
for the reference variable. This should be set to the number of dimensions
used in the Fortran algorithm, which is two in this case.
The final positional argument is a tuple with the number of dimensions that
are expected for each array argument. Again, this should be set to the same
number of dimensions expected in the Fortran routine for each positional
argument. If an argument to your wrapped function is not an array type, you
can use None in the tuple to ignore it, but that is not applicable for this
example.
Putting It All Together
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The previous sections showed how the decorator chain was built up from the
_pert_add function. However, when you actually make a call to _pert_add, the
decorators are called from top to bottom. This means check_args is called
first, then left_iteration, then cast_type, then extract_and_transpose,
and finally _pert_add. After _pert_add is finished, the result is passed
back up the chain and back to the user.
Now that we have a fully wrapped compiled routine, how might we use this?
Let's make a new :meth:`wrf.getvar` diagnostic called 'total_pressure'. A
similar diagnostic already exists in WRF-Python, but this is just for
illustration of how to use our newly wrapped Fortran routine.
Make a 'getter' Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, we need a 'getter' routine that extracts the required input variables
from the WRF NetCDF file(s) to perform the computation. In this case, the
variables are P and PB.
The current naming convention in WRF-Python is to prefix the 'getter'
functions with a '\g\_', so let's call this file g_totalpres.py and make a
function get_total_pressure inside of it.
The contents of this file will be:
.. code:: python

    # g_totalpres.py

    from .extension import _pert_add
    from .util import extract_vars


    @copy_and_set_metadata(copy_varname="P", name="total_pressure",
                           description="total pressure",
                           units="Pa")
    def get_total_pressure(wrfin, timeidx=0, method="cat", squeeze=True,
                           cache=None, meta=True, _key=None):
        """Return total pressure.

        This function extracts the necessary variables from the NetCDF file
        object in order to perform the calculation.

        Args:

            wrfin (:class:`netCDF4.Dataset`, :class:`Nio.NioFile`, or an \
            iterable): WRF-ARW NetCDF
                data as a :class:`netCDF4.Dataset`, :class:`Nio.NioFile`
                or an iterable sequence of the aforementioned types.

            timeidx (:obj:`int` or :data:`wrf.ALL_TIMES`, optional): The
                desired time index. This value can be a positive integer,
                negative integer, or
                :data:`wrf.ALL_TIMES` (an alias for None) to return
                all times in the file or sequence. The default is 0.

            method (:obj:`str`, optional): The aggregation method to use for
                sequences. Must be either 'cat' or 'join'.
                'cat' combines the data along the Time dimension.
                'join' creates a new dimension for the file index.
                The default is 'cat'.

            squeeze (:obj:`bool`, optional): Set to False to prevent dimensions
                with a size of 1 from being automatically removed from the
                shape of the output. Default is True.

            cache (:obj:`dict`, optional): A dictionary of (varname, ndarray)
                that can be used to supply pre-extracted NetCDF variables to
                the computational routines. It is primarily used for internal
                purposes, but can also be used to improve performance by
                eliminating the need to repeatedly extract the same variables
                used in multiple diagnostics calculations, particularly when
                using large sequences of files.
                Default is None.

            meta (:obj:`bool`, optional): Set to False to disable metadata and
                return :class:`numpy.ndarray` instead of
                :class:`xarray.DataArray`. Default is True.

            _key (:obj:`int`, optional): A caching key. This is used for
                internal purposes only. Default is None.

        Returns:

            :class:`xarray.DataArray` or :class:`numpy.ndarray`: Total
            pressure. If xarray is
            enabled and the *meta* parameter is True, then the result will be a
            :class:`xarray.DataArray` object. Otherwise, the result will be a
            :class:`numpy.ndarray` object with no metadata.

        """
        # Get the base and perturbation pressures
        varnames = ("PB", "P")
        ncvars = extract_vars(wrfin, timeidx, varnames, method, squeeze, cache,
                              meta=False, _key=_key)

        pb = ncvars["PB"]
        p = ncvars["P"]

        total_pres = _pert_add(pb, p)

        return total_pres
This getter function extracts the PB and P (base and perturbation pressure)
variables and calls the _pert_add function and returns the result. The
arguments *wrfin*, *timeidx*, *method*, *squeeze*, *cache*, *meta*, and
*_key* are used for every getter function and you can read what they do in
the docstring.
The getter function is also decorated with a
:meth:`wrf.decorators.copy_and_set_metadata` decorator. This is a general
purpose decorator that is used for copying metadata from an input variable
to the result. In this case, the variable to copy is 'P'. The *name* parameter
specifies the :attr:`xarray.DataArray.name` attribute for the variable
(the name that will be written to a NetCDF variable). The *description* is a
brief description for the variable that will be placed in the
:attr:`xarray.DataArray.attrs` dictionary along with the *units* parameter.
Make Your New Diagnostic Available in :meth:`wrf.getvar`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The final step is to make the new 'total_pressure' diagnostic available from
:meth:`wrf.getvar`. To do this, modifications need to be made to
*routines.py*.
First, import your new getter routine at the top of routines.py.
.. code:: python

    from __future__ import (absolute_import, division, print_function)

    from .util import (get_iterable, is_standard_wrf_var, extract_vars,
                       viewkeys, get_id)
    from .g_cape import (get_2dcape, get_3dcape, get_cape2d_only,
                         get_cin2d_only, get_lcl, get_lfc, get_3dcape_only,
                         get_3dcin_only)
    .
    .
    .
    from .g_cloudfrac import (get_cloudfrac, get_low_cloudfrac,
                              get_mid_cloudfrac, get_high_cloudfrac)
    from .g_totalpres import get_total_pressure
Next, update _FUNC_MAP to map your diagnostic label ('total_pressure')
to the getter routine (get_total_pressure).
.. code:: python

    _FUNC_MAP = {"cape2d": get_2dcape,
                 "cape3d": get_3dcape,
                 .
                 .
                 .
                 "high_cloudfrac": get_high_cloudfrac,
                 "total_pressure": get_total_pressure
                 }
Finally, update _VALID_KARGS to inform :meth:`wrf.getvar` of any additional
keyword argument names that this routine might use. The :meth:`wrf.getvar`
routine will check keyword arguments and throw an error when it gets any that
are not declared in this map.
In this case, there aren't any additional keyword arguments, so we'll just
supply an empty list.
.. code:: python

    _VALID_KARGS = {"cape2d": ["missing"],
                    "cape3d": ["missing"],
                    "dbz": ["do_variant", "do_liqskin"],
                    "maxdbz": ["do_variant", "do_liqskin"],
                    .
                    .
                    .
                    "high_cloudfrac": ["vert_type", "low_thresh",
                                       "mid_thresh", "high_thresh"],
                    "total_pressure": []
                    }
After this is complete, your new routine is now available for use from
:meth:`wrf.getvar`.
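With those mappings in place, the new diagnostic is used just like any other
(the file path below is a placeholder):

.. code:: python

    from netCDF4 import Dataset
    from wrf import getvar

    ncfile = Dataset("wrfout_d02_2005-08-28_00:00:00")
    p_tot = getvar(ncfile, "total_pressure")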

1
doc/source/support.rst

@@ -11,6 +11,7 @@ you can submit an issue to the
This should be used strictly for crashes and bugs. For general usage
questions, please use the :ref:`google-group`.
.. _submitting_files:
Submitting Files
-------------------

3
fortran/contrib/readme

@@ -0,0 +1,3 @@
If you only wish to submit a Fortran contribution, without supplying any
wrappers or other Python code, please submit your code to this
directory.

2
fortran/rip_cape.f90

@@ -176,7 +176,7 @@ SUBROUTINE DLOOKUP_TABLE(psadithte, psadiprs, psaditmk, fname, errstat, errmsg)
! FNAME = 'psadilookup.dat'
iustnlist = 33
OPEN (UNIT=iustnlist, FILE=fname, FORM='formatted', STATUS='old')
OPEN (UNIT=iustnlist, FILE=fname, FORM='formatted', STATUS='old', ACTION='read')
DO i = 1,14
READ (iustnlist, FMT=*)

20
linux.yml

@@ -0,0 +1,20 @@
# Create full conda environment for development, including some useful tools
name: develop
channels:
- conda-forge
dependencies:
- python=3
- wrapt
- numpy
- matplotlib
- netcdf4
- xarray
- jupyter
- sphinx
- sphinx_rtd_theme
- pycodestyle
- cartopy
- basemap
- gcc_linux-64
- gfortran_linux-64

20
osx.yml

@@ -0,0 +1,20 @@
# Create full conda environment for development, including some useful tools
name: develop
channels:
- conda-forge
dependencies:
- python=3
- wrapt
- numpy
- matplotlib
- netcdf4
- xarray
- jupyter
- sphinx
- sphinx_rtd_theme
- pycodestyle
- cartopy
- basemap
- clang_osx-64
- gfortran_osx-64

0
src/wrf/contrib.py

2
src/wrf/interp.py

@@ -93,7 +93,7 @@ def interplevel(field3d, vert, desiredlev, missing=default_fill(np.float64),
z = getvar(wrfin, "z")
pblh = getvar(wrfin, "PBLH")
rh_pblh = interplevel(rh, p, pblh)
rh_pblh = interplevel(rh, z, pblh)
"""

10
test/comp_utest.py

@@ -14,7 +14,7 @@ from wrf import *
TEST_FILE = ("/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/moving_nest/"
"wrfout_d02_2005-08-28_00:00:00")
OUT_NC_FILE = "/tmp/wrftest.nc"
NCFILE = nc(TEST_FILE)
GROUP_FILES = [
@@ -676,10 +676,10 @@ if __name__ == "__main__":
omp_set_num_threads(omp_get_num_procs()//2)
omp_set_schedule(OMP_SCHED_STATIC, 0)
omp_set_dynamic(False)
varnames=["avo", "pvo", "eth", "dbz", "helicity", "updraft_helicity",
"omg", "pw", "rh", "slp", "td", "tk", "tv", "twb", "uvmet",
"cloudfrac", "ctt"]
varnames = ["avo", "pvo", "eth", "dbz", "helicity", "updraft_helicity",
"omg", "pw", "rh", "slp", "td", "tk", "tv", "twb", "uvmet",
"cloudfrac", "ctt"]
omp_set_num_threads(omp_get_num_procs()-1)
omp_set_schedule(OMP_SCHED_STATIC, 0)

1
test/contrib/readme

@@ -0,0 +1 @@
This directory is for user contributed tests.

141
test/ipynb/loop_and_fill.ipynb

@@ -0,0 +1,141 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<xarray.DataArray 'T2' (Time: 4, bottom_top: 1, south_north: 96, west_east: 96)>\n",
"array([[[[303.6255 , ..., 303.81546],\n",
" ...,\n",
" [304.7401 , ..., 300.15717]]],\n",
"\n",
"\n",
" ...,\n",
"\n",
"\n",
" [[[301.89346, ..., 302.56534],\n",
" ...,\n",
" [302.61246, ..., 298.01028]]]], dtype=float32)\n",
"Coordinates:\n",
" * Time (Time) datetime64[ns] 2005-08-28 2005-08-28T03:00:00 ...\n",
" XTIME (Time) float64 0.0 180.0 360.0 540.0\n",
" XLAT (south_north, west_east) float32 21.302004 21.302004 21.302004 ...\n",
" XLONG (south_north, west_east) float32 -90.57406 -90.484116 ...\n",
"Dimensions without coordinates: bottom_top, south_north, west_east\n",
"Attributes:\n",
" FieldType: 104\n",
" MemoryOrder: XY \n",
" description: TEMP at 2 M\n",
" units: K\n",
" stagger: \n",
" coordinates: XLONG XLAT XTIME\n",
" projection: Mercator(stand_lon=-89.0, moad_cen_lat=27.99999237060547, t...\n"
]
}
],
"source": [
"from __future__ import print_function, division\n",
"\n",
"import os\n",
"import numpy as np\n",
"from netCDF4 import Dataset\n",
"from wrf import getvar, ALL_TIMES, to_np\n",
"import xarray\n",
"\n",
"path = '/scratch/mawagner/2015_GRELL3D/TESTFILES/TestTime'\n",
"filename_list = os.listdir(path)\n",
"filename_list.sort()\n",
"\n",
"#filename_list = [\"/Users/ladwig/Documents/wrf_files/wrf_vortex_single/wrfout_d02_2005-08-28_00:00:00\",\n",
"# \"/Users/ladwig/Documents/wrf_files/wrf_vortex_single/wrfout_d02_2005-08-28_03:00:00\",\n",
"# \"/Users/ladwig/Documents/wrf_files/wrf_vortex_single/wrfout_d02_2005-08-28_06:00:00\",\n",
"# \"/Users/ladwig/Documents/wrf_files/wrf_vortex_single/wrfout_d02_2005-08-28_09:00:00\"]\n",
"\n",
"# Result shape \n",
"result_shape = (6, 1, 290, 265)\n",
"#result_shape = (4, 1, 96, 96)\n",
"\n",
"# Let's get the first time so we can copy the metadata later\n",
"f = Dataset(filename_list[0])\n",
"# By setting squeeze to False, you'll get all the dimension names.\n",
"z1 = getvar(f, \"T2\", 0, squeeze=False)\n",
"xlat = getvar(f, \"XLAT\", 0)\n",
"xlong = getvar(f, \"XLONG\", 0)\n",
"\n",
"\n",
"z_final = np.empty(result_shape, np.float32)\n",
"\n",
"# Modify this number if using more than 1 time per file\n",
"times_per_file = 1\n",
"#times_per_file = 4\n",
"\n",
"data_times = []\n",
"xtimes = []\n",
"for timeidx in range(result_shape[0]):\n",
" # Compute the file index and the time index inside the file\n",
" fileidx = timeidx // times_per_file\n",
" file_timeidx = timeidx % times_per_file\n",
"\n",
" f = Dataset(filename_list[fileidx])\n",
" z = getvar(f, \"T2\", file_timeidx)\n",
" t = getvar(f, \"Times\", file_timeidx)\n",
" xt = getvar(f, \"xtimes\", file_timeidx)\n",
" data_times.append(to_np(t))\n",
" xtimes.append(to_np(xt))\n",
" z_final[timeidx,:] = z[:]\n",
" f.close()\n",
" \n",
"# Let's make the metadata. Dimension names should copy easily if you set sqeeze to False, \n",
"# otherwise you can just set them yourself is a tuple of dimension names. Since you wanted\n",
"# to keep the bottom_top dimension for this 2D variable (which is normally removed), \n",
"# I'm doing this manually.\n",
"z_dims = [\"Time\", \"bottom_top\", \"south_north\", \"west_east\"]\n",
"\n",
"# Xarray doesn't copy coordinates easily (it always complains about shape mismatches), so do this\n",
"# manually\n",
"z_coords = {}\n",
"z_coords[\"Time\"] = data_times\n",
"z_coords[\"XTIME\"] = (\"Time\",), xtimes\n",
"z_coords[\"XLAT\"] = (\"south_north\", \"west_east\"), xlat\n",
"z_coords[\"XLONG\"] = (\"south_north\", \"west_east\"), xlong\n",
"z_name = \"T2\"\n",
"\n",
"# Attributes copy nicely\n",
"z_attrs = {}\n",
"z_attrs.update(z1.attrs)\n",
"\n",
"z_with_meta = xarray.DataArray(z_final, coords=z_coords, dims=z_dims, attrs=z_attrs, name=z_name)\n",
"\n",
"print(z_with_meta)\n",
" \n",
" "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
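The notebook above imports ALL_TIMES but fills the output array one time step
at a time, which keeps memory use low. When the files are small enough, a
shorter route (a sketch, with hypothetical file names) is to pass the whole
sequence to getvar and let it concatenate along the Time dimension:

.. code:: python

    from netCDF4 import Dataset

    from wrf import ALL_TIMES, getvar

    # Hypothetical file list; one or more times per file both work
    files = [Dataset(f) for f in ("wrfout_t0.nc", "wrfout_t1.nc")]

    # squeeze=False keeps single-length dimensions such as bottom_top
    t2_all = getvar(files, "T2", timeidx=ALL_TIMES, squeeze=False)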

259
test/ipynb/reduce_files.ipynb

@@ -0,0 +1,259 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tempfile\n",
"import glob\n",
"import shutil\n",
"import os\n",
"import numpy as np\n",
"from netCDF4 import Dataset\n",
"from wrf import getvar, ll_to_xy, CoordPair, GeoBounds, to_np\n",
"\n",
"_VARS_TO_KEEP = (\"Times\", \"XLAT\", \"XLONG\", \"XLAT_U\", \"XLAT_V\", \"XLONG_U\", \n",
" \"XLONG_V\", \"U\", \"V\", \"W\", \"PH\", \"PHB\", \"T\", \"P\", \"PB\", \"Q2\", \n",
" \"T2\", \"PSFC\", \"U10\", \"V10\", \"XTIME\", \"QVAPOR\", \"QCLOUD\", \n",
" \"QGRAUP\", \"QRAIN\", \"QSNOW\", \"QICE\", \"MAPFAC_M\", \"MAPFAC_U\",\n",
" \"MAPFAC_V\", \"F\", \"HGT\", \"RAINC\", \"RAINSH\", \"RAINNC\", \"I_RAINC\", \"I_RAINNC\",\n",
" \"PBLH\")\n",
"\n",
"class FileReduce(object):\n",
" def __init__(self, filenames, geobounds, tempdir=None, vars_to_keep=None, \n",
" max_pres=None, compress=False, delete=True, reuse=False):\n",
" \"\"\"An iterable object for cutting out geographic domains.\n",
" \n",
" Args:\n",
" \n",
" filenames (sequence): A sequence of file paths to the WRF files\n",
" \n",
" geobounds (GeoBounds): A GeoBounds object defining the region of interest\n",
" \n",
" tempdir (str): The location to store the temporary cropped data files. If None, tempfile.mkdtemp is used.\n",
" \n",
" vars_to_keep (sequence): A sequence of variables names to keep from the original file. None for all vars.\n",
" \n",
" max_press (float): The maximum pressure height level to keep. None for all levels.\n",
" \n",
" compress(bool): Set to True to enable zlib compression of variables in the output.\n",
" \n",
" delete (bool): Set to True to delete the temporary directory when FileReduce is garbage collected.\n",
" \n",
" reuse (bool): Set to True when you want to resuse the files that were previously converted. *tempdir* \n",
" must be set to a specific directory that contains the converted files and *delete* must be False.\n",
" \n",
" \n",
" \"\"\"\n",
" self._filenames = filenames\n",
" self._i = 0\n",
" self._geobounds = geobounds\n",
" self._delete = delete\n",
" self._vars_to_keep = vars_to_keep\n",
" self._max_pres = max_pres\n",
" self._compress = compress\n",
" self._cache = set()\n",
" self._own_data = True\n",
" self._reuse = reuse\n",
" \n",
" if tempdir is not None:\n",
" if not os.path.exists(tempdir):\n",
" os.makedirs(tempdir)\n",
" self._tempdir = tempdir\n",
" if self._reuse:\n",
" self._cache = set((os.path.join(self._tempdir, name) \n",
" for name in os.listdir(self._tempdir)))\n",
" else:\n",
" self._tempdir = tempfile.mkdtemp()\n",
"\n",
" print (\"temporary directory is: {}\".format(self._tempdir))\n",
" self._prev = None\n",
" self._set_extents()\n",
" \n",
" def _set_extents(self):\n",
" fname = list(self._filenames)[0]\n",
" with Dataset(fname) as ncfile:\n",
" lons = [self._geobounds.bottom_left.lon, self._geobounds.top_right.lon]\n",
" lats = [self._geobounds.bottom_left.lat, self._geobounds.top_right.lat]\n",
" orig_west_east = len(ncfile.dimensions[\"west_east\"])\n",
" orig_south_north = len(ncfile.dimensions[\"south_north\"])\n",
" orig_bottom_top = len(ncfile.dimensions[\"bottom_top\"])\n",
" \n",
" # Note: Not handling the moving nest here\n",
" # Extra points included around the boundaries to ensure domain is fully included\n",
" x_y = ll_to_xy(ncfile, lats, lons, meta=False)\n",
" self._start_x = 0 if x_y[0,0] == 0 else x_y[0,0] - 1\n",
" self._end_x = orig_west_east - 1 if x_y[0,1] >= orig_west_east - 1 else x_y[0,1] + 1\n",
" self._start_y = 0 if x_y[1,0] == 0 else x_y[1,0] - 1\n",
" self._end_y = orig_south_north - 1 if x_y[1,1] >= orig_south_north - 1 else x_y[1,1] + 1\n",
" \n",
" self._west_east = self._end_x - self._start_x + 1\n",
" self._west_east_stag = self._west_east + 1\n",
" self._south_north = self._end_y - self._start_y + 1\n",
" self._south_north_stag = self._south_north + 1\n",
" \n",
" # Crop the vertical to the specified pressure\n",
" if self._max_pres is not None:\n",
" pres = getvar(ncfile, \"pressure\")\n",
" # Find the lowest terrain height\n",
" ter = to_np(getvar(ncfile, \"ter\"))\n",
" min_ter = float(np.amin(ter)) + 1\n",
" ter_less = ter <= min_ter\n",
" ter_less = np.broadcast_to(ter_less, pres.shape)\n",
" # For the lowest terrain height, find the lowest vertical index to meet \n",
" # the desired pressure level. The lowest terrain height will result in the \n",
" # largest vertical spread to find the pressure level.\n",
" x = np.transpose(((pres.values <= self._max_pres) & ter_less).nonzero())\n",
" self._end_bot_top = np.amin(x, axis=0)[0] \n",
" if (self._end_bot_top >= orig_bottom_top):\n",
" self._end_bot_top = orig_bottom_top - 1\n",
" else:\n",
" self._end_bot_top = orig_bottom_top - 1\n",
" \n",
" self._bottom_top = self._end_bot_top + 1\n",
" self._bottom_top_stag = self._bottom_top + 1\n",
" \n",
" print(\"bottom_top\", self._bottom_top)\n",
" \n",
" \n",
" def __iter__(self):\n",
" return self\n",
" \n",
" def __copy__(self):\n",
" cp = type(self).__new__(self.__class__)\n",
" cp.__dict__.update(self.__dict__)\n",
" cp._own_data = False\n",
" cp._delete = False\n",
" \n",
" return cp\n",
" \n",
" def __del__(self):\n",
" if self._delete:\n",
" shutil.rmtree(self._tempdir)\n",
" \n",
" def reduce(self, fname):\n",
" outfilename = os.path.join(self._tempdir, os.path.basename(fname))\n",
" \n",
" # WRF-Python can iterate over sequences several times during a 'getvar', so a cache is used to \n",
" if outfilename in self._cache:\n",
" return Dataset(outfilename)\n",
" \n",
" # New dimension sizes\n",
" dim_d = {\"west_east\" : self._west_east,\n",
" \"west_east_stag\" : self._west_east_stag,\n",
" \"south_north\" : self._south_north,\n",
" \"south_north_stag\" : self._south_north_stag,\n",
" \"bottom_top\" : self._bottom_top,\n",
" \"bottom_top_stag\" : self._bottom_top_stag\n",
" }\n",
" \n",
" # Data slice sizes for the 2D dimensions\n",
" slice_d = {\"west_east\" : slice(self._start_x, self._end_x + 1),\n",
" \"west_east_stag\" : slice(self._start_x, self._end_x + 2),\n",
" \"south_north\" : slice(self._start_y, self._end_y + 1),\n",
" \"south_north_stag\" : slice(self._start_y, self._end_y + 2),\n",
" \"bottom_top\" : slice(None, self._end_bot_top + 1),\n",
" \"bottom_top_stag\" : slice(None, self._end_bot_top + 2)\n",
" }\n",
" \n",
" with Dataset(fname) as infile, Dataset(outfilename, mode=\"w\") as outfile:\n",
" \n",
" # Copy the global attributes\n",
" outfile.setncatts(infile.__dict__)\n",
"\n",
" # Copy Dimensions, limiting south_north and west_east to desired domain\n",
" for name, dimension in infile.dimensions.items():\n",
" dimsize = dim_d.get(name, len(dimension))\n",
" outfile.createDimension(name, dimsize)\n",
"\n",
" # Copy Variables \n",
" for name, variable in infile.variables.items():\n",
" if self._vars_to_keep is not None:\n",
" if name not in self._vars_to_keep:\n",
" continue\n",
" \n",
" print (name)\n",
" new_slices = tuple((slice_d.get(dimname, slice(None)) for dimname in variable.dimensions))\n",
"\n",
" outvar = outfile.createVariable(name, variable.datatype, variable.dimensions, zlib=self._compress)\n",
"\n",
" outvar[:] = variable[new_slices]\n",
"\n",
" outvar.setncatts(variable.__dict__)\n",
" \n",
" \n",
" result = Dataset(outfilename)\n",
" \n",
" self._cache.add(outfilename)\n",
" \n",
" return result\n",
" \n",
" \n",
" def next(self):\n",
" if self._i >= len(self._filenames):\n",
" if self._prev is not None:\n",
" self._prev.close()\n",
" raise StopIteration\n",
" else:\n",
" fname = self._filenames[self._i]\n",
" reduced_file = self.reduce(fname)\n",
" if self._prev is not None:\n",
" self._prev.close()\n",
" self._prev = reduced_file\n",
" \n",
" self._i += 1\n",
" \n",
" return reduced_file\n",
" \n",
" # Python 3\n",
" def __next__(self):\n",
" return self.next()\n",
"\n",
"# How to use with getvar\n",
"# Set lower left and upper right to your desired domain\n",
"# Idaho bounding box: [\"41.9880561828613\",\"49.000846862793\",\"-117.243034362793\",\"-111.043563842773\"]\n",
"ll = CoordPair(lat=41.8, lon=-117.26)\n",
"ur = CoordPair(lat=49.1, lon=-110.5)\n",
"bounds = GeoBounds(ll, ur)\n",
"reduced_files = FileReduce(glob.glob(\"/Users/ladwig/Documents/wrf_files/boise_tutorial/orig/wrfout_*\"),\n",
" bounds, vars_to_keep=_VARS_TO_KEEP, max_pres=400,\n",
" tempdir=\"/Users/ladwig/Documents/wrf_files/boise_tutorial/reduced\", \n",
" delete=False, reuse=True)\n",
"\n",
"pres = getvar(reduced_files, \"pressure\")\n",
"\n",
"print(pres)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

4
test/utests.py

@@ -13,8 +13,8 @@ from wrf import (getvar, interplevel, interpline, vertcross, vinterp,
extract_global_attrs, viewitems, CoordPair, ll_points)
from wrf.util import is_multi_file
NCL_EXE = "/Users/ladwig/miniconda3/envs/ncl_build/bin/ncl"
NCARG_ROOT = "/Users/ladwig/miniconda3/envs/ncl_build"
NCL_EXE = "ncl"
NCARG_ROOT = os.environ["CONDA_PREFIX"]
DIRS = ["/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/moving_nest",
"/Users/ladwig/Documents/wrf_files/wrf_vortex_multi/static_nest"]
PATTERN = "wrfout_d02_*"

22
testenv2.yml

@@ -0,0 +1,22 @@
# Create full conda environment for development, including some useful tools
name: testenv2
channels:
- conda-forge
dependencies:
- python=2
- wrapt
- numpy
- matplotlib
- netcdf4
- xarray
- jupyter
- sphinx
- sphinx_rtd_theme
- pycodestyle
- cartopy
- basemap
- clang_osx-64
- gfortran_osx-64
- pynio
- ncl

22
testenv3.yml

@@ -0,0 +1,22 @@
# Create full conda environment for development, including some useful tools
name: testenv3
channels:
- conda-forge
dependencies:
- python=3
- wrapt
- numpy
- matplotlib
- netcdf4
- xarray
- jupyter
- sphinx
- sphinx_rtd_theme
- pycodestyle
- cartopy
- basemap
- clang_osx-64
- gfortran_osx-64
- pynio
- ncl

19
win64.yml

@@ -0,0 +1,19 @@
# Create full conda environment for development, including some useful tools
name: develop
channels:
- conda-forge
dependencies:
- python=3
- wrapt
- numpy
- matplotlib
- netcdf4
- xarray
- jupyter
- sphinx
- sphinx_rtd_theme
- pycodestyle
- cartopy
- basemap
- m2w64-toolchain