iris.util#
Miscellaneous utility functions.
- class iris.util.CMLSettings(numpy_formatting=True, data_array_stats=False, coord_checksum=False, coord_data_array_stats=False, coord_order=False, array_edgeitems=3, masked_value_count=False)[source]#
Bases: _local
Settings for controlling the behaviour and formatting of the CML output.
Use the set method of this class as a context manager to temporarily modify the settings.
- Parameters:
- set(numpy_formatting=None, data_array_stats=None, coord_checksum=None, coord_data_array_stats=None, coord_order=None, array_edgeitems=None, masked_value_count=None)[source]#
Context manager to control the CML output settings.
Use this method in a with statement to override specific output settings of the Cube Metadata Language (CML), e.g. as generated from cube.xml().
Example:
# Generate a CML output for a cube, but also include array statistics for the
# cube coordinate data:
with iris.CML_SETTINGS.set(coord_data_array_stats=True):
    print(cube.xml())
- Parameters:
numpy_formatting (bool or None, optional) – Whether to use numpy-style formatting for arrays.
data_array_stats (bool or None, optional) – Whether to include statistics for cube data array.
coord_checksum (bool or None, optional) – Whether to include a checksum for coordinate data arrays.
coord_data_array_stats (bool or None, optional) – Whether to include statistics for coordinate data arrays.
coord_order (bool or None, optional) – Whether to output the array ordering (i.e. Fortran/C) for coordinate data arrays.
array_edgeitems (int or None, optional) – The number of elements to display at the edges of arrays.
masked_value_count (bool or None, optional) – Whether to include a count of masked values in the output.
- iris.util.approx_equal(a, b, max_absolute_error=1e-10, max_relative_error=1e-10)[source]#
Check if two numbers are almost equal.
Returns whether two numbers are almost equal, allowing for the finite precision of floating point numbers.
Notes
This function does maintain laziness when called; it doesn't realise data. See more at Real and Lazy Data.
Deprecated since version 3.2.0: Instead use math.isclose(). For example, rather than calling approx_equal(a, b, max_abs, max_rel), replace with math.isclose(a, b, max_rel, max_abs). Note that isclose() will return True if the actual error equals the maximum, whereas util.approx_equal() will return False.
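A minimal sketch of the suggested replacement (the values here are purely illustrative):
import math

a, b = 100.0, 100.00000000001
max_abs, max_rel = 1e-10, 1e-10

# Deprecated form:  approx_equal(a, b, max_abs, max_rel)
# Replacement (note the keyword names and swapped tolerance order):
print(math.isclose(a, b, rel_tol=max_rel, abs_tol=max_abs))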
- iris.util.array_checksum(data)[source]#
Calculate a checksum for an array.
Returns the crc32 checksum of the array data as a hex string. Masked data is filled with zeros before calculating to ensure that the checksum is not sensitive to unset values.
- Parameters:
data (array-like) – The array to calculate the checksum for.
- Returns:
32 bit checksum hexstring, e.g. "0x1a2b3c4d".
- Return type:
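A small sketch of typical usage (the exact checksum string depends entirely on the data supplied):
import numpy as np
from iris.util import array_checksum

data = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
# Masked elements are filled with zeros before the crc32 is calculated, so the
# result does not depend on whatever values sit "under" the mask.
print(array_checksum(data))  # prints a hex string such as "0x...."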
- iris.util.array_equal(array1, array2, withnans=False)[source]#
Return whether two arrays have the same shape and elements.
- Parameters:
array1 (array-like) – Args to be compared, normalised if necessary with np.asarray().
array2 (array-like) – Args to be compared, normalised if necessary with np.asarray().
withnans (default=False) – When unset (default), the result is False if either input contains NaN points. This is the normal floating-point arithmetic result. When set, return True if inputs contain the same value in all elements, including any NaN values.
- Return type:
Notes
This provides similar functionality to numpy.array_equal(), but will compare arrays as unequal when their masks differ and has additional support for arrays of strings and NaN-tolerant operation.
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
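For illustration, a minimal sketch of the withnans behaviour described above:
import numpy as np
from iris.util import array_equal

a = np.array([1.0, np.nan, 3.0])
b = np.array([1.0, np.nan, 3.0])

print(array_equal(a, b))                 # False: NaN points make the default comparison fail
print(array_equal(a, b, withnans=True))  # True: matching NaN positions count as equal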
- iris.util.array_summary(data, edgeitems=3, precision=8)[source]#
Return a strictly formatted summarised view of an array (first and last N elements).
Generates a string formatted summary of the array. The first and last edgeitems elements are printed out with strictly controlled formatting. Useful for summarising arrays in tests for comparing against known good outputs.
Multi-dimensional arrays will be flattened prior to summarising.
- Parameters:
- Returns:
A string formatted summary of the array.
- Return type:
- iris.util.between(lh, rh, lh_inclusive=True, rh_inclusive=True)[source]#
Provide a convenient way of defining a 3 element inequality.
Such as a < number < b.
- Parameters:
lh – The left hand element of the inequality.
rh – The right hand element of the inequality.
lh_inclusive (bool, default=True) – Affects the left hand comparison operator to use in the inequality. True for <=, False for <. Defaults to True.
rh_inclusive (bool, default=True) – Same as lh_inclusive but for the right hand operator.
Examples
between_3_and_6 = between(3, 6)

for i in range(10):
    print(i, between_3_and_6(i))

between_3_and_6 = between(3, 6, rh_inclusive=False)

for i in range(10):
    print(i, between_3_and_6(i))
Notes
This function does maintain laziness when called; it doesn't realise data. See more at Real and Lazy Data.
- iris.util.broadcast_to_shape(array, shape, dim_map, chunks=None)[source]#
Broadcast an array to a given shape.
Each dimension of the array must correspond to a dimension in the given shape. The result is a read-only view (see numpy.broadcast_to()). If you need to write to the resulting array, make a copy first.
- Parameters:
array (numpy.ndarray-like) – An array to broadcast.
shape (list, tuple etc) – The shape the array should be broadcast to.
dim_map (list, tuple etc) – A mapping of the dimensions of array to their corresponding element in shape. dim_map must be the same length as the number of dimensions in array. Each element of dim_map corresponds to a dimension of array and its value provides the index in shape which the dimension of array corresponds to, so the first element of dim_map gives the index of shape that corresponds to the first dimension of array etc.
chunks (tuple, optional) – If the source array is a dask.array.Array and a value is provided, then the result will use these chunks instead of the same chunks as the source array. Setting chunks explicitly as part of broadcast_to_shape is more efficient than rechunking afterwards. See also dask.array.broadcast_to(). Note that the values provided here will only be used along dimensions that are new on the result or have size 1 on the source array.
Examples
Broadcasting an array of shape (2, 3) to the shape (5, 2, 6, 3) where the first dimension of the array corresponds to the second element of the desired shape and the second dimension of the array corresponds to the fourth element of the desired shape:
a = np.array([[1, 2, 3], [4, 5, 6]])
b = broadcast_to_shape(a, (5, 2, 6, 3), (1, 3))
Broadcasting an array of shape (48, 96) to the shape (96, 48, 12):
# a is an array of shape (48, 96)
result = broadcast_to_shape(a, (96, 48, 12), (1, 0))
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.clip_string(the_str, clip_length=70, rider='...')[source]#
Return a clipped version of the string based on the specified clip length.
Return a clipped version of the string based on the specified clip length and whether or not any graceful clip points can be found.
If the string to be clipped is shorter than the specified clip length, the original string is returned.
If the string is longer than the clip length, a graceful point (a space character) after the clip length is searched for. If a graceful point is found the string is clipped at this point and the rider is added. If no graceful point can be found, then the string is clipped exactly where the user requested and the rider is added.
- Parameters:
the_str (str) – The string to be clipped.
clip_length (int, default=70) – The length in characters that the input string should be clipped to. Defaults to a preconfigured value if not specified.
rider (str, default="...") – A series of characters appended at the end of the returned string to show it has been clipped. Defaults to a preconfigured value if not specified.
- Returns:
The string clipped to the required length with a rider appended. If the clip length was greater than the original string, the original string is returned unaltered.
- Return type:
Notes
This function does maintain laziness when called; it doesn't realise data. See more at Real and Lazy Data.
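A short sketch of typical usage (the example string is made up):
from iris.util import clip_string

long_name = "a rather long cube title that would overflow a terse one-line summary display"
# Clip to roughly 40 characters, breaking at the first space after the limit
# and appending the default "..." rider:
print(clip_string(long_name, clip_length=40))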
- iris.util.column_slices_generator(full_slice, ndims)[source]#
Return a dictionary mapping old data dimensions to new.
Given a full slice full of tuples, return a dictionary mapping old data dimensions to new and a generator which gives the successive slices needed to index correctly (across columns).
This routine deals with the special functionality for tuple based indexing, e.g. [0, (3, 5), :, (1, 6, 8)], by first providing a slice which takes the non-tuple slices out first, i.e. [0, :, :, :], then subsequently iterating through each of the tuples taking out the appropriate slices, i.e. [(3, 5), :, :] followed by [:, :, (1, 6, 8)].
This method was developed as numpy does not support the direct approach of [(3, 5), :, (1, 6, 8)] for column based indexing.
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.combine_cubes(cubes, options=None, **kwargs)[source]#
Combine cubes, according to "combine options".
Applies a combination of equalise_cubes(), merge() and/or concatenate() steps to the given cubes, as determined by the given settings (from options and kwargs).
- Parameters:
cubes (list of Cube) – A list of cubes to combine.
options (str or dict, optional) – Either a standard "combine settings" name, i.e. one of the iris.CombineOptions.SETTINGS_NAMES, or a dictionary of settings options, as described for CombineOptions. Defaults to the current settings() of the iris.COMBINE_POLICY.
kwargs (dict) – Individual option setting values, i.e. values for keys named in iris.CombineOptions.OPTION_KEYS, as described for set(). These take precedence over those set by the options arg.
- Return type:
Notes
A support_multiple_references option will be accepted as valid, but will have no effect on combine_cubes() because this option only acts during load operations.
Examples
>>> # Take a pair of sample cubes which can concatenate together
>>> print(cubes)
0: unknown / (unknown) (time: 2)
1: unknown / (unknown) (time: 3)
>>> print([cube.coord("time").points for cube in cubes])
[array([1., 2.]), array([13., 14., 15.])]

>>> # Show these do NOT combine with the "default" action, which only merges ..
>>> print(combine_cubes(cubes))
0: unknown / (unknown) (time: 2)
1: unknown / (unknown) (time: 3)
>>> # ... however, they **do** combine if you enable concatenation
>>> print(combine_cubes(cubes, merge_concat_sequence="mc"))
0: unknown / (unknown) (time: 5)
>>> # ... which may be controlled by various means
>>> iris.COMBINE_POLICY.set("recommended")
>>> print(combine_cubes(cubes))
0: unknown / (unknown) (time: 5)

>>> # Also, show how a differing attribute will block cube combination
>>> cubes[0].attributes["x"] = 3
>>> print(combine_cubes(cubes))
0: unknown / (unknown) (time: 2)
1: unknown / (unknown) (time: 3)
>>> # ... which can then be fixed by enabling attribute equalisation
>>> with iris.COMBINE_POLICY.context(equalise_cubes_kwargs={"apply_all": True}):
...     print(combine_cubes(cubes))
...
0: unknown / (unknown) (time: 5)

>>> # .. BUT NOTE : this modifies the original input cubes
>>> print(cubes[0].attributes.get("x"))
None
- iris.util.create_temp_filename(suffix='')[source]#
Return a temporary file name.
- Parameters:
suffix (str, optional, default="") – Filename extension.
- iris.util.delta(ndarray, dimension, circular=False)[source]#
Calculate the difference between values along a given dimension.
- Parameters:
ndarray – The array over which to do the difference.
dimension – The dimension over which to do the difference on ndarray.
circular (bool, default=False) –
If not False then return n results in the requested dimension with the delta between the last and first element included in the result, otherwise the result will be of length n-1 (where n is the length of ndarray in the given dimension's direction).
If circular is numeric then the value of circular will be added to the last element of the given dimension if the last element is negative, otherwise the value of circular will be subtracted from the last element.
The example below illustrates the process:
original array:             -180, -90, 0, 90
delta (with circular=360):    90,  90, 90, -270+360
Notes
Note
The difference algorithm implemented is forward difference:
>>> import numpy as np
>>> import iris.util
>>> original = np.array([-180, -90, 0, 90])
>>> iris.util.delta(original, 0)
array([90, 90, 90])
>>> iris.util.delta(original, 0, circular=360)
array([90, 90, 90, 90])
Note
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.demote_dim_coord_to_aux_coord(cube, name_or_coord)[source]#
Demote a dimension coordinate on the cube to an auxiliary coordinate.
The DimCoord is demoted to an auxiliary coordinate on the cube. The dimension of the cube that was associated with the DimCoord becomes anonymous. The class of the coordinate is left as DimCoord; it is not recast as an AuxCoord instance.
- Parameters:
cube – An instance of iris.cube.Cube.
name_or_coord –
(a) An instance of iris.coords.DimCoord
(b) the standard_name, long_name, or var_name of an instance of iris.coords.DimCoord.
Examples
>>> print(cube)
air_temperature / (K)          (time: 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        time                        x             -              -
        latitude                    -             x              -
        longitude                   -             -              x
    Auxiliary coordinates:
        forecast_period             x             -              -
        year                        x             -              -
>>> demote_dim_coord_to_aux_coord(cube, "time")
>>> print(cube)
air_temperature / (K)          (-- : 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        latitude                    -             x              -
        longitude                   -             -              x
    Auxiliary coordinates:
        forecast_period             x             -              -
        time                        x             -              -
        year                        x             -              -
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.describe_diff(cube_a, cube_b, output_file=None)[source]#
Print the differences that prevent compatibility between two cubes.
Print the differences that prevent compatibility between two cubes, as defined by iris.cube.Cube.is_compatible().
- Parameters:
cube_a – An instance of iris.cube.Cube or iris.cube.CubeMetadata.
cube_b – An instance of iris.cube.Cube or iris.cube.CubeMetadata.
output_file (optional) – A file or file-like object to receive output. Defaults to sys.stdout.
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
Note
Compatibility does not guarantee that two cubes can be merged. Instead, this function is designed to provide a verbose description of the differences in metadata between two cubes. Determining whether two cubes will merge requires additional logic that is beyond the scope of this function.
See also
iris.cube.Cube.is_compatible – Check if a Cube is compatible with another.
- iris.util.equalise_attributes(cubes)[source]#
Delete cube attributes that are not identical over all cubes in a group.
This function deletes any attributes which are not the same for all the given cubes. The cubes will then have identical attributes, and the removed attributes are returned. The given cubes are modified in-place.
- Parameters:
cubes (iterable of iris.cube.Cube) – A collection of cubes to compare and adjust.
- Returns:
A list of dicts holding the removed attributes.
- Return type:
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
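A minimal sketch of typical usage, assuming cubes is an iris.cube.CubeList loaded elsewhere whose members differ only in some attributes:
from iris.util import equalise_attributes

# Remove any attributes (e.g. differing "history" entries) that are not
# identical across all cubes; the removed attributes are returned.
removed = equalise_attributes(cubes)

# If attributes were the only obstacle, the cubes may now merge cleanly.
merged = cubes.merge_cube()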
- iris.util.equalise_cubes(cubes, apply_all=False, normalise_names=False, equalise_attributes=False, unify_time_units=False)[source]#
Modify a set of cubes to assist merge/concatenate operations.
Various different adjustments can be applied to the input cubes, to remove differences which may prevent them from combining into larger cubes. The requested "equalisation" operations are applied to each group of input cubes with matching cube metadata (names, units, attributes and cell-methods).
- Parameters:
cubes (sequence of Cube) – The input cubes, in a list or similar.
apply_all (bool, default=False) – Enable all the equalisation operations.
normalise_names (bool, default=False) – When True, remove any redundant var_name and long_name properties, leaving only one standard_name, long_name or var_name per cube. In this case, the adjusted names are also used when selecting input groups.
equalise_attributes (bool, default=False) – When True, apply an equalise_attributes() operation to each input group. In this case, attributes are ignored when selecting input groups.
unify_time_units (bool, default=False) – When True, apply the unify_time_units() operation to each input group. Note: while this may convert units of time reference coordinates, it does not affect the units of the cubes themselves.
- Returns:
A CubeList containing the original input cubes, modified as required (in-place) ready for merge or concatenate operations.
- Return type:
Notes
All the "equalise" operations operate in a similar fashion, in that they identify and remove differences in a specific metadata element, altering metadata so that a merge or concatenate can potentially combine a group of cubes into a single result cube.
The various "equalise" operations are not applied to the entire input, but to groups of input cubes with the same cube.metadata.
The input cube groups also depend on the equalisation operation(s) selected: operations which equalise a specific cube metadata element (names, units, attributes or cell-methods) exclude that element from the input grouping criteria.
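A minimal sketch of typical usage, assuming cubes is a sequence of cubes loaded elsewhere:
from iris.util import equalise_cubes

# Apply all the equalisation operations (in-place), then attempt to combine.
prepared = equalise_cubes(cubes, apply_all=True)
result = prepared.merge()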
- iris.util.file_is_newer_than(result_path, source_paths)[source]#
Determine if the "result" file was modified last.
Return whether the "result" file has a later modification time than all of the "source" files.
If a stored result depends entirely on known βsourcesβ, it need only be re-built when one of them changes. This function can be used to test that by comparing file timestamps.
- Parameters:
- Returns:
True if all the sources are older than the result, else False. If any of the file paths describes no existing files, an exception will be raised.
- Return type:
Notes
Note
There are obvious caveats to using file timestamps for this, as correct usage depends on how the sources might change. For example, a file could be replaced by one of the same name, but an older timestamp.
If wildcards and β~β expansions are used, this introduces even more uncertainty, as then you cannot even be sure that the resulting list of file names is the same as the originals. For example, some files may have been deleted or others added.
Note
The result file may often be a pickle file. In that case, it also depends on the relevant module sources, so extra caution is required. Ideally, an additional check on iris.__version__ is advised.
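A small sketch of the intended usage pattern; the file names here are hypothetical:
from iris.util import file_is_newer_than

# Only rebuild the cached result if any of its source files has changed.
if not file_is_newer_than("analysis_result.pkl", ["input_a.pp", "input_b.pp"]):
    pass  # ... recompute and re-save "analysis_result.pkl" here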
- iris.util.find_discontiguities(cube, rel_tol=1e-05, abs_tol=1e-08)[source]#
Identify spatial discontiguities.
Searches the 'x' and 'y' coords on the cube for discontiguities in the bounds array, returned as a boolean array (True for all cells which are discontiguous with the cell immediately above them or to their right).
- Parameters:
cube (iris.cube.Cube) – The cube to be checked for discontinuities in its 'x' and 'y' coordinates. These coordinates must be 2D.
rel_tol (float, default=1e-5) – The relative equality tolerance to apply in coordinate bounds checking.
abs_tol (float, default=1e-8) – The absolute value tolerance to apply in coordinate bounds checking.
- Returns:
Boolean True/False map of which cells in the cube XY grid have discontiguities in the coordinate points array.
This can be used as the input array for iris.util.mask_cube().
- Return type:
Examples
# Find any unknown discontiguities in your cube's x and y arrays:
discontiguities = iris.util.find_discontiguities(cube)

# Pass the resultant boolean array to `iris.util.mask_cube`
# with a cube slice; this will use the boolean array to mask
# any discontiguous data points before plotting:
masked_cube_slice = iris.util.mask_cube(cube[0], discontiguities)

# Plot the masked cube slice:
iplt.pcolormesh(masked_cube_slice)
Notes
This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.
- iris.util.format_array(arr, edgeitems=3)[source]#
Format the given array as a string.
Returns the given array as a string, using the python builtin str function on a piecewise basis.
Useful for xml representation of arrays.
For customisations, use numpy.core.arrayprint directly.
Notes
This function does maintain laziness when called; it doesn't realise data. See more at Real and Lazy Data.
- iris.util.guess_coord_axis(coord)[source]#
Return a "best guess" axis name of the coordinate.
Heuristic categorisation of the coordinate into either label 'T', 'Z', 'Y', 'X' or None.
- Parameters:
coord – The iris.coords.Coord.
- Return type:
{'T', 'Z', 'Y', 'X'} or None.
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
The guess_coord_axis behaviour can be skipped by setting the ignore_axis property on coord to True.
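For illustration, a minimal sketch assuming a cube with a latitude coordinate:
from iris.util import guess_coord_axis

print(guess_coord_axis(cube.coord("latitude")))  # "Y"

# Opt a coordinate out of axis guessing:
cube.coord("latitude").ignore_axis = True
print(guess_coord_axis(cube.coord("latitude")))  # None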
- iris.util.is_masked(array)[source]#
Equivalent to numpy.ma.is_masked(), but works for both lazy AND realised arrays.
- Parameters:
array (numpy.ndarray or dask.array.Array) – The array to be checked for masks.
- Returns:
Whether or not the array has any masks.
- Return type:
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
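A minimal sketch showing both a realised and a lazy masked array:
import dask.array as da
import numpy as np
from iris.util import is_masked

real = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
lazy = da.from_array(real, chunks=2)

print(is_masked(real))          # True
print(is_masked(lazy))          # True
print(is_masked(np.arange(3)))  # False: no mask present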
- iris.util.is_regular(coord)[source]#
Determine if the given coord is regular.
Notes
This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.
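For illustration, a minimal sketch using a simple DimCoord:
from iris.coords import DimCoord
from iris.util import is_regular

coord = DimCoord([0.0, 10.0, 20.0, 30.0], long_name="x")
print(is_regular(coord))  # True: the points have a constant step of 10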
- iris.util.make_gridcube(nx=30, ny=20, xlims=(0.0, 360.0), ylims=(-90.0, 90.0), *, x_points=None, y_points=None, coord_system=None)[source]#
Make a 2D sample cube with a specified XY grid.
The cube is suitable as a regridding target, amongst other uses. It has one-dimensional X and Y coordinates, in a specific coordinate system. Both can be given either regularly spaced or irregular points.
The cube is dataless, meaning that cube.data is None, but data can easily be assigned if required. See Dataless Cubes.
- Parameters:
nx (int, optional) – Number of points on the X axis. Defaults to 30.
ny (int, optional) – Number of points on the Y axis. Defaults to 20.
xlims (pair of floats or ints, optional) – End points of the X coordinate: (first, last). Defaults to (0., 360.).
ylims (pair of floats or ints, optional) – End points of the Y coordinate: (first, last). Defaults to (-90., +90.).
x_points (array-like, optional) – If not None, this sets the number and exact values of x points: in this case, nx and xlims are ignored. Defaults to None.
y_points (array-like, optional) – If not None, this sets the number and exact values of y points: in this case, ny and ylims are ignored. Defaults to None.
coord_system (iris.coord_systems.CoordSystem, optional) – The coordinate system of the cube: also sets the coordinate system and standard_names of the X and Y coordinates. Defaults to a iris.coord_systems.GeogCS (i.e. spherical lat-lon), with a standard radius of iris.fileformats.pp.EARTH_RADIUS.
- Returns:
cube – A cube with the specified grid, but no data.
- Return type:
Warning
If given, the x_points or y_points args define a DimCoord, and so must be one-dimensional, have at least 1 value, and be strictly monotonic (increasing or decreasing).
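A small sketch of typical usage, building a one-degree global grid as a regridding target:
from iris.util import make_gridcube

grid = make_gridcube(nx=360, ny=181, xlims=(0.0, 359.0), ylims=(-90.0, 90.0))
print(grid.data is None)  # True: the grid cube is dataless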
- iris.util.mask_cube(cube, points_to_mask, in_place=False, dim=None)[source]#
Mask any cells in the cube's data array.
Masks any cells in the cube's data array which correspond to cells marked True (or non-zero) in points_to_mask. points_to_mask may be specified as a numpy.ndarray, dask.array.Array, iris.coords.Coord or iris.cube.Cube, following the same broadcasting approach as cube arithmetic (see Cube Maths).
- Parameters:
cube (iris.cube.Cube) – Cube containing data that requires masking.
points_to_mask (numpy.ndarray, dask.array.Array, iris.coords.Coord or iris.cube.Cube) – Specifies booleans (or ones and zeros) indicating which points will be masked.
in_place (bool, default=False) – If True, masking is applied to the input cube. Otherwise a copy is masked and returned.
dim (int, optional) – If points_to_mask is a coord which does not exist on the cube, specify the dimension to which it should be mapped.
- Returns:
A cube whose data array is masked at points specified by points_to_mask.
- Return type:
Notes
If either cube or points_to_mask is lazy, the result will be lazy.
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
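A minimal sketch, assuming a cube loaded elsewhere; this masks every negative data value:
from iris.util import mask_cube

# points_to_mask is a boolean array broadcast against the cube's data.
masked = mask_cube(cube, cube.data < 0)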
- iris.util.mask_cube_from_shape(cube, shape, shape_crs=None, in_place=False, minimum_weight=0.0, all_touched=None, invert=False)[source]#
Mask all points in a cube that do not intersect a shape object.
Mask a Cube with any shape object (e.g. Natural Earth Shapefiles via cartopy). Finds the overlap between the shape and the Cube and masks out any cells that do not intersect the shape.
Shapes can be Polygons, Lines or Points, or their multi-part equivalents.
By default, all cells touched by geometries are kept (equivalent to minimum_weight=0). This behaviour can be changed by increasing the minimum_weight keyword argument or setting all_touched=False, in which case only cells whose centre is within the polygon or that are selected by Bresenham's line algorithm (for line type shapes) are kept.
For points, minimum_weight is ignored, and the cell that intersects the point is kept.
- Parameters:
cube (Cube object) – The Cube object to be masked. Must be singular, rather than a CubeList.
shape (shapely.Geometry object) – A single shape of the area to remain unmasked on the cube. If it is a line object of some kind then minimum_weight will be ignored, because you cannot compare the area of a 1D line and 2D Cell.
shape_crs (cartopy.crs.CRS, default=None) – The coordinate reference system of the shape object.
in_place (bool, default=False) – Whether to mask the cube in-place or return a newly masked cube. Defaults to False.
minimum_weight (float, default=0.0) – A number between 0-1 describing what percentage of a cube cell area must the shape overlap to be masked. Only applied to polygon shapes. If the shape is a line or point then this is ignored.
all_touched (bool, default=None) – If True, all cells touched by the shape are kept. If False, only cells whose centre is within the polygon or that are selected by Bresenham's line algorithm (for line type shapes) are kept. Note that minimum_weight and all_touched are mutually exclusive options: an error will be raised if a minimum_weight > 0 and all_touched is set to True. This is because all_touched=True is equivalent to minimum_weight=0.
invert (bool, default=False) – If True, the mask is inverted, meaning that cells that intersect the shape are masked out and cells that do not intersect the shape are kept. If False, the mask is applied normally, meaning that cells that intersect the shape are kept and cells that do not intersect the shape are masked out.
- Returns:
A masked version of the input cube, if in_place is False.
- Return type:
iris.Cube
See also
mask_cube() – Mask any cells in the cube's data array.
Examples
>>> import numpy as np
>>> import shapely
>>> from pyproj import CRS
>>> from iris.util import mask_cube_from_shape
Extract a rectangular region covering the UK from a stereographic projection cube:
>>> cube = iris.load_cube(iris.sample_data_path("toa_brightness_stereographic.nc"))
>>> shape = shapely.geometry.box(-10, 50, 2, 60)  # box around the UK
>>> wgs84 = CRS.from_epsg(4326)  # WGS84 coordinate system
>>> masked_cube = mask_cube_from_shape(cube, shape, wgs84)
Note that there is no change in the dimensions of the masked cube, only in the mask applied to data values:
>>> print(cube.summary(shorten=True))
toa_brightness_temperature / (K) (projection_y_coordinate: 160; projection_x_coordinate: 256)
>>> print(f"Masked cells: {np.sum(cube.data.mask)} of {np.multiply(*cube.shape)}")
Masked cells: 3152 of 40960
>>> print(masked_cube.summary(shorten=True))
toa_brightness_temperature / (K) (projection_y_coordinate: 160; projection_x_coordinate: 256)
>>> print(f"Masked cells: {sum(sum(masked_cube.data.mask))} of {np.multiply(*masked_cube.shape)}")
Masked cells: 40062 of 40960
Extract a trajectory by using a line shapefile:
>>> from shapely import LineString
>>> line = LineString([(-45, 40), (-28, 53), (-2, 55), (19, 45)])
>>> masked_cube = mask_cube_from_shape(cube, line, wgs84)
Standard shapely manipulations can be applied. For example, to extract a trajectory with a 1 degree buffer around it:
>>> buffer = line.buffer(1)
>>> masked_cube = mask_cube_from_shape(cube, buffer, wgs84)
You can load more complex shapes from other libraries. For example, to extract the Canadian province of Ontario from a cube:
>>> import cartopy.io.shapereader as shpreader
>>> admin1 = shpreader.natural_earth(resolution='110m',
...                                  category='cultural',
...                                  name='admin_1_states_provinces_lakes')
>>> admin1shp = shpreader.Reader(admin1).geometries()
>>> cube = iris.load_cube(iris.sample_data_path("E1_north_america.nc"))
>>> shape = shapely.geometry.box(-100, 30, -80, 40)  # box between 30N-40N 100W-80W
>>> wgs84 = CRS.from_epsg(4326)
>>> masked_cube = mask_cube_from_shape(cube, shape, wgs84)
Notes
Iris does not handle the shape loading so it is agnostic to the source type of the shape. The shape can be loaded from an Esri shapefile, created using the shapely library, or any other source that can be interpreted as a shapely.Geometry object, such as shapes encoded in a geoJSON or KML file.
Warning
For best masking results, both the cube and masking geometry should have a coordinate reference system (CRS) defined. Note that the CRS of the masking geometry must be provided explicitly to this function (via shape_crs), whereas the cube CRS is read from the cube itself. The cube must have a coord_system defined.
Masking results will be most consistent when the cube and masking geometry have the same CRS.
If a CRS is not provided for the masking geometry, the CRS of the cube is assumed.
This function requires additional dependencies: rasterio and affine.
Because shape vectors are inherently Cartesian in nature, they contain no inherent understanding of the spherical geometry underpinning geographic coordinate systems. For this reason, shapefiles or shape vectors that cross the antimeridian or poles are not supported by this function to avoid unexpected masking behaviour. For shapes that do cross these boundaries, this function expects the user to undertake fixes upstream of Iris, using tools like GDAL or antimeridian to fix shape wrapping.
- iris.util.mask_cube_from_shapefile(cube, shape, minimum_weight=0.0, in_place=False)[source]#
Mask all points in a cube that do not intersect a shapefile object.
- Parameters:
cube (Cube object) –
shape (shapely.Geometry object) – A single shape of the area to remain unmasked on the cube. If it is a line object of some kind then minimum_weight will be ignored, because you cannot compare the area of a 1D line and 2D Cell.
minimum_weight (float, default=0.0) – A number between 0-1 describing what % of a cube cell area must the shape overlap to include it.
in_place (bool, default=False) – Whether to mask the cube in-place or return a newly masked cube. Defaults to False.
- Returns:
A masked version of the input cube, if in_place is False.
- Return type:
iris.Cube
See also
mask_cube() – Mask any cells in the cube's data array.
mask_cube_from_shape() – Mask all points in a cube that do not intersect a shape object.
Notes
Deprecated since version 3.14: The mask_cube_from_shapefile() function is scheduled for removal in a future release, being replaced by iris.util.mask_cube_from_shape(), which offers richer shape handling.
- iris.util.monotonic(array, strict=False, return_direction=False)[source]#
Return whether the given 1d array is monotonic.
Note that the array must not contain missing data.
- Parameters:
strict (bool, default=False) – Flag to enable strict monotonic checking.
return_direction (bool, default=False) – Flag to change return behaviour to return (monotonic_status, direction). Direction will be 1 for positive or -1 for negative. The direction is meaningless if the array is not monotonic.
- Returns:
Whether the array was monotonic. If the return_direction flag was given then the returned value will be: (monotonic_status, direction).
- Return type:
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
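For illustration, a minimal sketch of the strict and return_direction options:
import numpy as np
from iris.util import monotonic

values = np.array([1, 2, 2, 3])

print(monotonic(values))                               # True: non-decreasing
print(monotonic(values, strict=True))                  # False: the repeated value fails the strict check
print(monotonic(values[::-1], return_direction=True))  # (True, -1): monotonic and decreasing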
- iris.util.new_axis(src_cube, scalar_coord=None, expand_extras=())[source]#
Create a new axis as the leading dimension of the cube.
Create a new axis as the leading dimension of the cube, promoting a scalar coordinate if specified.
- Parameters:
src_cube (iris.cube.Cube) – Source cube on which to generate a new axis.
scalar_coord (iris.coords.Coord or str, optional) – Scalar coordinate to promote to a dimension coordinate.
expand_extras (iterable, optional) – Auxiliary coordinates, ancillary variables and cell measures which will be expanded so that they map to the new dimension as well as the existing dimensions.
- Returns:
A new iris.cube.Cube instance with one extra leading dimension (length 1). Chosen auxiliary coordinates, cell measures and ancillary variables will also be given an additional dimension, associated with the leading dimension of the cube.
- Return type:
Examples
>>> cube.shape
(360, 360)
>>> ncube = iris.util.new_axis(cube, 'time')
>>> ncube.shape
(1, 360, 360)
Notes
This function does maintain laziness when called; it doesn't realise data. See more at Real and Lazy Data.
- iris.util.points_step(points)[source]#
Determine whether points has a regular step.
- Parameters:
points (numeric, array-like) – The sequence of values to check for a regular difference.
- Returns:
A tuple containing the average difference between values, and whether the difference is regular.
- Return type:
numeric, bool
Notes
This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.
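For illustration, a minimal sketch:
import numpy as np
from iris.util import points_step

print(points_step(np.array([1.0, 2.0, 3.0, 4.0])))  # (1.0, True): a regular step of 1
print(points_step(np.array([1.0, 2.0, 4.0])))       # (1.5, False): average step, not regular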
- iris.util.promote_aux_coord_to_dim_coord(cube, name_or_coord)[source]#
Promote an auxiliary coordinate to a dimension coordinate on the cube.
This AuxCoord must be associated with a single cube dimension. If the AuxCoord is associated with a dimension that already has a DimCoord, that DimCoord gets demoted to an AuxCoord.
- Parameters:
cube – An instance of iris.cube.Cube.
name_or_coord –
(a) An instance of iris.coords.AuxCoord
(b) the standard_name, long_name, or var_name of an instance of iris.coords.AuxCoord.
Examples
>>> print(cube)
air_temperature / (K)          (time: 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        time                        x             -              -
        latitude                    -             x              -
        longitude                   -             -              x
    Auxiliary coordinates:
        forecast_period             x             -              -
        year                        x             -              -
>>> promote_aux_coord_to_dim_coord(cube, "year")
>>> print(cube)
air_temperature / (K)          (year: 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        year                        x             -              -
        latitude                    -             x              -
        longitude                   -             -              x
    Auxiliary coordinates:
        forecast_period             x             -              -
        time                        x             -              -
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.regular_points(zeroth, step, count)[source]#
Make an array of regular points.
Create an array of count points, starting at zeroth + step and adding step each time. The points are returned in float32 if this gives a sufficiently regular array (as tested with points_step), and in float64 if not.
- Parameters:
zeroth (number) – The value prior to the first point value.
step (number) – The numeric difference between successive point values.
count (number) – The number of point values.
Notes
This function does maintain laziness when called; it doesn't realise data. See more at Real and Lazy Data.
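For illustration, a minimal sketch (note that zeroth is the value before the first point):
from iris.util import regular_points

pts = regular_points(0.0, 1.0, 5)
print(pts)  # the points 1.0, 2.0, 3.0, 4.0, 5.0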
- iris.util.regular_step(coord)[source]#
Return the regular step from a coord or fail.
Notes
This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.
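For illustration, a minimal sketch using a regularly spaced DimCoord:
from iris.coords import DimCoord
from iris.util import regular_step

coord = DimCoord([0.0, 10.0, 20.0], long_name="x")
print(regular_step(coord))  # 10.0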
- iris.util.reverse(cube_or_array, coords_or_dims)[source]#
Reverse the cube or array along the given dimensions.
- Parameters:
cube_or_array (iris.cube.Cube or numpy.ndarray) – The cube or array to reverse.
coords_or_dims (int, str, iris.coords.Coord or sequence of these) – Identify one or more dimensions to reverse. If cube_or_array is a numpy array, use int or a sequence of ints, as in the examples below. If cube_or_array is a Cube, a Coord or coordinate name (or sequence of these) may be specified instead.
Examples
>>> import numpy as np
>>> a = np.arange(24).reshape(2, 3, 4)
>>> print(a)
[[[ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]]

 [[12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]]]
>>> print(reverse(a, 1))
[[[ 8  9 10 11]
  [ 4  5  6  7]
  [ 0  1  2  3]]

 [[20 21 22 23]
  [16 17 18 19]
  [12 13 14 15]]]
>>> print(reverse(a, [1, 2]))
[[[11 10  9  8]
  [ 7  6  5  4]
  [ 3  2  1  0]]

 [[23 22 21 20]
  [19 18 17 16]
  [15 14 13 12]]]
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.rolling_window(a, window=1, step=1, axis=-1)[source]#
Make an ndarray with a rolling window of the last dimension.
- Parameters:
- Returns:
Array that is a view of the original array with an added dimension of the size of the given window at axis + 1.
- Return type:
array
Examples
>>> x = np.arange(10).reshape((2, 5))
>>> rolling_window(x, 3)
array([[[0, 1, 2],
        [1, 2, 3],
        [2, 3, 4]],

       [[5, 6, 7],
        [6, 7, 8],
        [7, 8, 9]]])
Calculate rolling mean of last dimension:
>>> np.mean(rolling_window(x, 3), -1)
array([[ 1.,  2.,  3.],
       [ 6.,  7.,  8.]])
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.squeeze(cube)[source]#
Remove any dimension of length one.
Remove any dimension of length one. If it has an associated DimCoord or AuxCoord, this becomes a scalar coord.
- Parameters:
cube (iris.cube.Cube) – Source cube to remove length 1 dimension(s) from.
- Returns:
A new iris.cube.Cube instance without any dimensions of length 1.
- Return type:
Examples
For example:
>>> cube.shape
(1, 360, 360)
>>> ncube = iris.util.squeeze(cube)
>>> ncube.shape
(360, 360)
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
- iris.util.unify_time_units(cubes)[source]#
Perform an in-place conversion of the time units.
Perform an in-place conversion of the time units of all time coords in the cubes in a given iterable. One common epoch is defined for each calendar found in the cubes to prevent units being defined with inconsistencies between epoch and calendar. During this process, all time coordinates have their data type converted to 64-bit floats to allow for smooth concatenation.
Each epoch is defined from the first suitable time coordinate found in the input cubes.
- Parameters:
cubes – An iterable containing iris.cube.Cube instances.
Notes
This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
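A minimal sketch of typical usage, assuming cubes is an iterable of cubes loaded elsewhere whose time coordinates use differing epochs:
from iris.util import unify_time_units

# Converts the time coordinates in-place to a common epoch per calendar,
# which helps subsequent merge/concatenate operations.
unify_time_units(cubes)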