

issues


28 rows where user = 3958036 sorted by updated_at descending




type 2

  • issue 16
  • pull 12

state 2

  • closed 25
  • open 3

repo 1

  • xarray 28
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at ▲, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
1798441216 I_kwDOAMm_X85rMgkA 7978 Clean up colormap code? johnomotani 3958036 open 0     1 2023-07-11T08:49:36Z 2023-07-11T15:47:52Z   CONTRIBUTOR      

In fixing some bugs with color bars (https://github.com/pydata/xarray/pull/3601), we had to do some clunky workarounds because of limitations of matplotlib's API for modifying colormaps - this prompted a matplotlib issue https://github.com/matplotlib/matplotlib/issues/16296#issuecomment-1629755861. That issue has now been closed, and apparently the limitations are now fixed, so it should be possible to tidy up some of the colorbar code (at least at some point, once the oldest xarray-supported matplotlib includes the new API).

I have no time to look into this myself, but opening this issue to flag the new matplotlib features in case someone is looking at refactoring colorbar or colormap code.
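For reference, the newer matplotlib API alluded to above is (I believe) the copy-returning `Colormap.with_extremes()` (matplotlib >= 3.4; the `mpl.colormaps` registry needs >= 3.5), which replaces the old pattern of mutating a shared global colormap in place. A rough sketch:

```python
import matplotlib as mpl

# with_extremes() returns a modified copy of the colormap instead of
# mutating the globally-registered instance.
cmap = mpl.colormaps["viridis"].with_extremes(under="black", over="white")
```

The returned copy can be passed straight to plotting calls via `cmap=`, without affecting other users of "viridis".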

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7978/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
666523009 MDU6SXNzdWU2NjY1MjMwMDk= 4276 isel with 0d dask array index fails johnomotani 3958036 closed 0     0 2020-07-27T19:14:22Z 2023-03-15T02:48:01Z 2023-03-15T02:48:01Z CONTRIBUTOR      

What happened: If a 0d dask array is passed as an argument to isel(), an error occurs because dask arrays do not have a .item() method. I came across this when trying to use the result of da.argmax() from a dask-backed array to select from the DataArray.

What you expected to happen: isel() returns the value at the index contained in the 0d dask array.

Minimal Complete Verifiable Example:

```python
import dask.array as daskarray
import numpy as np
import xarray as xr

a = daskarray.from_array(np.linspace(0., 1.))
da = xr.DataArray(a, dims="x")

x_selector = da.argmax(dim=...)

da_max = da.isel(x_selector)
```

Anything else we need to know?:

I think the problem is here https://github.com/pydata/xarray/blob/a198218ddabe557adbb04311b3234ec8d20419e7/xarray/core/variable.py#L546-L548 and either k.values.item() or int(k.data) would fix my issue, but I don't know the reason for using .item() in the first place, so I'm not sure whether either of these would have some undesirable side-effect.

May be related to #2511, but from the code snippet above, I think this is a specific issue of 0d dask arrays rather than a generic dask-indexing issue like #2511.

I'd like to fix this because it breaks the nice new features of argmin() and argmax() if the DataArray is dask-backed.
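(This issue has since been closed; at the time, one workaround sketch was to convert the 0d dask indices to plain Python ints before calling isel(), since `int()` on a 0d dask array triggers the compute:)

```python
import dask.array as daskarray
import numpy as np
import xarray as xr

a = daskarray.from_array(np.linspace(0., 1.))
da = xr.DataArray(a, dims="x")

# Workaround: force each 0d dask index to a concrete int before isel().
x_selector = {dim: int(idx) for dim, idx in da.argmax(dim=...).items()}
da_max = da.isel(x_selector)
```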

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-42-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.4
xarray: 0.16.0
pandas: 1.0.5
numpy: 1.18.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.19.0
distributed: 2.21.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.13
setuptools: 49.2.0.post20200712
pip: 20.1.1
conda: 4.8.3
pytest: 5.4.3
IPython: 7.15.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4276/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
698577111 MDU6SXNzdWU2OTg1NzcxMTE= 4417 Inconsistency in whether index is created with new dimension coordinate? johnomotani 3958036 closed 0     6 2020-09-10T22:44:54Z 2022-09-13T07:54:32Z 2022-09-13T07:54:32Z CONTRIBUTOR      

It seems like set_coords() doesn't create an index variable. Is there a reason for this? I was surprised that the following code snippets produce different Datasets (first one has empty indexes, second one has x in indexes), even though both Datasets have a 'dimension coordinate' x:

(1)

```python
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds['a'] = ('x', np.linspace(0, 1))
ds['b'] = ('x', np.linspace(3, 4))
ds = ds.rename(b='x')
ds = ds.set_coords('x')

print(ds)
print('indexes', ds.indexes)
```

(2)

```python
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds['a'] = ('x', np.linspace(0, 1))
ds['x'] = ('x', np.linspace(3, 4))

print(ds)
print('indexes', ds.indexes)
```

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-47-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.4
xarray: 0.16.0
pandas: 1.1.1
numpy: 1.18.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.23.0
distributed: 2.25.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.13
setuptools: 49.6.0.post20200814
pip: 20.2.3
conda: 4.8.4
pytest: 5.4.3
IPython: 7.15.0
sphinx: 3.2.1
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4417/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
708337538 MDU6SXNzdWU3MDgzMzc1Mzg= 4456 workaround for file with variable and dimension having same name johnomotani 3958036 closed 0     4 2020-09-24T17:10:04Z 2021-12-29T16:55:53Z 2021-12-29T16:55:53Z CONTRIBUTOR      

Adding a variable that's not a 1d "dimension coordinate" with the same name as a dimension is an error. This makes sense. However, if I have a .nc file that has such a variable, is there any workaround to get the badly-named variable into xarray, short of altering the .nc file or loading it separately with netCDF4? I.e. to make the following work somehow

```python
import numpy as np
import netCDF4
import xarray as xr

f = netCDF4.Dataset("test.nc", "w")
f.createDimension("x", 2)
f.createDimension("y", 3)
v = f.createVariable("y", float, ("x", "y"))
v[...] = 1.0
f.close()

ds = xr.open_dataset("test.nc")
```

rather than getting the current error `MissingDimensionsError: 'y' has more than 1-dimension and the same name as one of its dimensions ('x', 'y'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.`

I think it might be nice to have something like a rename_vars argument to open_dataset(). Similar to how drop_vars ignores a list of variables, rename_vars could rename a dict of variables so the example above could do ds = xr.open_dataset("test.nc", rename_vars={"y": "y_not_dimension"}) and get a Dataset with a dimension "y" and a variable "y_not_dimension".

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4456/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1022478180 I_kwDOAMm_X8488cdk 5852 Surprising behaviour of Dataset/DataArray.interp() with NaN entries johnomotani 3958036 open 0     0 2021-10-11T09:33:11Z 2021-10-11T09:33:11Z   CONTRIBUTOR      

I think this is due to documented 'undefined behaviour' of scipy.interpolate.interp1d, so not really a bug, but I think it would be more user-friendly if xarray gave an error in this case rather than producing an 'incorrect' result.

What happened:

If a DataArray contains a NaN value and is interpolated, output values that do not depend on the entry that was NaN may still be NaN.

What you expected to happen:

The docs for scipy.interpolate.interp1d say

Calling interp1d with NaNs present in input values results in undefined behaviour.

which explains the output below, and presumably means it is not fixable on the xarray side (short of some ugly work-around). I think it would be good though to check for NaNs in DataArray/Dataset.interp(), and if they are present raise an exception (or possibly a warning?) about 'undefined behaviour'.

scipy.interpolate.interp2d has a similar note, while scipy.interpolate.interpn does not mention it (but has very limited information).

What I'd initially expected was that the output would be valid at locations in the array that don't depend on the NaN input: when interpolating a 2d DataArray (with dims x and y) in the x-dimension, if only one y-index of the input has a NaN value, that y-index in the output might contain NaNs, but the others should be OK.

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones([3, 4]), dims=("x", "y"))

da[0, 0] = float("nan")

newx = np.linspace(0., 3., 5)

interp_da = da.interp(x=newx)

print(interp_da)
```

On my system, this gives output:

```
<xarray.DataArray (x: 5, y: 4)>
array([[nan,  1.,  1.,  1.],
       [nan,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.],
       [nan, nan, nan, nan],
       [nan, nan, nan, nan]])
Coordinates:
  * x        (x) float64 0.0 0.75 1.5 2.25 3.0
Dimensions without coordinates: y
```

[Surprisingly, I get the same output even using method="nearest".]

You might expect at least the following, with NaN only at y=0:

```
<xarray.DataArray (x: 5, y: 4)>
array([[nan,  1.,  1.,  1.],
       [nan,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.],
       [nan,  1.,  1.,  1.],
       [nan,  1.,  1.,  1.]])
Coordinates:
  * x        (x) float64 0.0 0.75 1.5 2.25 3.0
Dimensions without coordinates: y
```
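A user-side guard along the lines suggested above might look like this (a minimal sketch; `interp_checking_nans` is a hypothetical wrapper, not an xarray API):

```python
import xarray as xr

def interp_checking_nans(da, **coords):
    """Refuse to interpolate data containing NaNs, since
    scipy.interpolate.interp1d's behaviour is then undefined."""
    if da.isnull().any():
        raise ValueError(
            "interp() input contains NaNs; scipy's interpolators give "
            "undefined results for NaN input values"
        )
    return da.interp(**coords)
```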

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.6 | packaged by conda-forge | (default, Jul 11 2021, 03:39:48) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.11.0-37-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: ('en_GB', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.8.0
xarray: 0.19.0
pandas: 1.3.1
numpy: 1.21.1
scipy: 1.7.1
netCDF4: 1.5.7
pydap: None
h5netcdf: None
h5py: 3.3.0
Nio: None
zarr: None
cftime: 1.5.0
nc_time_axis: 1.3.1
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.07.2
distributed: 2021.07.2
matplotlib: 3.4.2
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: 0.17
setuptools: 49.6.0.post20210108
pip: 21.2.4
conda: 4.10.3
pytest: 6.2.4
IPython: 7.26.0
sphinx: 4.1.2
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5852/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
940702754 MDU6SXNzdWU5NDA3MDI3NTQ= 5589 Call .compute() in all plot methods? johnomotani 3958036 open 0     3 2021-07-09T12:03:30Z 2021-07-09T15:57:27Z   CONTRIBUTOR      

I noticed what I think might be a performance bug: shouldn't .compute() be called on the input data in all the plotting methods (e.g. plot.pcolormesh()), like it is in .plot() here: https://github.com/pydata/xarray/blob/bf27e2c1fd81f5e0afe1ef91c13f651db54b44bb/xarray/plot/plot.py#L166? See also discussion in https://github.com/pydata/xarray/issues/2390.

I was making plots from a large dataset of a quantity that is the output of quite a bit of computation. A script which made an animation of the full time-series (a couple of thousand time points) actually ran significantly faster than a script that made pcolormesh plots of just 3 time points (~2 hrs compared to ~5 hrs). The only difference I can think of is that the animation script called .values before the animation function, while the plotting script called xarray.plot.pcolormesh() without calling .values/.load()/.compute() first. A modified version of the script that calls .load() before any plot calls reduced the run time to 30 mins, even though I plotted 18 time points rather than just 3.

2d plots might all be covered by adding a darray = darray.compute() call in newplotfunc() (https://github.com/pydata/xarray/blob/bf27e2c1fd81f5e0afe1ef91c13f651db54b44bb/xarray/plot/plot.py#L609)? I guess the 1d plot functions would all need to be modified individually.
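Until something like that is merged, the user-side workaround described above can be sketched as follows (hypothetical example data; assumes dask and matplotlib are installed):

```python
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

# Dask-backed stand-in for an expensive computed quantity.
da = xr.DataArray(np.random.rand(10, 20, 30), dims=("t", "x", "y")).chunk({"t": 1})
da = (da * 2).load()  # compute once, up front ...

for t in range(3):
    da.isel(t=t).plot.pcolormesh()  # ... so each plot call reuses the result
    plt.close("all")
```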

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5589/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
609907735 MDExOlB1bGxSZXF1ZXN0NDExNDIxODIz 4017 Combining attrs of member DataArrays of Datasets johnomotani 3958036 closed 0     0 2020-04-30T12:23:10Z 2021-05-05T16:37:25Z 2021-05-05T16:37:25Z CONTRIBUTOR   0 pydata/xarray/pulls/4017

While looking at #4009, I noticed that the combine_attrs kwarg to concat or merge only affects the global attrs of the Dataset, not the attrs of DataArray or Variable members of the Datasets being combined. I think this is a bug.

So far this PR adds tests that reproduce the issue in #4009, and the issue described above. Fixing should be fairly simple: for #4009 pass combine_attrs="drop" to combine_by_coords and combine_nested in open_mfdataset; for this issue insert the combine_attrs handling in an appropriate place - possibly in merge.unique_variable. I'll update with fixes when I get a chance.

  • [ ] Closes #4009
  • [x] Tests added
  • [ ] Passes isort -rc . && black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4017/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
847489988 MDExOlB1bGxSZXF1ZXN0NjA2NTAyMzA4 5101 Surface plots johnomotani 3958036 closed 0     8 2021-03-31T22:58:20Z 2021-05-03T13:05:59Z 2021-05-03T13:05:02Z CONTRIBUTOR   0 pydata/xarray/pulls/5101

.plot.surface() method that wraps matplotlib's plot_surface() method from the 3d toolkit.

  • works with facet grids
  • disables all the automatic color-map handling, etc., because it doesn't necessarily make sense for surface plots - the default (in matplotlib, and as implemented now) is not to color the surface, and xarray's auto-selection stuff would interfere with plot_surface()'s kwargs.

I'm not sure if there's somewhere good to note the new surface() method in the docs to make it more discoverable? Maybe in one of the examples?
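A minimal usage sketch (hypothetical data; assumes an xarray release containing this PR and a 3d-capable matplotlib):

```python
import numpy as np
import xarray as xr

# A small 2d field; .plot.surface() creates a 3d-projection axes itself.
da = xr.DataArray(
    np.sin(np.linspace(0, 2 * np.pi, 20))[:, None]
    * np.cos(np.linspace(0, np.pi, 10))[None, :],
    dims=("x", "y"),
)
surf = da.plot.surface()
```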

  • [x] Closes #5084
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5101/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
842583817 MDU6SXNzdWU4NDI1ODM4MTc= 5084 plot_surface() wrapper johnomotani 3958036 closed 0     2 2021-03-27T19:16:09Z 2021-05-03T13:05:02Z 2021-05-03T13:05:02Z CONTRIBUTOR      

Is there an xarray way to make a surface plot, like matplotlib's plot_surface()? I didn't see one on a quick skim, but expect it should be fairly easy to add, following the style for contour(), pcolormesh(), etc.? For the matplotlib version, see https://matplotlib.org/stable/gallery/mplot3d/surface3d.html.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5084/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
857378504 MDExOlB1bGxSZXF1ZXN0NjE0ODA5NTg1 5153 cumulative_integrate() method johnomotani 3958036 closed 0     8 2021-04-13T22:53:58Z 2021-05-02T10:34:23Z 2021-05-01T20:01:31Z CONTRIBUTOR   0 pydata/xarray/pulls/5153

Provides the functionality of scipy.integrate.cumulative_trapezoid.
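A usage sketch (hypothetical data; assumes an xarray release containing this PR, where the cumulative trapezoid rule matches scipy's `cumulative_trapezoid(..., initial=0)`):

```python
import numpy as np
import xarray as xr

x = np.linspace(0.0, 1.0, 101)
da = xr.DataArray(2 * x, dims="x", coords={"x": x})

# Cumulative trapezoidal integral of 2x along x, i.e. approximately x**2.
integral = da.cumulative_integrate("x")
```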

  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5153/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
823059252 MDU6SXNzdWU4MjMwNTkyNTI= 5002 Dataset.plot.quiver() docs slightly misleading johnomotani 3958036 closed 0     3 2021-03-05T12:54:19Z 2021-05-01T17:38:39Z 2021-05-01T17:38:39Z CONTRIBUTOR      

In the docs for Dataset.plot.quiver() http://xarray.pydata.org/en/latest/generated/xarray.Dataset.plot.quiver.html the u and v arguments are labelled as 'optional'. They are required for quiver plots though, so this is slightly confusing. I guess it is like this because the docs are created from a generic _dsplot() docstring, so I don't know if it's fixable in a sensible way...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5002/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
847006334 MDU6SXNzdWU4NDcwMDYzMzQ= 5097 2d plots may fail for some choices of `x` and `y` johnomotani 3958036 closed 0     1 2021-03-31T17:26:34Z 2021-04-22T07:16:17Z 2021-04-22T07:16:17Z CONTRIBUTOR      

What happened: When making a 2d plot with a 1d x argument and a 2d y, if the two dimensions have the same size and are in the wrong order, no plot is produced - the third plot in the MCVE is blank.

What you expected to happen: All three plots in the MCVE should be identical.

Minimal Complete Verifiable Example:

```python
from matplotlib import pyplot as plt
import numpy as np
import xarray as xr

ds = xr.Dataset({"z": (["x", "y"], np.random.rand(4, 4))})

x2d, y2d = np.meshgrid(ds["x"], ds["y"])

ds = ds.assign_coords(x2d=(["x", "y"], x2d.T), y2d=(["x", "y"], y2d.T))

fig, axes = plt.subplots(1, 3)

h0 = ds["z"].plot.pcolormesh(x="y2d", y="x2d", ax=axes[0])
h1 = ds["z"].plot.pcolormesh(x="y", y="x", ax=axes[1])
h2 = ds["z"].plot.pcolormesh(x="y", y="x2d", ax=axes[2])

plt.show()
```

result: (screenshot omitted; the third panel is blank)

Anything else we need to know?:

The bug is present in both the 0.17.0 release and current master.

I came across this while starting to work on #5084. I think the problem is here https://github.com/pydata/xarray/blob/ddc352faa6de91f266a1749773d08ae8d6f09683/xarray/plot/plot.py#L678-L684 as the check xval.shape[0] == yval.shape[0] doesn't work if the single dimension of x is actually the second dimension of y but happens to have the same size as the first dimension of y. I think it needs to check the actual dimensions of x and y.

Why don't we just do something like

```python
xval = xval.broadcast_like(darray)
yval = yval.broadcast_like(darray)
```

if either coordinate is 2d, before using .values to convert to numpy arrays?

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 19:08:05) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-70-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.1.5
numpy: 1.19.4
scipy: 1.5.3
netCDF4: 1.5.5.1
pydap: None
h5netcdf: None
h5py: 3.1.0
Nio: None
zarr: None
cftime: 1.3.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2020.12.0
distributed: 2020.12.0
matplotlib: 3.3.3
cartopy: None
seaborn: None
numbagg: None
pint: 0.16.1
setuptools: 49.6.0.post20201009
pip: 20.3.3
conda: 4.9.2
pytest: 6.2.1
IPython: 7.19.0
sphinx: 3.4.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5097/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
847199398 MDExOlB1bGxSZXF1ZXN0NjA2MjMwMjE1 5099 Use broadcast_like for 2d plot coordinates johnomotani 3958036 closed 0     3 2021-03-31T19:34:32Z 2021-04-22T07:16:17Z 2021-04-22T07:16:17Z CONTRIBUTOR   0 pydata/xarray/pulls/5099

Use broadcast_like if either x or y inputs are 2d to ensure that both have dimensions in the same order as the DataArray being plotted. Convert to numpy arrays after possibly using broadcast_like. Simplifies code, and fixes #5097 (bug when dimensions have the same size).
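A self-contained sketch of what broadcast_like does here (hypothetical names xval/yval mirroring the plotting code; explicit coordinates added so the example runs standalone):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"z": (["x", "y"], np.random.rand(4, 4))},
    coords={"x": np.arange(4), "y": np.arange(4)},
)
darray = ds["z"]
x2d, y2d = np.meshgrid(ds["x"], ds["y"])
ds = ds.assign_coords(x2d=(["x", "y"], x2d.T), y2d=(["x", "y"], y2d.T))

# broadcast_like pins both coordinates to darray's dimension order, so the
# downstream .values arrays always line up with the plotted data.
xval = ds["y"].broadcast_like(darray)    # 1d -> ("x", "y")
yval = ds["x2d"].broadcast_like(darray)  # already 2d, order preserved
```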

@dcherian

IIRC the "resolving intervals" section later will break and is somewhat annoying to fix. This is why the current ugly code exists.

This change seems to 'just work', and unit tests pass. Is there some extra check that needs doing to make sure "resolving intervals" is behaving correctly?

I can't think of a unit test that would have caught #5097, since even when the bug happens, a plot is produced without errors or warnings. If anyone has an idea, suggestions/pushes welcome!

  • [x] Closes #5097
  • [ ] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5099/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
823290488 MDExOlB1bGxSZXF1ZXN0NTg1Nzc0MjY4 5003 Add Dataset.plot.streamplot() method johnomotani 3958036 closed 0     2 2021-03-05T17:41:49Z 2021-03-30T16:41:08Z 2021-03-30T16:41:07Z CONTRIBUTOR   0 pydata/xarray/pulls/5003

Since @dcherian added Quiver plots in #4407, it's fairly simple to extend the functionality to streamplot().

For example (copying from @dcherian's unit test setup)

```python
import numpy as np
import xarray as xr
from matplotlib import pyplot as plt

das = [
    xr.DataArray(
        np.random.randn(3, 3),
        dims=["x", "y"],
        coords=[range(k) for k in [3, 3]],
    )
    for _ in [1, 2]
]
ds = xr.Dataset({"u": das[0], "v": das[1]})
ds["mag"] = np.hypot(ds.u, ds.v)
ds.plot.streamplot(x="x", y="y", u="u", v="v", hue="mag")
plt.show()
```

  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5003/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
698263021 MDU6SXNzdWU2OTgyNjMwMjE= 4415 Adding new DataArray to a Dataset removes attrs of existing coord johnomotani 3958036 closed 0     3 2020-09-10T17:21:32Z 2020-09-10T17:38:08Z 2020-09-10T17:38:08Z CONTRIBUTOR      

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds["a"] = xr.DataArray(np.linspace(0., 1.), dims="x")

ds["x"] = xr.DataArray(np.linspace(0., 2., len(ds["x"])), dims="x")
ds["x"].attrs["foo"] = "bar"

print(ds["x"])

ds["b"] = xr.DataArray(np.linspace(0., 1.), dims="x")

print(ds["x"])
```

What happened: Attribute "foo" is present at the first print, but missing in the second, after adding b to ds.

Full output:

```
<xarray.DataArray 'x' (x: 50)>
array([0.      , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
       0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
       0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
       0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
       1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
       1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
       1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
       2.      ])
Coordinates:
  * x        (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
Attributes:
    foo:      bar

<xarray.DataArray 'x' (x: 50)>
array([0.      , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
       0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
       0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
       0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
       1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
       1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
       1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
       2.      ])
Coordinates:
  * x        (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
```

What you expected to happen: Coordinate x should be unchanged.

Full expected output:

```
<xarray.DataArray 'x' (x: 50)>
array([0.      , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
       0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
       0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
       0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
       1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
       1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
       1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
       2.      ])
Coordinates:
  * x        (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
Attributes:
    foo:      bar

<xarray.DataArray 'x' (x: 50)>
array([0.      , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
       0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
       0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
       0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
       1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
       1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
       1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
       2.      ])
Coordinates:
  * x        (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
Attributes:
    foo:      bar
```

Anything else we need to know?:

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-47-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.4
xarray: 0.16.0
pandas: 1.1.1
numpy: 1.18.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.23.0
distributed: 2.25.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.13
setuptools: 49.6.0.post20200814
pip: 20.2.3
conda: 4.8.4
pytest: 5.4.3
IPython: 7.15.0
sphinx: 3.2.1
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4415/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
668515620 MDU6SXNzdWU2Njg1MTU2MjA= 4289 title bar of docs displays incorrect version johnomotani 3958036 closed 0     5 2020-07-30T09:00:43Z 2020-08-18T22:32:51Z 2020-08-18T22:32:51Z CONTRIBUTOR      

What happened: The browser title bar displays an incorrect version when viewing the docs online. See below - title bar says 0.15.1 but actual version in URL is 0.16.0.

http://xarray.pydata.org/en/stable/ also displays 0.15.1 in the title bar, but I guess is actually showing 0.16.0 docs (?), which is confusing!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4289/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
671222189 MDExOlB1bGxSZXF1ZXN0NDYxNDQzNzcx 4298 Fix docstring for missing_dims argument to isel methods johnomotani 3958036 closed 0     1 2020-08-01T21:40:27Z 2020-08-03T20:23:29Z 2020-08-03T20:23:28Z CONTRIBUTOR   0 pydata/xarray/pulls/4298

The incorrect value "exception" was given in the description of the missing_dims argument - "exception" was renamed to "raise".
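For reference, the corrected values in action (a small sketch with hypothetical data; assumes an xarray release with the missing_dims argument, whose accepted values are "raise" (the default), "warn", and "ignore"):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"))

# "z" is not a dimension of da; with missing_dims="ignore" it is skipped
# instead of raising a ValueError.
sub = da.isel(z=0, missing_dims="ignore")
```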

  • [x] Passes isort . && black . && mypy . && flake8
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4298/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
594594646 MDExOlB1bGxSZXF1ZXN0Mzk5MjAwODg3 3936 Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods johnomotani 3958036 closed 0     27 2020-04-05T18:52:52Z 2020-06-29T20:22:49Z 2020-06-29T19:36:26Z CONTRIBUTOR   0 pydata/xarray/pulls/3936

These return dicts of the indices of the minimum or maximum of a DataArray over several dimensions. Inspired by @fujiisoup's work in #1469. With #3871, replaces #1469. Provides a simpler solution to #3160.

Implemented so that

```python
da.isel(da.argmin(list_of_dim)) == da.min(list_of_dim)
da.isel(da.argmax(list_of_dim)) == da.max(list_of_dim)
```
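A sketch of that contract (hypothetical data; assumes a release containing this PR):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(3, 4, 5), dims=("x", "y", "z"))

# argmax over several dims returns a dict of index arrays; indexing with it
# recovers the same values as max over those dims.
indices = da.argmax(dim=["y", "z"])  # {"y": ..., "z": ...}
assert (da.isel(indices) == da.max(["y", "z"])).all()
```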

  • [x] Closes #1469
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3936/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
644485140 MDExOlB1bGxSZXF1ZXN0NDM5MDk4NTcz 4173 Fix 4009 johnomotani 3958036 closed 0     1 2020-06-24T09:59:28Z 2020-06-24T18:22:20Z 2020-06-24T18:22:19Z CONTRIBUTOR   0 pydata/xarray/pulls/4173

Don't know if/when I'll have time to finish #4017, so pulling out the fix for #4009 into a separate PR here that is ready to merge.

  • [x] Closes #4009
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4173/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
595882590 MDU6SXNzdWU1OTU4ODI1OTA= 3948 Releasing memory? johnomotani 3958036 closed 0     6 2020-04-07T13:49:07Z 2020-04-07T14:18:36Z 2020-04-07T14:18:36Z CONTRIBUTOR      

Once xarray (or dask) has loaded some array into memory, is there any way to force the memory to be released again? Or should this never be necessary?

For example, what would be the best workflow for this case: I have several large arrays on disk. Each will fit into memory individually. I want to do some analysis on each array (which produces small results), and keep the results in memory, but I do not need the large arrays any more after the analysis.

I'm wondering if some sort of release() method would be useful, so that I could say explicitly "can drop this DataArray from memory, even though the user might have modified it". My proposed workflow for the case above would then be something like:

```
da1 = ds["variable1"]
result1 = do_some_work(da1)  # may load large parts of da1 into memory
da1.release()  # any changes to da1 not already saved to disk are lost, but do not want da1 any more

da2 = ds["variable2"]
result2 = do_some_work(da2)  # may load large parts of da2 into memory
da2.release()  # any changes to da2 not already saved to disk are lost, but do not want da2 any more

... etc.
```
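In the absence of a release() method, one pattern available today is to keep only the small result and drop every reference to the large object so the garbage collector can reclaim it. A minimal stdlib sketch (do_some_work and the list-based "large array" are stand-ins, not xarray API):

```python
import gc

def do_some_work(big):
    # stand-in analysis that returns a small result
    return sum(big) / len(big)

results = []
for name in ("variable1", "variable2"):
    big = list(range(100_000))  # stand-in for loading a large array
    results.append(do_some_work(big))
    del big       # drop the only reference to the large data
    gc.collect()  # optional: force an immediate collection

print(results)  # [49999.5, 49999.5]
```

With real xarray objects the caveat in the question still applies: dropping the reference only helps if the loaded values are not also cached on the parent Dataset.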

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3948/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
585868107 MDExOlB1bGxSZXF1ZXN0MzkyMTExMTI4 3877 Control attrs of result in `merge()`, `concat()`, `combine_by_coords()` and `combine_nested()` johnomotani 3958036 closed 0     7 2020-03-23T01:32:59Z 2020-04-05T20:44:47Z 2020-03-24T20:40:18Z CONTRIBUTOR   0 pydata/xarray/pulls/3877

combine_attrs argument for merge(), concat(), combine_by_coords() and combine_nested() controls what attributes the result is given. Defaults maintain the current behaviour. Possible values (named following compat arguments) are:
  • 'drop': empty attrs on returned Dataset.
  • 'identical': all attrs must be the same on every object.
  • 'no_conflicts': attrs from all objects are combined; any that have the same name must also have the same value.
  • 'override': skip comparing and copy attrs from the first dataset to the result.

  • [x] Closes #3865
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
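The four combine_attrs modes can be sketched with plain dicts (combine_attrs_dicts is a hypothetical helper illustrating the semantics, not the xarray implementation):

```python
def combine_attrs_dicts(dicts, combine_attrs="drop"):
    """Sketch of the combine_attrs semantics using plain dicts."""
    if combine_attrs == "drop":
        return {}
    if combine_attrs == "override":
        return dict(dicts[0])
    if combine_attrs == "identical":
        for d in dicts[1:]:
            if d != dicts[0]:
                raise ValueError("attrs are not identical")
        return dict(dicts[0])
    if combine_attrs == "no_conflicts":
        result = {}
        for d in dicts:
            for k, v in d.items():
                if k in result and result[k] != v:
                    raise ValueError(f"conflicting values for attr {k!r}")
                result[k] = v
        return result
    raise ValueError(f"unknown combine_attrs: {combine_attrs!r}")

print(combine_attrs_dicts([{"a": 1}, {"b": 2}], "no_conflicts"))  # {'a': 1, 'b': 2}
```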
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3877/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
583220835 MDU6SXNzdWU1ODMyMjA4MzU= 3866 Allow `isel` to ignore missing dimensions? johnomotani 3958036 closed 0     0 2020-03-17T18:41:13Z 2020-04-03T19:47:08Z 2020-04-03T19:47:08Z CONTRIBUTOR      

Sometimes it would be nice for isel() to be able to ignore a dimension if it is missing in the Dataset/DataArray. E.g.

```
ds = Dataset()

ds.isel(t=0)  # currently raises an exception

ds.isel(t=0, ignore_missing=True)  # would be nice if this was allowed, just returning ds
```

For example, when writing a function that can be called on variables with different combinations of dimensions.

I think it should be fairly easy to implement: just add the argument to the condition here https://github.com/pydata/xarray/blob/65a5bff79479c4b56d6f733236fe544b7f4120a8/xarray/core/variable.py#L1059-L1062. The only downside would be the increased complexity of adding another argument to the API, for an issue where a workaround is not hard (at least in the case I have at the moment), just a bit clumsy.
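The proposed behaviour amounts to dropping indexers whose dimension is absent before slicing. A stdlib sketch of that filtering step (filter_indexers, dims and indexers are illustrative names, not xarray internals):

```python
def filter_indexers(indexers, dims, ignore_missing=False):
    """Drop indexers for dimensions that the object does not have."""
    missing = set(indexers) - set(dims)
    if missing and not ignore_missing:
        raise ValueError(f"dimensions {sorted(missing)} do not exist")
    return {k: v for k, v in indexers.items() if k in dims}

# a variable with dims ('x', 'y') and an indexer mentioning 't'
print(filter_indexers({"t": 0, "x": 2}, ("x", "y"), ignore_missing=True))  # {'x': 2}
```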

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3866/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
591471233 MDExOlB1bGxSZXF1ZXN0Mzk2NjM5NjM2 3923 Add missing_dims argument allowing isel() to ignore missing dimensions johnomotani 3958036 closed 0     5 2020-03-31T22:19:54Z 2020-04-03T19:47:08Z 2020-04-03T19:47:08Z CONTRIBUTOR   0 pydata/xarray/pulls/3923

Note: only added to DataArray.isel() and Variable.isel(). A Dataset should include all dimensions, so presumably it should always be an error to pass a non-existent dimension when slicing a Dataset?

  • [x] Closes #3866
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3923/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
587280307 MDExOlB1bGxSZXF1ZXN0MzkzMjU2NTgx 3887 Rename ordered_dict_intersection -> compat_dict_intersection johnomotani 3958036 closed 0     4 2020-03-24T21:08:26Z 2020-03-24T22:59:07Z 2020-03-24T22:59:07Z CONTRIBUTOR   0 pydata/xarray/pulls/3887

xarray does not use OrderedDicts any more, so the name did not make sense. As suggested here https://github.com/pydata/xarray/pull/3877#discussion_r396620551.

  • [x] Test~~s~~ ~~added~~ updated
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3887/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
583080947 MDU6SXNzdWU1ODMwODA5NDc= 3865 `merge` drops attributes johnomotani 3958036 closed 0     1 2020-03-17T15:06:18Z 2020-03-24T20:40:18Z 2020-03-24T20:40:18Z CONTRIBUTOR      

xarray.merge() drops the attrs of Datasets being merged. They should be kept, at least if they are compatible.

MCVE Code Sample

```python
# Your code here
import xarray as xr

ds1 = xr.Dataset()
ds1.attrs['a'] = 42
ds2 = xr.Dataset()
ds2.attrs['a'] = 42

merged = xr.merge([ds1, ds2])

print(merged)
```

the result is

```
<xarray.Dataset>
Dimensions:  ()
Data variables:
    *empty*
```

Expected Output

```
<xarray.Dataset>
Dimensions:  ()
Data variables:
    *empty*
Attributes:
    a: 42
```

Problem Description

The current behaviour means I have to check and copy attrs to the result of merge by hand, even if the attrs of the inputs were identical or not conflicting.

I'm happy to attempt a PR to fix this. Proposal (following the pattern of compat arguments):
  • add a combine_attrs argument to xarray.merge
  • combine_attrs = 'drop': do not copy attrs (current behaviour)
  • combine_attrs = 'identical': if attrs of all inputs are identical (using dict_equiv) then copy the attrs to the result, otherwise raise an exception
  • combine_attrs = 'no_conflicts': merge the attrs of all inputs, as long as any keys shared by more than one input have the same value (if not, raise an exception) [I propose this is the default behaviour]
  • combine_attrs = 'override': copy the attrs from the first input to the result

This proposal should also allow combine_by_coords, etc. to preserve attributes. These should probably also take a combine_attrs argument, which would be passed through to merge.

Versions

Current master of pydata/xarray on 17/3/2020

Output of `xr.show_versions()` INSTALLED VERSIONS ------------------ commit: None python: 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] python-bits: 64 OS: Linux OS-release: 5.3.0-40-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.2 libnetcdf: 4.6.3 xarray: 0.15.0 pandas: 1.0.2 numpy: 1.18.1 scipy: 1.3.0 netCDF4: 1.5.1.2 pydap: None h5netcdf: None h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.3.4 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.12.0 distributed: None matplotlib: 3.1.1 cartopy: None seaborn: None numbagg: None setuptools: 45.2.0 pip: 9.0.1 conda: None pytest: 4.4.1 IPython: 7.8.0 sphinx: 1.8.3
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3865/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
533996523 MDExOlB1bGxSZXF1ZXN0MzQ5OTc0NjUz 3601 Fix contourf set under johnomotani 3958036 closed 0     5 2019-12-06T13:47:37Z 2020-02-24T20:20:09Z 2020-02-24T20:20:08Z CONTRIBUTOR   0 pydata/xarray/pulls/3601

Copies the cmap._rgba_bad, cmap._rgba_under, and cmap._rgba_over values to new_cmap, in case they have been set to non-default values. Allows the user to customize plots more by using matplotlib methods on a cmap before passing as an argument to xarray's plotting methods. Previously these settings were overridden by defaults when creating the cmap actually used to make the plot.

I'm not a fan of copying attributes one-by-one like this, but I guess this is an issue with matplotlib's API, unless there's a nicer way to convert a cmap to a discrete cmap than mpl.colors.from_levels_and_colors().
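The one-by-one copy amounts to something like the sketch below. FakeCmap is a stand-in class, not a real matplotlib colormap; only the `_rgba_bad`/`_rgba_under`/`_rgba_over` attribute names come from matplotlib's (private) colormap internals:

```python
class FakeCmap:
    """Stand-in for a matplotlib colormap's extreme-colour state."""
    def __init__(self, bad=None, under=None, over=None):
        self._rgba_bad = bad
        self._rgba_under = under
        self._rgba_over = over

def copy_extremes(src, dst):
    # copy the private extreme-colour attributes from one cmap to another
    for attr in ("_rgba_bad", "_rgba_under", "_rgba_over"):
        setattr(dst, attr, getattr(src, attr))

cmap = FakeCmap(under=(1.0, 1.0, 1.0, 1.0))  # as if the user called set_under('w')
new_cmap = FakeCmap()                        # freshly-built discrete cmap
copy_extremes(cmap, new_cmap)
print(new_cmap._rgba_under)  # (1.0, 1.0, 1.0, 1.0)
```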

  • [x] Closes #3590
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3601/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
532165408 MDU6SXNzdWU1MzIxNjU0MDg= 3590 cmap.set_under() does not work as expected johnomotani 3958036 closed 0     5 2019-12-03T18:04:07Z 2020-02-24T20:20:07Z 2020-02-24T20:20:07Z CONTRIBUTOR      

When using matplotlib, the set_under() method can be used to set values below the range of a colormap to a certain color, for example

```
import matplotlib
from matplotlib import pyplot
import numpy

dat = numpy.linspace(0, 1)[numpy.newaxis, :]*numpy.linspace(0, 1)[:, numpy.newaxis]

cmap = matplotlib.cm.viridis
# cmap.set_under('w')

pyplot.contourf(dat, vmin=.3, cmap=cmap)
pyplot.colorbar()
pyplot.show()
```

produces

while uncommenting the cmap.set_under() call produces

However, using xarray to do the same thing,

```
import matplotlib
from matplotlib import pyplot
from xarray import DataArray
import numpy

da = DataArray(numpy.linspace(0, 1)[numpy.newaxis, :]*numpy.linspace(0, 1)[:, numpy.newaxis])

cmap = matplotlib.cm.viridis
cmap.set_under('w')

da.plot.contourf(vmin=.3, cmap=cmap)
pyplot.show()
```

produces

![xarray](https://user-images.githubusercontent.com/3958036/70076465-bc174080-15f6-11ea-9f59-37f20569cf6e.png)

where it seems the call to `cmap.set_under('w')` had no effect. Expected behaviour would be output like the second plot.

Output of xr.show_versions()

``` In [2]: xarray.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] python-bits: 64 OS: Linux OS-release: 5.0.0-37-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.0 libnetcdf: 4.6.0 xarray: 0.14.1 pandas: 0.24.2 numpy: 1.16.3 scipy: 1.2.1 netCDF4: 1.3.1 pydap: None h5netcdf: None h5py: 2.9.0 Nio: None zarr: None cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.1.0 distributed: None matplotlib: 3.1.1 cartopy: None seaborn: None numbagg: None setuptools: 41.0.1 pip: 19.3.1 conda: None pytest: 4.4.1 IPython: 7.6.1 sphinx: None ```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3590/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
507468507 MDU6SXNzdWU1MDc0Njg1MDc= 3401 _get_scheduler() exception if dask.multiprocessing missing johnomotani 3958036 closed 0     0 2019-10-15T20:35:14Z 2019-10-21T00:17:48Z 2019-10-21T00:17:48Z CONTRIBUTOR      

These lines were recently changed in #3358 https://github.com/pydata/xarray/blob/3f9069ba376afa35c0ca83b09a6126dd24cb8127/xarray/backends/locks.py#L87-L92

If the 'cloudpickle' package is not installed, then dask.multiprocessing is not available. The try/except that used to be wrapped around `if actual_get is dask.multiprocessing.get` meant that _get_scheduler() worked in that case, returning "threaded" (I assume this was the expected behaviour). After #3358, _get_scheduler() raised an `AttributeError: module 'dask' has no attribute 'multiprocessing'` until I installed 'cloudpickle'.

Suggest either reverting the changes that removed the try/except or making 'cloudpickle' a dependency.


To reproduce:
1. check 'cloudpickle' is not installed, but 'dask' is
2. execute the following commands

```
import xarray
xarray.backends.api._get_scheduler()
```

Expected result: `"threaded"`

Actual result:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-20da238796b7> in <module>
----> 1 xarray.backends.api._get_scheduler()

~/.local/lib/python3.6/site-packages/xarray/backends/locks.py in _get_scheduler(get, collection)
     87             pass
     88
---> 89     if actual_get is dask.multiprocessing.get:
     90         return "multiprocessing"
     91     else:

AttributeError: module 'dask' has no attribute 'multiprocessing'
```
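Reverting would amount to guarding the attribute access again. A sketch of that defensive pattern (is_multiprocessing_get and FakeDask are illustrative names; FakeDask simulates dask without the optional submodule, so dask itself is not needed):

```python
def is_multiprocessing_get(actual_get, dask_module):
    """Return True only if dask_module.multiprocessing exists and matches."""
    try:
        return actual_get is dask_module.multiprocessing.get
    except AttributeError:
        # dask.multiprocessing unavailable (e.g. cloudpickle not installed)
        return False

class FakeDask:  # simulates dask without the multiprocessing submodule
    pass

print(is_multiprocessing_get(object(), FakeDask()))  # False
```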

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3401/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 26.836ms · About: xarray-datasette