
issues


279 rows where state = "closed", type = "issue" and user = 1217238 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
267542085 MDU6SXNzdWUyNjc1NDIwODU= 1647 Representing missing values in string arrays on disk shoyer 1217238 closed 0     3 2017-10-23T05:01:10Z 2024-02-06T13:03:40Z 2024-02-06T13:03:40Z MEMBER      

This came up as part of my clean-up of serializing unicode strings in https://github.com/pydata/xarray/pull/1648.

There are two ways to represent strings in netCDF files.

  • As character arrays (NC_CHAR), supported by both netCDF3 and netCDF4
  • As variable length unicode strings (NC_STRING), only supported by netCDF4/HDF5.

Currently, by default (if no _FillValue is set) we replace missing values (NaN) with an empty string when writing data to disk.

For character arrays, we could use the normal _FillValue mechanism to set a fill value and decode when data is read back from disk. In fact, this already works for dtype=bytes (though it isn't documented):

```
In [10]: ds = xr.Dataset({'foo': ('x', np.array([b'bar', np.nan], dtype=object), {}, {'_FillValue': b''})})

In [11]: ds
Out[11]:
<xarray.Dataset>
Dimensions:  (x: 2)
Dimensions without coordinates: x
Data variables:
    foo      (x) object b'bar' nan

In [12]: ds.to_netcdf('foobar.nc')

In [13]: xr.open_dataset('foobar.nc').load()
Out[13]:
<xarray.Dataset>
Dimensions:  (x: 2)
Dimensions without coordinates: x
Data variables:
    foo      (x) object b'bar' nan
```

For variable length strings, it currently isn't possible to set a fill value. So there's no good way to indicate missing values, though this may change in the future depending on the resolution of the netCDF-python issue.

It would obviously be nice to always automatically round-trip missing values, both for strings and bytes. I see two possible ways to do this:

1. Require setting an explicit _FillValue when a string array contains missing values, raising an error if this isn't done. We need an explicit choice because there aren't any extra unused characters left over, at least for character arrays. (NetCDF explicitly allows arbitrary bytes to be stored in NC_CHAR, even though this maps to an HDF5 fixed-width string with ASCII encoding.) For variable length strings, we could potentially set a non-character unicode symbol like U+FFFF, but again that isn't supported yet.
2. Treat empty strings as equivalent to a missing value (NaN). This has the advantage of not requiring an explicit choice of _FillValue, so we don't need to wait for any netCDF4 issues to be resolved. However, it does mean that empty strings would not round-trip. Still, given the relative prevalence of missing values vs. empty strings in xarray/pandas, not preserving empty strings is probably the lesser evil.

The default option is to adopt neither of these, and keep the current behavior where missing values are written as empty strings and not decoded at all.

Any opinions? I am leaning towards option (2).
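Option (2) can be sketched as a simple decode step applied when reading data back. This is only an illustration of the proposed semantics, not xarray's actual encoding machinery; the helper name is hypothetical:

```python
import numpy as np

def decode_strings(values):
    """Hypothetical decode step for option (2): empty strings -> NaN."""
    out = np.asarray(values, dtype=object).copy()
    # Treat both empty unicode and empty byte strings as missing.
    out[(out == '') | (out == b'')] = np.nan
    return out

print(decode_strings(np.array([b'bar', b''], dtype=object)))  # [b'bar' nan]
```

The trade-off described above is visible here: any genuinely empty string in the original data is indistinguishable from a missing value after decoding.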

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1647/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
197939448 MDU6SXNzdWUxOTc5Mzk0NDg= 1189 Document using a spawning multiprocessing pool for multiprocessing with dask shoyer 1217238 closed 0     3 2016-12-29T01:21:50Z 2023-12-05T21:51:04Z 2023-12-05T21:51:04Z MEMBER      

This is a nice option for working with in-file HDF5/netCDF4 compression: https://github.com/pydata/xarray/pull/1128#issuecomment-261936849

Mixed multi-threading/multi-processing could also be interesting, if anyone wants to revive that: https://github.com/dask/dask/pull/457 (I think it would work now that xarray data stores are pickle-able)

CC @mrocklin
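The standard-library side of this is a "spawn" multiprocessing context, which starts fresh worker processes instead of forking. A minimal sketch (my own illustration of the stdlib pattern, not the exact setup from the linked comment):

```python
import multiprocessing

def double(x):
    return 2 * x

if __name__ == '__main__':
    # A "spawn" context starts clean worker processes rather than forking,
    # which avoids inheriting HDF5/netCDF4 library state from the parent.
    ctx = multiprocessing.get_context('spawn')
    with ctx.Pool(processes=2) as pool:
        print(pool.map(double, [1, 2, 3]))  # [2, 4, 6]
```

A pool created this way could then be handed to dask's multiprocessing scheduler via its configuration, which is the combination the comment above describes.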

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1189/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
430188626 MDU6SXNzdWU0MzAxODg2MjY= 2873 Dask distributed tests fail locally shoyer 1217238 closed 0     3 2019-04-07T20:26:53Z 2023-12-05T21:43:02Z 2023-12-05T21:43:02Z MEMBER      

I'm not sure why, but when I run the integration tests with dask-distributed locally (on my MacBook Pro), they fail:

```
$ pytest xarray/tests/test_distributed.py --maxfail 1
================================================ test session starts =================================================
platform darwin -- Python 3.7.2, pytest-4.0.1, py-1.7.0, pluggy-0.8.0
rootdir: /Users/shoyer/dev/xarray, inifile: setup.cfg
plugins: repeat-0.7.0
collected 19 items

xarray/tests/test_distributed.py F

====================================================== FAILURES ======================================================
_________________________ test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF3_CLASSIC] _________________________

loop = <tornado.platform.asyncio.AsyncIOLoop object at 0x1c182da1d0> tmp_netcdf_filename = '/private/var/folders/15/qdcz0wqj1t9dg40m_ld0fjkh00b4kd/T/pytest-of-shoyer/pytest-3/test_dask_distributed_netcdf_r0/testfile.nc' engine = 'netcdf4', nc_format = 'NETCDF3_CLASSIC'

@pytest.mark.parametrize('engine,nc_format', ENGINES_AND_FORMATS)  # noqa
def test_dask_distributed_netcdf_roundtrip(
        loop, tmp_netcdf_filename, engine, nc_format):

    if engine not in ENGINES:
        pytest.skip('engine not available')

    chunks = {'dim1': 4, 'dim2': 3, 'dim3': 6}

    with cluster() as (s, [a, b]):
        with Client(s['address'], loop=loop):

            original = create_test_data().chunk(chunks)

            if engine == 'scipy':
                with pytest.raises(NotImplementedError):
                    original.to_netcdf(tmp_netcdf_filename,
                                       engine=engine, format=nc_format)
                return

            original.to_netcdf(tmp_netcdf_filename,
                               engine=engine, format=nc_format)

            with xr.open_dataset(tmp_netcdf_filename,
                                 chunks=chunks, engine=engine) as restored:
                assert isinstance(restored.var1.data, da.Array)
                computed = restored.compute()
              assert_allclose(original, computed)

xarray/tests/test_distributed.py:87:


../../miniconda3/envs/xarray-py37/lib/python3.7/contextlib.py:119: in __exit__
    next(self.gen)


nworkers = 2, nanny = False, worker_kwargs = {}, active_rpc_timeout = 1, scheduler_kwargs = {}

@contextmanager
def cluster(nworkers=2, nanny=False, worker_kwargs={}, active_rpc_timeout=1,
            scheduler_kwargs={}):
    ...  # trimmed
    start = time()
    while list(ws):
        sleep(0.01)
      assert time() < start + 1, 'Workers still around after one second'

E AssertionError: Workers still around after one second

../../miniconda3/envs/xarray-py37/lib/python3.7/site-packages/distributed/utils_test.py:721: AssertionError
------------------------------------------------ Captured stderr call ------------------------------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:51715
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:51718
distributed.worker - INFO - Listening to: tcp://127.0.0.1:51718
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:51715
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 17.18 GB
distributed.worker - INFO - Local Directory: /Users/shoyer/dev/xarray/_test_worker-5cabd1b7-4d9c-49eb-a79e-205c588f5dae/worker-n8uv72yx
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:51720
distributed.worker - INFO - Listening to: tcp://127.0.0.1:51720
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:51715
distributed.scheduler - INFO - Register tcp://127.0.0.1:51718
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 17.18 GB
distributed.worker - INFO - Local Directory: /Users/shoyer/dev/xarray/_test_worker-71a426d4-bd34-4808-9d33-79cac2bb4801/worker-a70rlf4r
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:51718
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:51715
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://127.0.0.1:51720
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:51720
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:51715
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.core - INFO - Starting established connection
distributed.core - INFO - Event loop was unresponsive in Worker for 1.05s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
distributed.scheduler - INFO - Receive client connection: Client-worker-5a5c81de-5972-11e9-9136-8c85907bce57
distributed.core - INFO - Starting established connection
distributed.core - INFO - Event loop was unresponsive in Worker for 1.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
distributed.scheduler - INFO - Receive client connection: Client-worker-5b2496d8-5972-11e9-9137-8c85907bce57
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.scheduler - INFO - Remove client Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.scheduler - INFO - Close client connection: Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:51720
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:51718
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:51720
distributed.core - INFO - Removing comms to tcp://127.0.0.1:51720
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:51718
distributed.core - INFO - Removing comms to tcp://127.0.0.1:51718
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Remove client Client-worker-5b2496d8-5972-11e9-9137-8c85907bce57
distributed.scheduler - INFO - Remove client Client-worker-5a5c81de-5972-11e9-9136-8c85907bce57
distributed.scheduler - INFO - Close client connection: Client-worker-5b2496d8-5972-11e9-9137-8c85907bce57
distributed.scheduler - INFO - Close client connection: Client-worker-5a5c81de-5972-11e9-9136-8c85907bce57
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
```

Version info:

```
In [2]: xarray.show_versions()

INSTALLED VERSIONS
------------------
commit: 2ce0639ee2ba9c7b1503356965f77d847d6cfcdf
python: 3.7.2 (default, Dec 29 2018, 00:00:04) [Clang 4.0.1 (tags/RELEASE_401/final)]
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2

xarray: 0.12.1+4.g2ce0639e
pandas: 0.24.0
numpy: 1.15.4
scipy: 1.1.0
netCDF4: 1.4.3.2
pydap: None
h5netcdf: 0.7.0
h5py: 2.9.0
Nio: None
zarr: 2.2.0
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.1.5
distributed: 1.26.1
matplotlib: 3.0.2
cartopy: 0.17.0
seaborn: 0.9.0
setuptools: 40.0.0
pip: 18.0
conda: None
pytest: 4.0.1
IPython: 6.5.0
sphinx: 1.8.2
```

@mrocklin does this sort of error look familiar to you?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2873/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  not_planned xarray 13221727 issue
253395960 MDU6SXNzdWUyNTMzOTU5NjA= 1533 Index variables loaded from dask can be computed twice shoyer 1217238 closed 0     6 2017-08-28T17:18:27Z 2023-04-06T04:15:46Z 2023-04-06T04:15:46Z MEMBER      

as reported by @crusaderky in #1522

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1533/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
98587746 MDU6SXNzdWU5ODU4Nzc0Ng== 508 Ignore missing variables when concatenating datasets? shoyer 1217238 closed 0     8 2015-08-02T06:03:57Z 2023-01-20T16:04:28Z 2023-01-20T16:04:28Z MEMBER      

Several users (@raj-kesavan, @richardotis, now myself) have wondered about how to concatenate xray Datasets with different variables.

With the current xray.concat, you need to awkwardly create dummy variables filled with NaN in datasets that don't have them (or drop mismatched variables entirely). Neither of these are great options -- concat should have an option (the default?) to take care of this for the user.

This would also be more consistent with pd.concat, which takes a more relaxed approach to matching dataframes with different variables (it does an outer join).
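The pd.concat behavior referenced above, where mismatched columns are outer-joined and missing entries filled with NaN, looks like this (a minimal pandas illustration with made-up column names):

```python
import pandas as pd

df1 = pd.DataFrame({'temp': [1.0, 2.0]})
df2 = pd.DataFrame({'temp': [3.0, 4.0], 'precip': [0.1, 0.2]})

# pd.concat outer-joins the columns: df1 has no 'precip' column, so those
# entries become NaN instead of raising an error.
combined = pd.concat([df1, df2], ignore_index=True)
print(combined)
```

An analogous option for xr.concat would fill missing variables with NaN rather than requiring the user to create dummy variables by hand.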

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/508/reactions",
    "total_count": 6,
    "+1": 6,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
623804131 MDU6SXNzdWU2MjM4MDQxMzE= 4090 Error with indexing 2D lat/lon coordinates shoyer 1217238 closed 0     2 2020-05-24T06:19:45Z 2022-09-28T12:06:03Z 2022-09-28T12:06:03Z MEMBER      

```
filslp = "ChonghuaYinData/prmsl.mon.mean.nc"
filtmp = "ChonghuaYinData/air.sig995.mon.mean.nc"
filprc = "ChonghuaYinData/precip.mon.mean.nc"

ds_slp = xr.open_dataset(filslp).sel(time=slice(str(yrStrt)+'-01-01', str(yrLast)+'-12-31'))
```

ds_slp outputs:

```
<xarray.Dataset>
Dimensions:            (nbnds: 2, time: 480, x: 349, y: 277)
Coordinates:
  * time               (time) datetime64[ns] 1979-01-01 ... 2018-12-01
    lat                (y, x) float32 ...
    lon                (y, x) float32 ...
  * y                  (y) float32 0.0 32463.0 64926.0 ... 8927325.0 8959788.0
  * x                  (x) float32 0.0 32463.0 64926.0 ... 11264660.0 11297120.0
Dimensions without coordinates: nbnds
Data variables:
    Lambert_Conformal  int32 ...
    prmsl              (time, y, x) float32 ...
    time_bnds          (time, nbnds) float64 ...
Attributes:
    Conventions:    CF-1.2
    centerlat:      50.0
    centerlon:      -107.0
    comments:
    institution:    National Centers for Environmental Prediction
    latcorners:     [ 1.000001  0.897945 46.3544   46.63433 ]
    loncorners:     [-145.5     -68.32005  -2.569891 148.6418 ]
    platform:       Model
    standardpar1:   50.0
    standardpar2:   50.000001
    title:          NARR Monthly Means
    dataset_title:  NCEP North American Regional Reanalysis (NARR)
    history:        created 2016/04/12 by NOAA/ESRL/PSD
    references:     https://www.esrl.noaa.gov/psd/data/gridded/data.narr.html
    source:         http://www.emc.ncep.noaa.gov/mmb/rreanl/index.html
    References:
```

```
yrStrt = 1950  # manually specify for convenience
yrLast = 2018  # 20th century ends 2018

clStrt = 1950  # reference climatology for SOI
clLast = 1979

yrStrtP = 1979  # 1st year GPCP
yrLastP = yrLast  # match 20th century

latT = -17.6  # Tahiti
lonT = 210.75
latD = -12.5  # Darwin
lonD = 130.83

# select grids of T and D
T = ds_slp.sel(lat=latT, lon=lonT, method='nearest')
D = ds_slp.sel(lat=latD, lon=lonD, method='nearest')
```

outputs:

```
ValueError                                Traceback (most recent call last)
<ipython-input-27-6702b30f473f> in <module>
      1 # select grids of T and D
----> 2 T = ds_slp.sel(lat=latT, lon=lonT, method='nearest')
      3 D = ds_slp.sel(lat=latD, lon=lonD, method='nearest')

~\Anaconda3\lib\site-packages\xarray\core\dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2004         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
   2005         pos_indexers, new_indexes = remap_label_indexers(
-> 2006             self, indexers=indexers, method=method, tolerance=tolerance
   2007         )
   2008         result = self.isel(indexers=pos_indexers, drop=drop)

~\Anaconda3\lib\site-packages\xarray\core\coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    378
    379     pos_indexers, new_indexes = indexing.remap_label_indexers(
--> 380         obj, v_indexers, method=method, tolerance=tolerance
    381     )
    382     # attach indexer's coordinate to pos_indexers

~\Anaconda3\lib\site-packages\xarray\core\indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    257     new_indexes = {}
    258
--> 259     dim_indexers = get_dim_indexers(data_obj, indexers)
    260     for dim, label in dim_indexers.items():
    261         try:

~\Anaconda3\lib\site-packages\xarray\core\indexing.py in get_dim_indexers(data_obj, indexers)
    223     ]
    224     if invalid:
--> 225         raise ValueError("dimensions or multi-index levels %r do not exist" % invalid)
    226
    227     level_indexers = defaultdict(dict)

ValueError: dimensions or multi-index levels ['lat', 'lon'] do not exist
```

Does anyone know how to fix this problem? Thank you very much.

Originally posted by @JimmyGao0204 in https://github.com/pydata/xarray/issues/475#issuecomment-633172787

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4090/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1210147360 I_kwDOAMm_X85IIWIg 6504 test_weighted.test_weighted_operations_nonequal_coords should avoid depending on random number seed shoyer 1217238 closed 0 shoyer 1217238   0 2022-04-20T19:56:19Z 2022-08-29T20:42:30Z 2022-08-29T20:42:30Z MEMBER      

What happened?

In testing an upgrade to the latest version of xarray in our systems, I noticed this test failing:

```
def test_weighted_operations_nonequal_coords():
    # There are no weights for a == 4, so that data point is ignored.
    weights = DataArray(np.random.randn(4), dims=("a",), coords=dict(a=[0, 1, 2, 3]))
    data = DataArray(np.random.randn(4), dims=("a",), coords=dict(a=[1, 2, 3, 4]))
    check_weighted_operations(data, weights, dim="a", skipna=None)

    q = 0.5
    result = data.weighted(weights).quantile(q, dim="a")
    # Expected value computed using code from https://aakinshin.net/posts/weighted-quantiles/ with values at a=1,2,3
    expected = DataArray([0.9308707], coords={"quantile": [q]}).squeeze()
    assert_allclose(result, expected)

E AssertionError: Left and right DataArray objects are not close
E
E Differing values:
E L
E   array(0.919569)
E R
E   array(0.930871)
```

It appears that this test is hard-coded to match a particular random number seed, which in turn fixes the results of np.random.randn().

What did you expect to happen?

Whenever possible, Xarray's own tests should avoid relying on particular random number generators, e.g., in this case we could specify the input values explicitly instead.

A back-up option would be to explicitly set the random seed locally inside the tests, e.g., by creating a np.random.RandomState() with a fixed seed and using that. The global random state used by np.random.randn() is sensitive to implementation details like the order in which tests are run.
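Both suggestions can be sketched in a few lines (the values here are invented for illustration):

```python
import numpy as np

# Preferred: deterministic inputs, with no dependence on any random state.
weights = np.array([0.25, 0.5, 1.0, 0.75])

# Back-up: a locally seeded generator, isolated from the global state
# that bare np.random.randn() calls read and mutate.
rng = np.random.RandomState(1234)
data = rng.randn(4)

# The same seed always reproduces the same draws, regardless of test order:
assert np.allclose(data, np.random.RandomState(1234).randn(4))
```

Either way, the expected value in the test becomes a function of inputs the test itself controls, not of how many other tests touched the global generator first.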

Minimal Complete Verifiable Example

No response

Relevant log output

No response

Anything else we need to know?

No response

Environment

...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6504/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1210267320 I_kwDOAMm_X85IIza4 6505 Dropping a MultiIndex variable raises an error after explicit indexes refactor shoyer 1217238 closed 0     3 2022-04-20T22:07:26Z 2022-07-21T14:46:58Z 2022-07-21T14:46:58Z MEMBER      

What happened?

With the latest released version of Xarray, it is possible to delete all variables corresponding to a MultiIndex by simply deleting the name of the MultiIndex.

After the explicit indexes refactor (i.e., using the "main" development branch) this now raises an error about how this would "corrupt" index state. This comes up when using drop() and assign_coords() and possibly some other methods.

This is not hard to work around, but we may want to consider this bug a blocker for the next Xarray release. I found the issue surfaced in several projects when attempting to use the new version of Xarray inside Google's codebase.

CC @benbovy in case you have any thoughts to share.

What did you expect to happen?

For now, we should preserve the behavior of deleting the variables corresponding to MultiIndex levels, but should issue a deprecation warning encouraging users to explicitly delete everything.

Minimal Complete Verifiable Example

```Python
import xarray

array = xarray.DataArray(
    [[1, 2], [3, 4]],
    dims=['x', 'y'],
    coords={'x': ['a', 'b']},
)
stacked = array.stack(z=['x', 'y'])
print(stacked.drop('z'))
print()
print(stacked.assign_coords(z=[1, 2, 3, 4]))
```

Relevant log output

```Python
ValueError                                Traceback (most recent call last)
Input In [1], in <cell line: 9>()
      3 array = xarray.DataArray(
      4     [[1, 2], [3, 4]],
      5     dims=['x', 'y'],
      6     coords={'x': ['a', 'b']},
      7 )
      8 stacked = array.stack(z=['x', 'y'])
----> 9 print(stacked.drop('z'))
     10 print()
     11 print(stacked.assign_coords(z=[1, 2, 3, 4]))

File ~/dev/xarray/xarray/core/dataarray.py:2425, in DataArray.drop(self, labels, dim, errors, labels_kwargs)
   2408 def drop(
   2409     self,
   2410     labels: Mapping = None,
   (...)
   2414     labels_kwargs,
   2415 ) -> DataArray:
   2416     """Backward compatible method based on drop_vars and drop_sel
   2417
   2418     Using either drop_vars or drop_sel is encouraged
   (...)
   2423     DataArray.drop_sel
   2424     """
-> 2425     ds = self._to_temp_dataset().drop(labels, dim, errors=errors)
   2426     return self._from_temp_dataset(ds)

File ~/dev/xarray/xarray/core/dataset.py:4590, in Dataset.drop(self, labels, dim, errors, **labels_kwargs)
   4584 if dim is None and (is_scalar(labels) or isinstance(labels, Iterable)):
   4585     warnings.warn(
   4586         "dropping variables using drop will be deprecated; using drop_vars is encouraged.",
   4587         PendingDeprecationWarning,
   4588         stacklevel=2,
   4589     )
-> 4590     return self.drop_vars(labels, errors=errors)
   4591 if dim is not None:
   4592     warnings.warn(
   4593         "dropping labels using list-like labels is deprecated; using "
   4594         "dict-like arguments with drop_sel, e.g. `ds.drop_sel(dim=[labels]).",
   4595         DeprecationWarning,
   4596         stacklevel=2,
   4597     )

File ~/dev/xarray/xarray/core/dataset.py:4549, in Dataset.drop_vars(self, names, errors)
   4546 if errors == "raise":
   4547     self._assert_all_in_dataset(names)
-> 4549 assert_no_index_corrupted(self.xindexes, names)
   4551 variables = {k: v for k, v in self._variables.items() if k not in names}
   4552 coord_names = {k for k in self._coord_names if k in variables}

File ~/dev/xarray/xarray/core/indexes.py:1394, in assert_no_index_corrupted(indexes, coord_names)
   1392 common_names_str = ", ".join(f"{k!r}" for k in common_names)
   1393 index_names_str = ", ".join(f"{k!r}" for k in index_coords)
-> 1394 raise ValueError(
   1395     f"cannot remove coordinate(s) {common_names_str}, which would corrupt "
   1396     f"the following index built from coordinates {index_names_str}:\n"
   1397     f"{index}"
   1398 )

ValueError: cannot remove coordinate(s) 'z', which would corrupt the following index built from coordinates 'z', 'x', 'y':
<xarray.core.indexes.PandasMultiIndex object at 0x148c95150>
```

Anything else we need to know?

No response

Environment

```
INSTALLED VERSIONS
------------------
commit: 33cdabd261b5725ac357c2823bd0f33684d3a954
python: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:42:03) [Clang 12.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 21.4.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 0.18.3.dev137+g96c56836
pandas: 1.4.2
numpy: 1.22.3
scipy: 1.8.0
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.11.3
cftime: 1.6.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2022.04.1
distributed: 2022.4.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.3.0
cupy: None
pint: None
sparse: None
setuptools: 62.1.0
pip: 22.0.4
conda: None
pytest: 7.1.1
IPython: 8.2.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6505/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
711626733 MDU6SXNzdWU3MTE2MjY3MzM= 4473 Wrap numpy-groupies to speed up Xarray's groupby aggregations shoyer 1217238 closed 0     8 2020-09-30T04:43:04Z 2022-05-15T02:38:29Z 2022-05-15T02:38:29Z MEMBER      

Is your feature request related to a problem? Please describe.

Xarray's groupby aggregations (e.g., groupby(..).sum()) are very slow compared to pandas, as described in https://github.com/pydata/xarray/issues/659.

Describe the solution you'd like

We could speed things up considerably (easily 100x) by wrapping the numpy-groupies package.

Additional context

One challenge is how to handle dask arrays (and other duck arrays). In some cases it might make sense to apply the numpy-groupies function (using apply_ufunc), but in other cases it might be better to stick with the current indexing + concatenate solution. We could either pick some simple heuristics for choosing the algorithm to use on dask arrays, or could just stick with the current algorithm for now.

In particular, it might make sense to stick with the current algorithm if there are many chunks in the arrays to be aggregated along the "grouped" dimension (depending on the size of the unique group values).
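The kind of speedup at stake is visible even with plain NumPy: a grouped sum can run in a single vectorized pass via np.bincount, instead of indexing and concatenating once per group. This is only a sketch of the idea, not numpy-groupies' actual API:

```python
import numpy as np

values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
groups = np.array([0, 1, 0, 1, 2])  # integer group codes, as from a factorize step

# np.bincount sums `weights` per integer bin in one vectorized pass:
group_sums = np.bincount(groups, weights=values)
print(group_sums)  # [4. 6. 5.]
```

numpy-groupies generalizes this pattern to other aggregations (mean, max, etc.), which is what wrapping it would buy for groupby(...).sum() and friends.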

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4473/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
621123222 MDU6SXNzdWU2MjExMjMyMjI= 4081 Wrap "Dimensions" onto multiple lines in xarray.Dataset repr? shoyer 1217238 closed 0     4 2020-05-19T16:31:59Z 2022-04-29T19:59:24Z 2022-04-29T19:59:24Z MEMBER      

Here's an example of a large dataset from @alimanfoo: https://nbviewer.jupyter.org/gist/alimanfoo/b74b08465727894538d5b161b3ced764

```
<xarray.Dataset>
Dimensions:          (__variants/BaseCounts_dim1: 4, __variants/MLEAC_dim1: 3, __variants/MLEAF_dim1: 3, alt_alleles: 3, ploidy: 2, samples: 1142, variants: 21442865)
Coordinates:
    samples/ID       (samples) object dask.array<chunksize=(1142,), meta=np.ndarray>
    variants/CHROM   (variants) object dask.array<chunksize=(21442865,), meta=np.ndarray>
    variants/POS     (variants) int32 dask.array<chunksize=(4194304,), meta=np.ndarray>
Dimensions without coordinates: __variants/BaseCounts_dim1, __variants/MLEAC_dim1, __variants/MLEAF_dim1, alt_alleles, ploidy, samples, variants
Data variables:
    variants/ABHet   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/ABHom   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/AC      (variants, alt_alleles) int32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    variants/AF      (variants, alt_alleles) float32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    ...
```

I know similarly large datasets with lots of dimensions come up in other contexts as well, e.g., with geophysical model output.

That's a very long first line! This would be easier to read as:

```
<xarray.Dataset>
Dimensions:          (__variants/BaseCounts_dim1: 4, __variants/MLEAC_dim1: 3,
                      __variants/MLEAF_dim1: 3, alt_alleles: 3, ploidy: 2,
                      samples: 1142, variants: 21442865)
Coordinates:
    samples/ID       (samples) object dask.array<chunksize=(1142,), meta=np.ndarray>
    variants/CHROM   (variants) object dask.array<chunksize=(21442865,), meta=np.ndarray>
    variants/POS     (variants) int32 dask.array<chunksize=(4194304,), meta=np.ndarray>
Dimensions without coordinates: __variants/BaseCounts_dim1, __variants/MLEAC_dim1, __variants/MLEAF_dim1, alt_alleles, ploidy, samples, variants
Data variables:
    variants/ABHet   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/ABHom   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/AC      (variants, alt_alleles) int32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    variants/AF      (variants, alt_alleles) float32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    ...
```

or maybe:

```
<xarray.Dataset>
Dimensions:
    __variants/BaseCounts_dim1: 4
    __variants/MLEAC_dim1: 3
    __variants/MLEAF_dim1: 3
    alt_alleles: 3
    ploidy: 2
    samples: 1142
    variants: 21442865
Coordinates:
    samples/ID       (samples) object dask.array<chunksize=(1142,), meta=np.ndarray>
    variants/CHROM   (variants) object dask.array<chunksize=(21442865,), meta=np.ndarray>
    variants/POS     (variants) int32 dask.array<chunksize=(4194304,), meta=np.ndarray>
Dimensions without coordinates: __variants/BaseCounts_dim1, __variants/MLEAC_dim1, __variants/MLEAF_dim1, alt_alleles, ploidy, samples, variants
Data variables:
    variants/ABHet   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/ABHom   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/AC      (variants, alt_alleles) int32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    variants/AF      (variants, alt_alleles) float32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    ...
```

Dimensions without coordinates could probably use some wrapping, too.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4081/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
205455788 MDU6SXNzdWUyMDU0NTU3ODg= 1251 Consistent naming for xarray's methods that apply functions shoyer 1217238 closed 0     13 2017-02-05T21:27:24Z 2022-04-27T20:06:25Z 2022-04-27T20:06:25Z MEMBER      

We currently have two types of methods that take a function to apply to xarray objects:

  • pipe (on DataArray and Dataset): apply a function to this entire object (array.pipe(func) -> func(array))
  • apply (on Dataset and GroupBy): apply a function to each labeled object in this object (e.g., ds.apply(func) -> Dataset({k: func(v) for k, v in ds.data_vars.items()})).

And one more method that we want to add but isn't finalized yet -- currently named apply_ufunc: - Apply a function that acts on unlabeled (i.e., numpy) arrays to each array in the object

I'd like to have three distinct names that makes it clear what these methods do and how they are different. This has come up a few times recently, e.g., https://github.com/pydata/xarray/issues/1130

One proposal: rename apply to map, and then use apply only for methods that act on unlabeled arrays. This would require a deprecation cycle, but eventually it would let us add .apply methods for handling raw arrays to both Dataset and DataArray. (We could use a separate apply method from apply_ufunc to convert dim arguments to axis and not do automatic broadcasting.)
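The distinction between the first two patterns can be sketched with plain functions and a dict standing in for a Dataset (hypothetical helpers, not xarray's implementation):

```python
def pipe(obj, func):
    # pipe: hand the *entire* object to func.
    return func(obj)

def map_vars(data_vars, func):
    # map (today's Dataset.apply): call func once per labeled variable.
    return {k: func(v) for k, v in data_vars.items()}

ds = {'a': [1, 2], 'b': [3]}
print(pipe(ds, len))      # 2 -- length of the whole mapping
print(map_vars(ds, len))  # {'a': 2, 'b': 1} -- length of each variable
```

The third pattern (apply_ufunc) differs again: the function receives the unlabeled arrays backing each variable, not the labeled objects themselves.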

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1251/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
864249974 MDU6SXNzdWU4NjQyNDk5NzQ= 5202 Make creating a MultiIndex in stack optional shoyer 1217238 closed 0     7 2021-04-21T20:21:03Z 2022-03-17T17:11:42Z 2022-03-17T17:11:42Z MEMBER      

As @Hoeze notes in https://github.com/pydata/xarray/issues/5179, calling stack() can be "incredibly slow and memory-demanding, since it creates a MultiIndex of every possible coordinate in the array."

This is true with how stack() works currently, but I'm not sure this is necessary. I suspect it's a vestigial design choice from copying pandas, back from before Xarray had optional indexes. One benefit is that it's convenient for making unstack() the inverse of stack(), but that isn't always required.

Regardless of how we define the semantics for boolean indexing (https://github.com/pydata/xarray/issues/1887), it seems like it could be a good idea to allow stack to skip creating a MultiIndex for the new dimension, via a new keyword argument such as ds.stack(index=False). This would be equivalent to calling reset_index() after stack() but would be cheaper because the MultiIndex is never created in the first place.
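The stack-then-reset_index equivalence reads like this in code (variable and dimension names here are illustrative):

```python
import xarray as xr

ds = xr.Dataset(
    {"v": (("x", "y"), [[1, 2], [3, 4]])},
    coords={"x": [10, 20], "y": ["a", "b"]},
)

# stack() builds a MultiIndex over the new "z" dimension by default
stacked = ds.stack(z=("x", "y"))

# the proposal is equivalent to this, minus the cost of ever
# materializing the MultiIndex in the first place
no_index = stacked.reset_index("z")
```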

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5202/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
874292512 MDU6SXNzdWU4NzQyOTI1MTI= 5251 Switch default for Zarr reading/writing to consolidated=True? shoyer 1217238 closed 0     4 2021-05-03T06:59:42Z 2021-08-30T15:21:11Z 2021-08-30T15:21:11Z MEMBER      

Consolidated metadata was a new feature in Zarr v2.3, which was released over two year ago (March 22, 2019).

Since then, I have used consolidated=True every time I've written or opened a Zarr store. As far as I can tell, this is almost always a good idea: - With local storage, it usually doesn't really matter. You spend a bit of time writing the consolidated metadata and have one extra file on disk, but the overhead is typically negligible. - With Cloud object stores or network filesystems, it can matter quite a large amount. Without consolidated metadata, these systems can be unusably slow for opening datasets. Cloud storage is of course the main use-case for Zarr. If you're using a local disk, you might as well stick with single files such as netCDF.

I wonder if consolidated metadata is mature enough now that we could consider switching the default behavior in Xarray. From my perspective, this is a big "gotcha" for getting good performance with Zarr. More than one of my colleagues has been unimpressed with the performance of Zarr until they learned to set consolidated=True.

I would suggest doing this in a way that is almost entirely backwards compatible, with only a minor performance cost for reading non-consolidated datasets: - to_zarr() switches the default to consolidated=True. The consolidate_metadata() call will thus happen by default. - open_zarr() switches the default to consolidated=None, which means "Try reading consolidated metadata, and fall-back to non-consolidated if that fails." This will be slightly slower for non-consolidated metadata due to the extra file-lookup, but given that opening with non-consolidated metadata already requires a moderately large number of file look-ups, I doubt anyone will notice the difference.

CC @rabernat

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5251/reactions",
    "total_count": 11,
    "+1": 11,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
928402742 MDU6SXNzdWU5Mjg0MDI3NDI= 5516 Rename master branch -> main shoyer 1217238 closed 0     4 2021-06-23T15:45:57Z 2021-07-23T21:58:39Z 2021-07-23T21:58:39Z MEMBER      

This is a best practice for inclusive projects.

See https://github.com/github/renaming for guidance.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5516/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
890534794 MDU6SXNzdWU4OTA1MzQ3OTQ= 5295 Engine is no longer inferred for filenames not ending in ".nc" shoyer 1217238 closed 0     0 2021-05-12T22:28:46Z 2021-07-15T14:57:54Z 2021-05-14T22:40:14Z MEMBER      

This works with xarray=0.17.0:

```python
import xarray
xarray.Dataset({'x': [1, 2, 3]}).to_netcdf('tmp')
xarray.open_dataset('tmp')
```

On xarray 0.18.0, it fails: ```


ValueError Traceback (most recent call last) <ipython-input-1-20e128a730aa> in <module>() 2 3 xarray.Dataset({'x': [1, 2, 3]}).to_netcdf('tmp') ----> 4 xarray.open_dataset('tmp')

/usr/local/lib/python3.7/dist-packages/xarray/backends/api.py in open_dataset(filename_or_obj, engine, chunks, cache, decode_cf, mask_and_scale, decode_times, decode_timedelta, use_cftime, concat_characters, decode_coords, drop_variables, backend_kwargs, args, *kwargs) 483 484 if engine is None: --> 485 engine = plugins.guess_engine(filename_or_obj) 486 487 backend = plugins.get_backend(engine)

/usr/local/lib/python3.7/dist-packages/xarray/backends/plugins.py in guess_engine(store_spec) 110 warnings.warn(f"{engine!r} fails while guessing", RuntimeWarning) 111 --> 112 raise ValueError("cannot guess the engine, try passing one explicitly") 113 114

ValueError: cannot guess the engine, try passing one explicitly ```

I'm not entirely sure what changed. My guess is that we used to fall-back to trying to use SciPy, but don't do that anymore. A potential fix would be reading strings as filenames in xarray.backends.utils.read_magic_number.
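In the meantime, the workaround is to name the engine explicitly rather than relying on inference; a sketch, assuming the scipy backend is available:

```python
import os
import tempfile

import xarray as xr

path = os.path.join(tempfile.mkdtemp(), "tmp")  # no ".nc" suffix, as in the report
xr.Dataset({"x": ("x", [1, 2, 3])}).to_netcdf(path, engine="scipy")

# passing engine= explicitly sidesteps the failing guess
reopened = xr.open_dataset(path, engine="scipy")
```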

Related: https://github.com/pydata/xarray/issues/5291

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5295/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
891281614 MDU6SXNzdWU4OTEyODE2MTQ= 5302 Suggesting specific IO backends to install when open_dataset() fails shoyer 1217238 closed 0     3 2021-05-13T18:45:28Z 2021-06-23T08:18:07Z 2021-06-23T08:18:07Z MEMBER      

Currently, Xarray's internal backends don't get registered unless the necessary dependencies are installed: https://github.com/pydata/xarray/blob/1305d9b624723b86050ca5b2d854e5326bbaa8e6/xarray/backends/netCDF4_.py#L567-L568

In order to facilitating suggesting a specific backend to install (e.g., to improve error messages from opening tutorial datasets https://github.com/pydata/xarray/issues/5291), I would suggest that Xarray always registers its own backend entrypoints. Then we make the following changes to the plugin protocol:

  • guess_can_open() should work regardless of whether the underlying backend is installed
  • installed() returns a boolean reporting whether backend is installed. The default method in the base class would return True, for backwards compatibility.
  • open_dataset() of course should error if the backend is not installed.

This will let us leverage the existing guess_can_open() functionality to suggest specific optional dependencies to install. E.g., if you supply a netCDF3 file: Xarray cannot find a matching installed backend for this file in the installed backends ["h5netcdf"]. Consider installing one of the following backends which reports a match: ["scipy", "netcdf4"]
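A minimal sketch of the proposed protocol in plain Python (the class names and the required_module attribute are hypothetical, not xarray's actual plugin API):

```python
import importlib.util


class BackendSketch:
    # name of the package this backend depends on; None means "no extra deps"
    required_module = None

    def installed(self):
        # default True, for backwards compatibility with third-party backends
        if self.required_module is None:
            return True
        return importlib.util.find_spec(self.required_module) is not None

    def guess_can_open(self, filename):
        # must work even when the backend's dependencies are missing,
        # so we can suggest what to install
        raise NotImplementedError


class ScipyBackendSketch(BackendSketch):
    required_module = "scipy"

    def guess_can_open(self, filename):
        return str(filename).endswith(".nc")
```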

Does this seem reasonable and worthwhile?

CC @aurghs @alexamici

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5302/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
416554477 MDU6SXNzdWU0MTY1NTQ0Nzc= 2797 Stalebot is being overly aggressive shoyer 1217238 closed 0     7 2019-03-03T19:37:37Z 2021-06-03T21:31:46Z 2021-06-03T21:22:48Z MEMBER      

E.g., see https://github.com/pydata/xarray/issues/1151 where stalebot closed an issue even after another comment.

Is this something we need to reconfigure or just a bug?

cc @pydata/xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2797/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
46049691 MDU6SXNzdWU0NjA0OTY5MQ== 255 Add Dataset.to_pandas() method shoyer 1217238 closed 0   0.5 987654 2 2014-10-17T00:01:36Z 2021-05-04T13:56:00Z 2021-05-04T13:56:00Z MEMBER      

This would be the complement of the DataArray constructor, converting an xray.DataArray into a 1D series, 2D DataFrame or 3D panel, whichever is appropriate.

to_pandas would also make sense for Dataset, if it could convert 0d datasets to series, e.g., pd.Series({k: v.item() for k, v in ds.items()}) (there is currently no direct way to do this), and revert to to_dataframe for higher dimensional input. - [x] DataArray method - [ ] Dataset method
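The DataArray half of the checklist has since been implemented; for reference:

```python
import xarray as xr

arr = xr.DataArray(
    [[1, 2], [3, 4]],
    dims=("x", "y"),
    coords={"x": [0, 1], "y": ["a", "b"]},
)

# 2D DataArray -> pandas.DataFrame; a 1D DataArray would give a Series
df = arr.to_pandas()
```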

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/255/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
346822633 MDU6SXNzdWUzNDY4MjI2MzM= 2336 test_88_character_filename_segmentation_fault should not try to write to the current working directory shoyer 1217238 closed 0     2 2018-08-02T01:06:41Z 2021-04-20T23:38:53Z 2021-04-20T23:38:53Z MEMBER      

This fails in cases where the current working directory does not support writes, e.g., as seen here ``` def test_88_character_filename_segmentation_fault(self): # should be fixed in netcdf4 v1.3.1 with mock.patch('netCDF4.__version__', '1.2.4'): with warnings.catch_warnings(): message = ('A segmentation fault may occur when the ' 'file path has exactly 88 characters') warnings.filterwarnings('error', message) with pytest.raises(Warning): # Need to construct 88 character filepath

              xr.Dataset().to_netcdf('a' * (88 - len(os.getcwd()) - 1))

tests/test_backends.py:1234:


core/dataset.py:1150: in to_netcdf compute=compute) backends/api.py:715: in to_netcdf autoclose=autoclose, lock=lock) backends/netCDF4_.py:332: in open ds = opener() backends/netCDF4_.py:231: in _open_netcdf4_group ds = nc4.Dataset(filename, mode=mode, **kwargs) third_party/py/netCDF4/_netCDF4.pyx:2111: in netCDF4._netCDF4.Dataset.init ???


??? E IOError: [Errno 13] Permission denied ```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2336/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
621082480 MDU6SXNzdWU2MjEwODI0ODA= 4080 Most arguments to open_dataset should be keyword only shoyer 1217238 closed 0     1 2020-05-19T15:38:51Z 2021-03-16T10:56:09Z 2021-03-16T10:56:09Z MEMBER      

open_dataset has a long list of arguments: xarray.open_dataset(filename_or_obj, group=None, decode_cf=True, mask_and_scale=None, decode_times=True, autoclose=None, concat_characters=True, decode_coords=True, engine=None, chunks=None, lock=None, cache=None, drop_variables=None, backend_kwargs=None, use_cftime=None)

Similarly to the case for pandas (https://github.com/pandas-dev/pandas/issues/27544), it would be nice to make most of these arguments keyword-only, e.g., def open_dataset(filename_or_obj, group, *, ...). For consistency, this would also apply to open_dataarray, decode_cf, open_mfdataset, etc.

This would encourage writing readable code when calling open_dataset() and would allow us to use better organization when adding new arguments (e.g., decode_timedelta in https://github.com/pydata/xarray/pull/4071).

To make this change, we could make use of the deprecate_nonkeyword_arguments decorator from https://github.com/pandas-dev/pandas/pull/27573
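For anyone unfamiliar with the syntax, a bare * in a def marks every parameter after it as keyword-only (the signature below is illustrative and heavily trimmed, not xarray's actual one):

```python
def open_dataset(filename_or_obj, *, group=None, decode_cf=True, engine=None):
    # everything after the bare `*` must be passed by keyword
    return {"file": filename_or_obj, "group": group, "engine": engine}


result = open_dataset("data.nc", engine="netcdf4")  # OK
# open_dataset("data.nc", "subgroup")  # TypeError: too many positional arguments
```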

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4080/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
645154872 MDU6SXNzdWU2NDUxNTQ4NzI= 4179 Consider revising our minimum dependency version policy shoyer 1217238 closed 0     7 2020-06-25T05:04:38Z 2021-02-22T05:02:25Z 2021-02-22T05:02:25Z MEMBER      

Our current policy is that xarray supports "the minor version (X.Y) initially published no more than N months ago" where N is:

  • Python: 42 months (NEP 29)
  • numpy: 24 months (NEP 29)
  • pandas: 12 months
  • scipy: 12 months
  • sparse, pint and other libraries that rely on NEP-18 for integration: very latest available versions only,
  • all other libraries: 6 months

I think this policy is too aggressive, particularly for pandas, SciPy and other libraries. Some of these projects can go 6+ months between minor releases. For example, version 2.3 of zarr is currently more than 6 months old. So if zarr released 2.4 today and xarray issued a new release tomorrow, then our policy would dictate that we could ask users to upgrade to the new version.

In https://github.com/pydata/xarray/pull/4178, I misinterpreted our policy as supporting "the most recent minor version (X.Y) initially published more than N months ago". This version makes a bit more sense to me: users only need to upgrade dependencies at least every N months to use the latest xarray release.

I understand that NEP-29 chose its language intentionally, so that distributors know ahead of time when they can drop support for a Python or NumPy version. But this seems like a (very) poor fit for projects without regular releases. At the very least we should adjust the specific time windows.

I'll see if I can gain some understanding of the motivation for this particular language over on the NumPy tracker...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4179/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
267927402 MDU6SXNzdWUyNjc5Mjc0MDI= 1652 Resolve warnings issued in the xarray test suite shoyer 1217238 closed 0     10 2017-10-24T07:36:55Z 2021-02-21T23:06:35Z 2021-02-21T23:06:34Z MEMBER      

82 warnings are currently issued in the process of running our test suite: https://gist.github.com/shoyer/db0b2c82efd76b254453216e957c4345

Some of can probably be safely ignored, but others are likely noticed by users, e.g., https://stackoverflow.com/questions/41130138/why-is-invalid-value-encountered-in-greater-warning-thrown-in-python-xarray-fo/41147570#41147570

It would be nice to clean up all of these, either by catching the appropriate upstream warning (if irrelevant) or changing our usage to avoid the warning. There may very well be a lurking FutureWarning in there somewhere that could cause issues when another library updates.

Probably the easiest way to get started here is to get the test suite running locally, and use py.test -W error to turn all warnings into errors.
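The same warnings-into-errors behavior is easy to reproduce outside the test runner with the standard warnings module:

```python
import warnings

# equivalent in spirit to `py.test -W error`: any warning raises
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        warnings.warn("pending API change", FutureWarning)
        caught = False
    except FutureWarning:
        caught = True
```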

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1652/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
777327298 MDU6SXNzdWU3NzczMjcyOTg= 4749 Option for combine_attrs with conflicting values silently dropped shoyer 1217238 closed 0     0 2021-01-01T18:04:49Z 2021-02-10T19:50:17Z 2021-02-10T19:50:17Z MEMBER      

merge() currently supports four options for merging attrs: combine_attrs : {"drop", "identical", "no_conflicts", "override"}, \ default: "drop" String indicating how to combine attrs of the objects being merged: - "drop": empty attrs on returned Dataset. - "identical": all attrs must be the same on every object. - "no_conflicts": attrs from all objects are combined, any that have the same name must also have the same value. - "override": skip comparing and copy attrs from the first dataset to the result.

It would be nice to have an option to combine attrs from all objects like "no_conflicts", but that drops attributes with conflicting values rather than raising an error. We might call this combine_attrs="drop_conflicts" or combine_attrs="matching".

This is similar to how xarray currently handles conflicting values for DataArray.name and would be more suitable to consider for the default behavior of merge and other functions/methods that merge coordinates (e.g., apply_ufunc, concat, where, binary arithmetic).
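This option was later added under the name "drop_conflicts"; with a reasonably recent xarray:

```python
import xarray as xr

a = xr.Dataset(attrs={"units": "m", "source": "model"})
b = xr.Dataset(attrs={"units": "km", "source": "model"})

# keep attrs that agree, silently drop the ones that conflict
merged = xr.merge([a, b], combine_attrs="drop_conflicts")
```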

cc @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4749/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
124809636 MDU6SXNzdWUxMjQ4MDk2MzY= 703 Document xray internals / advanced API shoyer 1217238 closed 0     2 2016-01-04T18:12:30Z 2020-11-03T17:33:32Z 2020-11-03T17:33:32Z MEMBER      

It would be useful to document the internal Variable class and the internal structure of Dataset and DataArray. This would be helpful for both new contributors and expert users, who might find Variable helpful as an advanced API.

I had some notes in an earlier version of the docs that could be adapted. Note, however, that the internal structure of DataArray changed in #648: http://xray.readthedocs.org/en/v0.2/tutorial.html#notes-on-xray-s-internals

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/703/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
169274464 MDU6SXNzdWUxNjkyNzQ0NjQ= 939 Consider how to deal with the proliferation of decoder options on open_dataset shoyer 1217238 closed 0     8 2016-08-04T01:57:26Z 2020-10-06T15:39:11Z 2020-10-06T15:39:11Z MEMBER      

There are already lots of keyword arguments, and users want even more! (#843)

Maybe we should use some sort of object to encapsulate desired options?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/939/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
644821435 MDU6SXNzdWU2NDQ4MjE0MzU= 4176 Pre-expand data and attributes in DataArray/Variable HTML repr? shoyer 1217238 closed 0     7 2020-06-24T18:22:35Z 2020-09-21T20:10:26Z 2020-06-28T17:03:40Z MEMBER      

Proposal

Given that a major purpose for printing an array is to look at data or attributes, I wonder if we should expand these sections by default? - I worry that clicking on icons to expand sections may not be easy to discover - This would also be consistent with the text repr, which shows these sections by default (the Dataset repr is already consistent by default between text and HTML)

Context

Currently the HTML repr for DataArray/Variable looks like this:

To see array data, you have to click on the icon:

(thanks to @max-sixty for making this a little bit more manageably sized in https://github.com/pydata/xarray/pull/3905!)

There's also a really nice repr for nested dask arrays:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4176/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
417542619 MDU6SXNzdWU0MTc1NDI2MTk= 2803 Test failure with TestValidateAttrs.test_validating_attrs shoyer 1217238 closed 0     6 2019-03-05T23:03:02Z 2020-08-25T14:29:19Z 2019-03-14T15:59:13Z MEMBER      

This is due to setting multi-dimensional attributes being an error, as of the latest netCDF4-Python release: https://github.com/Unidata/netcdf4-python/blob/master/Changelog

E.g., as seen on Appveyor: https://ci.appveyor.com/project/shoyer/xray/builds/22834250/job/9q0ip6i3cchlbkw2 ``` ================================== FAILURES =================================== ___ TestValidateAttrs.test_validating_attrs _____ self = <xarray.tests.test_backends.TestValidateAttrs object at 0x00000096BE5FAFD0> def test_validating_attrs(self): def new_dataset(): return Dataset({'data': ('y', np.arange(10.0))}, {'y': np.arange(10)})

    def new_dataset_and_dataset_attrs():
        ds = new_dataset()
        return ds, ds.attrs

    def new_dataset_and_data_attrs():
        ds = new_dataset()
        return ds, ds.data.attrs

    def new_dataset_and_coord_attrs():
        ds = new_dataset()
        return ds, ds.coords['y'].attrs

    for new_dataset_and_attrs in [new_dataset_and_dataset_attrs,
                                  new_dataset_and_data_attrs,
                                  new_dataset_and_coord_attrs]:
        ds, attrs = new_dataset_and_attrs()

        attrs[123] = 'test'
        with raises_regex(TypeError, 'Invalid name for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs[MiscObject()] = 'test'
        with raises_regex(TypeError, 'Invalid name for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs[''] = 'test'
        with raises_regex(ValueError, 'Invalid name for attr'):
            ds.to_netcdf('test.nc')

        # This one should work
        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = 'test'
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = {'a': 5}
        with raises_regex(TypeError, 'Invalid value for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = MiscObject()
        with raises_regex(TypeError, 'Invalid value for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = 5
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = 3.14
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = [1, 2, 3, 4]
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = (1.9, 2.5)
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = np.arange(5)
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = np.arange(12).reshape(3, 4)
        with create_tmp_file() as tmp_file:
          ds.to_netcdf(tmp_file)

xarray\tests\test_backends.py:3450:


xarray\core\dataset.py:1323: in to_netcdf compute=compute) xarray\backends\api.py:767: in to_netcdf unlimited_dims=unlimited_dims) xarray\backends\api.py:810: in dump_to_store unlimited_dims=unlimited_dims) xarray\backends\common.py:262: in store self.set_attributes(attributes) xarray\backends\common.py:278: in set_attributes self.set_attribute(k, v) xarray\backends\netCDF4_.py:418: in set_attribute set_nc_attribute(self.ds, key, value) xarray\backends\netCDF4.py:294: in _set_nc_attribute obj.setncattr(key, value) netCDF4_netCDF4.pyx:2781: in netCDF4._netCDF4.Dataset.setncattr ???


??? E ValueError: multi-dimensional array attributes not supported netCDF4_netCDF4.pyx:1514: ValueError ```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2803/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
676306518 MDU6SXNzdWU2NzYzMDY1MTg= 4331 Support explicitly setting a dimension order with to_dataframe() shoyer 1217238 closed 0     0 2020-08-10T17:45:17Z 2020-08-14T18:28:26Z 2020-08-14T18:28:26Z MEMBER      

As discussed in https://github.com/pydata/xarray/issues/2346, it would be nice to support explicitly setting the desired order of dimensions when calling Dataset.to_dataframe() or DataArray.to_dataframe().

There is nice precedent for this in the to_dask_dataframe method: http://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_dask_dataframe.html

I imagine we could copy the exact same API for to_dataframe.
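This proposal was implemented as a dim_order argument, matching the to_dask_dataframe precedent:

```python
import xarray as xr

ds = xr.Dataset(
    {"v": (("x", "y"), [[1, 2], [3, 4]])},
    coords={"x": [0, 1], "y": ["a", "b"]},
)

# choose the MultiIndex level order of the resulting DataFrame explicitly
df = ds.to_dataframe(dim_order=["y", "x"])
```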

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4331/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
671019427 MDU6SXNzdWU2NzEwMTk0Mjc= 4295 We shouldn't require a recent version of setuptools to install xarray shoyer 1217238 closed 0     33 2020-08-01T16:49:57Z 2020-08-14T09:52:42Z 2020-08-14T09:52:42Z MEMBER      

@canol reports on our mailing list that our setuptools 41.2 (released 21 August 2019) install requirement is making it hard to install recent versions of xarray at his company: https://groups.google.com/g/xarray/c/HS_xcZDEEtA/m/GGmW-3eMCAAJ

Hello, this is just feedback about an issue we experienced, which caused our internal tools stack to stay on xarray 0.15 instead of a newer version.

We are a company using xarray in our internal frameworks and at the beginning we didn't have any restrictions on xarray version in our requirements file, so that new installations of our framework were using the latest version of xarray. But a few months ago we started to hear complaints from users who were having problems with installing our framework, and the installation was failing because of xarray's requirement to use at least setuptools 41.2, which was released on the 21st of August last year. So it hasn't even been a year since it was released, which might be considered relatively new.

During the installation of our framework, pip was failing to update setuptools by saying that some other process is already using setuptools files so it cannot update setuptools. The people who are using our framework are not software developers so they didn't know how to solve this problem and it became so overwhelming for us maintainers that we set the xarray requirement to version >=0.15 <0.16. We also share our internal framework with customers of our company so we didn't want to bother the customers with any potential problems.

You can see some other people having a similar problem when trying to update setuptools here (although not related to xarray): https://stackoverflow.com/questions/49338652/pip-install-u-setuptools-fail-windows-10

It is not a big deal but I just wanted to give this as a feedback. I don't know how much xarray depends on setuptools' 41.2 version.

I was surprised to see this in our setup.cfg file, added by @crusaderky in #3628. The version requirement is not documented in our docs.

Given that setuptools may be challenging to upgrade, would it be possible to relax this version requirement?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4295/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
290593053 MDU6SXNzdWUyOTA1OTMwNTM= 1850 xarray contrib module shoyer 1217238 closed 0     25 2018-01-22T19:50:08Z 2020-07-23T16:34:10Z 2020-07-23T16:34:10Z MEMBER      

Over in #1288 @nbren12 wrote:

Overall, I think the xarray community could really benefit from some kind of centralized contrib package which has a low barrier to entry for these kinds of functions.

Yes, I agree that we should explore this. There are a lot of interesting projects building on xarray now but not great ways to discover them.

Are there other open source projects with a good model we should copy here? - Scikit-Learn has a separate GitHub org/repositories for contrib projects: https://github.com/scikit-learn-contrib. - TensorFlow has a contrib module within the TensorFlow namespace: tensorflow.contrib

This gives us two different models to consider. The first "separate repository" model might be easier/flexible from a maintenance perspective. Any preferences/thoughts?

There's also some nice overlap with the Pangeo project.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1850/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
35682274 MDU6SXNzdWUzNTY4MjI3NA== 158 groupby should work with name=None shoyer 1217238 closed 0     2 2014-06-13T15:38:00Z 2020-05-30T13:15:56Z 2020-05-30T13:15:56Z MEMBER      
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/158/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
612772669 MDU6SXNzdWU2MTI3NzI2Njk= 4030 Doc build on Azure is timing out on master shoyer 1217238 closed 0     1 2020-05-05T17:30:16Z 2020-05-05T21:49:26Z 2020-05-05T21:49:26Z MEMBER      

I don't know what's going on, but it currently times out after 1 hour: https://dev.azure.com/xarray/xarray/_build/results?buildId=2767&view=logs&j=7e620c85-24a8-5ffa-8b1f-642bc9b1fc36&t=68484831-0a19-5145-bfe9-6309e5f7691d

Is it possible to login to Azure to debug this stuff?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4030/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
598567792 MDU6SXNzdWU1OTg1Njc3OTI= 3966 HTML repr is slightly broken in Google Colab shoyer 1217238 closed 0     1 2020-04-12T20:44:51Z 2020-04-16T20:14:37Z 2020-04-16T20:14:32Z MEMBER      

The "data" toggles are pre-expanded and don't work.

See https://github.com/googlecolab/colabtools/issues/1145 for a full description.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3966/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
479434052 MDU6SXNzdWU0Nzk0MzQwNTI= 3206 DataFrame with MultiIndex -> xarray with sparse array shoyer 1217238 closed 0     1 2019-08-12T00:46:16Z 2020-04-06T20:41:26Z 2019-08-27T08:54:26Z MEMBER      

Now that we have preliminary support for sparse arrays in xarray, one really cool feature we could explore is creating sparse arrays from MultiIndexed pandas DataFrames.

Right now, xarray's methods for creating objects from pandas always create dense arrays, but the size of these dense arrays can get big really quickly if the MultiIndex is sparsely populated, e.g., python import pandas as pd import numpy as np import xarray df = pd.DataFrame({ 'w': range(10), 'x': list('abcdefghij'), 'y': np.arange(0, 100, 10), 'z': np.ones(10), }).set_index(['w', 'x', 'y']) print(xarray.Dataset.from_dataframe(df)) This length 10 DataFrame turned into a dense array with 1000 elements (only 10 of which are not NaN): <xarray.Dataset> Dimensions: (w: 10, x: 10, y: 10) Coordinates: * w (w) int64 0 1 2 3 4 5 6 7 8 9 * x (x) object 'a' 'b' 'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j' * y (y) int64 0 10 20 30 40 50 60 70 80 90 Data variables: z (w, x, y) float64 1.0 nan nan nan nan nan ... nan nan nan nan 1.0

We can imagine `xarray.Dataset.from_dataframe(df, sparse=True)` would make the same Dataset, but with sparse arrays (with a NaN fill value) instead of dense arrays.

Once sparse arrays work pretty well, this could actually obviate most of the use cases for MultiIndex in arrays. Arguably the model is quite a bit cleaner.
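As a rough illustration of why this conversion is natural (a sketch only, not a proposed implementation): the codes of a MultiIndex are already the integer coordinates of the stored values, which is exactly the COO representation of a sparse array.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'w': range(10),
    'x': list('abcdefghij'),
    'y': np.arange(0, 100, 10),
    'z': np.ones(10),
}).set_index(['w', 'x', 'y'])

idx = df.index
shape = tuple(len(level) for level in idx.levels)  # dense shape: (10, 10, 10)
coords = np.array(idx.codes)                       # (ndim, nnz) integer coordinates
data = df['z'].to_numpy()                          # the 10 stored values

print(shape, coords.shape, data.size)  # (10, 10, 10) (3, 10) 10
```

From here, `shape`, `coords` and `data` are directly the constructor arguments of a COO sparse array.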

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3206/reactions",
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 3,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
28376794 MDU6SXNzdWUyODM3Njc5NA== 25 Consistent rules for handling merges between variables with different attributes shoyer 1217238 closed 0     13 2014-02-26T22:37:01Z 2020-04-05T19:13:13Z 2014-09-04T06:50:49Z MEMBER      

Currently, variable attributes are checked for equality before allowing a merge, via a call to `xarray_equal`. It should be possible to merge datasets even if some of the variable metadata disagrees (conflicting attributes should be dropped). This is already the behavior for global attributes.

The right design of this feature should probably include some optional argument to `Dataset.merge` indicating how strict we want the merge to be. I can see at least three versions that could be useful:

1. Drop conflicting metadata silently.
2. Don't allow for conflicting values, but drop non-matching keys.
3. Require all keys and values to match.

We can argue about which of these should be the default option. My inclination is to be as flexible as possible by using 1 or 2 in most cases.
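For concreteness, the three strictness levels could be sketched as plain functions over attribute dicts (the function name and numbering here are hypothetical, purely for illustration):

```python
def merge_attrs(a, b, strictness=1):
    # strictness 3: require all keys and values to match exactly
    if strictness == 3:
        if a != b:
            raise ValueError('all keys and values must match')
        return dict(a)
    merged = {}
    # keys present in only one dict are dropped under levels 1 and 2
    for key in a.keys() & b.keys():
        if a[key] == b[key]:
            merged[key] = a[key]
        elif strictness == 2:
            # strictness 2: a shared key with conflicting values is an error
            raise ValueError('conflicting value for %r' % key)
        # strictness 1: silently drop the conflicting key
    return merged

attrs1 = {'units': 'm', 'long_name': 'height'}
attrs2 = {'units': 'm', 'long_name': 'elevation'}
print(merge_attrs(attrs1, attrs2))  # {'units': 'm'}
```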

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/25/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
29136905 MDU6SXNzdWUyOTEzNjkwNQ== 60 Implement DataArray.idxmax() shoyer 1217238 closed 0   1.0 741199 14 2014-03-10T22:03:06Z 2020-03-29T01:54:25Z 2020-03-29T01:54:25Z MEMBER      

Should match the pandas function: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html
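For concreteness, the pandas behavior the proposed method would mirror:

```python
import pandas as pd

# idxmax returns the index *label* (not the integer position) of the
# maximum along each column.
df = pd.DataFrame({'a': [1, 3, 2], 'b': [9, 7, 8]}, index=['x', 'y', 'z'])
result = df.idxmax()
print(result['a'], result['b'])  # y x
```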

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/60/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
261805282 MDU6SXNzdWUyNjE4MDUyODI= 1600 groupby doesn't work when a dimension is resized as part of apply shoyer 1217238 closed 0     1 2017-09-30T01:01:06Z 2020-03-25T15:30:18Z 2020-03-25T15:30:17Z MEMBER      

``` In [60]: da = xarray.DataArray([1, 2, 3], dims='x', coords={'y': ('x', [1, 1, 1])})

In [61]: da.groupby('y').apply(lambda x: x[:2])

IndexError Traceback (most recent call last) <ipython-input-61-4c28a4712c34> in <module>() ----> 1 da.groupby('y').apply(lambda x: x[:2])

~/dev/xarray/xarray/core/groupby.py in apply(self, func, shortcut, **kwargs) 516 applied = (maybe_wrap_array(arr, func(arr, **kwargs)) 517 for arr in grouped) --> 518 return self._combine(applied, shortcut=shortcut) 519 520 def _combine(self, applied, shortcut=False):

~/dev/xarray/xarray/core/groupby.py in _combine(self, applied, shortcut) 526 else: 527 combined = concat(applied, dim) --> 528 combined = _maybe_reorder(combined, dim, positions) 529 530 if isinstance(combined, type(self._obj)):

~/dev/xarray/xarray/core/groupby.py in _maybe_reorder(xarray_obj, dim, positions) 436 return xarray_obj 437 else: --> 438 return xarray_obj[{dim: order}] 439 440

~/dev/xarray/xarray/core/dataarray.py in __getitem__(self, key) 476 else: 477 # orthogonal array indexing --> 478 return self.isel(**self._item_key_to_dict(key)) 479 480 def __setitem__(self, key, value):

~/dev/xarray/xarray/core/dataarray.py in isel(self, drop, **indexers) 710 DataArray.sel 711 """ --> 712 ds = self._to_temp_dataset().isel(drop=drop, **indexers) 713 return self._from_temp_dataset(ds) 714

~/dev/xarray/xarray/core/dataset.py in isel(self, drop, **indexers) 1172 for name, var in iteritems(self._variables): 1173 var_indexers = dict((k, v) for k, v in indexers if k in var.dims) --> 1174 new_var = var.isel(**var_indexers) 1175 if not (drop and name in var_indexers): 1176 variables[name] = new_var

~/dev/xarray/xarray/core/variable.py in isel(self, **indexers) 596 if dim in indexers: 597 key[i] = indexers[dim] --> 598 return self[tuple(key)] 599 600 def squeeze(self, dim=None):

~/dev/xarray/xarray/core/variable.py in __getitem__(self, key) 426 dims = tuple(dim for k, dim in zip(key, self.dims) 427 if not isinstance(k, integer_types)) --> 428 values = self._indexable_data[key] 429 # orthogonal indexing should ensure the dimensionality is consistent 430 if hasattr(values, 'ndim'):

~/dev/xarray/xarray/core/indexing.py in __getitem__(self, key) 476 def __getitem__(self, key): 477 key = self._convert_key(key) --> 478 return self._ensure_ndarray(self.array[key]) 479 480 def __setitem__(self, key, value):

IndexError: index 2 is out of bounds for axis 1 with size 2 ```

This would be useful, for example, for grouped sampling: https://stackoverflow.com/questions/46498247/how-to-downsample-xarray-dataset-using-groupby

To fix this, we will need to update our heuristics that decide if a groupby operation is a "transform" type operation that should have the output reordered to the original order: https://github.com/pydata/xarray/blob/24643ecee2eab04d0f84c41715d753e829f448e6/xarray/core/groupby.py#L293-L299

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1600/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
390774883 MDU6SXNzdWUzOTA3NzQ4ODM= 2605 Pad method shoyer 1217238 closed 0     9 2018-12-13T17:08:25Z 2020-03-19T14:41:49Z 2020-03-19T14:41:49Z MEMBER      

It would be nice to add a generic `.pad()` method for xarray objects, based on `numpy.pad` and `dask.array.pad`.

In particular, `pad` with `mode='wrap'` could solve several use-cases related to periodic boundary conditions: https://github.com/pydata/xarray/issues/1005, https://github.com/pydata/xarray/issues/2007. For example, `ds.pad(longitude=(0, 1), mode='wrap')` would add an extra point with periodic boundary conditions along the longitude dimension.

It probably makes sense to linearly extrapolate coordinates along padded dimensions, as long as they are regularly spaced. This might use heuristics and/or a keyword argument.

I don't have plans to work on this in the near term. It could be a good project of moderate complexity for a new contributor.
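A rough sketch of the periodic case in terms of `numpy.pad` (the coordinate handling shown here is an assumption about how linear extrapolation might work, not settled API):

```python
import numpy as np

# Pad the data with mode='wrap', so the appended point repeats the
# first value -- periodic boundary conditions along that dimension.
lon = np.array([0, 90, 180, 270])
data = np.array([1.0, 2.0, 3.0, 4.0])
padded = np.pad(data, (0, 1), mode='wrap')
print(padded)  # [1. 2. 3. 4. 1.]

# A regularly spaced coordinate would instead be linearly extrapolated,
# so longitude keeps increasing past the original range.
step = lon[1] - lon[0]
lon_padded = np.append(lon, lon[-1] + step)
print(lon_padded)  # [  0  90 180 270 360]
```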

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2605/reactions",
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
484622545 MDU6SXNzdWU0ODQ2MjI1NDU= 3252 interp and reindex should work for 1d -> nd indexing shoyer 1217238 closed 0     12 2019-08-23T16:52:44Z 2020-03-13T13:58:38Z 2020-03-13T13:58:38Z MEMBER      

This works with isel and sel (see below). There's no particular reason why it can't work with reindex and interp, too, though we would need to implement our own vectorized version of linear interpolation (should not be too bad, it's mostly a matter of indexing twice and calculating weights from the difference in coordinate values).
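The "index twice and compute weights" idea can be sketched for the 1d case as follows (assuming a sorted source coordinate; the nd generalization would broadcast `lo`, `hi` and `w` instead):

```python
import numpy as np

# Linear interpolation as two gathers plus weights: for each target,
# find the bracketing indices lo/hi in the source coordinate, then
# blend the two gathered values by the fractional distance.
x = np.array([0.0, 1.0, 2.0, 3.0])          # sorted source coordinate
values = np.array([10.0, 20.0, 30.0, 40.0])
targets = np.array([0.5, 2.25])             # could be any shape

lo = np.clip(np.searchsorted(x, targets, side='right') - 1, 0, x.size - 2)
hi = lo + 1
w = (targets - x[lo]) / (x[hi] - x[lo])     # weights from coordinate spacing
result = (1 - w) * values[lo] + w * values[hi]
print(result)  # interpolated values: 15.0 and 32.5
```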

Apparently this is quite important for vertical regridding in weather/climate science (cc @rabernat, @nbren12 ) ``` In [35]: import xarray as xr

In [36]: import numpy as np

In [37]: data = xr.DataArray(np.arange(12).reshape((3, 4)), [('x', np.arange(3)), ('y', np.arange(4))])

In [38]: ind = xr.DataArray([[0, 2], [1, 0], [1, 2]], dims=['x', 'z'], coords={'x': [0, 1, 2]})

In [39]: data Out[39]: <xarray.DataArray (x: 3, y: 4)> array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) Coordinates: * x (x) int64 0 1 2 * y (y) int64 0 1 2 3

In [40]: ind Out[40]: <xarray.DataArray (x: 3, z: 2)> array([[0, 2], [1, 0], [1, 2]]) Coordinates: * x (x) int64 0 1 2 Dimensions without coordinates: z

In [41]: data.isel(y=ind) Out[41]: <xarray.DataArray (x: 3, z: 2)> array([[ 0, 2], [ 5, 4], [ 9, 10]]) Coordinates: * x (x) int64 0 1 2 y (x, z) int64 0 2 1 0 1 2 Dimensions without coordinates: z

In [42]: data.sel(y=ind) Out[42]: <xarray.DataArray (x: 3, z: 2)> array([[ 0, 2], [ 5, 4], [ 9, 10]]) Coordinates: * x (x) int64 0 1 2 y (x, z) int64 0 2 1 0 1 2 Dimensions without coordinates: z

In [43]: data.interp(y=ind)

ValueError Traceback (most recent call last) <ipython-input-43-e6eb7e39fd31> in <module> ----> 1 data.interp(y=ind)

~/dev/xarray/xarray/core/dataarray.py in interp(self, coords, method, assume_sorted, kwargs, **coords_kwargs) 1303 kwargs=kwargs, 1304 assume_sorted=assume_sorted, --> 1305 **coords_kwargs 1306 ) 1307 return self._from_temp_dataset(ds)

~/dev/xarray/xarray/core/dataset.py in interp(self, coords, method, assume_sorted, kwargs, **coords_kwargs) 2455 } 2456 variables[name] = missing.interp( --> 2457 var, var_indexers, method, **kwargs 2458 ) 2459 elif all(d not in indexers for d in var.dims):

~/dev/xarray/xarray/core/missing.py in interp(var, indexes_coords, method, **kwargs) 533 else: 534 out_dims.add(d) --> 535 return result.transpose(*tuple(out_dims)) 536 537

~/dev/xarray/xarray/core/variable.py in transpose(self, *dims) 1219 return self.copy(deep=False) 1220 -> 1221 data = as_indexable(self._data).transpose(axes) 1222 return type(self)(dims, data, self._attrs, self._encoding, fastpath=True) 1223

~/dev/xarray/xarray/core/indexing.py in transpose(self, order) 1218 1219 def transpose(self, order): --> 1220 return self.array.transpose(order) 1221 1222 def __getitem__(self, key):

ValueError: axes don't match array

In [44]: data.reindex(y=ind) /Users/shoyer/dev/xarray/xarray/core/dataarray.py:1240: FutureWarning: Indexer has dimensions ('x', 'z') that are different from that to be indexed along y. This will behave differently in the future. fill_value=fill_value,


ValueError Traceback (most recent call last) <ipython-input-44-1277ead996ae> in <module> ----> 1 data.reindex(y=ind)

~/dev/xarray/xarray/core/dataarray.py in reindex(self, indexers, method, tolerance, copy, fill_value, **indexers_kwargs) 1238 tolerance=tolerance, 1239 copy=copy, -> 1240 fill_value=fill_value, 1241 ) 1242 return self._from_temp_dataset(ds)

~/dev/xarray/xarray/core/dataset.py in reindex(self, indexers, method, tolerance, copy, fill_value, **indexers_kwargs) 2360 tolerance, 2361 copy=copy, -> 2362 fill_value=fill_value, 2363 ) 2364 coord_names = set(self._coord_names)

~/dev/xarray/xarray/core/alignment.py in reindex_variables(variables, sizes, indexes, indexers, method, tolerance, copy, fill_value) 398 ) 399 --> 400 target = new_indexes[dim] = utils.safe_cast_to_index(indexers[dim]) 401 402 if dim in indexes:

~/dev/xarray/xarray/core/utils.py in safe_cast_to_index(array) 104 index = array 105 elif hasattr(array, "to_index"): --> 106 index = array.to_index() 107 else: 108 kwargs = {}

~/dev/xarray/xarray/core/dataarray.py in to_index(self) 545 arrays. 546 """ --> 547 return self.variable.to_index() 548 549 @property

~/dev/xarray/xarray/core/variable.py in to_index(self) 445 def to_index(self): 446 """Convert this variable to a pandas.Index""" --> 447 return self.to_index_variable().to_index() 448 449 def to_dict(self, data=True):

~/dev/xarray/xarray/core/variable.py in to_index_variable(self) 438 """Return this variable as an xarray.IndexVariable""" 439 return IndexVariable( --> 440 self.dims, self._data, self._attrs, encoding=self._encoding, fastpath=True 441 ) 442

~/dev/xarray/xarray/core/variable.py in __init__(self, dims, data, attrs, encoding, fastpath) 1941 super().__init__(dims, data, attrs, encoding, fastpath) 1942 if self.ndim != 1: --> 1943 raise ValueError("%s objects must be 1-dimensional" % type(self).__name__) 1944 1945 # Unlike in Variable, always eagerly load values into memory

ValueError: IndexVariable objects must be 1-dimensional ```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3252/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
309136602 MDU6SXNzdWUzMDkxMzY2MDI= 2019 Appending to an existing netCDF file fails with scipy==1.0.1 shoyer 1217238 closed 0     5 2018-03-27T21:15:05Z 2020-03-09T07:18:07Z 2020-03-09T07:18:07Z MEMBER      

https://travis-ci.org/pydata/xarray/builds/359093748

Example failure: ``` ___ ScipyFilePathTest.test_append_write ____ self = <xarray.tests.test_backends.ScipyFilePathTest testMethod=test_append_write> def test_append_write(self): # regression for GH1215 data = create_test_data()

  with self.roundtrip_append(data) as actual:

xarray/tests/test_backends.py:786:


../../../miniconda/envs/test_env/lib/python3.6/contextlib.py:81: in __enter__ return next(self.gen) xarray/tests/test_backends.py:155: in roundtrip_append self.save(data[[key]], path, mode=mode, **save_kwargs) xarray/tests/test_backends.py:162: in save **kwargs) xarray/core/dataset.py:1131: in to_netcdf unlimited_dims=unlimited_dims) xarray/backends/api.py:657: in to_netcdf unlimited_dims=unlimited_dims) xarray/core/dataset.py:1068: in dump_to_store unlimited_dims=unlimited_dims) xarray/backends/common.py:363: in store unlimited_dims=unlimited_dims) xarray/backends/common.py:402: in set_variables self.writer.add(source, target) xarray/backends/common.py:265: in add target[...] = source xarray/backends/scipy_.py:61: in __setitem__ data[key] = value


self = <scipy.io.netcdf.netcdf_variable object at 0x7fe3eb3ec6a0> index = Ellipsis, data = array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. ]) def __setitem__(self, index, data): if self.maskandscale: missing_value = ( self._get_missing_value() or getattr(data, 'fill_value', 999999)) self._attributes.setdefault('missing_value', missing_value) self._attributes.setdefault('_FillValue', missing_value) data = ((data - self._attributes.get('add_offset', 0.0)) / self._attributes.get('scale_factor', 1.0)) data = np.ma.asarray(data).filled(missing_value) if self._typecode not in 'fd' and data.dtype.kind == 'f': data = np.round(data)

    # Expand data for record vars?
    if self.isrec:
        if isinstance(index, tuple):
            rec_index = index[0]
        else:
            rec_index = index
        if isinstance(rec_index, slice):
            recs = (rec_index.start or 0) + len(data)
        else:
            recs = rec_index + 1
        if recs > len(self.data):
            shape = (recs,) + self._shape[1:]
            # Resize in-place does not always work since
            # the array might not be single-segment
            try:
                self.data.resize(shape)
            except ValueError:
                self.__dict__['data'] = np.resize(self.data, shape).astype(self.data.dtype)
  self.data[index] = data

E ValueError: assignment destination is read-only ```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2019/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
304630814 MDU6SXNzdWUzMDQ2MzA4MTQ= 1986 Doc build in Travis-CI should fail when IPython encounters unexpected error shoyer 1217238 closed 0     2 2018-03-13T05:15:03Z 2020-01-13T20:33:46Z 2020-01-13T17:43:36Z MEMBER      

We don't want to release docs in a broken state.

Ideally, we would simply fail the build when Sphinx encounters a warning (e.g., by adding the -W flag). I attempted to do this in https://github.com/pydata/xarray/pull/1984. However, there are two issues with this:

1. We currently issue a very long list of warnings as part of a sphinx-build (see below), most of these due to failed references or a formatting issue with docstrings for numpydoc. Fixing this will be non-trivial.
2. IPython's sphinx directive currently does not even issue warnings/errors, due to a bug in recent versions of IPython. This has been fixed on master, but not in a released version yet. We should be able to fix this when pandas removes their versioned version of the IPython directive (https://github.com/pandas-dev/pandas/pull/19657).

Expand for warnings from sphinx:

/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.all:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.any:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.argmax:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.argmin:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.assign_attrs:4: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.assign_attrs:4: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.count:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.cumprod:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.cumsum:10: WARNING: Block quote ends without a blank line; unexpected unindent. 
/Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.groupby_bins:66: WARNING: duplicate citation R3, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.apply_ufunc.rst /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.interpolate_na:15: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.max:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.mean:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.median:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.min:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.pipe:2: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.pipe:2: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.prod:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.quantile:44: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.resample:54: WARNING: duplicate citation R4, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.arcsinh.rst /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.std:10: WARNING: Block quote ends without a blank line; unexpected unindent. 
/Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.sum:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.to_netcdf:22: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.to_netcdf:58: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.to_netcdf:55: WARNING: Inline literal start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.var:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.assign_attrs:4: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.assign_attrs:4: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.cumprod:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.cumsum:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.groupby_bins:66: WARNING: duplicate citation R7, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.arctanh.rst /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.interpolate_na:15: WARNING: Block quote ends without a blank line; unexpected unindent. 
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.merge:28: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.pipe:2: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.pipe:2: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.resample:54: WARNING: duplicate citation R8, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.exp.rst /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.to_netcdf:59: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.to_netcdf:56: WARNING: Inline literal start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/alignment.py:docstring of xarray.align:25: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/alignment.py:docstring of xarray.align:45: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/computation.py:docstring of xarray.apply_ufunc:147: WARNING: duplicate citation R9, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.exp.rst /Users/shoyer/dev/xarray/xarray/core/combine.py:docstring of xarray.auto_combine:36: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/combine.py:docstring of xarray.concat:35: WARNING: Definition list ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:11: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:13: WARNING: Block quote ends without a blank line; unexpected unindent. 
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:26: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:28: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:30: WARNING: Enumerated list ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.apply:11: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.apply:13: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/merge.py:docstring of xarray.merge:15: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_mfdataset:11: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_mfdataset:38: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.angle:13: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arccos:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arccosh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arcsin:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arcsinh:6: WARNING: Inline emphasis start-string without end-string. 
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctan:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctan2:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctanh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.ceil:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.conj:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.copysign:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.cos:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.cosh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.deg2rad:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.degrees:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.exp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.expm1:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fabs:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fix:16: WARNING: Block quote ends without a blank line; unexpected unindent. 
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.floor:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmod:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.frexp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.hypot:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isfinite:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isinf:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.ldexp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log10:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log1p:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log2:6: WARNING: Inline emphasis start-string without end-string. 
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.&lt;name&gt;:6: WARNING: Inline emphasis start-string without end-string
  (&lt;name&gt; in: logaddexp, logaddexp2, logical_and, logical_not, logical_or,
  logical_xor, maximum, minimum, nextafter, rad2deg, radians, rint, sign,
  signbit, sin, sinh, sqrt, square, tan, tanh, trunc)

/Users/shoyer/dev/xarray/doc/README.rst: WARNING: document isn't included in any toctree
/Users/shoyer/dev/xarray/doc/api-hidden.rst: WARNING: document isn't included in any toctree

done
checking consistency... done
preparing documents... done

/Users/shoyer/dev/xarray/doc/api.rst:165: WARNING: py:attr reference target not found: Dataset.astype

/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: py:obj reference target not found
  (for each of: xarray.DataArray.isel_points, sel_points, dt)
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.chunk:29: WARNING: py:func reference target not found: dask.array.from_array
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.compute:26: WARNING: py:obj reference target not found: dask.array.compute
/Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.DataArray.conj:16: WARNING: py:obj reference target not found: numpy.conjugate
/Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.DataArray.conjugate:16: WARNING: py:obj reference target not found: numpy.conjugate
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.identical:16: WARNING: py:obj reference target not found: DataArray.equal
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.interpolate_na:53: WARNING: py:obj reference target not found: scipy.interpolate
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.load:25: WARNING: py:obj reference target not found: dask.array.compute
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.persist:23: WARNING: py:obj reference target not found: dask.persist

/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found
  (for each of: xarray.Dataset.astype, dump_to_store, get, isel_points, keys,
  load_store, sel_points)
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.chunk:29: WARNING: py:func reference target not found: dask.array.from_array
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.compute:26: WARNING: py:obj reference target not found: dask.array.compute
/Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.Dataset.conj:16: WARNING: py:obj reference target not found: numpy.conjugate
/Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.Dataset.conjugate:16: WARNING: py:obj reference target not found: numpy.conjugate
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.info:20: WARNING: py:obj reference target not found: netCDF
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.interpolate_na:53: WARNING: py:obj reference target not found: scipy.interpolate
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.load:25: WARNING: py:obj reference target not found: dask.array.compute
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.persist:25: WARNING: py:obj reference target not found: dask.persist

/Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found
  (for each of: xarray.IndexVariable.all, any, argmax, argmin, argsort, astype,
  broadcast_equals, chunk, clip, compute, concat, conj, conjugate, copy, count,
  cumprod, cumsum, equals, expand_dims, fillna, get_axis_num,
  get_level_variable, identical, isel, isnull, item, load, max, mean, median,
  min, no_conflicts, notnull, pad_with_fill_value, prod, quantile, rank, reduce,
  roll, rolling_window, round, searchsorted, set_dims, shift, squeeze, stack,
  std, sum, to_base_variable, to_coord, to_index, to_index_variable,
  to_variable, transpose, unstack, var, where, T, attrs, chunks, data, dims,
  dtype, encoding, imag, level_names, name, nbytes, ndim, real, shape, size,
  sizes, values)

/Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found
  (for each of: xarray.Variable.all, any, argmax, argmin, argsort, astype,
  broadcast_equals, chunk, clip, compute, concat, conj, conjugate, copy, count,
  cumprod, cumsum, equals, expand_dims, fillna, get_axis_num, identical, isel,
  isnull, item, load, max, mean, median, min, no_conflicts, notnull,
  pad_with_fill_value, prod, quantile, rank, reduce, roll, rolling_window,
  round, searchsorted, set_dims, shift, squeeze, stack, std, sum,
  to_base_variable, to_coord, to_index, to_index_variable, to_variable,
  transpose, unstack, var, where, T, attrs, chunks, data, dims, dtype, encoding,
  imag, nbytes, ndim, real, shape, size, sizes, values)

/Users/shoyer/dev/xarray/xarray/core/computation.py:docstring of xarray.apply_ufunc:59: WARNING: py:func reference target not found: numpy.vectorize
/Users/shoyer/dev/xarray/xarray/core/computation.py:docstring of xarray.apply_ufunc:197: WARNING: py:func reference target not found: numpy.vectorize

/Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found
  (for each of: xarray.backends.H5NetCDFStore.assert_open, close, encode,
  encode_attribute, encode_variable, ensure_open, get, get_attrs,
  get_dimensions, get_encoding, get_variables, items, keys, load,
  open_store_variable, prepare_variable, set_attribute, set_attributes,
  set_dimension, set_dimensions, set_variable, set_variables, store,
  store_dataset, sync, values, attrs, dimensions, ds, variables)

/Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found
  (for each of: xarray.backends.NetCDF4DataStore.assert_open, close, encode,
  encode_attribute, encode_variable, ensure_open, get, get_attrs,
  get_dimensions, get_encoding, get_variables, items, keys, load, open,
  open_store_variable, prepare_variable, set_attribute, set_attributes,
  set_dimension, set_dimensions, set_variable, set_variables, store,
  store_dataset, sync, values, attrs, dimensions)
/Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.ds
/Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.variables
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.close
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_attrs
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_dimensions
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_encoding
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_variables
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.items
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.keys
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.load
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.open
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.open_store_variable
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.values
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.attrs
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.dimensions
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.variables
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.assert_open
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.close
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.encode
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.encode_attribute
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.encode_variable
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.ensure_open
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_attrs
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_dimensions
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_encoding
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_variables
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.items
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.keys
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.load
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.open_store_variable
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.prepare_variable
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_attribute
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_attributes
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_dimension
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_dimensions
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_variable
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_variables
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.store
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.store_dataset
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.sync
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.values
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.attrs
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.dimensions
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.ds
/Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.variables
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.all
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.any
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.argmax
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.argmin
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.count
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.max
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.mean
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.median
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.min
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.prod
generating indices...
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.std
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.sum
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.var
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.groups
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.assign_coords:15: WARNING: py:obj reference target not found: Dataset.assign_coords
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.fillna:29: WARNING: py:obj reference target not found: Dataset.fillna
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.fillna:29: WARNING: py:obj reference target not found: DataArray.fillna
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.where:30: WARNING: py:obj reference target not found: Dataset.where
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.all
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.any
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.argmax
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.argmin
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.count
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.max
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.mean
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.median
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.min
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.prod
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.std
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.sum
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.var
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.groups
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.assign:15: WARNING: py:obj reference target not found: Dataset.assign
genindex
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.assign_coords:15: WARNING: py:obj reference target not found: Dataset.assign_coords
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.fillna:29: WARNING: py:obj reference target not found: Dataset.fillna
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.fillna:29: WARNING: py:obj reference target not found: DataArray.fillna
/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.where:30: WARNING: py:obj reference target not found: Dataset.where
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.rolling
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.groupby
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.rolling
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.groupby
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.argmax
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.argmin
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.count
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.max
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.mean
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.median
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.min
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.prod
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.std
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.sum
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.var
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.rolling
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.rolling
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.groupby
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.groupby
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.argmax
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.argmin
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.count
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.max
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.mean
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.median
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.min
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.prod
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.std
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.sum
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.var
/Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_dataarray:73: WARNING: py:func reference target not found: dask.array.from_array
/Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_dataset:72: WARNING: py:func reference target not found: dask.array.from_array
/Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_mfdataset:68: WARNING: py:func reference target not found: dask.array.from_array
/Users/shoyer/dev/xarray/xarray/backends/rasterio_.py:docstring of xarray.open_rasterio:48: WARNING: py:func reference target not found: dask.array.from_array
/Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.add_colorbar
/Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.set_axis_labels
/Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.set_xlabels
/Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.set_ylabels
/Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_equal:29: WARNING: py:obj reference target not found: Dataset.equals
/Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_equal:29: WARNING: py:obj reference target not found: DataArray.equals
/Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_identical:26: WARNING: py:obj reference target not found: Dataset.equals
/Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_identical:26: WARNING: py:obj reference target not found: DataArray.equals
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arccos:53: WARNING: py:obj reference target not found: emath.arccos
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arcsin:49: WARNING: py:obj reference target not found: emath.arcsin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctanh:47: WARNING: py:obj reference target not found: emath.arctanh
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.deg2rad:50: WARNING: py:obj reference target not found: unwrap
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.exp:50: WARNING: py:obj reference target not found: exp2
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fabs:52: WARNING: py:obj reference target not found: absolute
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fix:33: WARNING: py:obj reference target not found: around
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:63: WARNING: py:obj reference target not found: amax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:66: WARNING: py:obj reference target not found: nanmax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:68: WARNING: py:obj reference target not found: amin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:68: WARNING: py:obj reference target not found: nanmin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:63: WARNING: py:obj reference target not found: amin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:66: WARNING: py:obj reference target not found: nanmin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:68: WARNING: py:obj reference target not found: amax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:68: WARNING: py:obj reference target not found: nanmax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmod:57: WARNING: py:obj reference target not found: remainder
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmod:59: WARNING: py:obj reference target not found: divide
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.iscomplex:31: WARNING: py:obj reference target not found: iscomplexobj
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isfinite:57: WARNING: py:obj reference target not found: isneginf
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isfinite:57: WARNING: py:obj reference target not found: isposinf
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isinf:61: WARNING: py:obj reference target not found: isneginf
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isinf:61: WARNING: py:obj reference target not found: isposinf
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:53: WARNING: py:obj reference target not found: isneginf
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:53: WARNING: py:obj reference target not found: isposinf
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:53: WARNING: py:obj reference target not found: isnat
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isreal:31: WARNING: py:obj reference target not found: isrealobj
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log:51: WARNING: py:obj reference target not found: emath.log
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log10:48: WARNING: py:obj reference target not found: emath.log10
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log2:47: WARNING: py:obj reference target not found: emath.log2
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_and:48: WARNING: py:obj reference target not found: bitwise_and
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_or:49: WARNING: py:obj reference target not found: bitwise_or
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_xor:50: WARNING: py:obj reference target not found: bitwise_xor
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:63: WARNING: py:obj reference target not found: amax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:66: WARNING: py:obj reference target not found: nanmax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:68: WARNING: py:obj reference target not found: amin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:68: WARNING: py:obj reference target not found: nanmin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:63: WARNING: py:obj reference target not found: amin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:66: WARNING: py:obj reference target not found: nanmin
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:68: WARNING: py:obj reference target not found: amax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:68: WARNING: py:obj reference target not found: nanmax
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.rad2deg:50: WARNING: py:obj reference target not found: unwrap
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.sqrt:52: WARNING: py:obj reference target not found: lib.scimath.sqrt
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.square:48: WARNING: py:obj reference target not found: power
/Users/shoyer/dev/xarray/doc/whats-new.rst:51: WARNING: py:func reference target not found: np.einsum
/Users/shoyer/dev/xarray/doc/whats-new.rst:80: WARNING: py:func reference target not found: xarray.DataArrayRolling
/Users/shoyer/dev/xarray/doc/whats-new.rst:80: WARNING: py:func reference target not found: xarray.DataArrayRolling.construct
/Users/shoyer/dev/xarray/doc/whats-new.rst:153: WARNING: py:func reference target not found: plot
/Users/shoyer/dev/xarray/doc/whats-new.rst:236: WARNING: py:meth reference target not found: DataArray.__dask_scheduler__
/Users/shoyer/dev/xarray/doc/whats-new.rst:238: WARNING: py:meth reference target not found: DataArray.plot.imshow
/Users/shoyer/dev/xarray/doc/whats-new.rst:415: WARNING: py:func reference target not found: xarray.show_versions
/Users/shoyer/dev/xarray/doc/whats-new.rst:428: WARNING: py:func reference target not found: xarray.conventions.decode_cf_datetime
/Users/shoyer/dev/xarray/doc/whats-new.rst:446: WARNING: py:func reference target not found: xarray.to_netcdf
/Users/shoyer/dev/xarray/doc/whats-new.rst:487: WARNING: py:meth reference target not found: xarray.backends.PydapDataStore.open
/Users/shoyer/dev/xarray/doc/whats-new.rst:856: WARNING: py:meth reference target not found: DataArray.rolling(...).count
/Users/shoyer/dev/xarray/doc/whats-new.rst:994: WARNING: py:meth reference target not found: xarray.Variable.to_base_variable
/Users/shoyer/dev/xarray/doc/whats-new.rst:994: WARNING: py:meth reference target not found: xarray.Variable.to_index_variable /Users/shoyer/dev/xarray/doc/whats-new.rst:1047: WARNING: py:meth reference target not found: Variable.compute /Users/shoyer/dev/xarray/doc/whats-new.rst:1073: WARNING: py:class reference target not found: FacetGrid /Users/shoyer/dev/xarray/doc/whats-new.rst:1089: WARNING: py:attr reference target not found: xray.Dataset.encoding /Users/shoyer/dev/xarray/doc/whats-new.rst:1089: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:1170: WARNING: py:meth reference target not found: xarray.Dataset.isel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1170: WARNING: py:meth reference target not found: xarray.Dataset.sel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1282: WARNING: py:meth reference target not found: resample /Users/shoyer/dev/xarray/doc/whats-new.rst:1287: WARNING: py:meth reference target not found: sel /Users/shoyer/dev/xarray/doc/whats-new.rst:1287: WARNING: py:meth reference target not found: loc /Users/shoyer/dev/xarray/doc/whats-new.rst:1307: WARNING: py:meth reference target not found: filter_by_attrs /Users/shoyer/dev/xarray/doc/whats-new.rst:1434: WARNING: py:class reference target not found: pd.Series /Users/shoyer/dev/xarray/doc/whats-new.rst:1453: WARNING: py:meth reference target not found: xarray.Dataset.from_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1504: WARNING: py:class reference target not found: xray.DataArray /Users/shoyer/dev/xarray/doc/whats-new.rst:1541: WARNING: py:meth reference target not found: xray.DataArray.to_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1611: WARNING: py:meth reference target not found: xray.Dataset.shift /Users/shoyer/dev/xarray/doc/whats-new.rst:1611: WARNING: py:meth reference target not found: xray.Dataset.roll /Users/shoyer/dev/xarray/doc/whats-new.rst:1626: WARNING: py:func reference target not 
found: xray.broadcast /Users/shoyer/dev/xarray/doc/whats-new.rst:1683: WARNING: py:meth reference target not found: xray.DataArray.plot /Users/shoyer/dev/xarray/doc/whats-new.rst:1692: WARNING: py:class reference target not found: xray.plot.FacetGrid /Users/shoyer/dev/xarray/doc/whats-new.rst:1692: WARNING: py:meth reference target not found: xray.plot.plot /Users/shoyer/dev/xarray/doc/whats-new.rst:1695: WARNING: py:meth reference target not found: xray.Dataset.sel /Users/shoyer/dev/xarray/doc/whats-new.rst:1695: WARNING: py:meth reference target not found: xray.Dataset.reindex /Users/shoyer/dev/xarray/doc/whats-new.rst:1712: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:1715: WARNING: py:attr reference target not found: xray.Dataset.real /Users/shoyer/dev/xarray/doc/whats-new.rst:1715: WARNING: py:attr reference target not found: xray.Dataset.imag /Users/shoyer/dev/xarray/doc/whats-new.rst:1717: WARNING: py:meth reference target not found: xray.Dataset.from_dataframe /Users/shoyer/dev/xarray/doc/whats-new.rst:1732: WARNING: py:meth reference target not found: xray.DataArray.name /Users/shoyer/dev/xarray/doc/whats-new.rst:1734: WARNING: py:meth reference target not found: xray.DataArray.where /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.isel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.sel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.where /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.diff /Users/shoyer/dev/xarray/doc/whats-new.rst:1768: WARNING: py:meth reference target not found: xray.DataArray.plot /Users/shoyer/dev/xarray/doc/whats-new.rst:1773: WARNING: undefined label: copies vs views (if the link has no caption the label must precede a 
section header) /Users/shoyer/dev/xarray/doc/whats-new.rst:1778: WARNING: py:meth reference target not found: xray.Dataset.isel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1778: WARNING: py:meth reference target not found: xray.Dataset.sel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1823: WARNING: py:meth reference target not found: xray.Dataset.where /Users/shoyer/dev/xarray/doc/whats-new.rst:1834: WARNING: py:meth reference target not found: xray.DataArray.diff /Users/shoyer/dev/xarray/doc/whats-new.rst:1834: WARNING: py:meth reference target not found: xray.Dataset.diff /Users/shoyer/dev/xarray/doc/whats-new.rst:1838: WARNING: py:meth reference target not found: xray.DataArray.to_masked_array /Users/shoyer/dev/xarray/doc/whats-new.rst:1847: WARNING: py:meth reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1876: WARNING: py:func reference target not found: xray.concat /Users/shoyer/dev/xarray/doc/whats-new.rst:1886: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1890: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1890: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1895: WARNING: py:func reference target not found: xray.save_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1914: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1914: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1931: WARNING: py:meth reference target not found: xray.Dataset.pipe /Users/shoyer/dev/xarray/doc/whats-new.rst:1933: WARNING: py:meth reference target not found: xray.Dataset.assign /Users/shoyer/dev/xarray/doc/whats-new.rst:1933: WARNING: py:meth reference target not found: xray.Dataset.assign_coords 
/Users/shoyer/dev/xarray/doc/whats-new.rst:1953: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1969: WARNING: py:func reference target not found: xray.concat /Users/shoyer/dev/xarray/doc/whats-new.rst:2005: WARNING: py:meth reference target not found: xray.Dataset.to_array /Users/shoyer/dev/xarray/doc/whats-new.rst:2005: WARNING: py:meth reference target not found: xray.DataArray.to_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2016: WARNING: py:meth reference target not found: xray.Dataset.fillna /Users/shoyer/dev/xarray/doc/whats-new.rst:2028: WARNING: py:meth reference target not found: xray.Dataset.assign /Users/shoyer/dev/xarray/doc/whats-new.rst:2028: WARNING: py:meth reference target not found: xray.Dataset.assign_coords /Users/shoyer/dev/xarray/doc/whats-new.rst:2040: WARNING: py:meth reference target not found: xray.Dataset.sel /Users/shoyer/dev/xarray/doc/whats-new.rst:2040: WARNING: py:meth reference target not found: xray.Dataset.reindex /Users/shoyer/dev/xarray/doc/whats-new.rst:2078: WARNING: py:class reference target not found: xray.set_options /Users/shoyer/dev/xarray/doc/whats-new.rst:2103: WARNING: py:meth reference target not found: xray.Dataset.load /Users/shoyer/dev/xarray/doc/whats-new.rst:2117: WARNING: py:meth reference target not found: xray.Dataset.resample /Users/shoyer/dev/xarray/doc/whats-new.rst:2155: WARNING: py:meth reference target not found: xray.Dataset.swap_dims /Users/shoyer/dev/xarray/doc/whats-new.rst:2165: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2165: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2198: WARNING: py:func reference target not found: xray.align /Users/shoyer/dev/xarray/doc/whats-new.rst:2198: WARNING: py:meth reference target not found: xray.Dataset.reindex_like /Users/shoyer/dev/xarray/doc/whats-new.rst:2251: WARNING: 
py:class reference target not found: xray.Dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2290: WARNING: py:meth reference target not found: xray.Dataset.reindex /Users/shoyer/dev/xarray/doc/whats-new.rst:2303: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2305: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2308: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2308: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2311: WARNING: py:meth reference target not found: xray.Dataset.drop /Users/shoyer/dev/xarray/doc/whats-new.rst:2311: WARNING: py:meth reference target not found: xray.DataArray.drop /Users/shoyer/dev/xarray/doc/whats-new.rst:2325: WARNING: py:meth reference target not found: xray.Dataset.broadcast_equals /Users/shoyer/dev/xarray/doc/whats-new.rst:2350: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2352: WARNING: py:meth reference target not found: xray.Dataset.drop /Users/shoyer/dev/xarray/doc/whats-new.rst:2482: WARNING: py:meth reference target not found: xray.Dataset.count /Users/shoyer/dev/xarray/doc/whats-new.rst:2482: WARNING: py:meth reference target not found: xray.Dataset.dropna /Users/shoyer/dev/xarray/doc/whats-new.rst:2485: WARNING: py:meth reference target not found: xray.DataArray.to_pandas /Users/shoyer/dev/xarray/doc/whats-new.rst:2518: WARNING: py:class reference target not found: xray.Dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2532: WARNING: py:meth reference target not found: xray.Dataset.equals /Users/shoyer/dev/xarray/doc/whats-new.rst:2542: WARNING: py:meth reference target not found: xray.DataArray.reset_coords /Users/shoyer/dev/xarray/doc/whats-new.rst:2551: WARNING: unknown document: tutorial 
/Users/shoyer/dev/xarray/doc/whats-new.rst:2554: WARNING: py:class reference target not found: xray.Dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2562: WARNING: py:meth reference target not found: xray.Dataset.load_data /Users/shoyer/dev/xarray/doc/whats-new.rst:2562: WARNING: py:meth reference target not found: xray.Dataset.close
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1986/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
291405750 MDU6SXNzdWUyOTE0MDU3NTA= 1855 swap_dims should support dimension names that are not existing variables shoyer 1217238 closed 0     3 2018-01-25T00:08:26Z 2020-01-08T18:27:29Z 2020-01-08T18:27:29Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
input_ds = xarray.Dataset({'foo': ('x', [1, 2])}, {'x': [0, 1]})
input_ds.swap_dims({'x': 'z'})
```

Problem description

Currently this results in the error KeyError: 'z'

Expected Output

We now support dimensions without associated coordinate variables. So swap_dims() should be able to create new dimensions (e.g., z in this example) even if there isn't already a coordinate variable.
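The requested semantics can be sketched in plain Python (a conceptual illustration only, not xarray's implementation): the target name is treated purely as a dimension label, so it does not need to match any existing variable.

```python
def swap_dims(dims, swaps):
    """Swap dimension names; the new name need not refer to an existing variable."""
    for old in swaps:
        if old not in dims:
            # only the *old* name must already be a dimension
            raise KeyError(old)
    return [swaps.get(d, d) for d in dims]

# Mirrors the example above: 'z' is not an existing variable, but the swap
# still succeeds, producing a dimension without an associated coordinate.
new_dims = swap_dims(['x'], {'x': 'z'})  # ['z']
```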

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1855/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
346823063 MDU6SXNzdWUzNDY4MjMwNjM= 2337 Test for warnings fail when using old version of pytest shoyer 1217238 closed 0     2 2018-08-02T01:09:37Z 2019-11-12T19:38:07Z 2019-11-12T19:37:48Z MEMBER      

Some of our tests for warnings currently fail when run using an old version of pytest. The problem appears to be that we rely on pytest.warns() accepting subclasses rather than exact matches.

This was fixed upstream in pytest (https://github.com/pytest-dev/pytest/pull/2166), but we still should specify the more specific warning types in xarray.

```
=================================== FAILURES ===================================
_____ TestEncodeCFVariable.test_missing_fillvalue _____

self = <xarray.tests.test_conventions.TestEncodeCFVariable testMethod=test_missing_fillvalue>

def test_missing_fillvalue(self):
    v = Variable(['x'], np.array([np.nan, 1, 2, 3]))
    v.encoding = {'dtype': 'int16'}
    with pytest.warns(Warning, match='floating point data as an integer'):
      conventions.encode_cf_variable(v)

E Failed: DID NOT WARN

tests/test_conventions.py:89: Failed

_____ TestAlias.test _____

self = <xarray.tests.test_utils.TestAlias testMethod=test>

def test(self):
    def new_method():
        pass
    old_method = utils.alias(new_method, 'old_method')
    assert 'deprecated' in old_method.__doc__
    with pytest.warns(Warning, match='deprecated'):
      old_method()

E Failed: DID NOT WARN

tests/test_utils.py:28: Failed

___ TestIndexVariable.test_coordinate_alias ______

self = <xarray.tests.test_variable.TestIndexVariable testMethod=test_coordinate_alias>

def test_coordinate_alias(self):
    with pytest.warns(Warning, match='deprecated'):
      x = Coordinate('x', [1, 2, 3])

E Failed: DID NOT WARN

tests/test_variable.py:1752: Failed

_____ TestAccessor.test_register _____

self = <xarray.tests.test_extensions.TestAccessor testMethod=test_register>

def test_register(self):

    @xr.register_dataset_accessor('demo')
    @xr.register_dataarray_accessor('demo')
    class DemoAccessor(object):
        """Demo accessor."""

        def __init__(self, xarray_obj):
            self._obj = xarray_obj

        @property
        def foo(self):
            return 'bar'

    ds = xr.Dataset()
    assert ds.demo.foo == 'bar'

    da = xr.DataArray(0)
    assert da.demo.foo == 'bar'

    # accessor is cached
    assert ds.demo is ds.demo

    # check descriptor
    assert ds.demo.__doc__ == "Demo accessor."
    assert xr.Dataset.demo.__doc__ == "Demo accessor."
    assert isinstance(ds.demo, DemoAccessor)
    assert xr.Dataset.demo is DemoAccessor

    # ensure we can remove it
    del xr.Dataset.demo
    assert not hasattr(xr.Dataset, 'demo')

    with pytest.warns(Warning, match='overriding a preexisting attribute'):
        @xr.register_dataarray_accessor('demo')
      class Foo(object):

E Failed: DID NOT WARN

tests/test_extensions.py:60: Failed

```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2337/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
511651492 MDU6SXNzdWU1MTE2NTE0OTI= 3440 Build failure with pandas master shoyer 1217238 closed 0     0 2019-10-24T01:27:07Z 2019-11-08T15:33:07Z 2019-11-08T15:33:07Z MEMBER      

See https://dev.azure.com/xarray/d5e7a686-a114-4b8c-a2d8-4b5b11efd896/_build/results?buildId=1218&view=logs&jobId=41d90575-019f-5cfd-d78e-c2adebf9a30b for a log.

Appears to be due to https://github.com/pandas-dev/pandas/pull/29062, which adds a .attrs attribute to pandas objects. We copy this attribute in the DataArray constructor.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3440/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
489270698 MDU6SXNzdWU0ODkyNzA2OTg= 3280 Deprecation cycles to finish for xarray 0.13 shoyer 1217238 closed 0     9 2019-09-04T16:37:26Z 2019-09-17T18:50:05Z 2019-09-17T18:50:05Z MEMBER      

Clean-ups we should definitely do:

- [x] remove deprecated options from xarray.concat (deprecated back in July 2015!): https://github.com/pydata/xarray/blob/79dc7dc461c7540cc0b84a98543c6f7796c05268/xarray/core/concat.py#L114-L144 (edit by @max-sixty )
- [x] argument order in DataArray.to_dataset (also from July 2015): https://github.com/pydata/xarray/blob/41fecd8658ba50ddda0a52e04c21cec5e53415ac/xarray/core/dataarray.py#L491 (edit by @max-sixty )
- [x] remove the warning for reindex with variables with different dimensions (from 2017). This could either be replaced by replacing dimensions like sel, or by simply raising an error for now and leaving replacing dimensions for later (https://github.com/pydata/xarray/pull/1639): https://github.com/pydata/xarray/blob/79dc7dc461c7540cc0b84a98543c6f7796c05268/xarray/core/alignment.py#L389-L398 (edit by @max-sixty )
- [x] remove xarray.broadcast_array, deprecated back in 2016 in https://github.com/pydata/xarray/commit/52ee95f8ae6b9631ac381b5b889de47e41f2440e (edit by @max-sixty )
- [x] remove Variable.expand_dims (deprecated back in August 2017), whose implementation actually looks like it's already broken: https://github.com/pydata/xarray/blob/41fecd8658ba50ddda0a52e04c21cec5e53415ac/xarray/core/variable.py#L1232-L1237 (edit by @max-sixty )
- [x] stop supporting a list of colors in the cmap argument (dating back to at least v0.7.0): https://github.com/pydata/xarray/blob/d089df385e737f71067309ff7abae15994d581ec/xarray/plot/utils.py#L737-L745 (edit by @max-sixty )
- [x] push the removal of the compat and encoding arguments from Dataset/DataArray back to 0.14. These were only deprecated 7 months ago in https://github.com/pydata/xarray/pull/2703. (edit by @max-sixty )

Clean-ups to consider:

- [x] switch the default reduction dimension of groupby and resample? (https://github.com/pydata/xarray/pull/2366) This has been giving a FutureWarning since v0.11.0, released back in November 2018. We could also potentially push this back to 0.14, but these warnings are a little annoying...
- [x] deprecate auto_combine (#2616) only since 29 June 2019, so that should be pushed.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3280/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
57254455 MDU6SXNzdWU1NzI1NDQ1NQ== 319 Add head(), tail() and thin() methods? shoyer 1217238 closed 0     10 2015-02-10T23:28:15Z 2019-09-05T04:22:24Z 2019-09-05T04:22:24Z MEMBER      

These would be shortcuts for isel/slice syntax:

- ds.head(time=5) -> ds.isel(time=slice(5)): select the first five time values
- ds.tail(time=5) -> ds.isel(time=slice(-5, None)): select the last five time values
- ds.thin(time=5) -> ds.isel(time=slice(None, None, 5)): select every 5th time value
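The three shortcuts map directly onto slice objects. A plain-list sketch (hypothetical helper names mirroring the proposal, not xarray code) shows the intended equivalences:

```python
def head(values, n):
    # ds.head(time=n) -> ds.isel(time=slice(n))
    return values[slice(n)]

def tail(values, n):
    # ds.tail(time=n) -> ds.isel(time=slice(-n, None))
    return values[slice(-n, None)]

def thin(values, n):
    # ds.thin(time=n) -> ds.isel(time=slice(None, None, n))
    return values[slice(None, None, n)]

times = list(range(10))
first = head(times, 5)   # first five values
last = tail(times, 5)    # last five values
every5 = thin(times, 5)  # every 5th value
```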

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/319/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
435339263 MDU6SXNzdWU0MzUzMzkyNjM= 2910 Keyword argument support for drop() shoyer 1217238 closed 0     1 2019-04-20T00:45:09Z 2019-08-18T17:42:45Z 2019-08-18T17:42:45Z MEMBER      

Currently, to drop labels along an existing dimension, you need to write something like: ds.drop(['a', 'b'], dim='x').

It would be nice if keyword arguments were supported, e.g., ds.drop(x=['a', 'b']). This would make drop() more symmetric with sel().
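Both call styles would normalize to the same mapping of dimension name to labels. A hypothetical sketch of that normalization (illustration only, not xarray's actual code):

```python
def drop_args(labels=None, dim=None, **dim_labels):
    """Normalize ds.drop(['a', 'b'], dim='x') and ds.drop(x=['a', 'b'])
    to a single {dimension: labels} mapping."""
    if labels is not None:
        if dim_labels:
            raise ValueError("cannot mix positional labels with keyword arguments")
        return {dim: list(labels)}
    return {d: list(v) for d, v in dim_labels.items()}

positional = drop_args(['a', 'b'], dim='x')  # {'x': ['a', 'b']}
keyword = drop_args(x=['a', 'b'])            # {'x': ['a', 'b']}
```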

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2910/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
464793626 MDU6SXNzdWU0NjQ3OTM2MjY= 3083 test_rasterio_vrt_network is failing in continuous integration tests shoyer 1217238 closed 0     3 2019-07-05T23:13:25Z 2019-07-31T00:28:46Z 2019-07-31T00:28:46Z MEMBER      

```
@network
def test_rasterio_vrt_network(self):
    import rasterio

    url = 'https://storage.googleapis.com/\
    gcp-public-data-landsat/LC08/01/047/027/\
    LC08_L1TP_047027_20130421_20170310_01_T1/\
    LC08_L1TP_047027_20130421_20170310_01_T1_B4.TIF'
    env = rasterio.Env(GDAL_DISABLE_READDIR_ON_OPEN='EMPTY_DIR',
                       CPL_VSIL_CURL_USE_HEAD=False,
                       CPL_VSIL_CURL_ALLOWED_EXTENSIONS='TIF')
    with env:
      with rasterio.open(url) as src:

xarray/tests/test_backends.py:3734:


/usr/share/miniconda/envs/test_env/lib/python3.6/site-packages/rasterio/env.py:430: in wrapper
    return f(*args, **kwds)
/usr/share/miniconda/envs/test_env/lib/python3.6/site-packages/rasterio/__init__.py:216: in open
    s = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)


???
E   rasterio.errors.RasterioIOError: HTTP response code: 400 - Failed writing header
```

https://dev.azure.com/xarray/xarray/_build/results?buildId=150&view=ms.vss-test-web.build-test-results-tab&runId=2358&resultId=101228&paneView=debug

I'm not sure what's going on here -- the tiff file is still available at the given URL.

@scottyhq any idea?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3083/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
246386102 MDU6SXNzdWUyNDYzODYxMDI= 1495 DOC: combining datasets with different coordinates shoyer 1217238 closed 0     2 2017-07-28T15:45:07Z 2019-07-12T19:20:44Z 2019-07-12T19:20:44Z MEMBER      

It would be nice to have a documentation recipe showing how to combine datasets with different latitude/longitude arrays, as often occurs due to numerical precision issues. It's a little more complicated than just using xarray.open_mfdataset(), and comes up with some regularity.
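One recipe along these lines, sketched in plain Python under the assumption that the coordinates differ only by floating-point noise: round the coordinate values to a common precision so both datasets share exact labels before concatenating or merging.

```python
def normalize_coords(coords, decimals=6):
    """Round nearly-equal coordinate values so two datasets align exactly."""
    return [round(c, decimals) for c in coords]

lat_a = [10.0000001, 20.0, 30.0]   # e.g. from one file
lat_b = [10.0, 20.0000002, 30.0]   # e.g. from another file
aligned_a = normalize_coords(lat_a)
aligned_b = normalize_coords(lat_b)  # now identical to aligned_a
```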

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1495/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
440233667 MDU6SXNzdWU0NDAyMzM2Njc= 2940 test_rolling_wrapped_dask is failing with dask-master shoyer 1217238 closed 0     5 2019-05-03T21:44:23Z 2019-06-28T16:49:04Z 2019-06-28T16:49:04Z MEMBER      

The test_rolling_wrapped_dask tests in test_dataarray.py are failing with dask master, e.g., as seen here: https://travis-ci.org/pydata/xarray/jobs/527936531

I reproduced this locally. git bisect identified the culprit as https://github.com/dask/dask/pull/4756.

The source of this issue on the xarray side appears to be these lines: https://github.com/pydata/xarray/blob/dd99b7d7d8576eefcef4507ae9eb36a144b60adf/xarray/core/rolling.py#L287-L291

In particular, we are currently passing padded as an xarray.DataArray object, not a dask array. Changing this to padded.data shows that passing an actual dask array to dask_array_ops.rolling_window results in failing tests.

@fujiisoup @jhamman any idea what's going on here?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2940/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
454168102 MDU6SXNzdWU0NTQxNjgxMDI= 3009 Xarray test suite failing with dask-master shoyer 1217238 closed 0     8 2019-06-10T13:21:50Z 2019-06-23T16:49:23Z 2019-06-23T16:49:23Z MEMBER      

There are a wide variety of failures, mostly related to backends and indexing, e.g., AttributeError: 'tuple' object has no attribute 'tuple'. By the looks of it, something is going wrong with xarray's internal ExplicitIndexer objects, which are getting converted into something else.

I'm pretty sure this is due to the recent merge of the Array._meta pull request: https://github.com/dask/dask/pull/4543

There are 81 test failures, but my guess is that there are probably only a handful (at most) of underlying causes.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3009/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
325436508 MDU6SXNzdWUzMjU0MzY1MDg= 2170 keepdims=True for xarray reductions shoyer 1217238 closed 0     3 2018-05-22T19:44:17Z 2019-06-23T09:18:33Z 2019-06-23T09:18:33Z MEMBER      

For operations where arrays are aggregated but then combined, the keepdims=True option for NumPy aggregations is convenient.

We should consider supporting this in xarray as well. Aggregating a DataArray/Dataset with keepdims=True (or maybe keep_dims=True) would remove all original coordinates along aggregated dimensions and return a result with a dimension of size 1 without any coordinates, e.g.,

```
>>> array = xr.DataArray([1, 2, 3], dims='x', coords={'x': ['a', 'b', 'c']})
>>> array.mean(keepdims=True)
<xarray.DataArray (x: 1)>
array([2.])
Dimensions without coordinates: x
```

In this case, array.mean(keepdims=True) is equivalent to array.mean().expand_dims('x'), but in general this equivalence does not hold, because the location of the original dimension is lost.

Implementation-wise, we have two options:

1. Pass on keepdims=True to NumPy functions like numpy.mean(), or
2. Implement keepdims=True ourselves, in Variable.reduce().

I think I like option 2 a little better, because it places fewer requirements on aggregation functions. For example, functions like bottleneck.nanmean() don't accept a keepdims argument.
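The core of option 2 is just shape arithmetic in Variable.reduce(): axes that were reduced are kept with length 1. A minimal sketch (hypothetical helper, not xarray's actual code):

```python
def keepdims_shape(shape, reduced_axes):
    """Result shape of a reduction with keepdims=True: reduced axes become length 1."""
    return tuple(1 if i in reduced_axes else n for i, n in enumerate(shape))

s1 = keepdims_shape((2, 3), {1})  # axis 1 is kept with length 1
s2 = keepdims_shape((3,), {0})    # matches the 1-D DataArray example above
```

Because the reduction function itself never sees a keepdims argument, this works with aggregations like bottleneck.nanmean() that don't accept one.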

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2170/reactions",
    "total_count": 10,
    "+1": 9,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
  completed xarray 13221727 issue
430203605 MDU6SXNzdWU0MzAyMDM2MDU= 2876 Custom fill value for align, reindex and reindex_like shoyer 1217238 closed 0     2 2019-04-07T23:08:17Z 2019-05-05T00:20:55Z 2019-05-05T00:20:55Z MEMBER      

It would be nice to be able to specify a custom fill value other than NaN for alignment/reindexing, e.g.,

```
>>> xr.DataArray([0, 0], [('x', [1, 2])]).reindex(x=[0, 1, 2, 3], fill_value=-1)
<xarray.DataArray (x: 4)>
array([-1,  0,  0, -1])
Coordinates:
  * x        (x) int64 0 1 2 3
```

This should be pretty straightforward, simply a matter of adding a fill_value keyword argument to the various interfaces and passing it on to Variable._getitem_with_mask inside xarray.core.alignment.reindex_variables().
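The intended behavior can be sketched with a plain mapping (illustration only; the real change would thread fill_value through to the masked indexing step described above):

```python
def reindex(values, index, new_index, fill_value=float('nan')):
    """Look up each new label; labels missing from the old index get fill_value."""
    mapping = dict(zip(index, values))
    return [mapping.get(label, fill_value) for label in new_index]

# Mirrors the example above: labels 0 and 3 are absent, so they get -1.
result = reindex([0, 0], [1, 2], [0, 1, 2, 3], fill_value=-1)
```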

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2876/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
435876863 MDU6SXNzdWU0MzU4NzY4NjM= 2914 Behavior of da.expand_dims(da.coords) changed in 0.12.1 shoyer 1217238 closed 0     1 2019-04-22T20:23:47Z 2019-04-22T20:26:32Z 2019-04-22T20:25:34Z MEMBER      
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2914/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
29453809 MDU6SXNzdWUyOTQ1MzgwOQ== 66 HDF5 backend for xray shoyer 1217238 closed 0     15 2014-03-14T17:17:47Z 2019-04-21T23:55:02Z 2017-10-22T01:01:54Z MEMBER      

The obvious libraries to wrap are pytables or h5py: http://www.pytables.org http://h5py.org/

Both provide at least some support for in-memory operations (though I'm not sure if they can pass around HDF5 file objects without dumping them to disk).

From a cursory look at the documentation for both projects, the h5py appears to offer a simpler API that would be easier to map to our existing data model.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/66/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
430189759 MDU6SXNzdWU0MzAxODk3NTk= 2874 xarray/tests/test_cftimeindex_resample.py::test_resampler is way too slow shoyer 1217238 closed 0     1 2019-04-07T20:38:55Z 2019-04-11T11:42:09Z 2019-04-11T11:42:09Z MEMBER      

Some profiling results from pytest: $ pytest -k cftime --durations=50 ... ============================================= slowest 50 test durations ============================================== 7.92s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-700T] 7.45s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-700T] 7.17s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-700T] 7.12s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-700T] 7.12s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-700T] 7.03s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-700T] 6.88s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-700T] 6.70s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-700T] 5.88s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-12H] 5.69s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-12H] 5.55s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-12H] 5.44s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-12H] 5.44s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-12H] 5.32s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-12H] 5.21s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-12H] 5.08s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-12H] 1.56s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-None-700T] 1.36s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-right-700T] 1.22s call 
xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-right-700T] 1.19s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-None-700T] 1.16s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-None-700T] 1.15s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-right-700T] 1.11s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-None-700T] 1.09s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-right-700T] 0.96s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-None-12H] 0.93s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-right-12H] 0.93s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-None-12H] 0.91s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-None-12H] 0.91s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-right-12H] 0.89s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-None-12H] 0.88s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-right-12H] 0.86s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-right-12H] 0.69s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-8001T] 0.69s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-8001T] 0.66s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-8001T] 0.65s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-8001T] 0.64s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-8D] 0.62s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-8D] 0.62s call 
xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-8001T] 0.60s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-8001T] 0.59s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-8D] 0.59s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-8001T] 0.57s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-8D] 0.57s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-8001T] 0.38s call xarray/tests/test_cftimeindex_resample.py::test_resampler[41987T-31-None-None-700T] 0.36s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-8D] 0.36s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-8D] 0.34s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-8D] 0.33s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-8D] 0.33s call xarray/tests/test_cftimeindex_resample.py::test_resampler[41987T-31-right-right-700T]

This is a heavily parametrized test, and many of these test cases take 5+ seconds to run! Are there ways we could simplify these tests to make them faster?

On my laptop, this test alone roughly doubles the runtime of our entire test suite, increasing it from about 2 minutes to 4 minutes.

@jwenfai @spencerkclark Any ideas?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2874/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
220278600 MDU6SXNzdWUyMjAyNzg2MDA= 1360 Document that aggregation functions like .mean() pass on **kwargs to dask shoyer 1217238 closed 0     2 2017-04-07T17:29:47Z 2019-04-07T19:58:58Z 2019-04-07T19:15:49Z MEMBER      

We should also add tests to verify that invocations like ds.mean(split_every=2) work.

xref https://github.com/dask/dask/issues/874#issuecomment-292597973
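For readers unfamiliar with `split_every`: it controls the fan-in of dask's tree reduction. A toy pure-Python model of the idea (`tree_mean` is a hypothetical name for illustration, not dask's API):

```python
def tree_mean(chunk_means, chunk_counts, split_every=2):
    """Toy model of a tree reduction: combine partial (mean, count)
    pairs at most ``split_every`` at a time until one pair remains."""
    pairs = list(zip(chunk_means, chunk_counts))
    while len(pairs) > 1:
        merged = []
        for i in range(0, len(pairs), split_every):
            group = pairs[i:i + split_every]
            total = sum(mean * count for mean, count in group)
            count = sum(count for _, count in group)
            merged.append((total / count, count))
        pairs = merged
    return pairs[0][0]

# four chunks of two elements each from range(8): chunk means 0.5, 2.5, 4.5, 6.5
print(tree_mean([0.5, 2.5, 4.5, 6.5], [2, 2, 2, 2]))  # 3.5
```

A larger `split_every` means fewer, wider combination steps; the final result is the same either way, which is why forwarding the keyword through `.mean()` is safe to document.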

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1360/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
278713328 MDU6SXNzdWUyNzg3MTMzMjg= 1756 Deprecate inplace methods shoyer 1217238 closed 0   0.11 2856429 6 2017-12-02T20:09:00Z 2019-03-25T19:19:10Z 2018-11-03T21:24:13Z MEMBER      

The following methods have an inplace argument:
- DataArray.reset_coords
- DataArray.set_index
- DataArray.reset_index
- DataArray.reorder_levels
- Dataset.set_coords
- Dataset.reset_coords
- Dataset.rename
- Dataset.swap_dims
- Dataset.set_index
- Dataset.reset_index
- Dataset.reorder_levels
- Dataset.update
- Dataset.merge

As proposed in https://github.com/pydata/xarray/issues/1755#issuecomment-348682403, let's deprecate all of these at the next major release (v0.11). They add unnecessary complexity to these methods and promote confusion about xarray's data model.

Practically, we would change all of the default values to inplace=None and issue either a DeprecationWarning or FutureWarning (see PEP 565 for more details on that choice).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1756/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
411738552 MDU6SXNzdWU0MTE3Mzg1NTI= 2776 0.12.0 release shoyer 1217238 closed 0     10 2019-02-19T04:21:35Z 2019-03-17T01:51:55Z 2019-03-16T04:18:21Z MEMBER      

We have quite a few nice new features merged into master.

Is anything holding up making a 0.12 release shortly?

cc @pydata/xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2776/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
59565277 MDU6SXNzdWU1OTU2NTI3Nw== 353 Some sort of API for regrouping transformed data? shoyer 1217238 closed 0     1 2015-03-02T22:52:29Z 2019-03-05T02:34:53Z 2019-03-05T02:34:53Z MEMBER      

What is the right way to calculate a climatology and repeat the values over the original axis?

The best I could come up with is:

```python
def expand_climatology(xray_obj):
    climatology = xray_obj.groupby('time.dayofyear').mean('time')
    repeat_proxy = (0 * xray_obj.isnull()).groupby('time.dayofyear')
    return repeat_proxy + climatology
```

Possibly the right solution involves something like assign (https://github.com/xray/xray/issues/314)
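For reference, in modern xarray the grouped binary operation broadcasts the climatology back over the original time axis directly; a minimal sketch (assumes xarray and pandas are installed):

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range('2000-01-01', periods=6, freq='D')
da = xr.DataArray(np.arange(6.0), coords={'time': times}, dims='time')

climatology = da.groupby('time.dayofyear').mean('time')
# grouped arithmetic repeats the climatology along the original 'time' axis
anomaly = da.groupby('time.dayofyear') - climatology
```

With only one year of data each day-of-year group has a single member, so the anomaly here is identically zero; with multiple years the same expression subtracts the multi-year mean from every matching day.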

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/353/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
60852933 MDU6SXNzdWU2MDg1MjkzMw== 371 Add a keyword argument to control how attrs are merged in concat/merge? shoyer 1217238 closed 0     1 2015-03-12T16:56:53Z 2019-03-04T18:34:53Z 2019-03-04T18:34:53Z MEMBER      

The idea would be you could do xray.concat(datasets, dim='time', join_attrs='outer') to ensure that variables end up with the union of attributes rather than the intersection. Or likewise, ds.merge(other, join_attrs='inner') to not ignore the attributes of the second dataset in the merge.
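A pure-Python sketch of what the two join modes could mean for attribute dicts (`join_attrs` here is a hypothetical helper, not an xarray API; later xarray versions added a `combine_attrs` argument to merge/concat along these lines):

```python
def join_attrs(all_attrs, how='inner'):
    """'inner' keeps only attributes present with equal values everywhere;
    'outer' keeps the first-seen value for every attribute name."""
    result = dict(all_attrs[0])
    for attrs in all_attrs[1:]:
        if how == 'outer':
            for key, value in attrs.items():
                result.setdefault(key, value)
        else:  # 'inner'
            result = {key: value for key, value in result.items()
                      if key in attrs and attrs[key] == value}
    return result

a = {'units': 'K', 'source': 'model'}
b = {'units': 'K', 'history': 'v2'}
print(join_attrs([a, b], how='inner'))  # {'units': 'K'}
print(sorted(join_attrs([a, b], how='outer')))  # ['history', 'source', 'units']
```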

CC @aykuznetsova

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/371/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
112085412 MDU6SXNzdWUxMTIwODU0MTI= 628 Remove the encoding attribute from xray.DataArray? shoyer 1217238 closed 0     4 2015-10-19T07:11:36Z 2019-03-01T18:00:08Z 2019-03-01T18:00:08Z MEMBER      

As described in the dev version of our documentation on encoding, we now support a keyword argument for controlling how netCDF files are written to disk with to_netcdf: http://xray.readthedocs.org/en/latest/io.html#reading-encoded-data

We still retain the feature that there is an "encoding" dictionary that sticks around with xray Variable objects, which stores how they were compressed/encoded on disk. This can be occasionally handy. It means, for example, that we always write out netCDF files with the same time units as the files we read from disk.

It might make sense to eliminate this feature for the sake of significantly simplifying xray's internal data model. For cases where it really matters, users can now use the encoding keyword argument to to_netcdf instead. This would leave three attributes on the Variable object: dims, _data and attrs.

Thoughts?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/628/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
166982621 MDU6SXNzdWUxNjY5ODI2MjE= 916 Consider adding Dataset.filter shoyer 1217238 closed 0     2 2016-07-22T07:01:53Z 2019-02-26T04:28:23Z 2019-02-26T04:28:23Z MEMBER      

I discovered pandas.DataFrame.filter today, which is a surprisingly convenient way to filter columns using its like and regex arguments. It might make sense to add this for xarray.Dataset, too.

Note that the first argument of DataFrame.filter, items, is redundant with indexing.
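For comparison, the pandas behavior being referenced:

```python
import pandas as pd

df = pd.DataFrame({'temp_min': [1.0], 'temp_max': [2.0], 'rain': [0.3]})

# keep only columns whose name contains 'temp'
subset = df.filter(like='temp')
print(list(subset.columns))  # ['temp_min', 'temp_max']

# regex-based selection also works
subset2 = df.filter(regex=r'^temp_')
```

A Dataset version would presumably filter data variables by name in the same way.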

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/916/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
170084618 MDU6SXNzdWUxNzAwODQ2MTg= 954 DataArray.copy() should create a mutable version of index coordinates shoyer 1217238 closed 0     1 2016-08-09T05:25:54Z 2019-02-26T02:28:23Z 2019-02-26T02:28:23Z MEMBER      

It currently does not, which makes it tricky to mutate coordinates:

```
In [38]: ds = xr.Dataset(coords={'x': range(3)})

In [40]: ds.x.copy()
Out[40]:
<xarray.DataArray 'x' (x: 3)>
array([0, 1, 2])
Coordinates:
  * x        (x) int64 0 1 2

In [41]: other = ds.x.copy()

In [42]: other[0] = 999
TypeError: Coordinate values cannot be modified
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/954/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
172498620 MDU6SXNzdWUxNzI0OTg2MjA= 981 Split xarray.concat into two functions: xarray.stack and xarray.concat? shoyer 1217238 closed 0     2 2016-08-22T16:38:47Z 2019-02-25T17:28:23Z 2019-02-25T17:28:23Z MEMBER      

Currently, xarray.concat can do two different things:
- It can concatenate along an existing dimension.
- It can stack along a new dimension.

This seemed convenient when I wrote concat, but now seems too automagical.

For example, we need rules to decide which coordinates to expand when stacking (the confusingly named coords and data_vars arguments), but not when concatenating. Currently we use the heuristics in both cases, which leads to undesirable behavior (#899).

Even if we don't split the public API function, we should split it internally.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/981/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
167970086 MDU6SXNzdWUxNjc5NzAwODY= 920 Plotting methods for categorical data shoyer 1217238 closed 0     1 2016-07-27T22:18:08Z 2019-02-25T09:28:23Z 2019-02-25T09:28:23Z MEMBER      

It would be nice if we had built-in support for plotting 2-dimensional categorical data. Using the levels keyword argument for this results in mismatched labels: http://stackoverflow.com/questions/38609648/how-to-move-color-scale-labels-to-middle-of-colored-fields-in-matplotlib-xarray

This plot from a Seaborn PR (https://github.com/mwaskom/seaborn/pull/629) provides an example of what legends might look like:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/920/reactions",
    "total_count": 9,
    "+1": 9,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
127068208 MDU6SXNzdWUxMjcwNjgyMDg= 719 Follow-ups on MultIndex support shoyer 1217238 closed 0     7 2016-01-17T01:42:59Z 2019-02-23T09:47:00Z 2019-02-23T09:47:00Z MEMBER      

xref #702
- [ ] Serialization to NetCDF
- [x] Better repr, showing level names/dtypes?
- [x] Indexing a scalar at a particular level should drop that level from the MultiIndex (#767)
- [x] Make levels accessible as coordinate variables (e.g., ds['time'] can pull out the 'time' level of a multi-index)
- [x] Support indexing with levels, e.g., ds.sel(time='2000-01').
- [x] ~~Make isel_points/sel_points return objects with a MultiIndex? (probably after the previous TODO, so we can preserve basic backwards compatibility)~~ (deferred until we figure out #974)
- [x] Add set_index/reset_index/swaplevel to make it easier to create and manipulate multi-indexes

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/719/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
215602915 MDU6SXNzdWUyMTU2MDI5MTU= 1314 xarray.Dataset.__array__ should raise TypeError shoyer 1217238 closed 0     2 2017-03-21T01:27:19Z 2019-02-19T04:23:15Z 2019-02-19T04:23:14Z MEMBER      

This would stop NumPy from converting Dataset into an array of variable names: https://github.com/pandas-dev/pandas/pull/12400#issuecomment-287948828
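A minimal sketch of the proposed guard (a stand-in class for illustration, not xarray's actual Dataset):

```python
import numpy as np

class Dataset:
    """Stand-in illustrating the proposed __array__ behavior."""
    def __array__(self, dtype=None, copy=None):
        raise TypeError('cannot directly convert a Dataset into a numpy '
                        'array; convert one of its variables instead')

# np.asarray would now fail loudly instead of building an array of names
try:
    np.asarray(Dataset())
except TypeError as err:
    print('raised TypeError:', err)
```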

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1314/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
406634780 MDU6SXNzdWU0MDY2MzQ3ODA= 2744 Drop cyordereddict from optional dependencies shoyer 1217238 closed 0     0 2019-02-05T05:21:28Z 2019-02-07T18:30:01Z 2019-02-07T18:30:01Z MEMBER      

Now that we're Python 3.5+ only, there's no reason to bother with using cyordereddict, which is slower than Python's builtin OrderedDict: https://github.com/shoyer/cyordereddict

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2744/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
101517700 MDU6SXNzdWUxMDE1MTc3MDA= 535 test_cross_engine_read_write_netcdf4 is broken on Python 3 due to some sort of upstream change shoyer 1217238 closed 0     2 2015-08-17T21:50:03Z 2019-02-04T14:50:16Z 2019-02-04T14:50:16Z MEMBER      

This has started causing test failures on master.

I had trouble tracking this down, but it seems to be related to a build of hdf5, netCDF4 or the underlying libraries which just became available on conda.

In case anyone else has time for it: This build worked: https://travis-ci.org/xray/xray/jobs/75633366 This build failed: https://travis-ci.org/xray/xray/jobs/75777788

For now, let's just skip the test and come back to this later.

Possibly related: https://github.com/Unidata/netcdf4-python/issues/448

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/535/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
313040371 MDU6SXNzdWUzMTMwNDAzNzE= 2050 test_cross_engine_read_write_netcdf3 is now failing on master shoyer 1217238 closed 0     3 2018-04-10T18:31:58Z 2019-02-04T04:42:17Z 2019-02-04T04:42:17Z MEMBER      

Only on Python 3.5 and 3.6 for now:
```
=================================== FAILURES ===================================
________ GenericNetCDFDataTest.test_cross_engine_read_write_netcdf3 ________
self = <xarray.tests.test_backends.GenericNetCDFDataTest testMethod=test_cross_engine_read_write_netcdf3>
    def test_cross_engine_read_write_netcdf3(self):
        data = create_test_data()
        valid_engines = set()
        if has_netCDF4:
            valid_engines.add('netcdf4')
        if has_scipy:
            valid_engines.add('scipy')

    for write_engine in valid_engines:
        for format in ['NETCDF3_CLASSIC', 'NETCDF3_64BIT']:
            with create_tmp_file() as tmp_file:
                data.to_netcdf(tmp_file, format=format,
                               engine=write_engine)
                for read_engine in valid_engines:
                    with open_dataset(tmp_file,
                                    engine=read_engine) as actual:

xarray/tests/test_backends.py:1596:


xarray/backends/api.py:299: in open_dataset
    autoclose=autoclose)
xarray/backends/netCDF4_.py:280: in open
    ds = opener()
xarray/backends/netCDF4_.py:204: in _open_netcdf4_group
    ds = nc4.Dataset(filename, mode=mode, **kwargs)
netCDF4/_netCDF4.pyx:2015: in netCDF4._netCDF4.Dataset.__init__
    ???


    ???
E   OSError: [Errno -36] NetCDF: Invalid argument: b'/tmp/tmpu5no_wbf/temp-1157.nc'
netCDF4/_netCDF4.pyx:1636: OSError
____ GenericNetCDFDataTestAutocloseTrue.test_cross_engine_read_write_netcdf3 ____
self = <xarray.tests.test_backends.GenericNetCDFDataTestAutocloseTrue testMethod=test_cross_engine_read_write_netcdf3>
    def test_cross_engine_read_write_netcdf3(self):
        data = create_test_data()
        valid_engines = set()
        if has_netCDF4:
            valid_engines.add('netcdf4')
        if has_scipy:
            valid_engines.add('scipy')

    for write_engine in valid_engines:
        for format in ['NETCDF3_CLASSIC', 'NETCDF3_64BIT']:
            with create_tmp_file() as tmp_file:
                data.to_netcdf(tmp_file, format=format,
                               engine=write_engine)
                for read_engine in valid_engines:
                    with open_dataset(tmp_file,
                                    engine=read_engine) as actual:

xarray/tests/test_backends.py:1596:


xarray/backends/api.py:299: in open_dataset
    autoclose=autoclose)
xarray/backends/netCDF4_.py:280: in open
    ds = opener()
xarray/backends/netCDF4_.py:204: in _open_netcdf4_group
    ds = nc4.Dataset(filename, mode=mode, **kwargs)
netCDF4/_netCDF4.pyx:2015: in netCDF4._netCDF4.Dataset.__init__
    ???


    ???
E   OSError: [Errno -36] NetCDF: Invalid argument: b'/tmp/tmp9ak1v4wj/temp-1238.nc'
netCDF4/_netCDF4.pyx:1636: OSError
```

Here's the diff of conda packages:

```diff
--- broken.txt  2018-04-10 11:22:39.400835307 -0700
+++ works.txt   2018-04-10 11:23:12.840755416 -0700
@@ -9,2 +9,2 @@
-boto3 1.7.2 py_0 conda-forge
-botocore 1.10.2 py_0 conda-forge
+boto3 1.7.0 py_0 conda-forge
+botocore 1.10.1 py_0 conda-forge
@@ -23 +23 @@
-curl 7.59.0 1 conda-forge
+curl 7.59.0 0 conda-forge
@@ -29 +29 @@
-distributed 1.21.6 py36_0 conda-forge
+distributed 1.21.5 py36_0 conda-forge
@@ -62 +62 @@
-libgdal 2.2.4 1 conda-forge
+libgdal 2.2.4 0 conda-forge
@@ -66 +66 @@
-libnetcdf 4.5.0 3 conda-forge
+libnetcdf 4.4.1.1 10 conda-forge
@@ -83 +83 @@
-netcdf4 1.3.1 py36_2 conda-forge
+netcdf4 1.3.1 py36_1 conda-forge
@@ -85 +85 @@
-numcodecs 0.5.5 py36_0 conda-forge
+numcodecs 0.5.4 py36_0 conda-forge
@@ -131 +131 @@
-tornado 5.0.2 py36_0 conda-forge
+tornado 5.0.1 py36_1 conda-forge
```

The culprit is almost certainly libnetcdf 4.4.1.1 -> 4.5.0

It looks like it's basically this issue again: https://github.com/Unidata/netcdf-c/issues/657

We could fix this either by skipping the tests in xarray's CI or upgrading netCDF-C on conda forge to 4.6.0 or 4.6.1.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2050/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
188817665 MDU6SXNzdWUxODg4MTc2NjU= 1109 Fix "allowed failures" Travis-CI builds for pydap and dev versions of netCDF4 and pandas shoyer 1217238 closed 0     2 2016-11-11T18:09:10Z 2019-02-03T03:32:21Z 2019-02-03T03:32:21Z MEMBER      

These are useful for catching changes upstream before they turn into bugs for xarray users, but they are currently failing for spurious reasons.

If someone has time to investigate and fix these, that would be great!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1109/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
110737724 MDU6SXNzdWUxMTA3Mzc3MjQ= 616 Support arithmetic with pandas.Timedelta objects shoyer 1217238 closed 0     2 2015-10-09T21:19:56Z 2019-02-02T06:25:54Z 2019-02-02T06:25:54Z MEMBER      

This currently doesn't work, as reported in #615:

```python
from datetime import datetime, timedelta
import xray
import pandas as pd

a = xray.Dataset({'time': [datetime(2000, 1, 1)]})
a['time'] -= pd.to_timedelta(timedelta(hours=6))
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input-7-655c9cabcf6d> in <module>()
      4
      5 a = xray.Dataset({'time': [datetime(2000, 1, 1)]})
----> 6 a['time'] -= pd.to_timedelta(timedelta(hours=6))

/Users/shoyer/dev/xray/xray/core/dataarray.pyc in func(self, other)
   1089             other_variable = getattr(other, 'variable', other)
   1090             with self.coords._merge_inplace(other_coords):
-> 1091                 f(self.variable, other_variable)
   1092             return self
   1093         return func

/Users/shoyer/dev/xray/xray/core/variable.pyc in func(self, other)
    797                 raise ValueError('dimensions cannot change for in-place '
    798                                  'operations')
--> 799             self.values = f(self_data, other_data)
    800             return self
    801         return func

TypeError: ufunc subtract cannot use operands with types dtype('<M8[ns]') and dtype('O')
```

We could fix this by adding some sort of coercion logic into Variable._binary_op.
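For reference, current xarray versions perform exactly this coercion, so the equivalent operation now works; a minimal sketch (assumes xarray and pandas are installed):

```python
from datetime import datetime, timedelta

import pandas as pd
import xarray as xr

da = xr.DataArray(pd.to_datetime([datetime(2000, 1, 1)]), dims='x')
# pd.Timedelta is coerced to np.timedelta64 before the subtraction
shifted = da - pd.to_timedelta(timedelta(hours=6))
print(shifted.values[0])  # six hours before midnight: 1999-12-31T18:00
```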

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/616/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
68759727 MDU6SXNzdWU2ODc1OTcyNw== 392 Non-aggregating grouped operations on dask arrays are painfully slow to construct shoyer 1217238 closed 0     7 2015-04-15T18:45:28Z 2019-02-01T23:06:35Z 2019-02-01T23:06:35Z MEMBER      

These are both entirely lazy operations:

```
%time res = ds.groupby('time.month').mean('time')
CPU times: user 142 ms, sys: 20.3 ms, total: 162 ms
Wall time: 159 ms

%time res = ds.groupby('time.month').apply(lambda x: x - x.mean())
CPU times: user 46.1 s, sys: 4.9 s, total: 51 s
Wall time: 50.4 s
```

I suspect the issue (in part) is that _interleaved_concat_slow indexes out single elements from each dask array along the grouped axis prior to concatenating them together (unit tests for interleaved_concat can be found here). So we end up creating way too many small dask arrays.
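One way to avoid the many tiny concatenations is a single concatenate followed by an inverse permutation; a numpy sketch of the idea (a hypothetical helper, not xarray's `_interleaved_concat_slow`):

```python
import numpy as np

def interleaved_concat(arrays, indices, axis=0):
    # one concatenate plus one fancy-index instead of N tiny concatenations
    combined = np.concatenate(arrays, axis=axis)
    order = np.concatenate([np.asarray(ix) for ix in indices])
    inverse = np.empty_like(order)
    inverse[order] = np.arange(order.size)
    return np.take(combined, inverse, axis=axis)

# elements of the first array land at positions 0 and 2, the second at 1 and 3
print(interleaved_concat([np.array([10, 30]), np.array([20, 40])],
                         [[0, 2], [1, 3]]))  # [10 20 30 40]
```

On dask arrays the same pattern would produce one concatenation task plus one take, instead of one task per grouped element.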

Profiling results on slightly smaller data are in this gist.

It would be great if we could figure out a way to make this faster, because these sort of operations are a really nice show case for xray + dask.

CC @mrocklin in case you have any ideas.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/392/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
148757289 MDU6SXNzdWUxNDg3NTcyODk= 824 Disable lock=True in open_mfdataset when reading netCDF3 files shoyer 1217238 closed 0     7 2016-04-15T20:14:07Z 2019-01-30T04:37:50Z 2019-01-30T04:37:36Z MEMBER      

This slows things down unnecessarily.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/824/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
403467195 MDU6SXNzdWU0MDM0NjcxOTU= 2717 Test failures with pandas 0.24.0 shoyer 1217238 closed 0     0 2019-01-26T18:16:15Z 2019-01-27T21:02:03Z 2019-01-27T21:02:03Z MEMBER      

From a recent build on Travis-CI:
```
=================================== FAILURES ===================================
___________________ test_cf_timedelta[timedeltas7-days-nan] ____________________
timedeltas = numpy.datetime64('NaT'), units = 'days', numbers = array(nan)
    @pytest.mark.parametrize(
        ['timedeltas', 'units', 'numbers'],
        [('1D', 'days', np.int64(1)),
         (['1D', '2D', '3D'], 'days', np.array([1, 2, 3], 'int64')),
         ('1h', 'hours', np.int64(1)),
         ('1ms', 'milliseconds', np.int64(1)),
         ('1us', 'microseconds', np.int64(1)),
         (['NaT', '0s', '1s'], None, [np.nan, 0, 1]),
         (['30m', '60m'], 'hours', [0.5, 1.0]),
         (np.timedelta64('NaT', 'ns'), 'days', np.nan),
         (['NaT', 'NaT'], 'days', [np.nan, np.nan])])
    def test_cf_timedelta(timedeltas, units, numbers):
        timedeltas = pd.to_timedelta(timedeltas, box=False)
        numbers = np.array(numbers)

    expected = numbers
  actual, _ = coding.times.encode_cf_timedelta(timedeltas, units)

xarray/tests/test_coding_times.py:550:


timedeltas = numpy.datetime64('NaT'), units = 'days'
    def encode_cf_timedelta(timedeltas, units=None):
        if units is None:
            units = infer_timedelta_units(timedeltas)

    np_unit = _netcdf_to_numpy_timeunit(units)
  num = 1.0 * timedeltas / np.timedelta64(1, np_unit)

E   TypeError: ufunc multiply cannot use operands with types dtype('float64') and dtype('<M8[ns]')
xarray/coding/times.py:379: TypeError
___________________ TestDataArray.test_struct_array_dims ___________________
self = <xarray.tests.test_dataarray.TestDataArray object at 0x7fb508944a90>
    def test_struct_array_dims(self):
        """
        This test checks subraction of two DataArrays for the case
        when dimension is a structured array.
        """
        # GH837, GH861
        # checking array subraction when dims are the same
        p_data = np.array([('John', 180), ('Stacy', 150), ('Dick', 200)],
                          dtype=[('name', '|S256'), ('height', object)])

    p_data_1 = np.array([('John', 180), ('Stacy', 150), ('Dick', 200)],
                        dtype=[('name', '|S256'), ('height', object)])

    p_data_2 = np.array([('John', 180), ('Dick', 200)],
                        dtype=[('name', '|S256'), ('height', object)])

    weights_0 = DataArray([80, 56, 120], dims=['participant'],
                          coords={'participant': p_data})

    weights_1 = DataArray([81, 52, 115], dims=['participant'],
                          coords={'participant': p_data_1})

    actual = weights_1 - weights_0

    expected = DataArray([1, -4, -5], dims=['participant'],
                         coords={'participant': p_data})

    assert_identical(actual, expected)

    # checking array subraction when dims are not the same
    p_data_1 = np.array([('John', 180), ('Stacy', 151), ('Dick', 200)],
                        dtype=[('name', '|S256'), ('height', object)])

    weights_1 = DataArray([81, 52, 115], dims=['participant'],
                          coords={'participant': p_data_1})

    actual = weights_1 - weights_0

    expected = DataArray([1, -5], dims=['participant'],
                         coords={'participant': p_data_2})
  assert_identical(actual, expected)

E       AssertionError: Left and right DataArray objects are not identical
E
E       Differing values:
E       L
E           array([-5, 1])
E       R
E           array([ 1, -5])
E       Differing coordinates:
E       L * participant  (participant) object (b'Dick', 200) (b'John', 180)
E       R * participant  (participant) [('name', 'S256'), ('height', 'O')] (b'John', 180) (b'Dick', 200)
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2717/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
213426608 MDU6SXNzdWUyMTM0MjY2MDg= 1306 xarray vs Xarray vs XArray shoyer 1217238 closed 0     12 2017-03-10T19:12:48Z 2019-01-27T01:37:53Z 2019-01-27T01:36:35Z MEMBER      

Yes, this is a little silly, but do we have a preferred capitalization for the proper name?

We mostly stick to "xarray" in the docs but "Xarray" or "XArray" is arguably a little more readable and grammatically correct.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1306/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 2,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
402811922 MDU6SXNzdWU0MDI4MTE5MjI= 2705 Docs are failing on ReadTheDocs shoyer 1217238 closed 0     0 2019-01-24T17:17:04Z 2019-01-26T18:14:50Z 2019-01-26T18:14:50Z MEMBER      

Example failing build: https://readthedocs.org/projects/xray/builds/8443648/

I'm pretty sure the issue is the recent conda-forge issues with gdal/rasterio. https://github.com/pydata/xarray/pull/2691 fixed the doc build on Travis, but we still have the issue on ReadTheDocs.

This is potentially a blocker for new xarray releases.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2705/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
168243768 MDU6SXNzdWUxNjgyNDM3Njg= 921 Inconsistency between the types of Dataset.dims and DataArray.dims shoyer 1217238 closed 0     10 2016-07-29T03:30:08Z 2019-01-25T22:01:47Z 2019-01-25T22:01:46Z MEMBER      

DataArray.dims is currently a tuple, whereas Dataset.dims is a dict. This results in ugly code like this, taken from xarray/core/group.py:

```python
try:  # Dataset
    expected_size = obj.dims[group_dim]
except TypeError:  # DataArray
    expected_size = obj.shape[obj.get_axis_num(group_dim)]
```

One way to resolve this inconsistency would be switch DataArray.dims to a (frozen) OrderedDict. The downside is that code like x.dims[0] wouldn't work anymore (unless we add some terrible fallback logic). You'd have to write x.dims.keys()[0], which is pretty ugly. On the plus side, x.dims['time'] would always return the size of the time dimension, regardless of whether x is a DataArray or Dataset.

Another option would be to add an attribute dim_shape (or some other name) that serves as an alias to dims on Dataset and an alias to OrderedDict(zip(dims, shape)) on DataArray. This would be fully backwards compatible, but further pollute the namespace on Dataset and DataArray.
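To make the inconsistency concrete (note that xarray did later add a `.sizes` mapping on both types, essentially the second option; assumes xarray is installed):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((2, 3)), dims=('x', 'y'))
ds = da.to_dataset(name='v')

print(da.dims)         # ('x', 'y') -- a tuple on DataArray
print(dict(ds.sizes))  # {'x': 2, 'y': 3} -- a name-to-size mapping on Dataset
```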

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/921/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
203297784 MDU6SXNzdWUyMDMyOTc3ODQ= 1231 Consider accepting a positional dict argument in place of keyword arguments shoyer 1217238 closed 0     2 2017-01-26T05:57:43Z 2019-01-24T14:02:19Z 2019-01-24T14:02:19Z MEMBER      

Using **kwargs in xarray operations is incredibly convenient, but it means there is no way to use a keyword argument that conflicts with an existing argument name. For example, there is no way to use .sel to select along 'method', 'tolerance' or 'drop' dimensions.

For interactive use, this is fine -- these reserved names are rarely used and it's nice to save the keystrokes it takes to write dict(). But it's a problem for writing reliable complex software, which may be calling xarray methods from somewhere very different. In fact, knowing how this works, it's possible to trigger bugs without even using **kwargs, e.g., by indexing a DataArray:

```
In [34]: array = xr.DataArray([1, 2, 3], dims='drop')

# result should be a scalar!
In [35]: array[0]
Out[35]:
<xarray.DataArray (drop: 3)>
array([1, 2, 3])
Unindexed dimensions:
    drop
```

One option to resolve this is to make the first argument to every function like this (including sel, isel, sel_points, isel_points, stack, ...) accept a dictionary, which is interpreted exactly like **kwargs. In fact, we already do this in a highly inconsistent fashion for Dataset.reindex only. For all cases where axis names are set dynamically (and certainly within xarray), we could encourage using the dictionary form, e.g., array.isel(kwargs, drop=True) rather than array.isel(drop=True, **kwargs).
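xarray ultimately adopted this approach: indexing methods accept the indexers as a positional dict, so reserved keyword names no longer collide (assumes a reasonably recent xarray):

```python
import xarray as xr

array = xr.DataArray([1, 2, 3], dims='drop')
# passing the indexers as a dict leaves the 'drop' keyword unambiguous
scalar = array.isel({'drop': 0}, drop=True)
print(int(scalar))  # 1
```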

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1231/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
207584994 MDU6SXNzdWUyMDc1ODQ5OTQ= 1268 Add isin method to Dataset and DataArray shoyer 1217238 closed 0     2 2017-02-14T17:37:13Z 2019-01-23T06:59:27Z 2019-01-23T06:59:27Z MEMBER      

This pandas method is pretty handy sometimes.

We could basically wrap the proposed numpy.isin function: https://github.com/numpy/numpy/pull/8423
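numpy has since shipped numpy.isin (NumPy 1.13+), and xarray later added an isin method along these lines; the underlying numpy behavior:

```python
import numpy as np

data = np.array([1, 2, 3, 4])
# elementwise membership test against a set of candidate values
mask = np.isin(data, [2, 4])
print(mask.tolist())  # [False, True, False, True]
```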

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1268/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
171957028 MDU6SXNzdWUxNzE5NTcwMjg= 976 Document performance tips for using dask with xarray shoyer 1217238 closed 0     2 2016-08-18T17:32:14Z 2019-01-22T20:24:15Z 2019-01-22T20:24:15Z MEMBER      

A few dask array limitations lead to frequent performance issues for xarray users: https://github.com/dask/dask/issues/746 https://github.com/dask/dask/issues/874

Since these are non-trivial to solve on the dask side, we should document (in our page on dask) how to work around them for xarray users. This mailing list post has most of the relevant advice: https://groups.google.com/forum/#!topic/xarray/11lDGSeza78

CC @mrocklin @jcrist

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/976/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
379472634 MDU6SXNzdWUzNzk0NzI2MzQ= 2554 open_mfdataset crashes with segfault shoyer 1217238 closed 0     10 2018-11-10T23:34:04Z 2019-01-17T22:16:44Z 2019-01-17T22:16:44Z MEMBER      

Copied from the report on the xarray mailing list:


This crashes with SIGSEGV:

```python
# foo.py
import xarray as xr
ds = xr.open_mfdataset('/tmp/nam/bufr.701940/bufr201012011.nc',
                       data_vars='minimal', parallel=True)
print(ds)
```

Traceback:
```
[gtrojan@asok precip]$ gdb python3
GNU gdb (GDB) Fedora 8.1.1-3.fc28
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python3...done.
(gdb) r
Starting program: /mnt/sdc1/local/Python-3.6.5/bin/python3 foo.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffe6dfb700 (LWP 11176)]
[New Thread 0x7fffe4dfa700 (LWP 11177)]
[New Thread 0x7fffdedf9700 (LWP 11178)]
[New Thread 0x7fffdadf8700 (LWP 11179)]
[New Thread 0x7fffd6df7700 (LWP 11180)]
[New Thread 0x7fffd2df6700 (LWP 11181)]
[New Thread 0x7fffcedf5700 (LWP 11182)]
warning: Loadable section ".note.gnu.property" outside of ELF segments
[Thread 0x7fffdadf8700 (LWP 11179) exited]
[Thread 0x7fffd2df6700 (LWP 11181) exited]
[Thread 0x7fffcedf5700 (LWP 11182) exited]
[Thread 0x7fffd6df7700 (LWP 11180) exited]
[Thread 0x7fffdedf9700 (LWP 11178) exited]
[Thread 0x7fffe4dfa700 (LWP 11177) exited]
[Thread 0x7fffe6dfb700 (LWP 11176) exited]
Detaching after fork from child process 11183.
[New Thread 0x7fffcedf5700 (LWP 11184)]
[New Thread 0x7fffe56f1700 (LWP 11185)]
[New Thread 0x7fffdedf9700 (LWP 11186)]
[New Thread 0x7fffdadf8700 (LWP 11187)]
[New Thread 0x7fffd6df7700 (LWP 11188)]
[New Thread 0x7fffd2df6700 (LWP 11189)]
[New Thread 0x7fffa7fff700 (LWP 11190)]
[New Thread 0x7fff9bfff700 (LWP 11191)]
[New Thread 0x7fff93fff700 (LWP 11192)]
[New Thread 0x7fff8bfff700 (LWP 11193)]
[New Thread 0x7fff83fff700 (LWP 11194)]
warning: Loadable section ".note.gnu.property" outside of ELF segments
warning: Loadable section ".note.gnu.property" outside of ELF segments

Thread 9 "python3" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffcedf5700 (LWP 11184)]
0x00007fffbd95cca9 in H5SL_insert_common () from /usr/lib64/libhdf5.so.10
```

This happens with the most recent dask and xarray:

INSTALLED VERSIONS

```
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.18.14-200.fc28.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8

xarray: 0.11.0
pandas: 0.23.0
numpy: 1.15.2
scipy: 1.1.0
netCDF4: 1.4.0
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.0b1
PseudonetCDF: None
rasterio: None
iris: None
bottleneck: 1.3.0.dev0
cyordereddict: None
dask: 0.20.1
distributed: 1.22.1
matplotlib: 3.0.0
cartopy: None
seaborn: 0.9.0
setuptools: 39.0.1
pip: 18.1
conda: None
pytest: 3.6.3
IPython: 6.3.1
sphinx: 1.8.1
```

When I change the code in open_mfdataset to use the multiprocessing ('processes') scheduler, the code runs as expected.

```python
# line 619 in api.py, before:
datasets, file_objs = dask.compute(datasets, file_objs)
# after:
datasets, file_objs = dask.compute(datasets, file_objs, scheduler='processes')
```

The file sizes are about 300kB, my example reads only 2 files.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2554/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
60766810 MDU6SXNzdWU2MDc2NjgxMA== 368 Default reading netCDF3 files with scipy.io instead of netCDF4? shoyer 1217238 closed 0     4 2015-03-12T03:44:41Z 2019-01-15T20:10:10Z 2019-01-15T20:10:10Z MEMBER      

In my microbenchmarks, scipy.io appears to be ~3x faster than netCDF4 for reading netCDF3 files:

```python
ds = xray.Dataset({'foo': (['x', 'y'], np.random.randn(10000, 10000).astype(np.float32))})
ds.to_netcdf('test.nc', engine='scipy')
ds_scipy = xray.open_dataset('test.nc', engine='scipy')
ds_nc4 = xray.open_dataset('test.nc', engine='netcdf4')

%timeit ds_scipy.isel(x=slice(5000)).load_data()
# 10 loops, best of 3: 123 ms per loop

%timeit ds_nc4.isel(x=slice(5000)).load_data()
# 1 loops, best of 3: 319 ms per loop
```

We might want to switch the default engine to use scipy for reading netCDF3 files. Note that netCDF4 does seem to be a bit faster for writing.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/368/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
44492906 MDU6SXNzdWU0NDQ5MjkwNg== 241 Add example showing how to sample gridded data at points shoyer 1217238 closed 0     4 2014-09-30T19:58:27Z 2019-01-15T20:08:50Z 2019-01-15T20:08:49Z MEMBER      

It would be good to show how to do this, even if we don't have an efficient implementation yet (#214).
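For the record, the core of such an example is just a nearest-neighbor lookup along each coordinate. The sketch below is framework-free (standard library only), and every name in it — `nearest_index`, `sample_at_points`, the toy grid — is made up for illustration; in xarray itself the idiomatic spelling is `.sel(..., method='nearest')` with pointwise `DataArray` indexers.

```python
from bisect import bisect_left

def nearest_index(grid, value):
    """Index of the grid point closest to value (grid sorted ascending)."""
    i = bisect_left(grid, value)
    if i == 0:
        return 0
    if i == len(grid):
        return len(grid) - 1
    # pick whichever neighbor is closer
    return i if grid[i] - value < value - grid[i - 1] else i - 1

def sample_at_points(data, xs, ys, points):
    """Sample a 2D gridded field at (x, y) points, nearest-neighbor."""
    return [data[nearest_index(xs, x)][nearest_index(ys, y)] for x, y in points]

# toy 3x3 grid: data[i][j] is the value at (xs[i], ys[j])
xs = [0.0, 1.0, 2.0]
ys = [10.0, 20.0, 30.0]
data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(sample_at_points(data, xs, ys, [(0.9, 29.0), (2.4, 9.0)]))  # → [6, 7]
```

An efficient vectorized version (#214) would replace the per-point Python loop with array indexing, but the lookup logic is the same.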

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/241/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
59564663 MDU6SXNzdWU1OTU2NDY2Mw== 352 Write a decorator like np.vectorize to make a DataArray functions handle Dataset objects shoyer 1217238 closed 0     1 2015-03-02T22:47:15Z 2019-01-14T21:11:54Z 2019-01-14T21:11:53Z MEMBER      

For example, suppose I write a function to compare two DataArrays. I should be able to decorate it with something like xray.vectorize so that it now automatically handles Datasets by looping over data variables (of course, it should still work just fine when one or more arguments is a DataArray).

This will make it easier for users to write functions that manipulate xray objects in their own code.
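A minimal sketch of what such a decorator could look like — duck-typed on the presence of a `data_vars` mapping rather than importing xray, with `FakeDataset` standing in for a real Dataset; all names here are hypothetical:

```python
import functools

def over_data_vars(func):
    """Lift a function of array-like arguments so that Dataset-like
    arguments (anything with a `data_vars` mapping) are handled by
    looping over their data variables."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        datasets = [a for a in args if hasattr(a, 'data_vars')]
        if not datasets:
            # no Dataset-like arguments: behave like the plain function
            return func(*args, **kwargs)
        result = {}
        for name in datasets[0].data_vars:
            # substitute each Dataset-like argument by its variable `name`
            new_args = [a.data_vars[name] if hasattr(a, 'data_vars') else a
                        for a in args]
            result[name] = func(*new_args, **kwargs)
        return result
    return wrapper

@over_data_vars
def add(a, b):
    return a + b

class FakeDataset:
    def __init__(self, data_vars):
        self.data_vars = data_vars

ds = FakeDataset({'u': 1, 'v': 2})
print(add(ds, 10))  # → {'u': 11, 'v': 12}
print(add(3, 4))    # → 7
```

A real implementation would return a Dataset (not a dict) and align variables across multiple Dataset arguments, but the looping structure is the whole idea.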

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/352/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
187560717 MDU6SXNzdWUxODc1NjA3MTc= 1082 Issue a warning when overwriting attributes with accessors instead of erroring shoyer 1217238 closed 0     1 2016-11-06T13:11:52Z 2019-01-08T21:59:36Z 2019-01-08T21:59:36Z MEMBER      

On the mailing list, @rabernat wrote:

Also, how can I interactively develop an accessor? If I try to re-register under the same name, I get the error AccessorRegistrationError: cannot register accessor <class '__main__.ExchAccessor'> under name 'exch' for type <class 'xarray.core.dataset.Dataset'> because an attribute with that name already exists.

In #1080, @smartass101 suggests:

Btw, perhaps it might be better to (perhaps optionally) issue a warning when overriding an existing class attribute during registering instead of completely refusing to do so.

I think this is a good idea, and would nicely solve @rabernat's problem (which might be your problem, too). We could add a new keyword argument (e.g., allow_override=True or warn=True to register_*_accessor) which switches to this new mode.

Should it be the default behavior? It is also possible that warnings instead of errors are enough in general.
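A rough sketch of the warn-instead-of-error behavior (generic, not xarray's actual registration machinery; `register_accessor` and `Target` are placeholders for `register_dataset_accessor` and the Dataset class):

```python
import warnings

def register_accessor(name, cls):
    """Register an accessor on cls, warning (not erroring) if `name`
    already exists, then overriding it."""
    def decorator(accessor):
        if hasattr(cls, name):
            warnings.warn(
                'registration of accessor %r under name %r is overriding a '
                'preexisting attribute with the same name.' % (accessor, name),
                stacklevel=2)
        setattr(cls, name, accessor)
        return accessor
    return decorator

class Target:
    pass

@register_accessor('exch', Target)
class ExchAccessor:
    pass

# re-registering under the same name now warns instead of raising,
# which is exactly the interactive-development workflow above:
with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter('always')

    @register_accessor('exch', Target)
    class ExchAccessor:  # noqa: F811
        pass

print(len(w))  # → 1
```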

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1082/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
395328437 MDU6SXNzdWUzOTUzMjg0Mzc= 2641 pytest-runner should not be a hard-requirement in setup.py shoyer 1217238 closed 0     0 2019-01-02T17:57:15Z 2019-01-03T01:14:38Z 2019-01-03T01:14:38Z MEMBER      

https://github.com/pydata/xarray/pull/2573 (by @horta) added pytest-runner to setup_requires in our setup.py file.

This had the unintentional consequence of making pytest-runner a requirement for installing xarray. This is causing our conda-forge deployment to fail: https://github.com/conda-forge/xarray-feedstock/pull/42

We really want a "conditional requirement" only for testing as described in the pytest-runner docs: https://pypi.org/project/pytest-runner/

My plan is to do the conditional requirement, and back-port the commit to make a new 0.11.2 release.
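That pattern from the pytest-runner docs boils down to inspecting `sys.argv` before calling `setup()`. Sketched here as a helper function (the function name is mine, and the `setup()` call is only indicated in a comment):

```python
def conditional_setup_requires(argv):
    """pytest-runner is needed only when a test command is being run."""
    needs_pytest = {'pytest', 'test', 'ptr'}.intersection(argv)
    return ['pytest-runner'] if needs_pytest else []

# in setup.py:
#   import sys
#   from setuptools import setup
#   setup(...,
#         setup_requires=conditional_setup_requires(sys.argv),
#         tests_require=['pytest'])

print(conditional_setup_requires(['setup.py', 'test']))     # → ['pytest-runner']
print(conditional_setup_requires(['setup.py', 'install']))  # → []
```

With this, `pip install xarray` never sees pytest-runner, while `python setup.py test` still pulls it in.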

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2641/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
393903950 MDU6SXNzdWUzOTM5MDM5NTA= 2631 Last call for v0.11.1 shoyer 1217238 closed 0     3 2018-12-24T16:01:22Z 2018-12-31T16:07:49Z 2018-12-31T16:07:48Z MEMBER      

@pydata/xarray I'm going to issue v0.11.1 in a day or two, unless there's anything else we really want to squeeze in. This is the last release with planned Python 2.7 support (but we could conceivably still do backports for nasty bugs).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2631/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
373574495 MDU6SXNzdWUzNzM1NzQ0OTU= 2505 xarray 0.11 release shoyer 1217238 closed 0     21 2018-10-24T16:40:51Z 2018-11-07T19:38:40Z 2018-11-07T16:29:53Z MEMBER      

We should really get a release candidate out soon for xarray 0.11, which will fix a lot of IO issues with dask (e.g., https://github.com/pydata/xarray/issues/2503).

Deprecation cycles to finish first:

  • [x] Iterating over a Dataset iterates only over its data_vars #884
  • [x] "in" operator does not work as expected on DataArray dimensions #1267
  • [x] remove .T as an alias for .transpose() #1232
  • [x] remove old resample syntax

Deprecation cycles to start (optional)

  • [x] Deprecate inplace methods #1756

These were everything tagged with the "0.11" milestone.

@pydata/xarray anything else to add?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2505/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
98815388 MDU6SXNzdWU5ODgxNTM4OA== 511 auto_combine/open_mfdataset can be very slow sometimes if concat_dim is not provided shoyer 1217238 closed 0     2 2015-08-03T18:52:35Z 2018-11-02T23:55:01Z 2018-11-02T23:55:00Z MEMBER      

Through an unfortunate series of events, when concat_dim is not provided, auto_combine loads the data for the coordinate being concatenated into memory for every dataset, one file at a time. This can be very slow if the files are being read from a network drive with some latency.

cc @ToddSmall

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/511/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
374704277 MDU6SXNzdWUzNzQ3MDQyNzc= 2521 test_infer_cftime_datetime_units failing on Windows shoyer 1217238 closed 0     0 2018-10-28T00:40:31Z 2018-10-30T01:00:43Z 2018-10-30T01:00:43Z MEMBER      

I don't know why, but this test is now failing on Python 2.7 / Windows: https://ci.appveyor.com/project/shoyer/xray/builds/19850608

```
================================== FAILURES ===================================
______________________ test_infer_cftime_datetime_units _______________________
@pytest.mark.skipif(not has_cftime_or_netCDF4, reason='cftime not installed')
def test_infer_cftime_datetime_units():
    date_types = _all_cftime_date_types()
    for date_type in date_types.values():
        for dates, expected in [
                ([date_type(1900, 1, 1), date_type(1900, 1, 2)],
                 'days since 1900-01-01 00:00:00.000000'),
                ([date_type(1900, 1, 1, 12), date_type(1900, 1, 1, 13)],
                 'seconds since 1900-01-01 12:00:00.000000'),
                ([date_type(1900, 1, 1), date_type(1900, 1, 2),
                  date_type(1900, 1, 2, 0, 0, 1)],
                 'seconds since 1900-01-01 00:00:00.000000'),
                ([date_type(1900, 1, 1), date_type(1900, 1, 2, 0, 0, 0, 5)],
                 'days since 1900-01-01 00:00:00.000000'),
                ([date_type(1900, 1, 1), date_type(1900, 1, 8),
                  date_type(1900, 1, 16)],
                 'days since 1900-01-01 00:00:00.000000')]:

            assert expected == coding.times.infer_datetime_units(dates)

E       AssertionError: assert 'seconds sinc...:00:00.000000' == 'hours since 1...:00:00.000000'
E         - seconds since 1900-01-01 12:00:00.000000
E         ? ------
E         + hours since 1900-01-01 12:00:00.000000
E         ?   ++++
```

@spencerkclark please take a look (or we can xfail this if necessary)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2521/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
224200714 MDU6SXNzdWUyMjQyMDA3MTQ= 1384 tutorial.load_dataset() should prevent against partial downloads shoyer 1217238 closed 0     1 2017-04-25T16:39:50Z 2018-10-17T17:16:59Z 2018-10-17T17:16:59Z MEMBER      

To avoid issues like this one: https://groups.google.com/forum/#!topic/xarray/F5UKkZZsMec

The easiest way to do this is to write to a temporary file and only move it to the correct location when the download completes.
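A standard-library sketch of that temp-file-then-rename dance (the function and its `fetch` callback are hypothetical, not xarray's actual tutorial code):

```python
import os
import tempfile

def download_atomically(fetch, dest_path):
    """Write a download to a temp file in the same directory, then rename
    it into place; an interrupted download never leaves a partial
    dest_path behind.

    `fetch` is any callable that writes the payload to an open binary file.
    """
    directory = os.path.dirname(os.path.abspath(dest_path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix='.part')
    try:
        with os.fdopen(fd, 'wb') as f:
            fetch(f)
        # os.replace is atomic on POSIX and overwrites on Windows too
        os.replace(tmp_path, dest_path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The temp file lives in the destination directory (not the system temp dir) so the final rename stays on one filesystem and therefore atomic.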

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1384/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
369310993 MDU6SXNzdWUzNjkzMTA5OTM= 2480 test_apply_dask_new_output_dimension is broken on master with dask-dev shoyer 1217238 closed 0     6 2018-10-11T21:24:33Z 2018-10-12T16:26:17Z 2018-10-12T16:26:17Z MEMBER      

Example build failure: https://travis-ci.org/pydata/xarray/jobs/439949937

```
=================================== FAILURES ===================================
____________________ test_apply_dask_new_output_dimension ______________________
@requires_dask
def test_apply_dask_new_output_dimension():
    import dask.array as da

    array = da.ones((2, 2), chunks=(1, 1))
    data_array = xr.DataArray(array, dims=('x', 'y'))

    def stack_negative(obj):
        def func(x):
            return np.stack([x, -x], axis=-1)
        return apply_ufunc(func, obj, output_core_dims=[['sign']],
                           dask='parallelized', output_dtypes=[obj.dtype],
                           output_sizes={'sign': 2})

    expected = stack_negative(data_array.compute())

    actual = stack_negative(data_array)
    assert actual.dims == ('x', 'y', 'sign')
    assert actual.shape == (2, 2, 2)
    assert isinstance(actual.data, da.Array)
  assert_identical(expected, actual)

xarray/tests/test_computation.py:737:


xarray/tests/test_computation.py:24: in assert_identical
    assert a.identical(b), msg
xarray/core/dataarray.py:1923: in identical
    self._all_compat(other, 'identical'))
xarray/core/dataarray.py:1875: in _all_compat
    compat(self, other))
xarray/core/dataarray.py:1872: in compat
    return getattr(x.variable, compat_str)(y.variable)
xarray/core/variable.py:1461: in identical
    self.equals(other))
xarray/core/variable.py:1439: in equals
    equiv(self.data, other.data)))
xarray/core/duck_array_ops.py:144: in array_equiv
    arr1, arr2 = as_like_arrays(arr1, arr2)
xarray/core/duck_array_ops.py:128: in as_like_arrays
    return tuple(np.asarray(d) for d in data)
xarray/core/duck_array_ops.py:128: in <genexpr>
    return tuple(np.asarray(d) for d in data)
../../../miniconda/envs/test_env/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
    return array(a, dtype, copy=False, order=order)
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/array/core.py:1118: in __array__
    x = self.compute()
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/base.py:156: in compute
    (result,) = compute(self, traverse=False, **kwargs)
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/base.py:390: in compute
    dsk = collections_to_dsk(collections, optimize_graph, **kwargs)
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/base.py:194: in collections_to_dsk
    for opt, (dsk, keys) in groups.items()]))
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/base.py:194: in <listcomp>
    for opt, (dsk, keys) in groups.items()]))
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/array/optimization.py:41: in optimize
    dsk = ensure_dict(dsk)
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/utils.py:830: in ensure_dict
    result.update(dd)
../../../miniconda/envs/test_env/lib/python3.6/_collections_abc.py:720: in __iter__
    yield from self._mapping
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/array/top.py:168: in __iter__
    return iter(self._dict)
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/array/top.py:160: in _dict
    concatenate=self.concatenate
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/array/top.py:305: in top
    keytups = list(itertools.product(*[range(dims[i]) for i in out_indices]))

.0 = <tuple_iterator object at 0x7f606ba84fd0>
    keytups = list(itertools.product(*[range(dims[i]) for i in out_indices]))
E   KeyError: '.0'
../../../miniconda/envs/test_env/lib/python3.6/site-packages/dask/array/top.py:305: KeyError
```

My guess is that this is somehow related to @mrocklin's recent refactor of dask.array.atop: https://github.com/dask/dask/pull/3998

If the cause isn't obvious, I'll try to come up with a simple dask only example that reproduces it.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2480/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
276241193 MDU6SXNzdWUyNzYyNDExOTM= 1738 Windows/Python 2.7 tests of dask-distributed failing on master/v0.10.0 shoyer 1217238 closed 0     12 2017-11-23T00:42:29Z 2018-10-09T04:13:41Z 2018-10-09T04:13:41Z MEMBER      

Python 2.7 builds on Windows are failing: https://ci.appveyor.com/project/shoyer/xray/build/1.0.3018

The tests that are failing are all variations of test_dask_distributed_integration_test. Example error message:

```
=================================== ERRORS ====================================
________ ERROR at teardown of test_dask_distributed_integration_test[scipy] ____
@pytest.fixture
def loop():
    with pristine_loop() as loop:
        # Monkey-patch IOLoop.start to wait for loop stop
        orig_start = loop.start
        is_stopped = threading.Event()
        is_stopped.set()

        def start():
            is_stopped.clear()
            try:
                orig_start()
            finally:
                is_stopped.set()
        loop.start = start

        yield loop
        # Stop the loop in case it's still running
        try:
            loop.add_callback(loop.stop)
        except RuntimeError as e:
            if not re.match("IOLoop is clos(ed|ing)", str(e)):
                raise
        else:
          is_stopped.wait()

C:\Python27-conda64\envs\test_env\lib\site-packages\distributed\utils_test.py:102:


C:\Python27-conda64\envs\test_env\lib\contextlib.py:24: in __exit__
    self.gen.next()
C:\Python27-conda64\envs\test_env\lib\site-packages\distributed\utils_test.py:139: in pristine_loop
    loop.close(all_fds=True)
C:\Python27-conda64\envs\test_env\lib\site-packages\tornado\ioloop.py:716: in close
    self.remove_handler(self._waker.fileno())
C:\Python27-conda64\envs\test_env\lib\site-packages\tornado\platform\common.py:91: in fileno
    return self.reader.fileno()
C:\Python27-conda64\envs\test_env\lib\socket.py:228: in meth
    return getattr(self._sock,name)(*args)


args = (<socket._closedsocket object at 0x00000000131F27F0>, 'fileno')

def _dummy(*args):

  raise error(EBADF, 'Bad file descriptor')

E   error: [Errno 9] Bad file descriptor
C:\Python27-conda64\envs\test_env\lib\socket.py:174: error
---------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:1094
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:1096
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:1095
distributed.worker - INFO - Listening to: tcp://127.0.0.1:1096
distributed.worker - INFO - Listening to: tcp://127.0.0.1:1095
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:1094
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:1094
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 2.00 GB
distributed.worker - INFO - Memory: 2.00 GB
distributed.worker - INFO - Local Directory: C:\projects\xray_test_worker-4043f797-3668-459a-9d5b-017dbc092ad5\worker-ozlw8t
distributed.worker - INFO - Local Directory: C:\projects\xray_test_worker-0b2d640d-07ba-493f-967c-f8d8de38e3b5\worker-_xbrz6
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register tcp://127.0.0.1:1096
distributed.worker - INFO - Registered to: tcp://127.0.0.1:1094
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register tcp://127.0.0.1:1095
distributed.worker - INFO - Registered to: tcp://127.0.0.1:1094
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:1095
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:1096
distributed.scheduler - INFO - Receive client connection: Client-06708a40-ce25-11e7-898c-00155d57f2dd
distributed.scheduler - INFO - Connection to client Client-06708a40-ce25-11e7-898c-00155d57f2dd broken
distributed.scheduler - INFO - Remove client Client-06708a40-ce25-11e7-898c-00155d57f2dd
distributed.scheduler - INFO - Close client connection: Client-06708a40-ce25-11e7-898c-00155d57f2dd
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:1095
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:1096
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:1095
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:1096
distributed.scheduler - INFO - Lost all workers
distributed.worker - INFO - Close compute stream
distributed.worker - INFO - Close compute stream
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
```

@mrocklin any guesses about what this could be?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1738/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
364148137 MDU6SXNzdWUzNjQxNDgxMzc= 2441 hypothesis tests are failing on master shoyer 1217238 closed 0     1 2018-09-26T18:08:15Z 2018-09-26T23:47:27Z 2018-09-26T23:47:27Z MEMBER      

Example failure: https://travis-ci.org/pydata/xarray/jobs/433231165

```
============================= test session starts ==============================
platform linux -- Python 3.6.6, pytest-3.8.1, py-1.6.0, pluggy-0.7.1
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/travis/build/pydata/xarray/.hypothesis/examples')
rootdir: /home/travis/build/pydata/xarray, inifile: setup.cfg
plugins: cov-2.6.0, hypothesis-3.73.1
collected 0 items / 1 errors
==================================== ERRORS ====================================
______________ ERROR collecting properties/test_encode_decode.py _______________
properties/test_encode_decode.py:16: in <module>
    settings.deadline = None
../../../miniconda/envs/test_env/lib/python3.6/site-packages/hypothesis/_settings.py:127: in __setattr__
    'to decorate your test instead.' % (name, value)
E   AttributeError: Cannot assign hypothesis.settings.deadline=None - the settings class is immutable. You can change the global default settings with settings.load_profile, or use @settings(...) to decorate your test instead.
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.73 seconds ============================
```

cc @Zac-HD

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2441/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
361915770 MDU6SXNzdWUzNjE5MTU3NzA= 2424 0.10.9 release shoyer 1217238 closed 0     6 2018-09-19T20:31:29Z 2018-09-26T01:05:09Z 2018-09-22T15:14:48Z MEMBER      

It's now been two months since the 0.10.8 release, so we really ought to issue a new minor release.

I was initially thinking of skipping straight to 0.11.0 if we include https://github.com/pydata/xarray/pull/2261 (xarray.backends refactor), but it seems that will take a bit longer to review/test so it's probably worth issuing a 0.10.9 release first.

@pydata/xarray -- are there any PRs / bug-fixes in particular we should wait for before issuing the release?

I suppose it would be good to sort out https://github.com/pydata/xarray/issues/2422 (Plot2D no longer sorts coordinates before plotting)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2424/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
328572578 MDU6SXNzdWUzMjg1NzI1Nzg= 2209 Build timeouts on ReadTheDocs shoyer 1217238 closed 0     4 2018-06-01T15:53:48Z 2018-09-20T16:39:58Z 2018-09-20T16:39:58Z MEMBER      

A significant fraction of our doc builds have started running up against ReadTheDocs's 900 second timeout for builds.

Investigating a few of these, it seems that the main culprit is installing/downloading conda packages in the conda env create and conda install steps. In some cases, these take over 700 seconds combined, leaving little time for Sphinx, which on its own takes only a couple minutes (but RTD runs it three times for some reason).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2209/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
346313546 MDU6SXNzdWUzNDYzMTM1NDY= 2332 Test failures on master with DataArray.to_cdms2 shoyer 1217238 closed 0     3 2018-07-31T18:49:21Z 2018-09-05T15:18:45Z 2018-09-05T15:18:45Z MEMBER      

See https://travis-ci.org/pydata/xarray/jobs/410459646

Example failure:

```
=================================== FAILURES ===================================
_______________ TestDataArray.test_to_and_from_cdms2_classic ___________________
self = <xarray.tests.test_dataarray.TestDataArray testMethod=test_to_and_from_cdms2_classic>
    def test_to_and_from_cdms2_classic(self):
        """Classic with 1D axes"""
        pytest.importorskip('cdms2')

    original = DataArray(
        np.arange(6).reshape(2, 3),
        [('distance', [-2, 2], {'units': 'meters'}),
         ('time', pd.date_range('2000-01-01', periods=3))],
        name='foo', attrs={'baz': 123})
    expected_coords = [IndexVariable('distance', [-2, 2]),
                       IndexVariable('time', [0, 1, 2])]
    actual = original.to_cdms2()
  assert_array_equal(actual, original)

E       ValueError:
E       error during assertion:
E
E       Traceback (most recent call last):
E         File "/home/travis/miniconda/envs/test_env/lib/python2.7/site-packages/numpy/testing/_private/utils.py", line 752, in assert_array_compare
E           x, y = x[~flagged], y[~flagged]
E         File "/home/travis/miniconda/envs/test_env/lib/python2.7/site-packages/cdms2/avariable.py", line 1177, in __getitem__
E           speclist = self._process_specs([key], {})
E         File "/home/travis/miniconda/envs/test_env/lib/python2.7/site-packages/cdms2/avariable.py", line 938, in _process_specs
E           if Ellipsis in specs:
E       ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
E
E       Arrays are not equal
E       x: TransientVariable([[0, 1, 2],
E              [3, 4, 5]])
E       y: array([[0, 1, 2],
E          [3, 4, 5]])
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2332/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
138504881 MDU6SXNzdWUxMzg1MDQ4ODE= 785 Add dictionary key-completions for xarray objects in IPython shoyer 1217238 closed 0     1 2016-03-04T15:38:16Z 2018-09-04T12:13:17Z 2018-09-04T12:13:17Z MEMBER      

This will work in the next IPython release if we add _ipython_key_completions_ methods: https://github.com/ipython/ipython/pull/9289
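The hook itself is tiny — just a method returning the valid keys. A toy sketch of what a mapping-style container implements (this `Dataset` class is a stand-in for illustration, not xarray's real one):

```python
class Dataset:
    """Toy mapping-style container showing IPython's key-completion hook."""

    def __init__(self, variables):
        self._variables = dict(variables)

    def __getitem__(self, key):
        return self._variables[key]

    def _ipython_key_completions_(self):
        # IPython calls this to complete ds["<TAB>
        return list(self._variables)

ds = Dataset({'temperature': 1, 'precipitation': 2})
print(ds._ipython_key_completions_())  # → ['temperature', 'precipitation']
```

For xarray this would mean returning variable names on Dataset and coordinate/attribute keys where appropriate.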

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/785/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 178.849ms · About: xarray-datasette