home / github

issues


324 rows where type = "issue" and user = 1217238 sorted by updated_at descending




Facets: state (closed 279, open 45) · type (issue 324) · repo (xarray 324)
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2266174558 I_kwDOAMm_X86HExRe 8975 Xarray sponsorship guidelines shoyer 1217238 open 0     3 2024-04-26T17:05:01Z 2024-04-30T20:52:33Z   MEMBER      

At what level of support should Xarray acknowledge sponsors on our website?

I would like to surface this for open discussion because there are potential sponsoring organizations with conflicts of interest with members of Xarray's leadership team (e.g., Earthmover, which employs @jhamman, @rabernat and @dcherian).

My suggestion is to use NumPy's guidelines, with an adjustment down to 1/3 of the thresholds to account for the smaller size of the project:

  • $10,000/yr for unrestricted financial contributions (e.g., donations)
  • $20,000/yr for financial contributions for a particular purpose (e.g., grants)
  • $30,000/yr for in-kind contributions (e.g., time for employees to contribute)
  • 2 person-months/yr of paid work time for one or more Xarray maintainers or regular contributors to any Xarray team or activity

The NumPy guidelines also include a grace period of a minimum of 6 months for acknowledging support. I would suggest increasing this to a minimum of 1 year for Xarray.

I would greatly appreciate any feedback from members of the community, either in this issue or at the next team meeting.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8975/reactions",
    "total_count": 6,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
271043420 MDU6SXNzdWUyNzEwNDM0MjA= 1689 Roundtrip serialization of coordinate variables with spaces in their names shoyer 1217238 open 0     5 2017-11-03T16:43:20Z 2024-03-22T14:02:48Z   MEMBER      

If coordinates have spaces in their names, they get restored from netCDF files as data variables instead:

```
>>> xarray.open_dataset(xarray.Dataset(coords={'name with spaces': 1}).to_netcdf())
<xarray.Dataset>
Dimensions:           ()
Data variables:
    name with spaces  int32 1
```

This happens because the CF convention is to indicate coordinates as a space separated string, e.g., coordinates='latitude longitude'.

Even though these aren't CF-compliant variable names (which cannot contain spaces), it would be nice to have an ad-hoc convention for xarray that allows us to serialize/deserialize coordinates in all/most cases. Maybe we could use escape characters for spaces (e.g., coordinates='name\ with\ spaces') or quote names if they have spaces (e.g., coordinates='"name with spaces"')?

At the very least, we should issue a warning in these cases.
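
For concreteness, here is a minimal sketch of the quoting idea, using a hypothetical pair of helpers (not part of xarray) for writing and parsing the CF coordinates attribute:

```python
import re

def encode_coordinates_attr(names):
    # Hypothetical helper: quote any coordinate name containing a space,
    # e.g. 'name with spaces' -> '"name with spaces"'.
    return " ".join(f'"{n}"' if " " in n else n for n in names)

def decode_coordinates_attr(value):
    # Hypothetical inverse: split on spaces, but keep quoted names intact.
    tokens = re.finditer(r'"([^"]*)"|(\S+)', value)
    return [m.group(1) if m.group(1) is not None else m.group(2) for m in tokens]

encode_coordinates_attr(["latitude", "name with spaces"])
# 'latitude "name with spaces"'
decode_coordinates_attr('latitude "name with spaces"')
# ['latitude', 'name with spaces']
```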

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1689/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
267542085 MDU6SXNzdWUyNjc1NDIwODU= 1647 Representing missing values in string arrays on disk shoyer 1217238 closed 0     3 2017-10-23T05:01:10Z 2024-02-06T13:03:40Z 2024-02-06T13:03:40Z MEMBER      

This came up as part of my clean-up of serializing unicode strings in https://github.com/pydata/xarray/pull/1648.

There are two ways to represent strings in netCDF files.

  • As character arrays (NC_CHAR), supported by both netCDF3 and netCDF4
  • As variable length unicode strings (NC_STRING), only supported by netCDF4/HDF5.

Currently, by default (if no _FillValue is set) we replace missing values (NaN) with an empty string when writing data to disk.

For character arrays, we could use the normal _FillValue mechanism to set a fill value and decode when data is read back from disk. In fact, this already currently works for dtype=bytes (though it isn't documented):

```
In [10]: ds = xr.Dataset({'foo': ('x', np.array([b'bar', np.nan], dtype=object), {}, {'_FillValue': b''})})

In [11]: ds
Out[11]:
<xarray.Dataset>
Dimensions:  (x: 2)
Dimensions without coordinates: x
Data variables:
    foo      (x) object b'bar' nan

In [12]: ds.to_netcdf('foobar.nc')

In [13]: xr.open_dataset('foobar.nc').load()
Out[13]:
<xarray.Dataset>
Dimensions:  (x: 2)
Dimensions without coordinates: x
Data variables:
    foo      (x) object b'bar' nan
```

For variable length strings, it currently isn't possible to set a fill value, so there's no good way to indicate missing values, though this may change in the future depending on the resolution of the netCDF4-python issue.

It would obviously be nice to always automatically round-trip missing values, both for strings and bytes. I see two possible ways to do this:

  1. Require setting an explicit _FillValue when a string contains missing values, by raising an error if this isn't done. We need an explicit choice because there aren't any extra unused characters left over, at least for character arrays. (NetCDF explicitly allows arbitrary bytes to be stored in NC_CHAR, even though this maps to an HDF5 fixed-width string with ASCII encoding.) For variable length strings, we could potentially set a non-character unicode symbol like U+FFFF, but again that isn't supported yet.

  2. Treat empty strings as equivalent to a missing value (NaN). This has the advantage of not requiring an explicit choice of _FillValue, so we don't need to wait for any netCDF4 issues to be resolved. However, this does mean that empty strings would not round-trip. Still, given the relative prevalence of missing values vs. empty strings in xarray/pandas, it's probably the lesser evil to not preserve empty strings.

The default option is to adopt neither of these, and keep the current behavior where missing values are written as empty strings and not decoded at all.

Any opinions? I am leaning towards option (2).
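
As a concrete sketch of option (2), decoding would map empty strings back to NaN; the helper name here is illustrative, not an actual xarray function:

```python
import numpy as np

def decode_empty_strings_as_nan(values):
    # Hypothetical decode step for option (2): treat empty strings/bytes as
    # missing values, promoting to object dtype to hold the NaN sentinel.
    arr = np.asarray(values, dtype=object)
    mask = np.array([v in (b"", "") for v in arr.ravel()]).reshape(arr.shape)
    out = arr.copy()
    out[mask] = np.nan
    return out

decode_empty_strings_as_nan(np.array([b"bar", b""], dtype=object))
# array([b'bar', nan], dtype=object)
```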

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1647/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
842436143 MDU6SXNzdWU4NDI0MzYxNDM= 5081 Lazy indexing arrays as a stand-alone package shoyer 1217238 open 0     6 2021-03-27T07:06:03Z 2023-12-15T13:20:03Z   MEMBER      

From @rabernat on Twitter:

"Xarray has some secret private classes for lazily indexing / wrapping arrays that are so useful I think they should be broken out into a standalone package. https://github.com/pydata/xarray/blob/master/xarray/core/indexing.py#L516"

The idea here is to create a first-class "duck array" library for lazy indexing that could replace xarray's internal classes for lazy indexing. This would be in some ways similar to dask.array, but much simpler, because it doesn't have to worry about parallel computing.

Desired features:

  • Lazy indexing
  • Lazy transposes
  • Lazy concatenation (#4628) and stacking
  • Lazy vectorized operations (e.g., unary and binary arithmetic)
    • needed for decoding variables from disk (xarray.encoding) and
    • building lazy multi-dimensional coordinate arrays corresponding to map projections (#3620)
  • Maybe: lazy reshapes (#4113)

A common feature of these operations is that they can (and almost always should) be fused with indexing: if N elements are selected via indexing, only O(N) compute and memory is required to produce them, regardless of the size of the original arrays, as long as the number of applied operations can be treated as a constant. Memory access is significantly slower than compute on modern hardware, so recomputing these operations on the fly is almost always a good idea.
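
A minimal sketch of such a duck array (class name assumed, not xarray's internal API), fusing one element-wise function with indexing:

```python
import numpy as np

class LazyElementwiseArray:
    """Sketch: apply `func` lazily, only to the elements actually indexed."""

    def __init__(self, array, func):
        self.array = array  # any array supporting __getitem__
        self.func = func    # cheap element-wise function, e.g. scale/offset decoding
        self.shape = array.shape

    def __getitem__(self, key):
        # Fuse with indexing: select first, then compute only O(N) elements.
        return self.func(self.array[key])

big = np.arange(1_000_000)
decoded = LazyElementwiseArray(big, lambda x: 0.5 * x + 10.0)
decoded[42]        # computes a single element: 31.0
decoded[100:103]   # computes only three elements
```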

Out of scope: lazy computation when indexing could require access to many more elements to compute the desired value than are returned. For example, mean() probably should not be lazy, because that could involve computation of a very large number of elements that one might want to cache.

This is valuable functionality for Xarray for two reasons:

  1. It allows for "previewing" small bits of data loaded from disk or remote storage, even if that data needs some form of cheap "decoding" from its form on disk.
  2. It allows for xarray to decode data in a lazy fashion that is compatible with full-featured systems for lazy computation (e.g., Dask), without requiring the user to choose dask when reading the data.

Related issues:

  • [Proposal] Expose Variable without Pandas dependency #3981
  • Lazy concatenation of arrays #4628
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5081/reactions",
    "total_count": 6,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 6,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
197939448 MDU6SXNzdWUxOTc5Mzk0NDg= 1189 Document using a spawning multiprocessing pool for multiprocessing with dask shoyer 1217238 closed 0     3 2016-12-29T01:21:50Z 2023-12-05T21:51:04Z 2023-12-05T21:51:04Z MEMBER      

This is a nice option for working with in-file HDF5/netCDF4 compression: https://github.com/pydata/xarray/pull/1128#issuecomment-261936849

Mixed multi-threading/multi-processing could also be interesting, if anyone wants to revive that: https://github.com/dask/dask/pull/457 (I think it would work now that xarray data stores are pickle-able)
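
For reference, a minimal sketch of the approach to document, using present-day dask config options (the file name is hypothetical):

```python
import dask
import xarray as xr

# Use dask's multiprocessing scheduler with spawned (not forked) workers,
# which avoids fork-safety issues with in-file HDF5/netCDF4 compression.
dask.config.set(scheduler="processes")
dask.config.set({"multiprocessing.context": "spawn"})

ds = xr.open_dataset("example.nc", chunks={"time": 100})  # hypothetical file
result = ds.mean().compute()
```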

CC @mrocklin

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1189/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
430188626 MDU6SXNzdWU0MzAxODg2MjY= 2873 Dask distributed tests fail locally shoyer 1217238 closed 0     3 2019-04-07T20:26:53Z 2023-12-05T21:43:02Z 2023-12-05T21:43:02Z MEMBER      

I'm not sure why, but when I run the integration tests with dask-distributed locally (on my MacBook Pro), they fail:

```
$ pytest xarray/tests/test_distributed.py --maxfail 1
================================================ test session starts =================================================
platform darwin -- Python 3.7.2, pytest-4.0.1, py-1.7.0, pluggy-0.8.0
rootdir: /Users/shoyer/dev/xarray, inifile: setup.cfg
plugins: repeat-0.7.0
collected 19 items

xarray/tests/test_distributed.py F

====================================================== FAILURES ====================================================== ____ test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF3_CLASSIC] _______

loop = <tornado.platform.asyncio.AsyncIOLoop object at 0x1c182da1d0> tmp_netcdf_filename = '/private/var/folders/15/qdcz0wqj1t9dg40m_ld0fjkh00b4kd/T/pytest-of-shoyer/pytest-3/test_dask_distributed_netcdf_r0/testfile.nc' engine = 'netcdf4', nc_format = 'NETCDF3_CLASSIC'

@pytest.mark.parametrize('engine,nc_format', ENGINES_AND_FORMATS)  # noqa
def test_dask_distributed_netcdf_roundtrip(
        loop, tmp_netcdf_filename, engine, nc_format):

    if engine not in ENGINES:
        pytest.skip('engine not available')

    chunks = {'dim1': 4, 'dim2': 3, 'dim3': 6}

    with cluster() as (s, [a, b]):
        with Client(s['address'], loop=loop):

            original = create_test_data().chunk(chunks)

            if engine == 'scipy':
                with pytest.raises(NotImplementedError):
                    original.to_netcdf(tmp_netcdf_filename,
                                       engine=engine, format=nc_format)
                return

            original.to_netcdf(tmp_netcdf_filename,
                               engine=engine, format=nc_format)

            with xr.open_dataset(tmp_netcdf_filename,
                                 chunks=chunks, engine=engine) as restored:
                assert isinstance(restored.var1.data, da.Array)
                computed = restored.compute()
              assert_allclose(original, computed)

xarray/tests/test_distributed.py:87:


../../miniconda3/envs/xarray-py37/lib/python3.7/contextlib.py:119: in __exit__
    next(self.gen)


nworkers = 2, nanny = False, worker_kwargs = {}, active_rpc_timeout = 1, scheduler_kwargs = {}

@contextmanager
def cluster(nworkers=2, nanny=False, worker_kwargs={}, active_rpc_timeout=1,
            scheduler_kwargs={}):
    ...  # trimmed
    start = time()
    while list(ws):
        sleep(0.01)
      assert time() < start + 1, 'Workers still around after one second'

E AssertionError: Workers still around after one second

../../miniconda3/envs/xarray-py37/lib/python3.7/site-packages/distributed/utils_test.py:721: AssertionError
------------------------------------------------ Captured stderr call ------------------------------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:51715
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:51718
distributed.worker - INFO - Listening to: tcp://127.0.0.1:51718
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:51715
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 17.18 GB
distributed.worker - INFO - Local Directory: /Users/shoyer/dev/xarray/_test_worker-5cabd1b7-4d9c-49eb-a79e-205c588f5dae/worker-n8uv72yx
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:51720
distributed.worker - INFO - Listening to: tcp://127.0.0.1:51720
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:51715
distributed.scheduler - INFO - Register tcp://127.0.0.1:51718
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 17.18 GB
distributed.worker - INFO - Local Directory: /Users/shoyer/dev/xarray/_test_worker-71a426d4-bd34-4808-9d33-79cac2bb4801/worker-a70rlf4r
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:51718
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:51715
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://127.0.0.1:51720
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:51720
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:51715
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.core - INFO - Starting established connection
distributed.core - INFO - Event loop was unresponsive in Worker for 1.05s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
distributed.scheduler - INFO - Receive client connection: Client-worker-5a5c81de-5972-11e9-9136-8c85907bce57
distributed.core - INFO - Starting established connection
distributed.core - INFO - Event loop was unresponsive in Worker for 1.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
distributed.scheduler - INFO - Receive client connection: Client-worker-5b2496d8-5972-11e9-9137-8c85907bce57
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.scheduler - INFO - Remove client Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.scheduler - INFO - Close client connection: Client-59a7918c-5972-11e9-912a-8c85907bce57
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:51720
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:51718
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:51720
distributed.core - INFO - Removing comms to tcp://127.0.0.1:51720
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:51718
distributed.core - INFO - Removing comms to tcp://127.0.0.1:51718
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Remove client Client-worker-5b2496d8-5972-11e9-9137-8c85907bce57
distributed.scheduler - INFO - Remove client Client-worker-5a5c81de-5972-11e9-9136-8c85907bce57
distributed.scheduler - INFO - Close client connection: Client-worker-5b2496d8-5972-11e9-9137-8c85907bce57
distributed.scheduler - INFO - Close client connection: Client-worker-5a5c81de-5972-11e9-9136-8c85907bce57
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
```

Version info:

```
In [2]: xarray.show_versions()

INSTALLED VERSIONS
------------------
commit: 2ce0639ee2ba9c7b1503356965f77d847d6cfcdf
python: 3.7.2 (default, Dec 29 2018, 00:00:04) [Clang 4.0.1 (tags/RELEASE_401/final)]
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2

xarray: 0.12.1+4.g2ce0639e
pandas: 0.24.0
numpy: 1.15.4
scipy: 1.1.0
netCDF4: 1.4.3.2
pydap: None
h5netcdf: 0.7.0
h5py: 2.9.0
Nio: None
zarr: 2.2.0
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.1.5
distributed: 1.26.1
matplotlib: 3.0.2
cartopy: 0.17.0
seaborn: 0.9.0
setuptools: 40.0.0
pip: 18.0
conda: None
pytest: 4.0.1
IPython: 6.5.0
sphinx: 1.8.2
```

@mrocklin does this sort of error look familiar to you?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2873/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  not_planned xarray 13221727 issue
588105641 MDU6SXNzdWU1ODgxMDU2NDE= 3893 HTML repr in the online docs shoyer 1217238 open 0     3 2020-03-26T02:17:51Z 2023-09-11T17:41:59Z   MEMBER      

I noticed two minor issues in our online docs, now that we've switched to the hip new HTML repr by default.

  1. Most doc pages still show text, not HTML. I suspect this is a limitation of the IPython Sphinx directive we use for our snippets. We might be able to fix that by switching to jupyter-sphinx?

  2. The "attributes" part of the HTML repr in our notebook examples looks a little funny, with strange blue formatting around each attribute name. It looks like part of the outer style of our docs is leaking into the HTML repr (see the screenshot in the original issue).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3893/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1376109308 I_kwDOAMm_X85SBcL8 7045 Should Xarray stop doing automatic index-based alignment? shoyer 1217238 open 0     13 2022-09-16T15:31:03Z 2023-08-23T07:42:34Z   MEMBER      

What is your issue?

I am increasingly thinking that automatic index-based alignment in Xarray (copied from pandas) may have been a design mistake. Almost every time I work with datasets with different indexes, I find myself writing code to explicitly align them:

  1. Automatic alignment is hard to predict. The implementation is complicated, and the exact mode of automatic alignment (outer vs inner vs left join) depends on the specific operation. It's also no longer possible to predict the shape (or even the dtype) resulting from most Xarray operations purely from input shape/dtype.
  2. Automatic alignment brings an unexpected performance penalty. In some domains (analytics) this is OK, but in others (e.g., numerical modeling or deep learning) it is a complete deal-breaker.
  3. Automatic alignment is not useful for float indexes, because exact matches are rare. In practice, this makes it less useful in Xarray's usual domains than it is for pandas.
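
To make the first point concrete, here is a small example of the behavior in question:

```python
import xarray as xr

a = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray([10, 20, 30], dims="x", coords={"x": [1, 2, 3]})

# Arithmetic silently takes the inner join of the indexes: the result's
# shape depends on the index *values*, not just the input shapes.
(a + b).shape       # (2,), not (3,)
(a + b).x.values    # array([1, 2])
```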

Would it be insane to consider changing Xarray's behavior to stop doing automatic alignment? I imagine we could roll this out slowly, first with warnings and then with an option for disabling it.

If you think this is a good or bad idea, consider responding to this issue with a 👍 or 👎 reaction.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7045/reactions",
    "total_count": 13,
    "+1": 9,
    "-1": 2,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 2
}
    xarray 13221727 issue
479942077 MDU6SXNzdWU0Nzk5NDIwNzc= 3213 How should xarray use/support sparse arrays? shoyer 1217238 open 0     55 2019-08-13T03:29:42Z 2023-06-07T15:43:55Z   MEMBER      

I'm looking forward to being easily able to create sparse xarray objects from pandas: https://github.com/pydata/xarray/issues/3206

Are there other xarray APIs that could make good use of sparse arrays, or could make sparse arrays easier to use?

Some ideas:

  • to_sparse()/to_dense() methods for converting to/from sparse without requiring using .data
  • to_dataframe()/to_series() could grow options for skipping the fill-value in sparse arrays, so they can round-trip MultiIndex data back to pandas
  • Serialization to/from netCDF files, using some custom convention (see https://github.com/pydata/xarray/issues/1375#issuecomment-402699810)
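
For context, here is the kind of manual workflow that to_sparse()/to_dense() would streamline, assuming the sparse package is installed:

```python
import numpy as np
import sparse
import xarray as xr

dense = np.eye(4)
arr = xr.DataArray(sparse.COO.from_numpy(dense), dims=("x", "y"))
type(arr.data)  # a sparse COO duck array, reachable only via .data today
back = xr.DataArray(arr.data.todense(), dims=arr.dims)  # manual "to_dense"
```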

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3213/reactions",
    "total_count": 14,
    "+1": 14,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1465287257 I_kwDOAMm_X85XVoJZ 7325 Support reading Zarr data via TensorStore shoyer 1217238 open 0     1 2022-11-27T00:12:17Z 2023-05-11T01:24:27Z   MEMBER      

What is your issue?

TensorStore is another high performance API for reading distributed arrays in formats such as Zarr, written in C++.

It could be interesting to write an Xarray storage backend using TensorStore as an alternative way to read Zarr files.

As an exercise, I made a little demo of doing this: https://gist.github.com/shoyer/5b0c485979cc9c36a9685d8cf8e94565

I have not tested it for performance. The main annoyance is that TensorStore doesn't understand Zarr groups or Zarr array attributes, so I needed to write my own helpers for reading this metadata.

Also, there's a bit of an impedance mis-match between TensorStore (where everything returns futures) and Xarray (which assumes that indexing results in numpy arrays). This could likely be improved with some amount of effort -- in particular https://github.com/pydata/xarray/pull/6874/files should help.
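
A small sketch of that futures mismatch (the Zarr path is hypothetical): every TensorStore operation returns a future that a backend would need to resolve eagerly for xarray:

```python
import tensorstore as ts

store = ts.open({
    "driver": "zarr",
    "kvstore": {"driver": "file", "path": "example.zarr/temperature"},
}).result()                              # opening is itself asynchronous

chunk = store[:10, :10].read().result()  # block until a numpy array is ready
```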

CC @jbms who may have better ideas about how to use the TensorStore API.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7325/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
253395960 MDU6SXNzdWUyNTMzOTU5NjA= 1533 Index variables loaded from dask can be computed twice shoyer 1217238 closed 0     6 2017-08-28T17:18:27Z 2023-04-06T04:15:46Z 2023-04-06T04:15:46Z MEMBER      

as reported by @crusaderky in #1522

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1533/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
209653741 MDU6SXNzdWUyMDk2NTM3NDE= 1285 FAQ page could use some updating shoyer 1217238 open 0     1 2017-02-23T03:29:16Z 2023-03-26T16:32:44Z   MEMBER      

Along the same lines as https://github.com/pydata/xarray/issues/1282, we haven't done much updating for frequently asked questions -- it's mostly still the original handful of FAQ entries I wrote in the first version of the docs.

Topics worth addressing:

  • [ ] How xarray handles missing values
  • [x] File formats -- how can I read format X in xarray? (Maybe we should make a table with links to other packages?)

(please add suggestions for this list!)

StackOverflow may be a helpful reference here: http://stackoverflow.com/questions/tagged/python-xarray?sort=votes&pageSize=50

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1285/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
176805500 MDU6SXNzdWUxNzY4MDU1MDA= 1004 Remove IndexVariable.name shoyer 1217238 open 0     3 2016-09-14T03:27:43Z 2023-03-11T19:57:40Z   MEMBER      

As discussed in #947, we should remove the IndexVariable.name attribute. It should be fine to use an IndexVariable anywhere, regardless of whether or not it labels ticks along a dimension.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1004/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
98587746 MDU6SXNzdWU5ODU4Nzc0Ng== 508 Ignore missing variables when concatenating datasets? shoyer 1217238 closed 0     8 2015-08-02T06:03:57Z 2023-01-20T16:04:28Z 2023-01-20T16:04:28Z MEMBER      

Several users (@raj-kesavan, @richardotis, now myself) have wondered about how to concatenate xray Datasets with different variables.

With the current xray.concat, you need to awkwardly create dummy variables filled with NaN in datasets that don't have them (or drop mismatched variables entirely). Neither of these are great options -- concat should have an option (the default?) to take care of this for the user.

This would also be more consistent with pd.concat, which takes a more relaxed approach to matching dataframes with different variables (it does an outer join).
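
For reference, the awkward workaround described above looks something like this (dataset and variable names illustrative):

```python
import numpy as np
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1.0, 2.0]), "b": ("x", [3.0, 4.0])})
ds2 = xr.Dataset({"a": ("x", [5.0, 6.0])})          # missing "b"

# Create a dummy variable filled with NaN so concat() lines up.
ds2["b"] = ("x", np.full(ds2.sizes["x"], np.nan))
combined = xr.concat([ds1, ds2], dim="x")
```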

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/508/reactions",
    "total_count": 6,
    "+1": 6,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
623804131 MDU6SXNzdWU2MjM4MDQxMzE= 4090 Error with indexing 2D lat/lon coordinates shoyer 1217238 closed 0     2 2020-05-24T06:19:45Z 2022-09-28T12:06:03Z 2022-09-28T12:06:03Z MEMBER      

```python
filslp = "ChonghuaYinData/prmsl.mon.mean.nc"
filtmp = "ChonghuaYinData/air.sig995.mon.mean.nc"
filprc = "ChonghuaYinData/precip.mon.mean.nc"

ds_slp = xr.open_dataset(filslp).sel(time=slice(str(yrStrt)+'-01-01', str(yrLast)+'-12-31'))
```

ds_slp outputs:

```
<xarray.Dataset>
Dimensions:            (nbnds: 2, time: 480, x: 349, y: 277)
Coordinates:
  * time               (time) datetime64[ns] 1979-01-01 ... 2018-12-01
    lat                (y, x) float32 ...
    lon                (y, x) float32 ...
  * y                  (y) float32 0.0 32463.0 64926.0 ... 8927325.0 8959788.0
  * x                  (x) float32 0.0 32463.0 64926.0 ... 11264660.0 11297120.0
Dimensions without coordinates: nbnds
Data variables:
    Lambert_Conformal  int32 ...
    prmsl              (time, y, x) float32 ...
    time_bnds          (time, nbnds) float64 ...
Attributes:
    Conventions:    CF-1.2
    centerlat:      50.0
    centerlon:      -107.0
    comments:
    institution:    National Centers for Environmental Prediction
    latcorners:     [ 1.000001 0.897945 46.3544 46.63433 ]
    loncorners:     [-145.5 -68.32005 -2.569891 148.6418 ]
    platform:       Model
    standardpar1:   50.0
    standardpar2:   50.000001
    title:          NARR Monthly Means
    dataset_title:  NCEP North American Regional Reanalysis (NARR)
    history:        created 2016/04/12 by NOAA/ESRL/PSD
    references:     https://www.esrl.noaa.gov/psd/data/gridded/data.narr.html
    source:         http://www.emc.ncep.noaa.gov/mmb/rreanl/index.html
    References:
```

```python
yrStrt = 1950     # manually specify for convenience
yrLast = 2018     # 20th century ends 2018

clStrt = 1950     # reference climatology for SOI
clLast = 1979

yrStrtP = 1979    # 1st year GPCP
yrLastP = yrLast  # match 20th century

latT = -17.6      # Tahiti
lonT = 210.75
latD = -12.5      # Darwin
lonD = 130.83

# select grids of T and D
T = ds_slp.sel(lat=latT, lon=lonT, method='nearest')
D = ds_slp.sel(lat=latD, lon=lonD, method='nearest')
```

outputs:

```
ValueError                                Traceback (most recent call last)
<ipython-input-27-6702b30f473f> in <module>
      1 # select grids of T and D
----> 2 T = ds_slp.sel(lat=latT, lon=lonT, method='nearest')
      3 D = ds_slp.sel(lat=latD, lon=lonD, method='nearest')

~\Anaconda3\lib\site-packages\xarray\core\dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2004         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
   2005         pos_indexers, new_indexes = remap_label_indexers(
-> 2006             self, indexers=indexers, method=method, tolerance=tolerance
   2007         )
   2008         result = self.isel(indexers=pos_indexers, drop=drop)

~\Anaconda3\lib\site-packages\xarray\core\coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    378
    379     pos_indexers, new_indexes = indexing.remap_label_indexers(
--> 380         obj, v_indexers, method=method, tolerance=tolerance
    381     )
    382     # attach indexer's coordinate to pos_indexers

~\Anaconda3\lib\site-packages\xarray\core\indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    257     new_indexes = {}
    258
--> 259     dim_indexers = get_dim_indexers(data_obj, indexers)
    260     for dim, label in dim_indexers.items():
    261         try:

~\Anaconda3\lib\site-packages\xarray\core\indexing.py in get_dim_indexers(data_obj, indexers)
    223     ]
    224     if invalid:
--> 225         raise ValueError("dimensions or multi-index levels %r do not exist" % invalid)
    226
    227     level_indexers = defaultdict(dict)

ValueError: dimensions or multi-index levels ['lat', 'lon'] do not exist
```

Does anyone know how to fix this problem? Thank you very much.

Originally posted by @JimmyGao0204 in https://github.com/pydata/xarray/issues/475#issuecomment-633172787
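
The root cause is that lat and lon here are 2-D coordinates without indexes, so .sel() cannot use them. A sketch of one common workaround, finding the nearest grid point by hand:

```python
import numpy as np

# lat/lon are (y, x) arrays, so locate the nearest grid point explicitly
# and index by the actual dimensions y and x. (Longitudes may first need
# wrapping to a common 0-360 or -180..180 convention.)
dist2 = (ds_slp.lat - latT) ** 2 + (ds_slp.lon - lonT) ** 2
iy, ix = np.unravel_index(int(dist2.argmin()), dist2.shape)
T = ds_slp.isel(y=int(iy), x=int(ix))
```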

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4090/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1210147360 I_kwDOAMm_X85IIWIg 6504 test_weighted.test_weighted_operations_nonequal_coords should avoid depending on random number seed shoyer 1217238 closed 0 shoyer 1217238   0 2022-04-20T19:56:19Z 2022-08-29T20:42:30Z 2022-08-29T20:42:30Z MEMBER      

What happened?

In testing an upgrade to the latest version of xarray in our systems, I noticed this test failing:

```
def test_weighted_operations_nonequal_coords():
    # There are no weights for a == 4, so that data point is ignored.
    weights = DataArray(np.random.randn(4), dims=("a",), coords=dict(a=[0, 1, 2, 3]))
    data = DataArray(np.random.randn(4), dims=("a",), coords=dict(a=[1, 2, 3, 4]))
    check_weighted_operations(data, weights, dim="a", skipna=None)

    q = 0.5
    result = data.weighted(weights).quantile(q, dim="a")
    # Expected value computed using code from https://aakinshin.net/posts/weighted-quantiles/ with values at a=1,2,3
    expected = DataArray([0.9308707], coords={"quantile": [q]}).squeeze()
  assert_allclose(result, expected)

E       AssertionError: Left and right DataArray objects are not close
E
E       Differing values:
E       L
E         array(0.919569)
E       R
E         array(0.930871)
```

It appears that this test is hard-coded to match a particular random number seed, which in turn would fix the results of np.random.randn().

What did you expect to happen?

Whenever possible, Xarray's own tests should avoid relying on particular random number generators; e.g., in this case we could hard-code the values instead of generating them randomly.

A back-up option would be to explicitly set the random seed locally inside the tests, e.g., by creating a np.random.RandomState() with a fixed seed and using that. The global random state used by np.random.randn() is sensitive to implementation details like the order in which tests are run.
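
A minimal sketch of that back-up option:

```python
import numpy as np
from xarray import DataArray

# Local, seeded generator: independent of global state and test order.
rng = np.random.RandomState(0)
weights = DataArray(rng.randn(4), dims=("a",), coords=dict(a=[0, 1, 2, 3]))
data = DataArray(rng.randn(4), dims=("a",), coords=dict(a=[1, 2, 3, 4]))
```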

Minimal Complete Verifiable Example

No response

Relevant log output

No response

Anything else we need to know?

No response

Environment

...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6504/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1210267320 I_kwDOAMm_X85IIza4 6505 Dropping a MultiIndex variable raises an error after explicit indexes refactor shoyer 1217238 closed 0     3 2022-04-20T22:07:26Z 2022-07-21T14:46:58Z 2022-07-21T14:46:58Z MEMBER      

What happened?

With the latest released version of Xarray, it is possible to delete all variables corresponding to a MultiIndex by simply deleting the name of the MultiIndex.

After the explicit indexes refactor (i.e., using the "main" development branch) this now raises an error about how this would "corrupt" index state. This comes up when using drop() and assign_coords() and possibly some other methods.

This is not hard to work around, but we may want to consider this bug a blocker for the next Xarray release. I found the issue surfaced in several projects when attempting to use the new version of Xarray inside Google's codebase.

CC @benbovy in case you have any thoughts to share.

What did you expect to happen?

For now, we should preserve the behavior of deleting the variables corresponding to MultiIndex levels, but should issue a deprecation warning encouraging users to explicitly delete everything.

Minimal Complete Verifiable Example

```python
import xarray

array = xarray.DataArray(
    [[1, 2], [3, 4]],
    dims=['x', 'y'],
    coords={'x': ['a', 'b']},
)
stacked = array.stack(z=['x', 'y'])
print(stacked.drop('z'))
print()
print(stacked.assign_coords(z=[1, 2, 3, 4]))
```

Relevant log output

```python
ValueError                                Traceback (most recent call last)
Input In [1], in <cell line: 9>()
      3 array = xarray.DataArray(
      4     [[1, 2], [3, 4]],
      5     dims=['x', 'y'],
      6     coords={'x': ['a', 'b']},
      7 )
      8 stacked = array.stack(z=['x', 'y'])
----> 9 print(stacked.drop('z'))
     10 print()
     11 print(stacked.assign_coords(z=[1, 2, 3, 4]))

File ~/dev/xarray/xarray/core/dataarray.py:2425, in DataArray.drop(self, labels, dim, errors, **labels_kwargs)
   2408 def drop(
   2409     self,
   2410     labels: Mapping = None,
   (...)
   2414     **labels_kwargs,
   2415 ) -> DataArray:
   2416     """Backward compatible method based on drop_vars and drop_sel
   2417
   2418     Using either drop_vars or drop_sel is encouraged
   (...)
   2423     DataArray.drop_sel
   2424     """
-> 2425     ds = self._to_temp_dataset().drop(labels, dim, errors=errors)
   2426     return self._from_temp_dataset(ds)

File ~/dev/xarray/xarray/core/dataset.py:4590, in Dataset.drop(self, labels, dim, errors, **labels_kwargs)
   4584 if dim is None and (is_scalar(labels) or isinstance(labels, Iterable)):
   4585     warnings.warn(
   4586         "dropping variables using drop will be deprecated; using drop_vars is encouraged.",
   4587         PendingDeprecationWarning,
   4588         stacklevel=2,
   4589     )
-> 4590     return self.drop_vars(labels, errors=errors)
   4591 if dim is not None:
   4592     warnings.warn(
   4593         "dropping labels using list-like labels is deprecated; using "
   4594         "dict-like arguments with drop_sel, e.g. `ds.drop_sel(dim=[labels]).",
   4595         DeprecationWarning,
   4596         stacklevel=2,
   4597     )

File ~/dev/xarray/xarray/core/dataset.py:4549, in Dataset.drop_vars(self, names, errors)
   4546 if errors == "raise":
   4547     self._assert_all_in_dataset(names)
-> 4549 assert_no_index_corrupted(self.xindexes, names)
   4551 variables = {k: v for k, v in self._variables.items() if k not in names}
   4552 coord_names = {k for k in self._coord_names if k in variables}

File ~/dev/xarray/xarray/core/indexes.py:1394, in assert_no_index_corrupted(indexes, coord_names)
   1392 common_names_str = ", ".join(f"{k!r}" for k in common_names)
   1393 index_names_str = ", ".join(f"{k!r}" for k in index_coords)
-> 1394 raise ValueError(
   1395     f"cannot remove coordinate(s) {common_names_str}, which would corrupt "
   1396     f"the following index built from coordinates {index_names_str}:\n"
   1397     f"{index}"
   1398 )

ValueError: cannot remove coordinate(s) 'z', which would corrupt the following index built from coordinates 'z', 'x', 'y':
<xarray.core.indexes.PandasMultiIndex object at 0x148c95150>
```

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: 33cdabd261b5725ac357c2823bd0f33684d3a954
python: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:42:03) [Clang 12.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 21.4.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 0.18.3.dev137+g96c56836
pandas: 1.4.2
numpy: 1.22.3
scipy: 1.8.0
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.11.3
cftime: 1.6.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2022.04.1
distributed: 2022.4.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.3.0
cupy: None
pint: None
sparse: None
setuptools: 62.1.0
pip: 22.0.4
conda: None
pytest: 7.1.1
IPython: 8.2.0
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6505/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
711626733 MDU6SXNzdWU3MTE2MjY3MzM= 4473 Wrap numpy-groupies to speed up Xarray's groupby aggregations shoyer 1217238 closed 0     8 2020-09-30T04:43:04Z 2022-05-15T02:38:29Z 2022-05-15T02:38:29Z MEMBER      

Is your feature request related to a problem? Please describe.

Xarray's groupby aggregations (e.g., groupby(..).sum()) are very slow compared to pandas, as described in https://github.com/pydata/xarray/issues/659.

Describe the solution you'd like

We could speed things up considerably (easily 100x) by wrapping the numpy-groupies package.

Additional context

One challenge is how to handle dask arrays (and other duck arrays). In some cases it might make sense to apply the numpy-groupies function (using apply_ufunc), but in other cases it might be better to stick with the current indexing + concatenate solution. We could either pick some simple heuristics for choosing the algorithm to use on dask arrays, or could just stick with the current algorithm for now.

In particular, it might make sense to stick with the current algorithm if there are many chunks in the arrays to be aggregated along the "grouped" dimension (depending on the size of the unique group values).
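
For a sense of the core operation being wrapped, numpy-groupies aggregates a flat array of integer group codes:

```python
import numpy as np
import numpy_groupies as npg

values = np.array([1.0, 2.0, 3.0, 4.0])
group_idx = np.array([0, 1, 0, 1])            # integer group code per element
npg.aggregate(group_idx, values, func="sum")  # array([4., 6.])
```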

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4473/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
326205036 MDU6SXNzdWUzMjYyMDUwMzY= 2180 How should Dataset.update() handle conflicting coordinates? shoyer 1217238 open 0     16 2018-05-24T16:46:23Z 2022-04-30T13:40:28Z   MEMBER      

Recently, we updated Dataset.__setitem__ to drop conflicting coordinates from DataArray values being assigned if they conflict with existing coordinates (https://github.com/pydata/xarray/pull/2087). Because update and __setitem__ share the same code path, this inadvertently updated update as well. Is this something we want?

In v0.10.3, both __setitem__ and update prioritize coordinates from the assigned objects (e.g., value in dataset[key] = value).

In v0.10.4, both __setitem__ and update prioritize coordinates from the original object (e.g., dataset).

I'm not sure this is the right behavior. In particular, in the case of dataset.update(other) where other is also an xarray.Dataset, it seems like coordinates from other should take priority.

Note that one advantage of the current logic (which is violated by my current fix in https://github.com/pydata/xarray/pull/2162), is that we maintain the invariant that dataset[key] = value is equivalent to dataset.update({key: value}).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2180/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
612918997 MDU6SXNzdWU2MTI5MTg5OTc= 4034 Fix tight_layout warning on cartopy facetgrid docs example shoyer 1217238 open 0     1 2020-05-05T21:54:46Z 2022-04-30T12:37:50Z   MEMBER      

Per the fix in https://github.com/pydata/xarray/pull/4032, I'm pretty sure we will soon start seeing a warning message printed on ReadTheDocs in Cartopy FacetGrid example: http://xarray.pydata.org/en/stable/plotting.html#maps

This would be nice to fix for users, especially because it's likely users will see this warning when running code outside of our documentation, too.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4034/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
621123222 MDU6SXNzdWU2MjExMjMyMjI= 4081 Wrap "Dimensions" onto multiple lines in xarray.Dataset repr? shoyer 1217238 closed 0     4 2020-05-19T16:31:59Z 2022-04-29T19:59:24Z 2022-04-29T19:59:24Z MEMBER      

Here's an example of a large dataset from @alimanfoo: https://nbviewer.jupyter.org/gist/alimanfoo/b74b08465727894538d5b161b3ced764

```
<xarray.Dataset>
Dimensions:          (__variants/BaseCounts_dim1: 4, __variants/MLEAC_dim1: 3, __variants/MLEAF_dim1: 3, alt_alleles: 3, ploidy: 2, samples: 1142, variants: 21442865)
Coordinates:
    samples/ID       (samples) object dask.array<chunksize=(1142,), meta=np.ndarray>
    variants/CHROM   (variants) object dask.array<chunksize=(21442865,), meta=np.ndarray>
    variants/POS     (variants) int32 dask.array<chunksize=(4194304,), meta=np.ndarray>
Dimensions without coordinates: __variants/BaseCounts_dim1, __variants/MLEAC_dim1, __variants/MLEAF_dim1, alt_alleles, ploidy, samples, variants
Data variables:
    variants/ABHet   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/ABHom   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/AC      (variants, alt_alleles) int32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    variants/AF      (variants, alt_alleles) float32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    ...
```

I know similarly large datasets with lots of dimensions come up in other contexts as well, e.g., with geophysical model output.

That's a very long first line! This would be easier to read as:

```
<xarray.Dataset>
Dimensions:          (__variants/BaseCounts_dim1: 4, __variants/MLEAC_dim1: 3,
                      __variants/MLEAF_dim1: 3, alt_alleles: 3, ploidy: 2,
                      samples: 1142, variants: 21442865)
Coordinates:
    samples/ID       (samples) object dask.array<chunksize=(1142,), meta=np.ndarray>
    variants/CHROM   (variants) object dask.array<chunksize=(21442865,), meta=np.ndarray>
    variants/POS     (variants) int32 dask.array<chunksize=(4194304,), meta=np.ndarray>
Dimensions without coordinates: __variants/BaseCounts_dim1, __variants/MLEAC_dim1, __variants/MLEAF_dim1, alt_alleles, ploidy, samples, variants
Data variables:
    variants/ABHet   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/ABHom   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/AC      (variants, alt_alleles) int32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    variants/AF      (variants, alt_alleles) float32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    ...
```

or maybe:

```
<xarray.Dataset>
Dimensions:
    __variants/BaseCounts_dim1: 4
    __variants/MLEAC_dim1: 3
    __variants/MLEAF_dim1: 3
    alt_alleles: 3
    ploidy: 2
    samples: 1142
    variants: 21442865
Coordinates:
    samples/ID       (samples) object dask.array<chunksize=(1142,), meta=np.ndarray>
    variants/CHROM   (variants) object dask.array<chunksize=(21442865,), meta=np.ndarray>
    variants/POS     (variants) int32 dask.array<chunksize=(4194304,), meta=np.ndarray>
Dimensions without coordinates: __variants/BaseCounts_dim1, __variants/MLEAC_dim1, __variants/MLEAF_dim1, alt_alleles, ploidy, samples, variants
Data variables:
    variants/ABHet   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/ABHom   (variants) float32 dask.array<chunksize=(4194304,), meta=np.ndarray>
    variants/AC      (variants, alt_alleles) int32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    variants/AF      (variants, alt_alleles) float32 dask.array<chunksize=(4194304, 3), meta=np.ndarray>
    ...
```

Dimensions without coordinates could probably use some wrapping, too.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4081/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
205455788 MDU6SXNzdWUyMDU0NTU3ODg= 1251 Consistent naming for xarray's methods that apply functions shoyer 1217238 closed 0     13 2017-02-05T21:27:24Z 2022-04-27T20:06:25Z 2022-04-27T20:06:25Z MEMBER      

We currently have two types of methods that take a function to apply to xarray objects:

  • pipe (on DataArray and Dataset): apply a function to this entire object (array.pipe(func) -> func(array))
  • apply (on Dataset and GroupBy): apply a function to each labeled object in this object (e.g., ds.apply(func) -> ds({k: func(v) for k, v in ds.data_vars.items()})).

And one more method that we want to add but isn't finalized yet -- currently named apply_ufunc:

  • apply a function that acts on unlabeled (i.e., numpy) arrays to each array in the object

I'd like to have three distinct names that makes it clear what these methods do and how they are different. This has come up a few times recently, e.g., https://github.com/pydata/xarray/issues/1130

One proposal: rename apply to map, and then use apply only for methods that act on unlabeled arrays. This would require a deprecation cycle, but eventually it would let us add .apply methods for handling raw arrays to both Dataset and DataArray. (We could use a separate apply method from apply_ufunc to convert dim arguments to axis and not do automatic broadcasting.)
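
For concreteness, the two existing methods side by side:

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2]), "b": ("x", [3, 4])})

ds.pipe(lambda d: d * 2)    # func(ds): acts on the whole Dataset
ds.apply(lambda v: v * 2)   # acts on each data variable separately (the
                            # method this proposal would rename to map)
```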

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1251/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
342180429 MDU6SXNzdWUzNDIxODA0Mjk= 2298 Making xarray math lazy shoyer 1217238 open 0     7 2018-07-18T05:18:53Z 2022-04-19T15:38:59Z   MEMBER      

At SciPy, I had the realization that it would be relatively straightforward to make element-wise math between xarray objects lazy. This would let us support lazy coordinate arrays, a feature that has quite a few use-cases, e.g., for both geoscience and astronomy.

The trick would be to write a lazy array class that holds an element-wise vectorized function and passes indexers on to its arguments. I haven't thought too hard about this yet for vectorized indexing, but it could be quite efficient for outer indexing. I have some prototype code but no tests yet.

The question is how to hook this into xarray operations. In particular, supposing that the inputs to a function do not hold dask arrays:

  • Should we try to make every element-wise operation with vectorized functions (ufuncs) lazy by default? This might have negative performance implications and would be a little tricky to implement with xarray's current code, since we still implement binary operations like + with separate logic from apply_ufunc.
  • Should we make every element-wise operation that explicitly uses apply_ufunc() lazy by default?
  • Or should we only make element-wise operations lazy with apply_ufunc() if you use some special flag, e.g., apply_ufunc(..., lazy=True)?

I am leaning towards the last option for now but would welcome other opinions.
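
A minimal sketch of the trick described above (class name assumed): hold an element-wise function and forward indexers to its arguments instead of evaluating eagerly:

```python
import numpy as np

class LazyBinaryOp:
    """Sketch: defer an element-wise binary op, indexing operands on demand."""

    def __init__(self, func, left, right):
        # For simplicity this assumes equal shapes (no broadcasting).
        assert left.shape == right.shape
        self.func, self.left, self.right = func, left, right
        self.shape = left.shape

    def __getitem__(self, key):
        # Forward the indexer to both operands, then compute only that piece.
        return self.func(self.left[key], self.right[key])

x = np.arange(10**6)
y = np.ones(10**6)
total = LazyBinaryOp(np.add, x, y)
total[5]  # computes a single element: 6.0
```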

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2298/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
902622057 MDU6SXNzdWU5MDI2MjIwNTc= 5381 concat() with compat='no_conflicts' on dask arrays has accidentally quadratic runtime shoyer 1217238 open 0     0 2021-05-26T16:12:06Z 2022-04-19T03:48:27Z   MEMBER      

This ends up calling fillna() in a loop inside xarray.core.merge.unique_variable(), something like:

```python
out = variables[0]
for var in variables[1:]:
    out = out.fillna(var)
```

https://github.com/pydata/xarray/blob/55e5b5aaa6d9c27adcf9a7cb1f6ac3bf71c10dea/xarray/core/merge.py#L147-L149

This has quadratic behavior if the variables are stored in dask arrays (the dask graph gets one element larger after each loop iteration). This is OK for merge() (which typically only has two arguments) but is problematic for dealing with variables that shouldn't be concatenated inside concat(), which should be able to handle very long lists of arguments.

I encountered this because compat='no_conflicts' is the default for xarray.combine_nested().

There's also a related issue: even if we produced the output dask graph by hand without a loop, it still wouldn't be easy to evaluate for a large number of elements. Ideally we would use some sort of tree reduction to ensure the operation can be parallelized.
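
A sketch of that tree reduction (helper name illustrative), which keeps the graph depth at O(log n) instead of a chain of length n:

```python
def tree_combine(items, combine):
    # Repeatedly combine adjacent pairs; carry an odd leftover forward.
    items = list(items)
    while len(items) > 1:
        items = [
            combine(items[i], items[i + 1]) if i + 1 < len(items) else items[i]
            for i in range(0, len(items), 2)
        ]
    return items[0]

# e.g., instead of the fillna() loop above:
# out = tree_combine(variables, lambda a, b: a.fillna(b))
```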

xref https://github.com/google/xarray-beam/pull/13

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5381/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
325439138 MDU6SXNzdWUzMjU0MzkxMzg= 2171 Support alignment/broadcasting with unlabeled dimensions of size 1 shoyer 1217238 open 0     5 2018-05-22T19:52:21Z 2022-04-19T03:15:24Z   MEMBER      

Sometimes, it's convenient to include placeholder dimensions of size 1, which allows for removing any ambiguity related to the order of output dimensions.

Currently, this is not supported with xarray:

```
>>> xr.DataArray([1], dims='x') + xr.DataArray([1, 2, 3], dims='x')
ValueError: arguments without labels along dimension 'x' cannot be aligned because they have different dimension sizes: {1, 3}

>>> xr.Variable(('x',), [1]) + xr.Variable(('x',), [1, 2, 3])
ValueError: operands cannot be broadcast together with mismatched lengths for dimension 'x': (1, 3)
```

However, these operations aren't really ambiguous. With size-1 dimensions, we could logically do broadcasting like NumPy arrays, e.g.,

```
>>> np.array([1]) + np.array([1, 2, 3])
array([2, 3, 4])
```

This would be particularly convenient if we add keepdims=True to xarray operations (#2170).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2171/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
665488672 MDU6SXNzdWU2NjU0ODg2NzI= 4267 CachingFileManager should not use __del__ shoyer 1217238 open 0     2 2020-07-25T01:20:52Z 2022-04-17T21:42:39Z   MEMBER      

__del__ is sometimes called after modules have been deallocated, which results in errors printed to stderr when Python exits. This manifests itself in the following bug: https://github.com/shoyer/h5netcdf/issues/50

Per https://github.com/shoyer/h5netcdf/issues/50#issuecomment-572191867, the right solution is probably to use weakref.finalize.
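
A minimal sketch of the weakref.finalize approach suggested there (class name illustrative): register a cleanup callback at construction time instead of relying on __del__, which can run after module teardown at interpreter exit:

```python
import weakref

class FileManager:
    def __init__(self, path):
        self._file = open(path)
        # The finalizer runs when self is garbage-collected or at exit,
        # whichever comes first; it holds a reference to the bound method
        # on the file object, not to self, so it doesn't keep self alive.
        self._finalizer = weakref.finalize(self, self._file.close)

    def close(self):
        self._finalizer()  # idempotent: calls self._file.close() at most once
```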

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4267/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
469440752 MDU6SXNzdWU0Njk0NDA3NTI= 3139 Change the signature of DataArray to DataArray(data, dims, coords, ...)? shoyer 1217238 open 0     1 2019-07-17T20:54:57Z 2022-04-09T15:28:51Z   MEMBER      

Currently, the signature of DataArray is DataArray(data, coords, dims, ...): http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html

In the long term, I think DataArray(data, dims, coords, ...) would be more intuitive: dimensions are a more fundamental part of xarray's data model than coordinates. Certainly I find it much more common to omit coords than to omit dims when I create a DataArray.

My original reasoning for this argument order was that dims could be copied from coords, e.g., DataArray(new_data, old_dataarray.coords), and it was nice to be able to pass this sole argument by position instead of by name. But a cleaner way to write this now is old_dataarray.copy(data=new_data).

The challenge in making any change here would be to have a smooth deprecation process, and that ideally avoids requiring users to rewrite all of their code and avoids loads of pointless/extraneous warnings. I'm not entirely sure this is possible. We could likely use heuristics to distinguish between dims and coords arguments regardless of their order, but this probably isn't something we would want to preserve in the long term.

An alternative that might achieve some of the convenience of this change would be to allow passing lists of strings in the coords argument by position, which would be interpreted as dimensions, e.g., DataArray(data, ['x', 'y']). The downside of this alternative is that it would add even more special cases to the DataArray constructor, which would make it harder to understand.
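
For concreteness, the two orders side by side (current API first; the proposed call is hypothetical):

```python
import numpy as np
import xarray as xr

data = np.zeros((2, 3))

# today: DataArray(data, coords, dims)
arr = xr.DataArray(data, {"x": [0, 1]}, ("x", "y"))

# proposed: DataArray(data, dims, coords) -- not valid today
# arr = xr.DataArray(data, ("x", "y"), {"x": [0, 1]})
```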

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3139/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
864249974 MDU6SXNzdWU4NjQyNDk5NzQ= 5202 Make creating a MultiIndex in stack optional shoyer 1217238 closed 0     7 2021-04-21T20:21:03Z 2022-03-17T17:11:42Z 2022-03-17T17:11:42Z MEMBER      

As @Hoeze notes in https://github.com/pydata/xarray/issues/5179, calling stack() can be "incredibly slow and memory-demanding, since it creates a MultiIndex of every possible coordinate in the array."

This is true with how stack() works currently, but I'm not sure this is necessary. I suspect it's a vestigial design choice from copying pandas, back from before Xarray had optional indexes. One benefit is that it's convenient for making unstack() the inverse of stack(), but isn't always required.

Regardless of how we define the semantics for boolean indexing (https://github.com/pydata/xarray/issues/1887), it seems like it could be a good idea to allow stack to skip creating a MultiIndex for the new dimension, via a new keyword argument such as ds.stack(index=False). This would be equivalent to calling reset_index() after stack() but would be cheaper because the MultiIndex is never created in the first place.
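
In today's API the proposed result can already be produced, at full cost; the index=False keyword below is the proposal, not an existing argument:

```python
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), [[1, 2], [3, 4]])})

# Builds the MultiIndex, then immediately discards it:
stacked = ds.stack(z=("x", "y")).reset_index("z")

# Proposed cheaper equivalent (hypothetical):
# stacked = ds.stack(z=("x", "y"), index=False)
```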

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5202/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
237008177 MDU6SXNzdWUyMzcwMDgxNzc= 1460 groupby should still squeeze for non-monotonic inputs shoyer 1217238 open 0     5 2017-06-19T20:05:14Z 2022-03-04T21:31:41Z   MEMBER      

We can simply use argsort() to determine group_indices instead of np.arange(): https://github.com/pydata/xarray/blob/22ff955d53e253071f6e4fa849e5291d0005282a/xarray/core/groupby.py#L256
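
A small illustration of why argsort() suffices for non-monotonic inputs:

```python
import numpy as np

key = np.array([2, 0, 1, 0])            # non-monotonic group labels
order = np.argsort(key, kind="stable")  # array([1, 3, 2, 0])
key[order]                              # array([0, 0, 1, 2]) -- grouped
```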

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1460/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
58117200 MDU6SXNzdWU1ODExNzIwMA== 324 Support multi-dimensional grouped operations and group_over shoyer 1217238 open 0   1.0 741199 12 2015-02-18T19:42:20Z 2022-02-28T19:03:17Z   MEMBER      

Multi-dimensional grouped operations should be relatively straightforward -- the main complexity will be writing an N-dimensional concat that doesn't involve repetitively copying data.

The idea with group_over would be to support groupby operations that act on a single element from each of the given groups, rather than the unique values. For example, ds.group_over(['lat', 'lon']) would let you iterate over or apply to 2D slices of ds, no matter how many dimensions it has.

Roughly speaking (it's a little more complex for the case of non-dimension variables), ds.group_over(dims) would get translated into ds.groupby([d for d in ds.dims if d not in dims]).
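
As a hedged sketch (group_over itself is hypothetical, and this assumes groupby accepted multiple keys, which is the other half of this proposal):

```python
import xarray as xr

def group_over(ds: xr.Dataset, dims):
    # group over everything *except* the named dims, so each group is a
    # slice with only those dims remaining (non-dimension coordinates are
    # ignored here for simplicity)
    other = [d for d in ds.dims if d not in dims]
    return ds.groupby(other)
```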

Related: #266

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/324/reactions",
    "total_count": 18,
    "+1": 18,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1090700695 I_kwDOAMm_X85BAsWX 6125 [Bug]: HTML repr does not display well in notebooks hosted on GitHub shoyer 1217238 open 0     0 2021-12-29T19:05:49Z 2021-12-29T19:36:25Z   MEMBER      

What happened?

We see both the raw text and a malformed version of the HTML (without CSS formatting).

Example (https://github.com/microsoft/PlanetaryComputerExamples/blob/main/quickstarts/reading-zarr-data.ipynb):

What did you expect to happen?

Either:

  1. Ideally, we only see the HTML repr, with CSS formatting applied.
  2. Or, if that isn't possible, we should figure out how to only show the raw text.

nbviewer gets this right:

Minimal Complete Verifiable Example

No response

Relevant log output

No response

Anything else we need to know?

No response

Environment

NA

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6125/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
874292512 MDU6SXNzdWU4NzQyOTI1MTI= 5251 Switch default for Zarr reading/writing to consolidated=True? shoyer 1217238 closed 0     4 2021-05-03T06:59:42Z 2021-08-30T15:21:11Z 2021-08-30T15:21:11Z MEMBER      

Consolidated metadata was a new feature in Zarr v2.3, which was released over two years ago (March 22, 2019).

Since then, I have used consolidated=True every time I've written or opened a Zarr store. As far as I can tell, this is almost always a good idea:

  • With local storage, it usually doesn't really matter. You spend a bit of time writing the consolidated metadata and have one extra file on disk, but the overhead is typically negligible.
  • With cloud object stores or network filesystems, it can matter quite a large amount. Without consolidated metadata, these systems can be unusably slow for opening datasets. Cloud storage is of course the main use-case for Zarr. If you're using a local disk, you might as well stick with single files such as netCDF.

I wonder if consolidated metadata is mature enough now that we could consider switching the default behavior in Xarray. From my perspective, this is a big "gotcha" for getting good performance with Zarr. More than one of my colleagues has been unimpressed with the performance of Zarr until they learned to set consolidated=True.

I would suggest doing this in a way that is almost entirely backwards compatible, with only a minor performance cost for reading non-consolidated datasets:

  • to_zarr() switches the default to consolidated=True. The consolidate_metadata() step will thus happen by default.
  • open_zarr() switches the default to consolidated=None, which means "try reading consolidated metadata, and fall back to non-consolidated if that fails." This will be slightly slower for non-consolidated metadata due to the extra file lookup, but given that opening with non-consolidated metadata already requires a moderately large number of file lookups, I doubt anyone will notice the difference.
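
For reference, a sketch of what the proposed defaults correspond to when spelled out explicitly today:

```python
import xarray as xr

ds = xr.Dataset({'x': [1, 2, 3]})

# what the proposal would make the default on write:
ds.to_zarr('store.zarr', consolidated=True)

# and on read; the proposed consolidated=None would do this automatically,
# falling back to unconsolidated metadata if the consolidated key is missing
roundtripped = xr.open_zarr('store.zarr', consolidated=True)
```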

CC @rabernat

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5251/reactions",
    "total_count": 11,
    "+1": 11,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
928402742 MDU6SXNzdWU5Mjg0MDI3NDI= 5516 Rename master branch -> main shoyer 1217238 closed 0     4 2021-06-23T15:45:57Z 2021-07-23T21:58:39Z 2021-07-23T21:58:39Z MEMBER      

This is a best practice for inclusive projects.

See https://github.com/github/renaming for guidance.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5516/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
890534794 MDU6SXNzdWU4OTA1MzQ3OTQ= 5295 Engine is no longer inferred for filenames not ending in ".nc" shoyer 1217238 closed 0     0 2021-05-12T22:28:46Z 2021-07-15T14:57:54Z 2021-05-14T22:40:14Z MEMBER      

This works with xarray=0.17.0:

```python
import xarray
xarray.Dataset({'x': [1, 2, 3]}).to_netcdf('tmp')
xarray.open_dataset('tmp')
```

On xarray 0.18.0, it fails:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-20e128a730aa> in <module>()
      2
      3 xarray.Dataset({'x': [1, 2, 3]}).to_netcdf('tmp')
----> 4 xarray.open_dataset('tmp')

/usr/local/lib/python3.7/dist-packages/xarray/backends/api.py in open_dataset(filename_or_obj, engine, chunks, cache, decode_cf, mask_and_scale, decode_times, decode_timedelta, use_cftime, concat_characters, decode_coords, drop_variables, backend_kwargs, *args, **kwargs)
    483
    484     if engine is None:
--> 485         engine = plugins.guess_engine(filename_or_obj)
    486
    487     backend = plugins.get_backend(engine)

/usr/local/lib/python3.7/dist-packages/xarray/backends/plugins.py in guess_engine(store_spec)
    110             warnings.warn(f"{engine!r} fails while guessing", RuntimeWarning)
    111
--> 112     raise ValueError("cannot guess the engine, try passing one explicitly")
    113
    114

ValueError: cannot guess the engine, try passing one explicitly
```

I'm not entirely sure what changed. My guess is that we used to fall back to trying SciPy, but don't do that anymore. A potential fix would be reading strings as filenames in xarray.backends.utils.read_magic_number.

Related: https://github.com/pydata/xarray/issues/5291

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5295/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
252707680 MDU6SXNzdWUyNTI3MDc2ODA= 1525 Consider setting name=False in Variable.chunk() shoyer 1217238 open 0     4 2017-08-24T19:34:28Z 2021-07-13T01:50:16Z   MEMBER      

@mrocklin writes:

The following will be slower:

```python
b = (a.chunk(...) + 1) + (a.chunk(...) + 1)
```

In the current operation this will be optimized to

```python
tmp = a.chunk(...) + 1
b = tmp + tmp
```

So you'll lose that, but I suspect that in your case chunking the same dataset many times is somewhat rare.
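
To make the trade-off concrete, a small sketch (deterministic naming is today's behavior; name=False is the proposal):

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(4), dims='x')

# chunking the same data twice currently yields dask arrays with identical
# (hash-based) task names, so duplicated work can be merged by dask:
b1 = a.chunk({'x': 2})
b2 = a.chunk({'x': 2})
print(b1.data.name == b2.data.name)  # True with deterministic naming

# with name=False passed through to dask, the two names would differ, and
# (a.chunk(...) + 1) + (a.chunk(...) + 1) could no longer be deduplicated
```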

See here for discussion: https://github.com/pydata/xarray/pull/1517#issuecomment-324722153

Whether this is worth doing really depends on on what people would find most useful -- and what is the most intuitive behavior.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1525/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
254888879 MDU6SXNzdWUyNTQ4ODg4Nzk= 1552 Flow chart for choosing indexing operations shoyer 1217238 open 0     2 2017-09-03T17:33:30Z 2021-07-11T22:26:17Z   MEMBER      

We have a lot of indexing operations, even though sel_points and isel_points are about to be deprecated (#1473).

A flow chart / decision tree to help users pick the right indexing operation might be helpful (e.g., like this skimage FlowChart). It would ask various questions (e.g., do you have labels or integer positions? do you want to select or impose coordinates?) and then suggest the appropriate indexer methods.

cc @fujiisoup

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1552/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
891281614 MDU6SXNzdWU4OTEyODE2MTQ= 5302 Suggesting specific IO backends to install when open_dataset() fails shoyer 1217238 closed 0     3 2021-05-13T18:45:28Z 2021-06-23T08:18:07Z 2021-06-23T08:18:07Z MEMBER      

Currently, Xarray's internal backends don't get registered unless the necessary dependencies are installed: https://github.com/pydata/xarray/blob/1305d9b624723b86050ca5b2d854e5326bbaa8e6/xarray/backends/netCDF4_.py#L567-L568

In order to facilitate suggesting a specific backend to install (e.g., to improve error messages from opening tutorial datasets https://github.com/pydata/xarray/issues/5291), I would suggest that Xarray always registers its own backend entrypoints. Then we make the following changes to the plugin protocol:

  • guess_can_open() should work regardless of whether the underlying backend is installed
  • installed() returns a boolean reporting whether the backend is installed. The default method in the base class would return True, for backwards compatibility.
  • open_dataset() of course should error if the backend is not installed.

This will let us leverage the existing guess_can_open() functionality to suggest specific optional dependencies to install. E.g., if you supply a netCDF3 file:

```
Xarray cannot find a matching installed backend for this file in the
installed backends ["h5netcdf"]. Consider installing one of the following
backends which reports a match: ["scipy", "netcdf4"]
```
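
A minimal sketch of what the proposed installed() method could look like on a backend entrypoint (the method is part of this proposal, not the current API, and the file check here is deliberately simplistic):

```python
from xarray.backends import BackendEntrypoint

class ScipyBackendEntrypoint(BackendEntrypoint):
    def guess_can_open(self, filename_or_obj):
        # works regardless of whether scipy itself is importable
        return str(filename_or_obj).endswith('.nc')

    def installed(self) -> bool:
        try:
            import scipy.io.netcdf  # noqa: F401
        except ImportError:
            return False
        return True
```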

Does this seem reasonable and worthwhile?

CC @aurghs @alexamici

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5302/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
340733448 MDU6SXNzdWUzNDA3MzM0NDg= 2283 Exact alignment should allow missing dimension coordinates shoyer 1217238 open 0     2 2018-07-12T17:40:24Z 2021-06-15T09:52:29Z   MEMBER      

Code Sample, a copy-pastable example if possible

```python
import xarray as xr
xr.align(xr.DataArray([1, 2, 3], dims='x'),
         xr.DataArray([1, 2, 3], dims='x', coords=[[0, 1, 2]]),
         join='exact')
```

Problem description

This currently results in an error, but a missing index of size 3 does not actually conflict:

```python-traceback
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-1d63d3512fb6> in <module>()
      1 xr.align(xr.DataArray([1, 2, 3], dims='x'),
      2          xr.DataArray([1, 2, 3], dims='x', coords=[[0, 1, 2]]),
----> 3          join='exact')

/usr/local/lib/python3.6/dist-packages/xarray/core/alignment.py in align(*objects, **kwargs)
    129                 raise ValueError(
    130                     'indexes along dimension {!r} are not equal'
--> 131                     .format(dim))
    132             index = joiner(matching_indexes)
    133             joined_indexes[dim] = index

ValueError: indexes along dimension 'x' are not equal
```

This surfaced as an issue on StackOverflow: https://stackoverflow.com/questions/51308962/computing-matrix-vector-multiplication-for-each-time-point-in-two-dataarrays

Expected Output

Both output arrays should end up with the x coordinate from the input that has it, like the output of the above expression if join='inner':

```
(<xarray.DataArray (x: 3)>
 array([1, 2, 3])
 Coordinates:
   * x        (x) int64 0 1 2,
 <xarray.DataArray (x: 3)>
 array([1, 2, 3])
 Coordinates:
   * x        (x) int64 0 1 2)
```

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.14.33+
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.7
pandas: 0.22.0
numpy: 1.14.5
scipy: 0.19.1
netCDF4: None
h5netcdf: None
h5py: 2.8.0
Nio: None
zarr: None
bottleneck: None
cyordereddict: None
dask: None
distributed: None
matplotlib: 2.1.2
cartopy: None
seaborn: 0.7.1
setuptools: 39.1.0
pip: 10.0.1
conda: None
pytest: None
IPython: 5.5.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2283/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
842438533 MDU6SXNzdWU4NDI0Mzg1MzM= 5082 Move encoding from xarray.Variable to duck arrays? shoyer 1217238 open 0     2 2021-03-27T07:21:55Z 2021-06-13T01:34:00Z   MEMBER      

The encoding property on Variable has always been an awkward part of Xarray's API, and an example of poor separation of concerns. It adds conceptual overhead to all uses of xarray.Variable, but exists only for the (somewhat niche) benefit of Xarray's backend IO functionality. This is particularly problematic if we consider the possible separation of xarray.Variable into a separate package to remove the pandas dependency (https://github.com/pydata/xarray/issues/3981).

I think a cleaner way to handle encoding would be to move it from Variable onto array objects, specifically duck array objects that Xarray creates when loading data from disk. As long as these duck arrays don't "propagate" themselves under array operations but rather turn into raw numpy arrays (or whatever is wrapped), this would automatically resolve all issues around propagating encoding attributes (e.g., https://github.com/pydata/xarray/pull/5065, https://github.com/pydata/xarray/issues/1614). And users who don't care about encoding because they don't use Xarray's IO functionality would never need to think about it.
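
A rough sketch of what such a duck array could look like (EncodedArray is hypothetical, not an existing xarray class):

```python
import numpy as np

class EncodedArray:
    """Duck array that carries encoding but does not propagate itself."""

    def __init__(self, data, encoding):
        self.data = data
        self.encoding = encoding  # e.g. {"dtype": "int16", "scale_factor": 0.1}

    @property
    def dtype(self):
        return self.data.dtype

    @property
    def shape(self):
        return self.data.shape

    def __array__(self, dtype=None):
        return np.asarray(self.data, dtype)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # ufunc calls unwrap to plain numpy, so the encoding never leaks
        # into derived results
        unwrapped = tuple(
            x.data if isinstance(x, EncodedArray) else x for x in inputs
        )
        return getattr(ufunc, method)(*unwrapped, **kwargs)
```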

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5082/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
416554477 MDU6SXNzdWU0MTY1NTQ0Nzc= 2797 Stalebot is being overly aggressive shoyer 1217238 closed 0     7 2019-03-03T19:37:37Z 2021-06-03T21:31:46Z 2021-06-03T21:22:48Z MEMBER      

E.g., see https://github.com/pydata/xarray/issues/1151 where stalebot closed an issue even after another comment.

Is this something we need to reconfigure or just a bug?

cc @pydata/xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2797/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
276241764 MDU6SXNzdWUyNzYyNDE3NjQ= 1739 Utility to restore original dimension order after apply_ufunc shoyer 1217238 open 0     11 2017-11-23T00:47:57Z 2021-05-29T07:39:33Z   MEMBER      

This seems to be coming up quite a bit for wrapping functions that apply an operation along an axis, e.g., for interpolate in #1640 or rank in #1733.

We should either write a utility function to do this or consider adding an option to apply_ufunc.
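
A minimal sketch of such a utility (name and signature hypothetical):

```python
import xarray as xr

def restore_dim_order(result: xr.DataArray,
                      original: xr.DataArray) -> xr.DataArray:
    # reorder result's dims to match their order in the original input;
    # any dims introduced by apply_ufunc stay at the end
    kept = [d for d in original.dims if d in result.dims]
    new = [d for d in result.dims if d not in original.dims]
    return result.transpose(*kept, *new)
```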

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1739/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
901047466 MDU6SXNzdWU5MDEwNDc0NjY= 5372 Consider revising the _repr_inline_ protocol shoyer 1217238 open 0     0 2021-05-25T16:18:31Z 2021-05-25T16:18:31Z   MEMBER      

_repr_inline_ looks like an IPython special method but actually includes some xarray-specific details: the result should not include shape or dtype.

As I wrote in https://github.com/pydata/xarray/pull/5352, I would suggest revising it in one of two ways:

  1. Giving it a name like _xarray_repr_inline_ to make it clearer that it's Xarray specific
  2. Include some more generic way of indicating that shape/dtype is redundant, e.g., call it like obj._repr_ndarray_inline_(dtype=False, shape=False)
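
For concreteness, hedged sketches of both options (all method names and arguments here are proposals, not existing protocols):

```python
class MyDuckArray:
    # option 1: clearly Xarray-specific name, same shape as today's protocol
    def _xarray_repr_inline_(self, max_width: int) -> str:
        return "my-duck-array"

    # option 2: generic name, with flags for what would be redundant
    def _repr_ndarray_inline_(self, *, dtype: bool, shape: bool) -> str:
        return "my-duck-array"
```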
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5372/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
46049691 MDU6SXNzdWU0NjA0OTY5MQ== 255 Add Dataset.to_pandas() method shoyer 1217238 closed 0   0.5 987654 2 2014-10-17T00:01:36Z 2021-05-04T13:56:00Z 2021-05-04T13:56:00Z MEMBER      

This would be the complement of the DataArray constructor, converting an xray.DataArray into a 1D series, 2D DataFrame or 3D panel, whichever is appropriate.

to_pandas would also make sense for Dataset, if it could convert 0d datasets to series, e.g., pd.Series({k: v.item() for k, v in ds.items()}) (there is currently no direct way to do this), and revert to to_dataframe for higher dimensional input.

- [x] DataArray method
- [ ] Dataset method
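
A minimal sketch of the proposed Dataset behavior (written as a standalone function, for illustration only):

```python
import pandas as pd

def dataset_to_pandas(ds):
    if not ds.dims:
        # 0d datasets become a Series keyed by variable name
        return pd.Series({k: v.item() for k, v in ds.items()})
    # higher-dimensional input falls back to the existing behavior
    return ds.to_dataframe()
```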

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/255/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
294241734 MDU6SXNzdWUyOTQyNDE3MzQ= 1887 Boolean indexing with multi-dimensional key arrays shoyer 1217238 open 0     13 2018-02-04T23:28:45Z 2021-04-22T21:06:47Z   MEMBER      

Originally from https://github.com/pydata/xarray/issues/974

For boolean indexing:

  • da[key] where key is a boolean labelled array (with any number of dimensions) is made equivalent to da.where(key.reindex_like(da), drop=True). This matches the existing behavior if key is a 1D boolean array. For multi-dimensional arrays, even though the result is now multi-dimensional, this coupled with automatic skipping of NaNs means that da[key].mean() gives the same result as in NumPy.
  • da[key] = value where key is a boolean labelled array can be made equivalent to da = da.where(*align(key.reindex_like(da), value.reindex_like(da))) (that is, the three argument form of where).
  • da[key_0, ..., key_n] where all of key_i are boolean arrays gets handled in the usual way. It is an IndexingError to supply multiple labelled keys if any of them are not already aligned with the corresponding index coordinates (and share the same dimension name). If they want alignment, we suggest users simply write da[key_0 & ... & key_n].
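
A small example of the proposed semantics for the first rule (today this requires the where() spelling):

```python
import xarray as xr

da = xr.DataArray([[1, 2], [3, 4]], dims=('x', 'y'))
key = da > 1  # 2D boolean key

# proposed: da[key] would be equivalent to
result = da.where(key, drop=True)

# reductions skip the NaNs introduced by where(), so da[key].mean()
# would agree with NumPy's boolean indexing
print(result.mean().item())  # 3.0, same as da.values[da.values > 1].mean()
```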

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1887/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
346822633 MDU6SXNzdWUzNDY4MjI2MzM= 2336 test_88_character_filename_segmentation_fault should not try to write to the current working directory shoyer 1217238 closed 0     2 2018-08-02T01:06:41Z 2021-04-20T23:38:53Z 2021-04-20T23:38:53Z MEMBER      

This files in cases where the current working directory does not support writes, e.g., as seen here ``` def test_88_character_filename_segmentation_fault(self): # should be fixed in netcdf4 v1.3.1 with mock.patch('netCDF4.version', '1.2.4'): with warnings.catch_warnings(): message = ('A segmentation fault may occur when the ' 'file path has exactly 88 characters') warnings.filterwarnings('error', message) with pytest.raises(Warning): # Need to construct 88 character filepath

              xr.Dataset().to_netcdf('a' * (88 - len(os.getcwd()) - 1))

tests/test_backends.py:1234:


core/dataset.py:1150: in to_netcdf compute=compute) backends/api.py:715: in to_netcdf autoclose=autoclose, lock=lock) backends/netCDF4_.py:332: in open ds = opener() backends/netCDF4_.py:231: in _open_netcdf4_group ds = nc4.Dataset(filename, mode=mode, **kwargs) third_party/py/netCDF4/_netCDF4.pyx:2111: in netCDF4._netCDF4.Dataset.init ???


??? E IOError: [Errno 13] Permission denied ```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2336/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
843996137 MDU6SXNzdWU4NDM5OTYxMzc= 5092 Concurrent loading of coordinate arrays from Zarr shoyer 1217238 open 0     0 2021-03-30T02:19:50Z 2021-04-19T02:43:31Z   MEMBER      

When you open a dataset with Zarr, xarray loads coordinate arrays corresponding to indexes in serial. This can be slow (multiple seconds) even with only a handful of such arrays if they are stored in a remote filesystem (e.g., cloud object stores). This is similar to the use-cases for consolidated metadata.

In principle, we could speed up loading datasets from Zarr into Xarray significantly by reading the data corresponding to these arrays in parallel (e.g., in multiple threads).
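
A minimal sketch of the idea from user code (the real fix would live in xarray's Zarr backend rather than a helper like this):

```python
from concurrent.futures import ThreadPoolExecutor
import xarray as xr

def load_indexes_concurrently(ds: xr.Dataset, max_workers: int = 8) -> xr.Dataset:
    # load all dimension-coordinate (index) variables in a thread pool
    # instead of serially, hiding the per-array latency of object stores
    index_vars = [ds[name].variable for name in ds.dims if name in ds.coords]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(lambda v: v.load(), index_vars))
    return ds
```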

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5092/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
621082480 MDU6SXNzdWU2MjEwODI0ODA= 4080 Most arguments to open_dataset should be keyword only shoyer 1217238 closed 0     1 2020-05-19T15:38:51Z 2021-03-16T10:56:09Z 2021-03-16T10:56:09Z MEMBER      

open_dataset has a long list of arguments:

```python
xarray.open_dataset(filename_or_obj, group=None, decode_cf=True,
                    mask_and_scale=None, decode_times=True, autoclose=None,
                    concat_characters=True, decode_coords=True, engine=None,
                    chunks=None, lock=None, cache=None, drop_variables=None,
                    backend_kwargs=None, use_cftime=None)
```

Similarly to the case for pandas (https://github.com/pandas-dev/pandas/issues/27544), it would be nice to make most of these arguments keyword-only, e.g., def open_dataset(filename_or_obj, group, *, ...). For consistency, this would also apply to open_dataarray, decode_cf, open_mfdataset, etc.

This would encourage writing readable code when calling open_dataset() and would allow us to use better organization when adding new arguments (e.g., decode_timedelta in https://github.com/pydata/xarray/pull/4071).

To make this change, we could make use of the deprecate_nonkeyword_arguments decorator from https://github.com/pandas-dev/pandas/pull/27573

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4080/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
645154872 MDU6SXNzdWU2NDUxNTQ4NzI= 4179 Consider revising our minimum dependency version policy shoyer 1217238 closed 0     7 2020-06-25T05:04:38Z 2021-02-22T05:02:25Z 2021-02-22T05:02:25Z MEMBER      

Our current policy is that xarray supports "the minor version (X.Y) initially published no more than N months ago" where N is:

  • Python: 42 months (NEP 29)
  • numpy: 24 months (NEP 29)
  • pandas: 12 months
  • scipy: 12 months
  • sparse, pint and other libraries that rely on NEP-18 for integration: very latest available versions only,
  • all other libraries: 6 months

I think this policy is too aggressive, particularly for pandas, SciPy and other libraries. Some of these projects can go 6+ months between minor releases. For example, version 2.3 of zarr is currently more than 6 months old. So if zarr released 2.4 today and xarray issued a new release tomorrow, our policy would dictate that we could ask users to upgrade to the new version.

In https://github.com/pydata/xarray/pull/4178, I misinterpreted our policy as supporting "the most recent minor version (X.Y) initially published more than N months ago". This version makes a bit more sense to me: users only need to upgrade dependencies at least every N months to use the latest xarray release.

I understand that NEP-29 chose its language intentionally, so that distributors know ahead of time when they can drop support for a Python or NumPy version. But this seems like a (very) poor fit for projects without regular releases. At the very least we should adjust the specific time windows.

I'll see if I can gain some understanding of the motivation for this particular language over on the NumPy tracker...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4179/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
267927402 MDU6SXNzdWUyNjc5Mjc0MDI= 1652 Resolve warnings issued in the xarray test suite shoyer 1217238 closed 0     10 2017-10-24T07:36:55Z 2021-02-21T23:06:35Z 2021-02-21T23:06:34Z MEMBER      

82 warnings are currently issued in the process of running our test suite: https://gist.github.com/shoyer/db0b2c82efd76b254453216e957c4345

Some of these can probably be safely ignored, but others are likely to be noticed by users, e.g., https://stackoverflow.com/questions/41130138/why-is-invalid-value-encountered-in-greater-warning-thrown-in-python-xarray-fo/41147570#41147570

It would be nice to clean up all of these, either by catching the appropriate upstream warning (if irrelevant) or changing our usage to avoid the warning. There may very well be a lurking FutureWarning in there somewhere that could cause issues when another library updates.

Probably the easiest way to get started here is to get the test suite running locally, and use py.test -W error to turn all warnings into errors.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1652/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
777327298 MDU6SXNzdWU3NzczMjcyOTg= 4749 Option for combine_attrs with conflicting values silently dropped shoyer 1217238 closed 0     0 2021-01-01T18:04:49Z 2021-02-10T19:50:17Z 2021-02-10T19:50:17Z MEMBER      

merge() currently supports four options for merging attrs:

```
combine_attrs : {"drop", "identical", "no_conflicts", "override"}, \
                default: "drop"
    String indicating how to combine attrs of the objects being merged:

    - "drop": empty attrs on returned Dataset.
    - "identical": all attrs must be the same on every object.
    - "no_conflicts": attrs from all objects are combined, any that have
      the same name must also have the same value.
    - "override": skip comparing and copy attrs from the first dataset to
      the result.
```

It would be nice to have an option to combine attrs from all objects like "no_conflicts", but that drops attributes with conflicting values rather than raising an error. We might call this combine_attrs="drop_conflicts" or combine_attrs="matching".

This is similar to how xarray currently handles conflicting values for DataArray.name and would be more suitable to consider for the default behavior of merge and other functions/methods that merge coordinates (e.g., apply_ufunc, concat, where, binary arithmetic).
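
An example of the proposed behavior (using one of the suggested keyword spellings):

```python
import xarray as xr

a = xr.Dataset(attrs={'units': 'm', 'source': 'model'})
b = xr.Dataset(attrs={'units': 'km', 'source': 'model'})

# the conflicting "units" attribute is silently dropped instead of raising
merged = xr.merge([a, b], combine_attrs='drop_conflicts')
# merged.attrs == {'source': 'model'}
```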

cc @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4749/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
264098632 MDU6SXNzdWUyNjQwOTg2MzI= 1618 apply_raw() for a simpler version of apply_ufunc() shoyer 1217238 open 0     4 2017-10-10T04:51:38Z 2021-01-01T17:14:43Z   MEMBER      

apply_raw() would work like apply_ufunc(), but without the hard to understand broadcasting behavior and core dimensions.

The rule for apply_raw() would be that it directly unwraps its arguments and passes them on to the wrapped function, without any broadcasting. We would also include a dim argument that is automatically converted into the appropriate axis argument when calling the wrapped function.

Output dimensions would be determined from a simple rule of some sort:

  • Default output dimensions would either be copied from the first argument, or would take on the ordered union of all input dimensions.
  • Custom dimensions could either be set by adding a drop_dims argument (like dask.array.map_blocks), or require an explicit override output_dims.
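
A hedged sketch of apply_raw() under the simplest rule above (dims copied from the first argument, and assuming the wrapped function preserves shape):

```python
import numpy as np
import xarray as xr

def apply_raw(func, *args, dim=None, **kwargs):
    first = args[0]
    if dim is not None:
        # the dim argument is converted into the appropriate axis
        kwargs['axis'] = first.dims.index(dim)
    raw = [getattr(a, 'values', a) for a in args]
    return xr.DataArray(func(*raw, **kwargs),
                        dims=first.dims, coords=first.coords)

# e.g. a shape-preserving cumulative sum along a named dimension:
da = xr.DataArray(np.ones((2, 3)), dims=('x', 'y'))
out = apply_raw(np.cumsum, da, dim='y')
```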

This also could be suitable for defining as a method instead of a separate function. See https://github.com/pydata/xarray/issues/1251 and https://github.com/pydata/xarray/issues/1130 for related issues.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1618/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
269700511 MDU6SXNzdWUyNjk3MDA1MTE= 1672 Append along an unlimited dimension to an existing netCDF file shoyer 1217238 open 0     8 2017-10-30T18:09:54Z 2020-11-29T17:35:04Z   MEMBER      

This would be a nice feature to have for some use cases, e.g., for writing simulation time-steps: https://stackoverflow.com/questions/46951981/create-and-write-xarray-dataarray-to-netcdf-in-chunks

It should be relatively straightforward to add, too, building on support for writing files with unlimited dimensions. The user-facing API would probably be a new keyword argument to to_netcdf(), e.g., extend='time' to indicate the extended dimension.
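
A sketch of the proposed usage (the extend keyword does not exist yet, and simulation_steps is a hypothetical generator):

```python
# for step, ds_step in enumerate(simulation_steps()):
#     mode = 'w' if step == 0 else 'a'
#     ds_step.to_netcdf('output.nc', mode=mode, extend='time')
```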

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1672/reactions",
    "total_count": 21,
    "+1": 21,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
314444743 MDU6SXNzdWUzMTQ0NDQ3NDM= 2059 How should xarray serialize bytes/unicode strings across Python/netCDF versions? shoyer 1217238 open 0     5 2018-04-15T19:36:55Z 2020-11-19T10:08:16Z   MEMBER      

netCDF string types

We have several options for storing strings in netCDF files:

  • NC_CHAR: netCDF's legacy character type. The closest match is NumPy's 'S1' dtype. In principle, it's supposed to be able to store arbitrary bytes. On HDF5, it uses an UTF-8 encoded string with a fixed-size of 1 (but note that HDF5 does not complain about storing arbitrary bytes).
  • NC_STRING: netCDF's newer variable length string type. It's only available on netCDF4 (not netCDF3). It corresponds to an HDF5 variable-length string with UTF-8 encoding.
  • NC_CHAR with an _Encoding attribute: xarray and netCDF4-Python support an ad-hoc convention for storing unicode strings in NC_CHAR data-types, by adding an attribute {'_Encoding': 'UTF-8'}. The data is still stored as fixed width strings, but xarray (and netCDF4-Python) can decode them as unicode.

NC_STRING would seem like a clear win in cases where it's supported, but as @crusaderky points out in https://github.com/pydata/xarray/issues/2040, it actually results in much larger netCDF files in many cases than using character arrays, which are more easily compressed. Nonetheless, we currently default to storing unicode strings in NC_STRING, because it's the most portable option -- every tool that handles HDF5 and netCDF4 should be able to read it properly as unicode strings.

NumPy/Python string types

On the Python side, our options are perhaps even more confusing:

  • NumPy's dtype=np.string_ corresponds to fixed-length bytes. This is the default dtype for strings on Python 2, because on Python 2 strings are the same as bytes.
  • NumPy's dtype=np.unicode_ corresponds to fixed-length unicode. This is the default dtype for strings on Python 3, because on Python 3 strings are the same as unicode.
  • Strings are also commonly stored in numpy arrays with dtype=np.object_, as arrays of either bytes or unicode objects. This is a pragmatic choice, because otherwise NumPy has no support for variable length strings. We also use this (like pandas) to mark missing values with np.nan.

Like pandas, we are pretty liberal with converting back and forth between fixed-length (np.string_/np.unicode_) and variable-length (object dtype) representations of strings as necessary. This works pretty well, though converting from object arrays in particular has downsides, since it cannot be done lazily with dask.

Current behavior of xarray

Currently, xarray uses the same behavior on Python 2/3. The priority was faithfully round-tripping data from a particular version of Python to netCDF and back, which the current serialization behavior achieves:

| Python version | NetCDF version | NumPy datatype | NetCDF datatype |
| -------------- | -------------- | -------------- | --------------- |
| Python 2 | NETCDF3 | np.string_ / str | NC_CHAR |
| Python 2 | NETCDF4 | np.string_ / str | NC_CHAR |
| Python 3 | NETCDF3 | np.string_ / bytes | NC_CHAR |
| Python 3 | NETCDF4 | np.string_ / bytes | NC_CHAR |
| Python 2 | NETCDF3 | np.unicode_ / unicode | NC_CHAR with UTF-8 encoding |
| Python 2 | NETCDF4 | np.unicode_ / unicode | NC_STRING |
| Python 3 | NETCDF3 | np.unicode_ / str | NC_CHAR with UTF-8 encoding |
| Python 3 | NETCDF4 | np.unicode_ / str | NC_STRING |
| Python 2 | NETCDF3 | object bytes/str | NC_CHAR |
| Python 2 | NETCDF4 | object bytes/str | NC_CHAR |
| Python 3 | NETCDF3 | object bytes | NC_CHAR |
| Python 3 | NETCDF4 | object bytes | NC_CHAR |
| Python 2 | NETCDF3 | object unicode | NC_CHAR with UTF-8 encoding |
| Python 2 | NETCDF4 | object unicode | NC_STRING |
| Python 3 | NETCDF3 | object unicode/str | NC_CHAR with UTF-8 encoding |
| Python 3 | NETCDF4 | object unicode/str | NC_STRING |

This can also be selected explicitly for most data-types by setting dtype in encoding:

  • 'S1' for NC_CHAR (with or without encoding)
  • str for NC_STRING (though I'm not 100% sure it works properly currently when given bytes)

Script for generating table:

```python
from __future__ import print_function
import xarray as xr
import uuid
import netCDF4
import numpy as np
import sys

for dtype_name, value in [
        ('np.string_ / ' + type(b'').__name__, np.array([b'abc'])),
        ('np.unicode_ / ' + type(u'').__name__, np.array([u'abc'])),
        ('object bytes/' + type(b'').__name__,
         np.array([b'abc'], dtype=object)),
        ('object unicode/' + type(u'').__name__,
         np.array([u'abc'], dtype=object)),
]:
    for format in ['NETCDF3_64BIT', 'NETCDF4']:
        filename = str(uuid.uuid4()) + '.nc'
        xr.Dataset({'data': value}).to_netcdf(filename, format=format)
        with netCDF4.Dataset(filename) as f:
            var = f.variables['data']
            disk_dtype = var.dtype
            has_encoding = hasattr(var, '_Encoding')
            disk_dtype_name = (
                ('NC_CHAR' if disk_dtype == 'S1' else 'NC_STRING')
                + (' with UTF-8 encoding' if has_encoding else ''))
        print('|', 'Python %i' % sys.version_info[0],
              '|', format[:7],
              '|', dtype_name,
              '|', disk_dtype_name, '|')
```

Potential alternatives

The main option I'm considering is switching to default to NC_CHAR with UTF-8 encoding for np.string_ / str and object bytes/str on Python 2. The current behavior could be explicitly toggled by setting an encoding of {'_Encoding': None}.

This would imply two changes:

  1. Attempting to serialize arbitrary bytes (on Python 2) would start raising an error -- anything that isn't ASCII would require explicitly disabling _Encoding.
  2. Strings read back from disk on Python 2 would come back as unicode instead of bytes.

This implicit conversion would be consistent with Python 2's general handling of bytes/unicode, and facilitate reading netCDF files on Python 3 that were written with Python 2.

The counter-argument is that it may not be worth changing this at this late point, given that we will be sunsetting Python 2 support by year's end.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2059/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
124809636 MDU6SXNzdWUxMjQ4MDk2MzY= 703 Document xray internals / advanced API shoyer 1217238 closed 0     2 2016-01-04T18:12:30Z 2020-11-03T17:33:32Z 2020-11-03T17:33:32Z MEMBER      

It would be useful to document the internal Variable class and the internal structure of Dataset and DataArray. This would be helpful for both new contributors and expert users, who might find Variable helpful as an advanced API.

I had some notes in an earlier version of the docs that could be adapted. Note, however, that the internal structure of DataArray changed in #648: http://xray.readthedocs.org/en/v0.2/tutorial.html#notes-on-xray-s-internals

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/703/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
715374721 MDU6SXNzdWU3MTUzNzQ3MjE= 4490 Group together decoding options into a single argument shoyer 1217238 open 0     6 2020-10-06T06:15:18Z 2020-10-29T04:07:46Z   MEMBER      

Is your feature request related to a problem? Please describe.

open_dataset() currently has a very long function signature. This makes it hard to keep track of everything it can do, and is particularly problematic for the authors of new backends (e.g., see https://github.com/pydata/xarray/pull/4477), which might need to know how to handle all these arguments.

Describe the solution you'd like

To simplify the interface, I propose to group together all the decoding options into a new DecodingOptions class. I'm thinking something like:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional, List

@dataclass(frozen=True)
class DecodingOptions:
    mask: Optional[bool] = None
    scale: Optional[bool] = None
    datetime: Optional[bool] = None
    timedelta: Optional[bool] = None
    use_cftime: Optional[bool] = None
    concat_characters: Optional[bool] = None
    coords: Optional[bool] = None
    drop_variables: Optional[List[str]] = None

    @classmethod
    def disabled(cls):
        return cls(mask=False, scale=False, datetime=False, timedelta=False,
                   concat_characters=False, coords=False)

    def non_defaults(self):
        return {k: v for k, v in asdict(self).items() if v is not None}

    # add another method for creating default Variable Coder() objects,
    # e.g., those listed in encode_cf_variable()
```

The signature of open_dataset would then become:

```python
def open_dataset(
    filename_or_obj,
    group=None,
    *,
    engine=None,
    chunks=None,
    lock=None,
    cache=None,
    backend_kwargs=None,
    decode: Union[DecodingOptions, bool] = None,
    **deprecated_kwargs
):
    if decode is None:
        decode = DecodingOptions()
    if decode is False:
        decode = DecodingOptions.disabled()
    # handle deprecated_kwargs...
    ...
```

Question: are decode and DecodingOptions the right names? Maybe these should still include the name "CF", e.g., decode_cf and CFDecodingOptions, given that these are specific to CF conventions?

Note: the current signature is

```python
open_dataset(filename_or_obj, group=None, decode_cf=True, mask_and_scale=None,
             decode_times=True, autoclose=None, concat_characters=True,
             decode_coords=True, engine=None, chunks=None, lock=None,
             cache=None, drop_variables=None, backend_kwargs=None,
             use_cftime=None, decode_timedelta=None)
```

Usage with the new interface would look like xr.open_dataset(filename, decode=False) or xr.open_dataset(filename, decode=xr.DecodingOptions(mask=False, scale=False)).

This requires a little bit more typing than what we currently have, but it has a few advantages:

  1. It's easier to understand the role of different arguments. Now there is a function with ~8 arguments and a class with ~8 arguments rather than a function with ~15 arguments.
  2. It's easier to add new decoding arguments (e.g., for more advanced CF conventions), because they don't clutter the open_dataset interface. For example, I separated out mask and scale arguments, versus the current mask_and_scale argument.
  3. If a new backend plugin for open_dataset() needs to handle every option supported by open_dataset(), this makes that task significantly easier. The only decoding options they need to worry about are non-default options that were explicitly set, i.e., those exposed by the non_defaults() method. If another decoding option wasn't explicitly set and isn't recognized by the backend, they can just ignore it.

Describe alternatives you've considered

For the overall approach:

  1. We could keep the current design, with separate keyword arguments for decoding options, and just be very careful about passing around these arguments. This seems pretty painful for the backend refactor, though.
  2. We could keep the current design only for the user facing open_dataset() interface, and then internally convert into the DecodingOptions() struct for passing to backend constructors. This would provide much needed flexibility for backend authors, but most users wouldn't benefit from the new interface. Perhaps this would make sense as an intermediate step?
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4490/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
169274464 MDU6SXNzdWUxNjkyNzQ0NjQ= 939 Consider how to deal with the proliferation of decoder options on open_dataset shoyer 1217238 closed 0     8 2016-08-04T01:57:26Z 2020-10-06T15:39:11Z 2020-10-06T15:39:11Z MEMBER      

There are already lots of keyword arguments, and users want even more! (#843)

Maybe we should use some sort of object to encapsulate desired options?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/939/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
253107677 MDU6SXNzdWUyNTMxMDc2Nzc= 1527 Binary operations with ds.groupby('time.dayofyear') errors out, but ds.groupby('time.month') works shoyer 1217238 open 0     10 2017-08-26T16:54:53Z 2020-09-29T10:05:42Z   MEMBER      

Reported on the mailing list:

Original datasets:

```
>>> ds_xr
<xarray.DataArray (time: 12775)>
array([-0.01, -0.01, -0.01, ..., -0.27, -0.27, -0.27])
Coordinates:
  * time     (time) datetime64[ns] 1979-01-01 1979-01-02 1979-01-03 ...

>>> slope_itcp_ds
<xarray.Dataset>
Dimensions:                    (lat: 73, level: 2, lon: 144, time: 366)
Coordinates:
  * lon                        (lon) float32 0.0 2.5 5.0 7.5 10.0 12.5 ...
  * lat                        (lat) float32 90.0 87.5 85.0 82.5 80.0 ...
  * level                      (level) float64 0.0 1.0
  * time                       (time) datetime64[ns] 2010-01-01 ...
Data variables:
    xarray_dataarray_variable  (time, level, lat, lon) float64 -0.8795 ...
Attributes:
    CDI:          Climate Data Interface version 1.7.1 (http://mpimet.mpg.de/...
    Conventions:  CF-1.4
    history:      Fri Aug 25 18:55:50 2017: cdo -inttime,2010-01-01,00:00:00,...
    CDO:          Climate Data Operators version 1.7.1 (http://mpimet.mpg.de/...
```

Issue: grouping by month works and outputs this:

```
>>> ds_xr.groupby('time.month') - slope_itcp_ds.groupby('time.month').mean('time')
<xarray.Dataset>
Dimensions:                    (lat: 73, level: 2, lon: 144, time: 12775)
Coordinates:
  * lon                        (lon) float32 0.0 2.5 5.0 7.5 10.0 12.5 ...
  * lat                        (lat) float32 90.0 87.5 85.0 82.5 80.0 ...
  * level                      (level) float64 0.0 1.0
    month                      (time) int64 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...
  * time                       (time) datetime64[ns] 1979-01-01 ...
Data variables:
    xarray_dataarray_variable  (time, level, lat, lon) float64 1.015 ...
```

Grouping by dayofyear doesn't work and gives this traceback:

```
>>> ds_xr.groupby('time.dayofyear') - slope_itcp_ds.groupby('time.dayofyear').mean('time')
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-10-01c0cf4c980a> in <module>()
----> 1 ds_xr.groupby('time.dayofyear') - slope_itcp_ds.groupby('time.dayofyear').mean('time')

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/groupby.py in func(self, other)
    316             g = f if not reflexive else lambda x, y: f(y, x)
    317             applied = self._yield_binary_applied(g, other)
--> 318             combined = self._combine(applied)
    319             return combined
    320         return func

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/groupby.py in _combine(self, applied, shortcut)
    532             combined = self._concat_shortcut(applied, dim, positions)
    533         else:
--> 534             combined = concat(applied, dim)
    535             combined = _maybe_reorder(combined, dim, positions)
    536

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in concat(objs, dim, data_vars, coords, compat, positions, indexers, mode, concat_over)
    118         raise TypeError('can only concatenate xarray Dataset and DataArray '
    119                         'objects, got %s' % type(first_obj))
--> 120     return f(objs, dim, data_vars, coords, compat, positions)
    121
    122

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in _dataset_concat(datasets, dim, data_vars, coords, compat, positions)
    210     datasets = align(*datasets, join='outer', copy=False, exclude=[dim])
    211
--> 212     concat_over = _calc_concat_over(datasets, dim, data_vars, coords)
    213
    214     def insert_result_variable(k, v):

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in _calc_concat_over(datasets, dim, data_vars, coords)
    190                        if dim in v.dims)
    191     concat_over.update(process_subset_opt(data_vars, 'data_vars'))
--> 192     concat_over.update(process_subset_opt(coords, 'coords'))
    193     if dim in datasets[0]:
    194         concat_over.add(dim)

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in process_subset_opt(opt, subset)
    165                               for ds in datasets[1:])
    166             # all nonindexes that are not the same in each dataset
--> 167             concat_new = set(k for k in getattr(datasets[0], subset)
    168                              if k not in concat_over and differs(k))
    169         elif opt == 'all':

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in <genexpr>(.0)
    166             # all nonindexes that are not the same in each dataset
    167             concat_new = set(k for k in getattr(datasets[0], subset)
--> 168                              if k not in concat_over and differs(k))
    169         elif opt == 'all':
    170             concat_new = (set(getattr(datasets[0], subset)) -

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in differs(vname)
    163             v = datasets[0].variables[vname]
    164             return any(not ds.variables[vname].equals(v)
--> 165                        for ds in datasets[1:])
    166             # all nonindexes that are not the same in each dataset
    167             concat_new = set(k for k in getattr(datasets[0], subset)

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in <genexpr>(.0)
    163             v = datasets[0].variables[vname]
    164             return any(not ds.variables[vname].equals(v)
--> 165                        for ds in datasets[1:])
    166             # all nonindexes that are not the same in each dataset
    167             concat_new = set(k for k in getattr(datasets[0], subset)

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/utils.py in __getitem__(self, key)
    288
    289     def __getitem__(self, key):
--> 290         return self.mapping[key]
    291
    292     def __iter__(self):

KeyError: 'lon'
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1527/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
644821435 MDU6SXNzdWU2NDQ4MjE0MzU= 4176 Pre-expand data and attributes in DataArray/Variable HTML repr? shoyer 1217238 closed 0     7 2020-06-24T18:22:35Z 2020-09-21T20:10:26Z 2020-06-28T17:03:40Z MEMBER      

Proposal

Given that a major purpose for plotting an array is to look at data or attributes, I wonder if we should expand these sections by default?

  • I worry that clicking on icons to expand sections may not be easy to discover
  • This would also be consistent with the text repr, which shows these sections by default (the Dataset repr is already consistent between text and HTML)

Context

Currently the HTML repr for DataArray/Variable looks like this:

To see array data, you have to click on the icon:

(thanks to @max-sixty for making this a little bit more manageably sized in https://github.com/pydata/xarray/pull/3905!)

There's also a really nice repr for nested dask arrays:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4176/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
417542619 MDU6SXNzdWU0MTc1NDI2MTk= 2803 Test failure with TestValidateAttrs.test_validating_attrs shoyer 1217238 closed 0     6 2019-03-05T23:03:02Z 2020-08-25T14:29:19Z 2019-03-14T15:59:13Z MEMBER      

This is due to setting multi-dimensional attributes being an error, as of the latest netCDF4-Python release: https://github.com/Unidata/netcdf4-python/blob/master/Changelog

E.g., as seen on Appveyor: https://ci.appveyor.com/project/shoyer/xray/builds/22834250/job/9q0ip6i3cchlbkw2

```
================================== FAILURES ===================================
_________________ TestValidateAttrs.test_validating_attrs _____________________

self = <xarray.tests.test_backends.TestValidateAttrs object at 0x00000096BE5FAFD0>

def test_validating_attrs(self):
    def new_dataset():
        return Dataset({'data': ('y', np.arange(10.0))}, {'y': np.arange(10)})

    def new_dataset_and_dataset_attrs():
        ds = new_dataset()
        return ds, ds.attrs

    def new_dataset_and_data_attrs():
        ds = new_dataset()
        return ds, ds.data.attrs

    def new_dataset_and_coord_attrs():
        ds = new_dataset()
        return ds, ds.coords['y'].attrs

    for new_dataset_and_attrs in [new_dataset_and_dataset_attrs,
                                  new_dataset_and_data_attrs,
                                  new_dataset_and_coord_attrs]:
        ds, attrs = new_dataset_and_attrs()

        attrs[123] = 'test'
        with raises_regex(TypeError, 'Invalid name for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs[MiscObject()] = 'test'
        with raises_regex(TypeError, 'Invalid name for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs[''] = 'test'
        with raises_regex(ValueError, 'Invalid name for attr'):
            ds.to_netcdf('test.nc')

        # This one should work
        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = 'test'
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = {'a': 5}
        with raises_regex(TypeError, 'Invalid value for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = MiscObject()
        with raises_regex(TypeError, 'Invalid value for attr'):
            ds.to_netcdf('test.nc')

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = 5
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = 3.14
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = [1, 2, 3, 4]
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = (1.9, 2.5)
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = np.arange(5)
        with create_tmp_file() as tmp_file:
            ds.to_netcdf(tmp_file)

        ds, attrs = new_dataset_and_attrs()
        attrs['test'] = np.arange(12).reshape(3, 4)
        with create_tmp_file() as tmp_file:
          ds.to_netcdf(tmp_file)

xarray\tests\test_backends.py:3450:


_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

xarray\core\dataset.py:1323: in to_netcdf
    compute=compute)
xarray\backends\api.py:767: in to_netcdf
    unlimited_dims=unlimited_dims)
xarray\backends\api.py:810: in dump_to_store
    unlimited_dims=unlimited_dims)
xarray\backends\common.py:262: in store
    self.set_attributes(attributes)
xarray\backends\common.py:278: in set_attributes
    self.set_attribute(k, v)
xarray\backends\netCDF4_.py:418: in set_attribute
    _set_nc_attribute(self.ds, key, value)
xarray\backends\netCDF4_.py:294: in _set_nc_attribute
    obj.setncattr(key, value)
netCDF4\_netCDF4.pyx:2781: in netCDF4._netCDF4.Dataset.setncattr
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

???
E   ValueError: multi-dimensional array attributes not supported

netCDF4\_netCDF4.pyx:1514: ValueError
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2803/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
676306518 MDU6SXNzdWU2NzYzMDY1MTg= 4331 Support explicitly setting a dimension order with to_dataframe() shoyer 1217238 closed 0     0 2020-08-10T17:45:17Z 2020-08-14T18:28:26Z 2020-08-14T18:28:26Z MEMBER      

As discussed in https://github.com/pydata/xarray/issues/2346, it would be nice to support explicitly setting the desired order of dimensions when calling Dataset.to_dataframe() or DataArray.to_dataframe().

There is nice precedent for this in the to_dask_dataframe method: http://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_dask_dataframe.html

I imagine we could copy the exact same API for `to_dataframe()`.
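
For illustration, assuming the keyword is named dim_order as in to_dask_dataframe:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({'a': (('x', 'y', 'time'), np.zeros((2, 2, 3)))})

# explicitly choose the level order of the resulting MultiIndex
df = ds.to_dataframe(dim_order=['time', 'y', 'x'])
```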

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4331/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
671019427 MDU6SXNzdWU2NzEwMTk0Mjc= 4295 We shouldn't require a recent version of setuptools to install xarray shoyer 1217238 closed 0     33 2020-08-01T16:49:57Z 2020-08-14T09:52:42Z 2020-08-14T09:52:42Z MEMBER      

@canol reports on our mailing list that our setuptools 41.2 (released 21 August 2019) install requirement is making it hard to install recent versions of xarray at his company: https://groups.google.com/g/xarray/c/HS_xcZDEEtA/m/GGmW-3eMCAAJ

Hello, this is just a feedback about an issue we experienced which caused our internal tools stack to stay with xarray 0.15 version instead of a newer versions.

We are a company using xarray in our internal frameworks and at the beginning we didn't have any restrictions on xarray version in our requirements file, so that new installations of our framework were using the latest version of xarray. But a few months ago we started to hear complaints from users who were having problems with installing our framework and the installation was failing because of xarray's requirement to use at least setuptools 41.2 which is released on 21th of August last year. So it hasn't been a year since it got released which might be considered relatively new.

During the installation of our framework, pip was failing to update setuptools by saying that some other process is already using setuptools files so it cannot update setuptools. The people who are using our framework are not software developers so they didn't know how to solve this problem and it became so overwhelming for us maintainers that we set the xarray requirement to version >=0.15 <0.16. We also share our internal framework with customers of our company so we didn't want to bother the customers with any potential problems.

You can see some other people having having similar problem when trying to update setuptools here (although not related to xarray): https://stackoverflow.com/questions/49338652/pip-install-u-setuptools-fail-windows-10

It is not a big deal but I just wanted to give this as a feedback. I don't know how much xarray depends on setuptools' 41.2 version.

I was surprised to see this in our setup.cfg file, added by @crusaderky in #3628. The version requirement is not documented in our docs.

Given that setuptools may be challenging to upgrade, would it be possible to relax this version requirement?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4295/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
290593053 MDU6SXNzdWUyOTA1OTMwNTM= 1850 xarray contrib module shoyer 1217238 closed 0     25 2018-01-22T19:50:08Z 2020-07-23T16:34:10Z 2020-07-23T16:34:10Z MEMBER      

Over in #1288 @nbren12 wrote:

Overall, I think the xarray community could really benefit from some kind of centralized contrib package which has a low barrier to entry for these kinds of functions.

Yes, I agree that we should explore this. There are a lot of interesting projects building on xarray now but not great ways to discover them.

Are there other open source projects with a good model we should copy here?

  • Scikit-Learn has a separate GitHub org/repositories for contrib projects: https://github.com/scikit-learn-contrib.
  • TensorFlow has a contrib module within the TensorFlow namespace: tensorflow.contrib

This gives us two different models to consider. The first "separate repository" model might be easier/flexible from a maintenance perspective. Any preferences/thoughts?

There's also some nice overlap with the Pangeo project.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1850/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
35682274 MDU6SXNzdWUzNTY4MjI3NA== 158 groupby should work with name=None shoyer 1217238 closed 0     2 2014-06-13T15:38:00Z 2020-05-30T13:15:56Z 2020-05-30T13:15:56Z MEMBER      
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/158/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
612772669 MDU6SXNzdWU2MTI3NzI2Njk= 4030 Doc build on Azure is timing out on master shoyer 1217238 closed 0     1 2020-05-05T17:30:16Z 2020-05-05T21:49:26Z 2020-05-05T21:49:26Z MEMBER      

I don't know what's going on, but it currently times out after 1 hour: https://dev.azure.com/xarray/xarray/_build/results?buildId=2767&view=logs&j=7e620c85-24a8-5ffa-8b1f-642bc9b1fc36&t=68484831-0a19-5145-bfe9-6309e5f7691d

Is it possible to log in to Azure to debug this stuff?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4030/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
598567792 MDU6SXNzdWU1OTg1Njc3OTI= 3966 HTML repr is slightly broken in Google Colab shoyer 1217238 closed 0     1 2020-04-12T20:44:51Z 2020-04-16T20:14:37Z 2020-04-16T20:14:32Z MEMBER      

The "data" toggles are pre-expanded and don't work.

See https://github.com/googlecolab/colabtools/issues/1145 for a full description.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3966/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
479434052 MDU6SXNzdWU0Nzk0MzQwNTI= 3206 DataFrame with MultiIndex -> xarray with sparse array shoyer 1217238 closed 0     1 2019-08-12T00:46:16Z 2020-04-06T20:41:26Z 2019-08-27T08:54:26Z MEMBER      

Now that we have preliminary support for sparse arrays in xarray, one really cool feature we could explore is creating sparse arrays from MultiIndexed pandas DataFrames.

Right now, xarray's methods for creating objects from pandas always create dense arrays, but the size of these dense arrays can get big really quickly if the MultiIndex is sparsely populated, e.g.,

```python
import pandas as pd
import numpy as np
import xarray

df = pd.DataFrame({
    'w': range(10),
    'x': list('abcdefghij'),
    'y': np.arange(0, 100, 10),
    'z': np.ones(10),
}).set_index(['w', 'x', 'y'])
print(xarray.Dataset.from_dataframe(df))
```

This length 10 DataFrame turned into a dense array with 1000 elements (only 10 of which are not NaN):

```
<xarray.Dataset>
Dimensions:  (w: 10, x: 10, y: 10)
Coordinates:
  * w        (w) int64 0 1 2 3 4 5 6 7 8 9
  * x        (x) object 'a' 'b' 'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j'
  * y        (y) int64 0 10 20 30 40 50 60 70 80 90
Data variables:
    z        (w, x, y) float64 1.0 nan nan nan nan nan ... nan nan nan nan 1.0
```

We can imagine xarray.Dataset.from_dataframe(df, sparse=True) would make the same Dataset, but with sparse array (with a NaN fill value) instead of dense arrays.

Once sparse arrays work pretty well, this could actually obviate most of the use cases for MultiIndex in arrays. Arguably the model is quite a bit cleaner.
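As a rough sketch of what a `sparse=True` option could do under the hood (assuming the pydata/sparse package; `from_dataframe` does not take such a keyword today), note that the MultiIndex codes map directly onto the coordinates of a COO array:

```python
import numpy as np
import pandas as pd
import sparse  # https://github.com/pydata/sparse

df = pd.DataFrame({
    'w': range(10),
    'x': list('abcdefghij'),
    'y': np.arange(0, 100, 10),
    'z': np.ones(10),
}).set_index(['w', 'x', 'y'])

# the integer codes of the MultiIndex are exactly the coordinates
# of the non-fill entries in the hypothetical sparse array
coords = np.stack([np.asarray(codes) for codes in df.index.codes])
shape = tuple(len(level) for level in df.index.levels)
z = sparse.COO(coords, df['z'].values, shape=shape, fill_value=np.nan)
print(z.nnz, z.shape)  # 10 stored values in a (10, 10, 10) array
```

Only the 10 observed values would be stored, instead of 1000 mostly-NaN cells.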

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3206/reactions",
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 3,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
479940669 MDU6SXNzdWU0Nzk5NDA2Njk= 3212 Custom fill_value for from_dataframe/from_series shoyer 1217238 open 0     0 2019-08-13T03:22:46Z 2020-04-06T20:40:26Z   MEMBER      

It would be nice to have the option to customize the fill value when creating xarray objects from pandas, instead of requiring it to always be NaN.

This would probably be especially useful when creating sparse arrays (https://github.com/pydata/xarray/issues/3206), for which it often makes sense to use a fill value of zero. If your data has integer values (e.g., it represents counts), you probably don't want to let it be cast to float first.
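For illustration, here is today's behavior and the manual workaround that a fill_value argument would make unnecessary (the keyword itself is only a proposal):

```python
import pandas as pd
import xarray

df = pd.DataFrame(
    {'w': [0, 2], 'x': ['a', 'b'], 'counts': [3, 5]}
).set_index(['w', 'x'])

# today: the missing (w, x) combinations become NaN, forcing float dtype
counts = xarray.Dataset.from_dataframe(df)['counts']
# workaround: fill and cast back afterwards; a fill_value argument would
# avoid this round-trip through float entirely
counts_int = counts.fillna(0).astype('int64')
```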

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3212/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
314482923 MDU6SXNzdWUzMTQ0ODI5MjM= 2061 Backend specific conventions decoding shoyer 1217238 open 0     1 2018-04-16T02:45:46Z 2020-04-05T23:42:34Z   MEMBER      

Currently, we have a single function xarray.decode_cf() that we apply to data loaded from all xarray backends.

This is appropriate for netCDF data, but it's not appropriate for backends with different implementations. For example, it doesn't work for zarr (which is why we have the separate open_zarr), and is also a poor fit for PseudoNetCDF (https://github.com/pydata/xarray/pull/1905). In the worst cases (e.g., for PseudoNetCDF) it can actually result in data being decoded twice, which can result in incorrectly scaled data.

Instead, we should declare default decoders as part of the backend API, and use those decoders as the defaults for open_dataset().

This should probably be tackled as part of the broader backends refactor: https://github.com/pydata/xarray/issues/1970
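To make the idea concrete, here is a minimal sketch of a decoder registry, under the assumption of a backend-registration mechanism; none of these names exist in xarray, and only xr.decode_cf is a real function:

```python
import xarray as xr

# hypothetical mapping from backend engine names to their default decoders
DEFAULT_DECODERS = {
    'netcdf4': xr.decode_cf,  # CF decoding suits netCDF-based backends
    'zarr': lambda ds: ds,    # placeholder for zarr-specific decoding
}

def open_with_backend_decoding(ds, engine='netcdf4', decoder=None):
    """Apply the backend's own default decoder exactly once."""
    decode = decoder if decoder is not None else DEFAULT_DECODERS[engine]
    return decode(ds)
```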

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2061/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
28376794 MDU6SXNzdWUyODM3Njc5NA== 25 Consistent rules for handling merges between variables with different attributes shoyer 1217238 closed 0     13 2014-02-26T22:37:01Z 2020-04-05T19:13:13Z 2014-09-04T06:50:49Z MEMBER      

Currently, variable attributes are checked for equality before allowing for a merge via a call to xarray_equal. It should be possible to merge datasets even if some of the variable metadata disagrees (conflicting attributes should be dropped). This is already the behavior for global attributes.

The right design of this feature should probably include some optional argument to Dataset.merge indicating how strict we want the merge to be. I can see at least three versions that could be useful:

1. Drop conflicting metadata silently.
2. Don't allow for conflicting values, but drop non-matching keys.
3. Require all keys and values to match.

We can argue about which of these should be the default option. My inclination is to be as flexible as possible by using 1 or 2 in most cases.
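A minimal sketch of the three policies, operating on plain attribute dicts (the `how` values are illustrative names, not an existing API):

```python
def merge_attrs(a, b, how='drop_conflicting'):
    """Sketch of the three proposed merge strictness levels."""
    shared = set(a) & set(b)
    conflicts = {k for k in shared if a[k] != b[k]}
    if how == 'drop_conflicting':   # 1. drop conflicting metadata silently
        merged = {**a, **b}
        for k in conflicts:
            del merged[k]
        return merged
    if how == 'compatible':         # 2. conflicting values are an error,
        if conflicts:               #    but non-matching keys are dropped
            raise ValueError('conflicting attributes: %r' % sorted(conflicts))
        return {k: a[k] for k in shared}
    if how == 'identical':          # 3. all keys and values must match
        if a != b:
            raise ValueError('attributes are not identical')
        return dict(a)
    raise ValueError('unknown merge policy: %r' % how)
```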

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/25/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
173612265 MDU6SXNzdWUxNzM2MTIyNjU= 988 Hooks for custom attribute handling in xarray operations shoyer 1217238 open 0     24 2016-08-27T19:48:22Z 2020-04-05T18:19:11Z   MEMBER      

Over in #964, I am working on a rewrite/unification of the guts of xarray's logic for computation with labelled data. The goal is to get all of xarray's internal logic for working with labelled data going through a minimal set of flexible functions which we can also expose as part of the API.

Because we will finally have all (or at least nearly all) xarray operations using the same code path, I think it will also finally become feasible to open up hooks allowing extensions how xarray handles metadata.

Two obvious use cases here are units (#525) and automatic maintenance of metadata (e.g., cell_methods or history fields). Both of these are out of scope for xarray itself, mostly because the specific logic tends to be domain specific. This could also subsume options like the existing keep_attrs on many operations.

I like the idea of supporting something like NumPy's __array_wrap__ to allow third-party code to finalize xarray objects in some way before they are returned. However, it's not obvious to me what the right design is.

- Should we look up a custom attribute on subclasses like __array_wrap__ (or __numpy_ufunc__) in NumPy, or should we have a system (e.g., unilaterally or with a context manager and xarray.set_options) for registering hooks that are then checked on all xarray objects? I am inclined toward the latter, even though it's a little slower, just because it will be simpler and easier to get right.
- Should these methods be able to control the full result objects, or only set attrs and/or name?
- To be useful, do we need to allow extensions to take control of the full operation, to support things like automatic unit conversion? This would suggest something closer to __numpy_ufunc__, which is a little more ambitious than what I had in mind here.
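As one purely illustrative shape for the registration-based option (none of these names exist in xarray), a global list of finalizer hooks might look like:

```python
_ATTR_HOOKS = []

def register_attrs_hook(func):
    """Register a callable (result, sources) -> result. Illustrative only."""
    _ATTR_HOOKS.append(func)
    return func

def _finalize(result, sources):
    """What xarray would call on each freshly constructed result."""
    for hook in _ATTR_HOOKS:
        result = hook(result, sources)
    return result

@register_attrs_hook
def propagate_units(result, sources):
    # hypothetical units handler: keep 'units' only when all inputs agree
    units = {s.attrs['units'] for s in sources
             if 'units' in getattr(s, 'attrs', {})}
    if len(units) == 1:
        result.attrs['units'] = units.pop()
    return result
```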

Feedback would be greatly appreciated.

CC @darothen @rabernat @jhamman @pwolfram

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/988/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
29136905 MDU6SXNzdWUyOTEzNjkwNQ== 60 Implement DataArray.idxmax() shoyer 1217238 closed 0   1.0 741199 14 2014-03-10T22:03:06Z 2020-03-29T01:54:25Z 2020-03-29T01:54:25Z MEMBER      

Should match the pandas function: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html
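For reference, the pandas behavior to match returns the label of the maximum rather than its integer position:

```python
import pandas as pd

s = pd.Series([2, 9, 4], index=['a', 'b', 'c'])
print(s.idxmax())  # 'b' -- the index label of the maximum, not position 1
```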

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/60/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
261805282 MDU6SXNzdWUyNjE4MDUyODI= 1600 groupby doesn't work when a dimension is resized as part of apply shoyer 1217238 closed 0     1 2017-09-30T01:01:06Z 2020-03-25T15:30:18Z 2020-03-25T15:30:17Z MEMBER      

```
In [60]: da = xarray.DataArray([1, 2, 3], dims='x', coords={'y': ('x', [1, 1, 1])})

In [61]: da.groupby('y').apply(lambda x: x[:2])
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-61-4c28a4712c34> in <module>()
----> 1 da.groupby('y').apply(lambda x: x[:2])

~/dev/xarray/xarray/core/groupby.py in apply(self, func, shortcut, **kwargs)
    516         applied = (maybe_wrap_array(arr, func(arr, **kwargs))
    517                    for arr in grouped)
--> 518         return self._combine(applied, shortcut=shortcut)
    519
    520     def _combine(self, applied, shortcut=False):

~/dev/xarray/xarray/core/groupby.py in _combine(self, applied, shortcut)
    526         else:
    527             combined = concat(applied, dim)
--> 528             combined = _maybe_reorder(combined, dim, positions)
    529
    530         if isinstance(combined, type(self._obj)):

~/dev/xarray/xarray/core/groupby.py in _maybe_reorder(xarray_obj, dim, positions)
    436         return xarray_obj
    437     else:
--> 438         return xarray_obj[{dim: order}]
    439
    440

~/dev/xarray/xarray/core/dataarray.py in __getitem__(self, key)
    476         else:
    477             # orthogonal array indexing
--> 478             return self.isel(**self._item_key_to_dict(key))
    479
    480     def __setitem__(self, key, value):

~/dev/xarray/xarray/core/dataarray.py in isel(self, drop, **indexers)
    710         DataArray.sel
    711         """
--> 712         ds = self._to_temp_dataset().isel(drop=drop, **indexers)
    713         return self._from_temp_dataset(ds)
    714

~/dev/xarray/xarray/core/dataset.py in isel(self, drop, **indexers)
   1172         for name, var in iteritems(self._variables):
   1173             var_indexers = dict((k, v) for k, v in indexers if k in var.dims)
-> 1174             new_var = var.isel(**var_indexers)
   1175             if not (drop and name in var_indexers):
   1176                 variables[name] = new_var

~/dev/xarray/xarray/core/variable.py in isel(self, **indexers)
    596             if dim in indexers:
    597                 key[i] = indexers[dim]
--> 598         return self[tuple(key)]
    599
    600     def squeeze(self, dim=None):

~/dev/xarray/xarray/core/variable.py in __getitem__(self, key)
    426         dims = tuple(dim for k, dim in zip(key, self.dims)
    427                      if not isinstance(k, integer_types))
--> 428         values = self._indexable_data[key]
    429         # orthogonal indexing should ensure the dimensionality is consistent
    430         if hasattr(values, 'ndim'):

~/dev/xarray/xarray/core/indexing.py in __getitem__(self, key)
    476     def __getitem__(self, key):
    477         key = self._convert_key(key)
--> 478         return self._ensure_ndarray(self.array[key])
    479
    480     def __setitem__(self, key, value):

IndexError: index 2 is out of bounds for axis 1 with size 2
```

This would be useful, for example, for grouped sampling: https://stackoverflow.com/questions/46498247/how-to-downsample-xarray-dataset-using-groupby

To fix this, we will need to update our heuristics that decide if a groupby operation is a "transform" type operation that should have the output reordered to the original order: https://github.com/pydata/xarray/blob/24643ecee2eab04d0f84c41715d753e829f448e6/xarray/core/groupby.py#L293-L299
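In the meantime, a workaround sketch is to reassemble the groups by hand rather than letting groupby try to restore the original order:

```python
import xarray

da = xarray.DataArray([1, 2, 3], dims='x', coords={'y': ('x', [1, 1, 1])})
# apply per group and concatenate manually; this sidesteps the reordering
# step that fails when the applied function resizes the dimension
pieces = [group[:2] for _, group in da.groupby('y')]
result = xarray.concat(pieces, dim='x')
```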

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1600/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
390774883 MDU6SXNzdWUzOTA3NzQ4ODM= 2605 Pad method shoyer 1217238 closed 0     9 2018-12-13T17:08:25Z 2020-03-19T14:41:49Z 2020-03-19T14:41:49Z MEMBER      

It would be nice to have a generic .pad() method for xarray objects, based on numpy.pad and dask.array.pad.

In particular, pad with mode='wrap' could solve several use cases related to periodic boundary conditions: https://github.com/pydata/xarray/issues/1005, https://github.com/pydata/xarray/issues/2007. For example, ds.pad(longitude=(0, 1), mode='wrap') would add an extra point with periodic boundary conditions along the longitude dimension.

It probably makes sense to linearly extrapolate coordinates along padded dimensions, as long as they are regularly spaced. This might use heuristics and/or a keyword argument.
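For illustration, here is what the wrap mode and the coordinate extrapolation would amount to, built by hand from numpy.pad (the .pad() method itself does not exist yet):

```python
import numpy as np

data = np.arange(4)                         # values along longitude
lon = np.array([0.0, 90.0, 180.0, 270.0])   # regularly spaced coordinate

padded = np.pad(data, (0, 1), mode='wrap')                # -> [0 1 2 3 0]
lon_padded = np.append(lon, lon[-1] + (lon[1] - lon[0]))  # -> [..., 360.0]
```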

I don't have a plans to work on this in the near term. It could be a good project of moderate complexity for a new contributor.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2605/reactions",
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
484622545 MDU6SXNzdWU0ODQ2MjI1NDU= 3252 interp and reindex should work for 1d -> nd indexing shoyer 1217238 closed 0     12 2019-08-23T16:52:44Z 2020-03-13T13:58:38Z 2020-03-13T13:58:38Z MEMBER      

This works with isel and sel (see below). There's no particular reason why it can't work with reindex and interp, too, though we would need to implement our own vectorized version of linear interpolation (should not be too bad, it's mostly a matter of indexing twice and calculating weights from the difference in coordinate values).
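As a rough sketch of the "index twice and weight" idea (assuming a sorted, regularly spaced coordinate; this is not a proposed implementation):

```python
import numpy as np
import xarray as xr

data = xr.DataArray(np.arange(12.).reshape((3, 4)),
                    [('x', np.arange(3)), ('y', np.arange(4.))])
# target y locations varying per x, with dims ('x', 'z') like `ind` below
target = xr.DataArray([[0.5, 2.25], [1.0, 0.0], [1.5, 2.9]],
                      dims=['x', 'z'], coords={'x': [0, 1, 2]})

y0 = float(data['y'][0])
dy = float(data['y'][1] - data['y'][0])
frac = (target - y0) / dy
lower = np.floor(frac).astype(int).clip(0, data.sizes['y'] - 2)
weight = frac - lower
# index twice (vectorized), then blend with the weights
interped = (1 - weight) * data.isel(y=lower) + weight * data.isel(y=lower + 1)
```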

Apparently this is quite important for vertical regridding in weather/climate science (cc @rabernat, @nbren12):

```
In [35]: import xarray as xr

In [36]: import numpy as np

In [37]: data = xr.DataArray(np.arange(12).reshape((3, 4)), [('x', np.arange(3)), ('y', np.arange(4))])

In [38]: ind = xr.DataArray([[0, 2], [1, 0], [1, 2]], dims=['x', 'z'], coords={'x': [0, 1, 2]})

In [39]: data
Out[39]:
<xarray.DataArray (x: 3, y: 4)>
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
Coordinates:
  * x        (x) int64 0 1 2
  * y        (y) int64 0 1 2 3

In [40]: ind
Out[40]:
<xarray.DataArray (x: 3, z: 2)>
array([[0, 2],
       [1, 0],
       [1, 2]])
Coordinates:
  * x        (x) int64 0 1 2
Dimensions without coordinates: z

In [41]: data.isel(y=ind)
Out[41]:
<xarray.DataArray (x: 3, z: 2)>
array([[ 0,  2],
       [ 5,  4],
       [ 9, 10]])
Coordinates:
  * x        (x) int64 0 1 2
    y        (x, z) int64 0 2 1 0 1 2
Dimensions without coordinates: z

In [42]: data.sel(y=ind)
Out[42]:
<xarray.DataArray (x: 3, z: 2)>
array([[ 0,  2],
       [ 5,  4],
       [ 9, 10]])
Coordinates:
  * x        (x) int64 0 1 2
    y        (x, z) int64 0 2 1 0 1 2
Dimensions without coordinates: z

In [43]: data.interp(y=ind)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-43-e6eb7e39fd31> in <module>
----> 1 data.interp(y=ind)

~/dev/xarray/xarray/core/dataarray.py in interp(self, coords, method, assume_sorted, kwargs, **coords_kwargs)
   1303             kwargs=kwargs,
   1304             assume_sorted=assume_sorted,
-> 1305             **coords_kwargs
   1306         )
   1307         return self._from_temp_dataset(ds)

~/dev/xarray/xarray/core/dataset.py in interp(self, coords, method, assume_sorted, kwargs, **coords_kwargs)
   2455                 }
   2456                 variables[name] = missing.interp(
-> 2457                     var, var_indexers, method, **kwargs
   2458                 )
   2459             elif all(d not in indexers for d in var.dims):

~/dev/xarray/xarray/core/missing.py in interp(var, indexes_coords, method, **kwargs)
    533         else:
    534             out_dims.add(d)
--> 535     return result.transpose(*tuple(out_dims))
    536
    537

~/dev/xarray/xarray/core/variable.py in transpose(self, *dims)
   1219             return self.copy(deep=False)
   1220
-> 1221         data = as_indexable(self._data).transpose(axes)
   1222         return type(self)(dims, data, self._attrs, self._encoding, fastpath=True)
   1223

~/dev/xarray/xarray/core/indexing.py in transpose(self, order)
   1218
   1219     def transpose(self, order):
-> 1220         return self.array.transpose(order)
   1221
   1222     def __getitem__(self, key):

ValueError: axes don't match array

In [44]: data.reindex(y=ind)
/Users/shoyer/dev/xarray/xarray/core/dataarray.py:1240: FutureWarning: Indexer has dimensions ('x', 'z') that are different from that to be indexed along y. This will behave differently in the future.
  fill_value=fill_value,
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-44-1277ead996ae> in <module>
----> 1 data.reindex(y=ind)

~/dev/xarray/xarray/core/dataarray.py in reindex(self, indexers, method, tolerance, copy, fill_value, **indexers_kwargs)
   1238             tolerance=tolerance,
   1239             copy=copy,
-> 1240             fill_value=fill_value,
   1241         )
   1242         return self._from_temp_dataset(ds)

~/dev/xarray/xarray/core/dataset.py in reindex(self, indexers, method, tolerance, copy, fill_value, **indexers_kwargs)
   2360             tolerance,
   2361             copy=copy,
-> 2362             fill_value=fill_value,
   2363         )
   2364         coord_names = set(self._coord_names)

~/dev/xarray/xarray/core/alignment.py in reindex_variables(variables, sizes, indexes, indexers, method, tolerance, copy, fill_value)
    398             )
    399
--> 400         target = new_indexes[dim] = utils.safe_cast_to_index(indexers[dim])
    401
    402         if dim in indexes:

~/dev/xarray/xarray/core/utils.py in safe_cast_to_index(array)
    104         index = array
    105     elif hasattr(array, "to_index"):
--> 106         index = array.to_index()
    107     else:
    108         kwargs = {}

~/dev/xarray/xarray/core/dataarray.py in to_index(self)
    545         arrays.
    546         """
--> 547         return self.variable.to_index()
    548
    549     @property

~/dev/xarray/xarray/core/variable.py in to_index(self)
    445     def to_index(self):
    446         """Convert this variable to a pandas.Index"""
--> 447         return self.to_index_variable().to_index()
    448
    449     def to_dict(self, data=True):

~/dev/xarray/xarray/core/variable.py in to_index_variable(self)
    438         """Return this variable as an xarray.IndexVariable"""
    439         return IndexVariable(
--> 440             self.dims, self._data, self._attrs, encoding=self._encoding, fastpath=True
    441         )
    442

~/dev/xarray/xarray/core/variable.py in __init__(self, dims, data, attrs, encoding, fastpath)
   1941         super().__init__(dims, data, attrs, encoding, fastpath)
   1942         if self.ndim != 1:
-> 1943             raise ValueError("%s objects must be 1-dimensional" % type(self).__name__)
   1944
   1945         # Unlike in Variable, always eagerly load values into memory

ValueError: IndexVariable objects must be 1-dimensional
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3252/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
309136602 MDU6SXNzdWUzMDkxMzY2MDI= 2019 Appending to an existing netCDF file fails with scipy==1.0.1 shoyer 1217238 closed 0     5 2018-03-27T21:15:05Z 2020-03-09T07:18:07Z 2020-03-09T07:18:07Z MEMBER      

https://travis-ci.org/pydata/xarray/builds/359093748

Example failure:

```
_____________________ ScipyFilePathTest.test_append_write _____________________

self = <xarray.tests.test_backends.ScipyFilePathTest testMethod=test_append_write>

    def test_append_write(self):
        # regression for GH1215
        data = create_test_data()
>       with self.roundtrip_append(data) as actual:

xarray/tests/test_backends.py:786:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../miniconda/envs/test_env/lib/python3.6/contextlib.py:81: in __enter__
    return next(self.gen)
xarray/tests/test_backends.py:155: in roundtrip_append
    self.save(data[[key]], path, mode=mode, **save_kwargs)
xarray/tests/test_backends.py:162: in save
    **kwargs)
xarray/core/dataset.py:1131: in to_netcdf
    unlimited_dims=unlimited_dims)
xarray/backends/api.py:657: in to_netcdf
    unlimited_dims=unlimited_dims)
xarray/core/dataset.py:1068: in dump_to_store
    unlimited_dims=unlimited_dims)
xarray/backends/common.py:363: in store
    unlimited_dims=unlimited_dims)
xarray/backends/common.py:402: in set_variables
    self.writer.add(source, target)
xarray/backends/common.py:265: in add
    target[...] = source
xarray/backends/scipy_.py:61: in __setitem__
    data[key] = value
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <scipy.io.netcdf.netcdf_variable object at 0x7fe3eb3ec6a0>
index = Ellipsis, data = array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. ])

    def __setitem__(self, index, data):
        if self.maskandscale:
            missing_value = (
                    self._get_missing_value() or
                    getattr(data, 'fill_value', 999999))
            self._attributes.setdefault('missing_value', missing_value)
            self._attributes.setdefault('_FillValue', missing_value)
            data = ((data - self._attributes.get('add_offset', 0.0)) /
                    self._attributes.get('scale_factor', 1.0))
            data = np.ma.asarray(data).filled(missing_value)
            if self._typecode not in 'fd' and data.dtype.kind == 'f':
                data = np.round(data)

        # Expand data for record vars?
        if self.isrec:
            if isinstance(index, tuple):
                rec_index = index[0]
            else:
                rec_index = index
            if isinstance(rec_index, slice):
                recs = (rec_index.start or 0) + len(data)
            else:
                recs = rec_index + 1
            if recs > len(self.data):
                shape = (recs,) + self._shape[1:]
                # Resize in-place does not always work since
                # the array might not be single-segment
                try:
                    self.data.resize(shape)
                except ValueError:
                    self.__dict__['data'] = np.resize(self.data, shape).astype(self.data.dtype)
>       self.data[index] = data
E       ValueError: assignment destination is read-only
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2019/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
296120524 MDU6SXNzdWUyOTYxMjA1MjQ= 1901 Update assign to preserve order for **kwargs shoyer 1217238 open 0     1 2018-02-10T18:05:45Z 2020-02-10T19:44:20Z   MEMBER      

In Python 3.6+, keyword arguments preserve the order in which they are written. We should update assign and assign_coords to rely on this in the next major release, as has been done in pandas: https://github.com/pandas-dev/pandas/issues/14207
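For reference, the guarantee being relied on here is PEP 468, which can be checked directly:

```python
def ordered(**kwargs):
    # in Python 3.6+, **kwargs is an insertion-ordered dict (PEP 468)
    return list(kwargs)

assert ordered(b=1, a=2) == ['b', 'a']
```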

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1901/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
304630814 MDU6SXNzdWUzMDQ2MzA4MTQ= 1986 Doc build in Travis-CI should fail when IPython encounters unexpected error shoyer 1217238 closed 0     2 2018-03-13T05:15:03Z 2020-01-13T20:33:46Z 2020-01-13T17:43:36Z MEMBER      

We don't want to release docs in a broken state.

Ideally, we would simply fail the build when Sphinx encounters a warning (e.g., by adding the -W flag). I attempted to do this in https://github.com/pydata/xarray/pull/1984. However, there are two issues with this:

1. We currently issue a very long list of warnings as part of a sphinx-build (see below), most of these due to failed references or a formatting issue with docstrings for numpydoc. Fixing this will be non-trivial.
2. IPython's sphinx directive currently does not even issue warnings/errors, due to a bug in recent versions of IPython. This has been fixed on master, but not in a released version yet. We should be able to fix this when pandas removes their versioned copy of the IPython directive (https://github.com/pandas-dev/pandas/pull/19657).

Expand for warnings from sphinx:

/Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.all:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.any:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.argmax:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.argmin:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.assign_attrs:4: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.assign_attrs:4: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.count:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.cumprod:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.cumsum:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.groupby_bins:66: WARNING: duplicate citation R3, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.apply_ufunc.rst /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.interpolate_na:15: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.max:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.mean:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.median:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.min:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.pipe:2: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.pipe:2: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.prod:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.quantile:44: WARNING: Unexpected indentation. 
/Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.resample:54: WARNING: duplicate citation R4, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.arcsinh.rst /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.std:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.sum:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.to_netcdf:22: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.to_netcdf:58: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.to_netcdf:55: WARNING: Inline literal start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.DataArray.var:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.assign_attrs:4: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.assign_attrs:4: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.cumprod:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.cumsum:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.groupby_bins:66: WARNING: duplicate citation R7, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.arctanh.rst /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.interpolate_na:15: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.merge:28: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.pipe:2: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.pipe:2: WARNING: Inline strong start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/common.py:docstring of xarray.Dataset.resample:54: WARNING: duplicate citation R8, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.exp.rst /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.to_netcdf:59: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.to_netcdf:56: WARNING: Inline literal start-string without end-string. /Users/shoyer/dev/xarray/xarray/core/alignment.py:docstring of xarray.align:25: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/alignment.py:docstring of xarray.align:45: WARNING: Inline emphasis start-string without end-string. 
/Users/shoyer/dev/xarray/xarray/core/computation.py:docstring of xarray.apply_ufunc:147: WARNING: duplicate citation R9, other instance in /Users/shoyer/dev/xarray/doc/generated/xarray.ufuncs.exp.rst /Users/shoyer/dev/xarray/xarray/core/combine.py:docstring of xarray.auto_combine:36: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/combine.py:docstring of xarray.concat:35: WARNING: Definition list ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:11: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:13: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:26: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:28: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.apply:30: WARNING: Enumerated list ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.apply:11: WARNING: Unexpected indentation. /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.apply:13: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/core/merge.py:docstring of xarray.merge:15: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_mfdataset:11: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_mfdataset:38: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.angle:13: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arccos:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arccosh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arcsin:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arcsinh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctan:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctan2:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctanh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.ceil:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.conj:6: WARNING: Inline emphasis start-string without end-string. 
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.copysign:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.cos:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.cosh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.deg2rad:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.degrees:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.exp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.expm1:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fabs:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fix:16: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.floor:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmod:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.frexp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.hypot:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isfinite:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isinf:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.ldexp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log10:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log1p:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log2:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logaddexp:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logaddexp2:6: WARNING: Inline emphasis start-string without end-string. 
/Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_and:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_not:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_or:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_xor:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.nextafter:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.rad2deg:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.radians:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.rint:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.sign:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.signbit:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.sin:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.sinh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.sqrt:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.square:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.tan:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.tanh:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.trunc:6: WARNING: Inline emphasis start-string without end-string. /Users/shoyer/dev/xarray/doc/README.rst: WARNING: document isn't included in any toctree /Users/shoyer/dev/xarray/doc/api-hidden.rst: WARNING: document isn't included in any toctree done checking consistency... done preparing documents... 
done /Users/shoyer/dev/xarray/doc/api.rst:165: WARNING: py:attr reference target not found: Dataset.astype /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: py:obj reference target not found: xarray.DataArray.isel_points /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: py:obj reference target not found: xarray.DataArray.sel_points /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray:1: WARNING: py:obj reference target not found: xarray.DataArray.dt /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.chunk:29: WARNING: py:func reference target not found: dask.array.from_array /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.compute:26: WARNING: py:obj reference target not found: dask.array.compute /Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.DataArray.conj:16: WARNING: py:obj reference target not found: numpy.conjugate /Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.DataArray.conjugate:16: WARNING: py:obj reference target not found: numpy.conjugate /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.identical:16: WARNING: py:obj reference target not found: DataArray.equal /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.interpolate_na:53: WARNING: py:obj reference target not found: scipy.interpolate /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.load:25: WARNING: py:obj reference target not found: dask.array.compute /Users/shoyer/dev/xarray/xarray/core/dataarray.py:docstring of xarray.DataArray.persist:23: WARNING: py:obj reference target not found: dask.persist /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.astype /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.dump_to_store /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.get /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.isel_points /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.keys /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.load_store /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset:1: WARNING: py:obj reference target not found: xarray.Dataset.sel_points /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.chunk:29: WARNING: py:func reference target not found: dask.array.from_array /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.compute:26: WARNING: py:obj reference target not found: dask.array.compute /Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.Dataset.conj:16: WARNING: py:obj reference target not found: numpy.conjugate /Users/shoyer/dev/xarray/xarray/core/ops.py:docstring of xarray.Dataset.conjugate:16: WARNING: py:obj reference target not found: numpy.conjugate /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.info:20: WARNING: py:obj reference target not found: netCDF 
/Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.interpolate_na:53: WARNING: py:obj reference target not found: scipy.interpolate /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.load:25: WARNING: py:obj reference target not found: dask.array.compute /Users/shoyer/dev/xarray/xarray/core/dataset.py:docstring of xarray.Dataset.persist:25: WARNING: py:obj reference target not found: dask.persist /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.all /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.any /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.argmax /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.argmin /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.argsort /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.astype /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.broadcast_equals /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.chunk /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.clip /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.compute /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.concat /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.conj /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.conjugate /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.copy /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.count /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.cumprod /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.cumsum /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.equals /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.expand_dims /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: 
py:obj reference target not found: xarray.IndexVariable.fillna /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.get_axis_num /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.get_level_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.identical /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.isel /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.isnull /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.item /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.load /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.max /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.mean /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.median /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.min /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.no_conflicts /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.notnull /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.pad_with_fill_value /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.prod /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.quantile /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.rank /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.reduce /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.roll /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.rolling_window /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.round /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: 
xarray.IndexVariable.searchsorted /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.set_dims /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.shift /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.squeeze /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.stack /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.std /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.sum /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.to_base_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.to_coord /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.to_index /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.to_index_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.to_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.transpose /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.unstack /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.var /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.where /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.T /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.attrs /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.chunks /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.data /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.dims /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.dtype /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.encoding 
/Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.imag /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.level_names /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.name /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.nbytes /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.ndim /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.real /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.shape /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.size /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.sizes /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.IndexVariable:1: WARNING: py:obj reference target not found: xarray.IndexVariable.values /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.all /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.any /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.argmax /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.argmin /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.argsort /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.astype /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.broadcast_equals /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.chunk /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.clip /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.compute /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.concat /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.conj /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.conjugate 
/Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.copy /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.count /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.cumprod /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.cumsum /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.equals /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.expand_dims /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.fillna /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.get_axis_num /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.identical /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.isel /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.isnull /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.item /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.load /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.max /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.mean /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.median /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.min /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.no_conflicts /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.notnull /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.pad_with_fill_value /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.prod /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.quantile /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.rank /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: 
xarray.Variable.reduce /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.roll /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.rolling_window /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.round /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.searchsorted /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.set_dims /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.shift /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.squeeze /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.stack /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.std /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.sum /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.to_base_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.to_coord /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.to_index /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.to_index_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.to_variable /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.transpose /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.unstack /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.var /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.where /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.T /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.attrs /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.chunks /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.data /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj 
reference target not found: xarray.Variable.dims /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.dtype /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.encoding /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.imag /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.nbytes /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.ndim /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.real /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.shape /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.size /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.sizes /Users/shoyer/dev/xarray/xarray/core/variable.py:docstring of xarray.Variable:1: WARNING: py:obj reference target not found: xarray.Variable.values /Users/shoyer/dev/xarray/xarray/core/computation.py:docstring of xarray.apply_ufunc:59: WARNING: py:func reference target not found: numpy.vectorize /Users/shoyer/dev/xarray/xarray/core/computation.py:docstring of xarray.apply_ufunc:197: WARNING: py:func reference target not found: numpy.vectorize /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.assert_open /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.close /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.encode /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.encode_attribute /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.encode_variable /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.ensure_open /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.get /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.get_attrs /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.get_dimensions /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of 
xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.get_encoding /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.get_variables /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.items /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.keys /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.load /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.open_store_variable /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.prepare_variable /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.set_attribute /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.set_attributes /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.set_dimension /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.set_dimensions /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.set_variable /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.set_variables /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.store /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.store_dataset /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.sync /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.values /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.attrs /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.dimensions /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: 
WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.ds /Users/shoyer/dev/xarray/xarray/backends/h5netcdf_.py:docstring of xarray.backends.H5NetCDFStore:1: WARNING: py:obj reference target not found: xarray.backends.H5NetCDFStore.variables /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.assert_open /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.close /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.encode /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.encode_attribute /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.encode_variable /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.ensure_open /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.get /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.get_attrs /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.get_dimensions /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.get_encoding /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.get_variables /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.items /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.keys /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.load /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.open /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.open_store_variable /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.prepare_variable /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of 
xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.set_attribute /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.set_attributes /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.set_dimension /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.set_dimensions /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.set_variable /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.set_variables /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.store /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.store_dataset /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.sync /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.values /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.attrs /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.dimensions /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.ds /Users/shoyer/dev/xarray/xarray/backends/netCDF4_.py:docstring of xarray.backends.NetCDF4DataStore:1: WARNING: py:obj reference target not found: xarray.backends.NetCDF4DataStore.variables /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.close /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_attrs /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_dimensions /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_encoding 
/Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.get_variables /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.items /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.keys /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.load /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.open /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.open_store_variable /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.values /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.attrs /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.dimensions /Users/shoyer/dev/xarray/xarray/backends/pydap_.py:docstring of xarray.backends.PydapDataStore:1: WARNING: py:obj reference target not found: xarray.backends.PydapDataStore.variables /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.assert_open /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.close /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.encode /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.encode_attribute /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.encode_variable /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.ensure_open /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_attrs /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_dimensions /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of 
xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_encoding /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.get_variables /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.items /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.keys /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.load /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.open_store_variable /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.prepare_variable /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_attribute /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_attributes /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_dimension /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_dimensions /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_variable /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.set_variables /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.store /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.store_dataset /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.sync /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.values /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.attrs /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.dimensions /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj 
reference target not found: xarray.backends.ScipyDataStore.ds /Users/shoyer/dev/xarray/xarray/backends/scipy_.py:docstring of xarray.backends.ScipyDataStore:1: WARNING: py:obj reference target not found: xarray.backends.ScipyDataStore.variables /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.all /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.any /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.argmax /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.argmin /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.count /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.max /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.mean /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.median /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.min /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.prod generating indices.../Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.std /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.sum /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.var /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DataArrayGroupBy.groups /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.assign_coords:15: WARNING: py:obj reference target not found: Dataset.assign_coords /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.fillna:29: WARNING: py:obj reference target not found: Dataset.fillna /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.fillna:29: WARNING: py:obj reference target not found: DataArray.fillna /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DataArrayGroupBy.where:30: WARNING: py:obj reference target not 
found: Dataset.where /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.all /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.any /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.argmax /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.argmin /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.count /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.max /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.mean /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.median /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.min /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.prod /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.std /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.sum /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.var /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy:1: WARNING: py:obj reference target not found: xarray.core.groupby.DatasetGroupBy.groups /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.assign:15: WARNING: py:obj reference target not found: Dataset.assign genindex/Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.assign_coords:15: WARNING: py:obj reference target not found: Dataset.assign_coords /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.fillna:29: WARNING: py:obj reference target not found: Dataset.fillna /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.fillna:29: WARNING: py:obj reference target not found: DataArray.fillna /Users/shoyer/dev/xarray/xarray/core/groupby.py:docstring of xarray.core.groupby.DatasetGroupBy.where:30: WARNING: py:obj reference target not found: Dataset.where /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: 
WARNING: py:obj reference target not found: DataArray.rolling /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.groupby /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.rolling /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.groupby /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.argmax /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.argmin /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.count /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.max /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.mean /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.median /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.min /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.prod /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.std /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.sum /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DataArrayRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DataArrayRolling.var /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.rolling /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.rolling /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: Dataset.groupby /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling.__init__:45: WARNING: py:obj reference target not found: DataArray.groupby /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.argmax 
/Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.argmin /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.count /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.max /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.mean /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.median /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.min /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.prod /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.std /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.sum /Users/shoyer/dev/xarray/xarray/core/rolling.py:docstring of xarray.core.rolling.DatasetRolling:1: WARNING: py:obj reference target not found: xarray.core.rolling.DatasetRolling.var /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_dataarray:73: WARNING: py:func reference target not found: dask.array.from_array /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_dataset:72: WARNING: py:func reference target not found: dask.array.from_array /Users/shoyer/dev/xarray/xarray/backends/api.py:docstring of xarray.open_mfdataset:68: WARNING: py:func reference target not found: dask.array.from_array /Users/shoyer/dev/xarray/xarray/backends/rasterio_.py:docstring of xarray.open_rasterio:48: WARNING: py:func reference target not found: dask.array.from_array /Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.add_colorbar /Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.set_axis_labels /Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.set_xlabels /Users/shoyer/dev/xarray/xarray/plot/facetgrid.py:docstring of xarray.plot.FacetGrid:1: WARNING: py:obj reference target not found: xarray.plot.FacetGrid.set_ylabels /Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_equal:29: WARNING: py:obj reference target not found: Dataset.equals /Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_equal:29: WARNING: py:obj reference target not found: DataArray.equals /Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_identical:26: WARNING: py:obj reference target not found: 
Dataset.equals /Users/shoyer/dev/xarray/xarray/testing.py:docstring of xarray.testing.assert_identical:26: WARNING: py:obj reference target not found: DataArray.equals /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arccos:53: WARNING: py:obj reference target not found: emath.arccos /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arcsin:49: WARNING: py:obj reference target not found: emath.arcsin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.arctanh:47: WARNING: py:obj reference target not found: emath.arctanh /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.deg2rad:50: WARNING: py:obj reference target not found: unwrap /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.exp:50: WARNING: py:obj reference target not found: exp2 /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fabs:52: WARNING: py:obj reference target not found: absolute /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fix:33: WARNING: py:obj reference target not found: around /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:63: WARNING: py:obj reference target not found: amax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:66: WARNING: py:obj reference target not found: nanmax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:68: WARNING: py:obj reference target not found: amin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmax:68: WARNING: py:obj reference target not found: nanmin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:63: WARNING: py:obj reference target not found: amin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:66: WARNING: py:obj reference target not found: nanmin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:68: WARNING: py:obj reference target not found: amax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmin:68: WARNING: py:obj reference target not found: nanmax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmod:57: WARNING: py:obj reference target not found: remainder /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.fmod:59: WARNING: py:obj reference target not found: divide /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.iscomplex:31: WARNING: py:obj reference target not found: iscomplexobj /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isfinite:57: WARNING: py:obj reference target not found: isneginf /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isfinite:57: WARNING: py:obj reference target not found: isposinf /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isinf:61: WARNING: py:obj reference target not found: isneginf /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isinf:61: WARNING: py:obj reference target not found: isposinf /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:53: WARNING: py:obj reference target not found: isneginf /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:53: WARNING: py:obj reference target not found: isposinf /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isnan:53: WARNING: py:obj reference target not found: isnat /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.isreal:31: WARNING: py:obj 
reference target not found: isrealobj /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log:51: WARNING: py:obj reference target not found: emath.log /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log10:48: WARNING: py:obj reference target not found: emath.log10 /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.log2:47: WARNING: py:obj reference target not found: emath.log2 /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_and:48: WARNING: py:obj reference target not found: bitwise_and /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_or:49: WARNING: py:obj reference target not found: bitwise_or /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.logical_xor:50: WARNING: py:obj reference target not found: bitwise_xor /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:63: WARNING: py:obj reference target not found: amax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:66: WARNING: py:obj reference target not found: nanmax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:68: WARNING: py:obj reference target not found: amin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.maximum:68: WARNING: py:obj reference target not found: nanmin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:63: WARNING: py:obj reference target not found: amin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:66: WARNING: py:obj reference target not found: nanmin /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:68: WARNING: py:obj reference target not found: amax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.minimum:68: WARNING: py:obj reference target not found: nanmax /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.rad2deg:50: WARNING: py:obj reference target not found: unwrap /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.sqrt:52: WARNING: py:obj reference target not found: lib.scimath.sqrt /Users/shoyer/dev/xarray/xarray/ufuncs.py:docstring of xarray.ufuncs.square:48: WARNING: py:obj reference target not found: power /Users/shoyer/dev/xarray/doc/whats-new.rst:51: WARNING: py:func reference target not found: np.einsum /Users/shoyer/dev/xarray/doc/whats-new.rst:80: WARNING: py:func reference target not found: xarray.DataArrayRolling /Users/shoyer/dev/xarray/doc/whats-new.rst:80: WARNING: py:func reference target not found: xarray.DataArrayRolling.construct /Users/shoyer/dev/xarray/doc/whats-new.rst:153: WARNING: py:func reference target not found: plot /Users/shoyer/dev/xarray/doc/whats-new.rst:236: WARNING: py:meth reference target not found: DataArray.__dask_scheduler__ /Users/shoyer/dev/xarray/doc/whats-new.rst:238: WARNING: py:meth reference target not found: DataArray.plot.imshow /Users/shoyer/dev/xarray/doc/whats-new.rst:415: WARNING: py:func reference target not found: xarray.show_versions /Users/shoyer/dev/xarray/doc/whats-new.rst:428: WARNING: py:func reference target not found: xarray.conventions.decode_cf_datetime /Users/shoyer/dev/xarray/doc/whats-new.rst:446: WARNING: py:func reference target not found: xarray.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:487: WARNING: py:meth reference target not found: xarray.backends.PydapDataStore.open /Users/shoyer/dev/xarray/doc/whats-new.rst:856: WARNING: py:meth reference target 
not found: DataArray.rolling(...).count /Users/shoyer/dev/xarray/doc/whats-new.rst:994: WARNING: py:meth reference target not found: xarray.Variable.to_base_variable /Users/shoyer/dev/xarray/doc/whats-new.rst:994: WARNING: py:meth reference target not found: xarray.Variable.to_index_variable /Users/shoyer/dev/xarray/doc/whats-new.rst:1047: WARNING: py:meth reference target not found: Variable.compute /Users/shoyer/dev/xarray/doc/whats-new.rst:1073: WARNING: py:class reference target not found: FacetGrid /Users/shoyer/dev/xarray/doc/whats-new.rst:1089: WARNING: py:attr reference target not found: xray.Dataset.encoding /Users/shoyer/dev/xarray/doc/whats-new.rst:1089: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:1170: WARNING: py:meth reference target not found: xarray.Dataset.isel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1170: WARNING: py:meth reference target not found: xarray.Dataset.sel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1282: WARNING: py:meth reference target not found: resample /Users/shoyer/dev/xarray/doc/whats-new.rst:1287: WARNING: py:meth reference target not found: sel /Users/shoyer/dev/xarray/doc/whats-new.rst:1287: WARNING: py:meth reference target not found: loc /Users/shoyer/dev/xarray/doc/whats-new.rst:1307: WARNING: py:meth reference target not found: filter_by_attrs /Users/shoyer/dev/xarray/doc/whats-new.rst:1434: WARNING: py:class reference target not found: pd.Series /Users/shoyer/dev/xarray/doc/whats-new.rst:1453: WARNING: py:meth reference target not found: xarray.Dataset.from_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1504: WARNING: py:class reference target not found: xray.DataArray /Users/shoyer/dev/xarray/doc/whats-new.rst:1541: WARNING: py:meth reference target not found: xray.DataArray.to_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1611: WARNING: py:meth reference target not found: xray.Dataset.shift /Users/shoyer/dev/xarray/doc/whats-new.rst:1611: WARNING: py:meth reference target not found: xray.Dataset.roll /Users/shoyer/dev/xarray/doc/whats-new.rst:1626: WARNING: py:func reference target not found: xray.broadcast /Users/shoyer/dev/xarray/doc/whats-new.rst:1683: WARNING: py:meth reference target not found: xray.DataArray.plot /Users/shoyer/dev/xarray/doc/whats-new.rst:1692: WARNING: py:class reference target not found: xray.plot.FacetGrid /Users/shoyer/dev/xarray/doc/whats-new.rst:1692: WARNING: py:meth reference target not found: xray.plot.plot /Users/shoyer/dev/xarray/doc/whats-new.rst:1695: WARNING: py:meth reference target not found: xray.Dataset.sel /Users/shoyer/dev/xarray/doc/whats-new.rst:1695: WARNING: py:meth reference target not found: xray.Dataset.reindex /Users/shoyer/dev/xarray/doc/whats-new.rst:1712: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:1715: WARNING: py:attr reference target not found: xray.Dataset.real /Users/shoyer/dev/xarray/doc/whats-new.rst:1715: WARNING: py:attr reference target not found: xray.Dataset.imag /Users/shoyer/dev/xarray/doc/whats-new.rst:1717: WARNING: py:meth reference target not found: xray.Dataset.from_dataframe /Users/shoyer/dev/xarray/doc/whats-new.rst:1732: WARNING: py:meth reference target not found: xray.DataArray.name /Users/shoyer/dev/xarray/doc/whats-new.rst:1734: WARNING: py:meth reference target not found: xray.DataArray.where /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: 
xray.Dataset.isel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.sel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.where /Users/shoyer/dev/xarray/doc/whats-new.rst:1759: WARNING: py:meth reference target not found: xray.Dataset.diff /Users/shoyer/dev/xarray/doc/whats-new.rst:1768: WARNING: py:meth reference target not found: xray.DataArray.plot /Users/shoyer/dev/xarray/doc/whats-new.rst:1773: WARNING: undefined label: copies vs views (if the link has no caption the label must precede a section header) /Users/shoyer/dev/xarray/doc/whats-new.rst:1778: WARNING: py:meth reference target not found: xray.Dataset.isel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1778: WARNING: py:meth reference target not found: xray.Dataset.sel_points /Users/shoyer/dev/xarray/doc/whats-new.rst:1823: WARNING: py:meth reference target not found: xray.Dataset.where /Users/shoyer/dev/xarray/doc/whats-new.rst:1834: WARNING: py:meth reference target not found: xray.DataArray.diff /Users/shoyer/dev/xarray/doc/whats-new.rst:1834: WARNING: py:meth reference target not found: xray.Dataset.diff /Users/shoyer/dev/xarray/doc/whats-new.rst:1838: WARNING: py:meth reference target not found: xray.DataArray.to_masked_array /Users/shoyer/dev/xarray/doc/whats-new.rst:1847: WARNING: py:meth reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1876: WARNING: py:func reference target not found: xray.concat /Users/shoyer/dev/xarray/doc/whats-new.rst:1886: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1890: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1890: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1895: WARNING: py:func reference target not found: xray.save_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1914: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1914: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1931: WARNING: py:meth reference target not found: xray.Dataset.pipe /Users/shoyer/dev/xarray/doc/whats-new.rst:1933: WARNING: py:meth reference target not found: xray.Dataset.assign /Users/shoyer/dev/xarray/doc/whats-new.rst:1933: WARNING: py:meth reference target not found: xray.Dataset.assign_coords /Users/shoyer/dev/xarray/doc/whats-new.rst:1953: WARNING: py:func reference target not found: xray.open_mfdataset /Users/shoyer/dev/xarray/doc/whats-new.rst:1969: WARNING: py:func reference target not found: xray.concat /Users/shoyer/dev/xarray/doc/whats-new.rst:2005: WARNING: py:meth reference target not found: xray.Dataset.to_array /Users/shoyer/dev/xarray/doc/whats-new.rst:2005: WARNING: py:meth reference target not found: xray.DataArray.to_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2016: WARNING: py:meth reference target not found: xray.Dataset.fillna /Users/shoyer/dev/xarray/doc/whats-new.rst:2028: WARNING: py:meth reference target not found: xray.Dataset.assign /Users/shoyer/dev/xarray/doc/whats-new.rst:2028: WARNING: py:meth reference target not found: xray.Dataset.assign_coords /Users/shoyer/dev/xarray/doc/whats-new.rst:2040: WARNING: py:meth reference target not found: xray.Dataset.sel /Users/shoyer/dev/xarray/doc/whats-new.rst:2040: 
WARNING: py:meth reference target not found: xray.Dataset.reindex /Users/shoyer/dev/xarray/doc/whats-new.rst:2078: WARNING: py:class reference target not found: xray.set_options /Users/shoyer/dev/xarray/doc/whats-new.rst:2103: WARNING: py:meth reference target not found: xray.Dataset.load /Users/shoyer/dev/xarray/doc/whats-new.rst:2117: WARNING: py:meth reference target not found: xray.Dataset.resample /Users/shoyer/dev/xarray/doc/whats-new.rst:2155: WARNING: py:meth reference target not found: xray.Dataset.swap_dims /Users/shoyer/dev/xarray/doc/whats-new.rst:2165: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2165: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2198: WARNING: py:func reference target not found: xray.align /Users/shoyer/dev/xarray/doc/whats-new.rst:2198: WARNING: py:meth reference target not found: xray.Dataset.reindex_like /Users/shoyer/dev/xarray/doc/whats-new.rst:2251: WARNING: py:class reference target not found: xray.Dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2290: WARNING: py:meth reference target not found: xray.Dataset.reindex /Users/shoyer/dev/xarray/doc/whats-new.rst:2303: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2305: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2308: WARNING: py:func reference target not found: xray.open_dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2308: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2311: WARNING: py:meth reference target not found: xray.Dataset.drop /Users/shoyer/dev/xarray/doc/whats-new.rst:2311: WARNING: py:meth reference target not found: xray.DataArray.drop /Users/shoyer/dev/xarray/doc/whats-new.rst:2325: WARNING: py:meth reference target not found: xray.Dataset.broadcast_equals /Users/shoyer/dev/xarray/doc/whats-new.rst:2350: WARNING: py:meth reference target not found: xray.Dataset.to_netcdf /Users/shoyer/dev/xarray/doc/whats-new.rst:2352: WARNING: py:meth reference target not found: xray.Dataset.drop /Users/shoyer/dev/xarray/doc/whats-new.rst:2482: WARNING: py:meth reference target not found: xray.Dataset.count /Users/shoyer/dev/xarray/doc/whats-new.rst:2482: WARNING: py:meth reference target not found: xray.Dataset.dropna /Users/shoyer/dev/xarray/doc/whats-new.rst:2485: WARNING: py:meth reference target not found: xray.DataArray.to_pandas /Users/shoyer/dev/xarray/doc/whats-new.rst:2518: WARNING: py:class reference target not found: xray.Dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2532: WARNING: py:meth reference target not found: xray.Dataset.equals /Users/shoyer/dev/xarray/doc/whats-new.rst:2542: WARNING: py:meth reference target not found: xray.DataArray.reset_coords /Users/shoyer/dev/xarray/doc/whats-new.rst:2551: WARNING: unknown document: tutorial /Users/shoyer/dev/xarray/doc/whats-new.rst:2554: WARNING: py:class reference target not found: xray.Dataset /Users/shoyer/dev/xarray/doc/whats-new.rst:2562: WARNING: py:meth reference target not found: xray.Dataset.load_data /Users/shoyer/dev/xarray/doc/whats-new.rst:2562: WARNING: py:meth reference target not found: xray.Dataset.close
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1986/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
291405750 MDU6SXNzdWUyOTE0MDU3NTA= 1855 swap_dims should support dimension names that are not existing variables shoyer 1217238 closed 0     3 2018-01-25T00:08:26Z 2020-01-08T18:27:29Z 2020-01-08T18:27:29Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
input_ds = xarray.Dataset({'foo': ('x', [1, 2])}, {'x': [0, 1]})
input_ds.swap_dims({'x': 'z'})
```

Problem description

Currently this results in the error KeyError: 'z'

Expected Output

We now support dimensions without associated coordinate variables. So swap_dims() should be able to create new dimensions (e.g., z in this example) even if there isn't already a coordinate variable.
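
For illustration, here is a hedged sketch of the desired behavior (the exact repr below is an assumption, not output from a fixed version):

```python
import xarray as xr

input_ds = xr.Dataset({'foo': ('x', [1, 2])}, {'x': [0, 1]})
swapped = input_ds.swap_dims({'x': 'z'})
# Desired result: 'foo' indexed by the new dimension 'z', with the old
# coordinate 'x' retained as a non-dimension coordinate:
#
# <xarray.Dataset>
# Dimensions:  (z: 2)
# Coordinates:
#     x        (z) int64 0 1
# Dimensions without coordinates: z
# Data variables:
#     foo      (z) int64 1 2
```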

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1855/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
398107776 MDU6SXNzdWUzOTgxMDc3NzY= 2666 Dataset.from_dataframe will produce a FutureWarning for DatetimeTZ data shoyer 1217238 open 0     6 2019-01-11T02:45:49Z 2019-12-30T22:58:23Z   MEMBER      

This appears with the development version of pandas; see https://github.com/pandas-dev/pandas/issues/24716 for details.

Example:
```
In [16]: df = pd.DataFrame({"A": pd.date_range('2000', periods=12, tz='US/Central')})

In [17]: df.to_xarray()
/Users/taugspurger/Envs/pandas-dev/lib/python3.7/site-packages/xarray/core/dataset.py:3111: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'. To accept the future behavior, pass 'dtype=object'. To keep the old behavior, pass 'dtype="datetime64[ns]"'.
  data = np.asarray(series).reshape(shape)
Out[17]:
<xarray.Dataset>
Dimensions:  (index: 12)
Coordinates:
  * index    (index) int64 0 1 2 3 4 5 6 7 8 9 10 11
Data variables:
    A        (index) datetime64[ns] 2000-01-01T06:00:00 ... 2000-01-12T06:00:00
```
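
One possible workaround in the meantime (a sketch, not an official recommendation; converting via UTC is an assumption about the desired semantics) is to strip the timezone explicitly before calling to_xarray():

```python
import pandas as pd

df = pd.DataFrame({"A": pd.date_range('2000', periods=12, tz='US/Central')})

# Convert to UTC and drop the timezone, so the datetime64[ns] result is
# unambiguous and no FutureWarning is emitted during conversion.
naive = df.assign(A=df['A'].dt.tz_convert('UTC').dt.tz_localize(None))
ds = naive.to_xarray()
```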

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2666/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
346823063 MDU6SXNzdWUzNDY4MjMwNjM= 2337 Test for warnings fail when using old version of pytest shoyer 1217238 closed 0     2 2018-08-02T01:09:37Z 2019-11-12T19:38:07Z 2019-11-12T19:37:48Z MEMBER      

Some of our tests for warnings currently fail when run using an old version of pytest. The problem appears to be that we rely on pytest.warns() accepting subclasses rather than exact matches.

This was fixed upstream in pytest (https://github.com/pytest-dev/pytest/pull/2166), but we should still specify the more specific warning types in xarray (see the sketch after the failure log below).

```
=================================== FAILURES ===================================
_________________ TestEncodeCFVariable.test_missing_fillvalue _________________

self = <xarray.tests.test_conventions.TestEncodeCFVariable testMethod=test_missing_fillvalue>

def test_missing_fillvalue(self):
    v = Variable(['x'], np.array([np.nan, 1, 2, 3]))
    v.encoding = {'dtype': 'int16'}
    with pytest.warns(Warning, match='floating point data as an integer'):
      conventions.encode_cf_variable(v)

E Failed: DID NOT WARN

tests/test_conventions.py:89: Failed

_____ TestAlias.test _____

self = <xarray.tests.test_utils.TestAlias testMethod=test>

def test(self):
    def new_method():
        pass
    old_method = utils.alias(new_method, 'old_method')
    assert 'deprecated' in old_method.__doc__
    with pytest.warns(Warning, match='deprecated'):
      old_method()

E Failed: DID NOT WARN

tests/test_utils.py:28: Failed

___ TestIndexVariable.test_coordinate_alias ______

self = <xarray.tests.test_variable.TestIndexVariable testMethod=test_coordinate_alias>

def test_coordinate_alias(self):
    with pytest.warns(Warning, match='deprecated'):
      x = Coordinate('x', [1, 2, 3])

E Failed: DID NOT WARN

tests/test_variable.py:1752: Failed

_________________________ TestAccessor.test_register __________________________

self = <xarray.tests.test_extensions.TestAccessor testMethod=test_register>

def test_register(self):

    @xr.register_dataset_accessor('demo')
    @xr.register_dataarray_accessor('demo')
    class DemoAccessor(object):
        """Demo accessor."""

        def __init__(self, xarray_obj):
            self._obj = xarray_obj

        @property
        def foo(self):
            return 'bar'

    ds = xr.Dataset()
    assert ds.demo.foo == 'bar'

    da = xr.DataArray(0)
    assert da.demo.foo == 'bar'

    # accessor is cached
    assert ds.demo is ds.demo

    # check descriptor
    assert ds.demo.__doc__ == "Demo accessor."
    assert xr.Dataset.demo.__doc__ == "Demo accessor."
    assert isinstance(ds.demo, DemoAccessor)
    assert xr.Dataset.demo is DemoAccessor

    # ensure we can remove it
    del xr.Dataset.demo
    assert not hasattr(xr.Dataset, 'demo')

    with pytest.warns(Warning, match='overriding a preexisting attribute'):
        @xr.register_dataarray_accessor('demo')
      class Foo(object):

E Failed: DID NOT WARN

tests/test_extensions.py:60: Failed

```
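
A minimal sketch of the suggested fix (the names here are illustrative, not xarray's actual tests): matching the exact warning category works on both old and new pytest, whereas pytest.warns(Warning, ...) only catches subclasses on newer releases:

```python
import warnings

import pytest


def old_method():
    # Stand-in for a deprecated xarray API.
    warnings.warn('old_method is deprecated', FutureWarning)


def test_old_method_warns():
    # Name the exact category instead of the Warning base class, so the
    # test does not depend on pytest accepting subclasses.
    with pytest.warns(FutureWarning, match='deprecated'):
        old_method()
```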

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2337/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
511651492 MDU6SXNzdWU1MTE2NTE0OTI= 3440 Build failure with pandas master shoyer 1217238 closed 0     0 2019-10-24T01:27:07Z 2019-11-08T15:33:07Z 2019-11-08T15:33:07Z MEMBER      

See https://dev.azure.com/xarray/d5e7a686-a114-4b8c-a2d8-4b5b11efd896/_build/results?buildId=1218&view=logs&jobId=41d90575-019f-5cfd-d78e-c2adebf9a30b for a log.

Appears to be due to https://github.com/pandas-dev/pandas/pull/29062, which adds a .attrs attribute to pandas objects. We copy this attribute in the DataArray constructor.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3440/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
96211612 MDU6SXNzdWU5NjIxMTYxMg== 486 API for multi-dimensional resampling/regridding shoyer 1217238 open 0     32 2015-07-21T02:38:29Z 2019-11-06T18:00:52Z   MEMBER      

This notebook by @kegl shows a nice example of how to use pyresample with xray: https://www.lri.fr/~kegl/Ramps/edaElNino.html#Downsampling

It would be nice to build a wrapper for this machinery directly into xray in some way.

xref #475

cc @jhamman @rabernat

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/486/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
269348789 MDU6SXNzdWUyNjkzNDg3ODk= 1668 Remove use of allow_cleanup_failure in test_backends.py shoyer 1217238 open 0     6 2017-10-28T20:47:31Z 2019-09-29T20:07:03Z   MEMBER      

This exists for the benefit of Windows, on which trying to delete an open file results in an error. But really, it would be nice to have a test suite that doesn't leave any temporary files hanging around.

The main culprit is tests like this, where opening a file triggers an error:

```python
with raises_regex(TypeError, 'pip install netcdf4'):
    open_dataset(tmp_file, engine='scipy')
```

The way to fix this is to use mocking of some sort, to intercept calls to backend file objects and close them afterwards.
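
A rough sketch of what that could look like (entirely an assumption, not existing test code): record every handle opened during the test and close them all afterwards, so the temporary file can be deleted even on Windows. A real fix would need to intercept the backend's own file-opening calls, not just builtins.open:

```python
import builtins
from unittest import mock

opened = []
real_open = builtins.open


def tracking_open(*args, **kwargs):
    # Record every handle so it can be closed even if the code under
    # test raises while the file is still open.
    f = real_open(*args, **kwargs)
    opened.append(f)
    return f


with mock.patch.object(builtins, 'open', tracking_open):
    try:
        pass  # code under test that may raise with a file still open
    finally:
        for f in opened:
            f.close()
```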

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1668/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
489270698 MDU6SXNzdWU0ODkyNzA2OTg= 3280 Deprecation cycles to finish for xarray 0.13 shoyer 1217238 closed 0     9 2019-09-04T16:37:26Z 2019-09-17T18:50:05Z 2019-09-17T18:50:05Z MEMBER      

Clean-ups we should definitely do:
- [x] remove deprecated options from xarray.concat (deprecated back in July 2015!): https://github.com/pydata/xarray/blob/79dc7dc461c7540cc0b84a98543c6f7796c05268/xarray/core/concat.py#L114-L144 (edit by @max-sixty)
- [x] argument order in DataArray.to_dataset (also from July 2015) https://github.com/pydata/xarray/blob/41fecd8658ba50ddda0a52e04c21cec5e53415ac/xarray/core/dataarray.py#L491 (edit by @max-sixty)
- [x] remove the warning for reindex with variables with different dimensions (from 2017). This could either be replaced by replacing dimensions like sel or by simply raising an error for now and leaving replacing dimensions for later (see https://github.com/pydata/xarray/pull/1639): https://github.com/pydata/xarray/blob/79dc7dc461c7540cc0b84a98543c6f7796c05268/xarray/core/alignment.py#L389-L398 (edit by @max-sixty)
- [x] remove xarray.broadcast_array, deprecated back in 2016 in https://github.com/pydata/xarray/commit/52ee95f8ae6b9631ac381b5b889de47e41f2440e (edit by @max-sixty)
- [x] remove Variable.expand_dims (deprecated back in August 2017), whose implementation actually looks like it's already broken: https://github.com/pydata/xarray/blob/41fecd8658ba50ddda0a52e04c21cec5e53415ac/xarray/core/variable.py#L1232-L1237 (edit by @max-sixty)
- [x] stop supporting a list of colors in the cmap argument (dating back to at least v0.7.0): https://github.com/pydata/xarray/blob/d089df385e737f71067309ff7abae15994d581ec/xarray/plot/utils.py#L737-L745 (edit by @max-sixty)
- [x] push the removal of the compat and encoding arguments from Dataset/DataArray back to 0.14. These were only deprecated 7 months ago in https://github.com/pydata/xarray/pull/2703. (edit by @max-sixty)

Clean-ups to consider:
- [x] switch the default reduction dimension of groupby and resample? (https://github.com/pydata/xarray/pull/2366) This has been giving a FutureWarning since v0.11.0, released back in November 2018. We could also potentially push this back to 0.14, but these warnings are a little annoying...
- [x] deprecate auto_combine (#2616) only since 29 June 2019, so that should be pushed.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3280/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
57254455 MDU6SXNzdWU1NzI1NDQ1NQ== 319 Add head(), tail() and thin() methods? shoyer 1217238 closed 0     10 2015-02-10T23:28:15Z 2019-09-05T04:22:24Z 2019-09-05T04:22:24Z MEMBER      

These would be shortcuts for isel/slice syntax (a sketch of these semantics follows the list):
- ds.head(time=5) -> ds.isel(time=slice(5)): select the first five time values
- ds.tail(time=5) -> ds.isel(time=slice(-5, None)): select the last five time values
- ds.thin(time=5) -> ds.isel(time=slice(None, None, 5)): select every 5th time value
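
A minimal sketch of the proposed semantics, written as free functions over a Dataset (not the eventual xarray implementation):

```python
def head(ds, **dim_sizes):
    # ds.head(time=5): the first n values along each named dimension.
    return ds.isel({dim: slice(n) for dim, n in dim_sizes.items()})


def tail(ds, **dim_sizes):
    # ds.tail(time=5): the last n values along each named dimension.
    return ds.isel({dim: slice(-n, None) for dim, n in dim_sizes.items()})


def thin(ds, **dim_sizes):
    # ds.thin(time=5): every n-th value along each named dimension.
    return ds.isel({dim: slice(None, None, n) for dim, n in dim_sizes.items()})
```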

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/319/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
317362786 MDU6SXNzdWUzMTczNjI3ODY= 2078 apply_ufunc should include variable names in error messages shoyer 1217238 open 0     4 2018-04-24T19:26:13Z 2019-08-26T18:10:23Z   MEMBER      

This would make it easier to debug issues with dimensions.

For example, in this example from StackOverflow, the error message was ValueError: operand to apply_ufunc has required core dimensions ['time', 'lat', 'lon'], but some of these are missing on the input variable: ['lat', 'lon'].

A better error message would be: ValueError: operand to apply_ufunc has required core dimensions ['time', 'lat', 'lon'], but some of these are missing on input variable 'status': ['lat', 'lon']

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2078/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
188113943 MDU6SXNzdWUxODgxMTM5NDM= 1097 Better support for subclasses: tests, docs and API shoyer 1217238 open 0     14 2016-11-08T21:54:00Z 2019-08-22T13:07:44Z   MEMBER      

Given that people do currently subclass xarray objects, it's worth considering making a subclass API like pandas: http://pandas.pydata.org/pandas-docs/stable/internals.html#subclassing-pandas-data-structures

At the very least, it would be nice to have docs that describe how/when it's safe to subclass, and tests that verify our support for such subclasses.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1097/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
435339263 MDU6SXNzdWU0MzUzMzkyNjM= 2910 Keyword argument support for drop() shoyer 1217238 closed 0     1 2019-04-20T00:45:09Z 2019-08-18T17:42:45Z 2019-08-18T17:42:45Z MEMBER      

Currently, to drop labels along an existing dimension, you need to write something like: ds.drop(['a', 'b'], dim='x').

It would be nice if keyword arguments were supported, e.g., ds.drop(x=['a', 'b']). This would make drop() more symmetric with sel().

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2910/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
464793626 MDU6SXNzdWU0NjQ3OTM2MjY= 3083 test_rasterio_vrt_network is failing in continuous integration tests shoyer 1217238 closed 0     3 2019-07-05T23:13:25Z 2019-07-31T00:28:46Z 2019-07-31T00:28:46Z MEMBER      

```
@network
def test_rasterio_vrt_network(self):
    import rasterio

    url = 'https://storage.googleapis.com/\
    gcp-public-data-landsat/LC08/01/047/027/\
    LC08_L1TP_047027_20130421_20170310_01_T1/\
    LC08_L1TP_047027_20130421_20170310_01_T1_B4.TIF'
    env = rasterio.Env(GDAL_DISABLE_READDIR_ON_OPEN='EMPTY_DIR',
                       CPL_VSIL_CURL_USE_HEAD=False,
                       CPL_VSIL_CURL_ALLOWED_EXTENSIONS='TIF')
    with env:
        with rasterio.open(url) as src:

xarray/tests/test_backends.py:3734:

/usr/share/miniconda/envs/test_env/lib/python3.6/site-packages/rasterio/env.py:430: in wrapper
    return f(*args, **kwds)
/usr/share/miniconda/envs/test_env/lib/python3.6/site-packages/rasterio/__init__.py:216: in open
    s = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)

???
E   rasterio.errors.RasterioIOError: HTTP response code: 400 - Failed writing header
```

https://dev.azure.com/xarray/xarray/_build/results?buildId=150&view=ms.vss-test-web.build-test-results-tab&runId=2358&resultId=101228&paneView=debug

I'm not sure what's going on here -- the tiff file is still available at the given URL.

@scottyhq any idea?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3083/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
246386102 MDU6SXNzdWUyNDYzODYxMDI= 1495 DOC: combining datasets with different coordinates shoyer 1217238 closed 0     2 2017-07-28T15:45:07Z 2019-07-12T19:20:44Z 2019-07-12T19:20:44Z MEMBER      

It would be nice to have a documentation recipe showing how to combine datasets with different latitude/longitude arrays, as often occurs due to numerical precision issues. It's a little more complicated than just using xarray.open_mfdataset(), and comes up with some regularity.
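
As a starting point for such a recipe, one possible approach (a sketch with made-up data, not the documentation itself) is to snap one dataset onto the other's coordinates with a tolerance before combining:

```python
import numpy as np
import xarray as xr

lat = np.linspace(-90, 90, 5)
ds1 = xr.Dataset({'t': (('time', 'lat'), np.zeros((1, 5)))},
                 coords={'time': [0], 'lat': lat})
# The same grid up to floating-point noise:
ds2 = xr.Dataset({'t': (('time', 'lat'), np.ones((1, 5)))},
                 coords={'time': [1], 'lat': lat + 1e-10})

# Snap ds2's latitudes onto ds1's, then concatenate along time.
ds2_snapped = ds2.reindex(lat=ds1.lat, method='nearest', tolerance=1e-6)
combined = xr.concat([ds1, ds2_snapped], dim='time')
```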

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1495/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
292000828 MDU6SXNzdWUyOTIwMDA4Mjg= 1861 Add an example page to the docs on geospatial filtering/indexing shoyer 1217238 open 0     0 2018-01-26T19:07:11Z 2019-07-12T02:53:53Z   MEMBER      

We cover standard time-series stuff pretty well in the "Toy weather data" example, but geospatial filtering/indexing questions come up all the time and aren't well covered.

Topics could include:
- How to filter out a region of interest (sel() with slice and where(..., drop=True); see the sketch below)
- How to align two gridded datasets in space.
- How to sample a gridded dataset at a list of station locations
- How to resample a dataset to a new resolution (possibly referencing xESMF)

Not all of these are as smooth as they could be, but hopefully that will clearly point to where we have room for improvement in our APIs :).
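
A sketch of the first topic, with made-up data (not from the issue): sel() with slices handles a rectangular box, while where(..., drop=True) handles a non-rectangular mask:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {'t2m': (('lat', 'lon'), np.random.rand(181, 360))},
    coords={'lat': np.arange(-90, 91), 'lon': np.arange(0, 360)},
)

# Rectangular region of interest via label-based slicing:
box = ds.sel(lat=slice(30, 60), lon=slice(230, 300))

# Non-rectangular region via a boolean mask; drop=True trims the result
# to the bounding box of the mask:
disk = ds.where((ds.lat - 45) ** 2 + (ds.lon - 265) ** 2 < 400, drop=True)
```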

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1861/reactions",
    "total_count": 6,
    "+1": 6,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
440233667 MDU6SXNzdWU0NDAyMzM2Njc= 2940 test_rolling_wrapped_dask is failing with dask-master shoyer 1217238 closed 0     5 2019-05-03T21:44:23Z 2019-06-28T16:49:04Z 2019-06-28T16:49:04Z MEMBER      

The test_rolling_wrapped_dask tests in test_dataarray.py are failing with dask master, e.g., as seen here: https://travis-ci.org/pydata/xarray/jobs/527936531

I reproduced this locally. git bisect identified the culprit as https://github.com/dask/dask/pull/4756.

The source of this issue on the xarray side appears to be these lines: https://github.com/pydata/xarray/blob/dd99b7d7d8576eefcef4507ae9eb36a144b60adf/xarray/core/rolling.py#L287-L291

In particular, we are currently passing padded as an xarray.DataArray object, not a dask array. Changing this to padded.data shows that passing an actual dask array to dask_array_ops.rolling_window results in failing tests.

@fujiisoup @jhamman any idea what's going on here?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2940/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
454168102 MDU6SXNzdWU0NTQxNjgxMDI= 3009 Xarray test suite failing with dask-master shoyer 1217238 closed 0     8 2019-06-10T13:21:50Z 2019-06-23T16:49:23Z 2019-06-23T16:49:23Z MEMBER      

There are a wide variety of failures, mostly related to backends and indexing, e.g., AttributeError: 'tuple' object has no attribute 'tuple'. By the looks of it, something is going wrong with xarray's internal ExplicitIndexer objects, which are getting converted into something else.

I'm pretty sure this is due to the recent merge of the Array._meta pull request: https://github.com/dask/dask/pull/4543

There are 81 test failures, but my guess is that there are probably only a handful (at most) of underlying causes.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3009/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
325436508 MDU6SXNzdWUzMjU0MzY1MDg= 2170 keepdims=True for xarray reductions shoyer 1217238 closed 0     3 2018-05-22T19:44:17Z 2019-06-23T09:18:33Z 2019-06-23T09:18:33Z MEMBER      

For operations where arrays are aggregated but then combined, the keepdims=True option for NumPy aggregations is convenient.

We should consider supporting this in xarray as well. Aggregating a DataArray/Dataset with keepdims=True (or maybe keep_dims=True) would remove all original coordinates along aggregated dimensions and return a result with a dimension of size 1 without any coordinates, e.g.,

```
>>> array = xr.DataArray([1, 2, 3], dims='x', coords={'x': ['a', 'b', 'c']})
>>> array.mean(keepdims=True)
<xarray.DataArray (x: 1)>
array([2.])
Dimensions without coordinates: x
```

In this case, array.mean(keepdims=True) is equivalent to array.mean().expand_dims('x'), but in general this equivalence does not hold, because the location of the original dimension is lost.

Implementation-wise, we have two options:
1. Pass on keepdims=True to NumPy functions like numpy.mean(), or
2. Implement keepdims=True ourselves, in Variable.reduce().

I think I like option 2 a little better, because it places fewer requirements on aggregation functions. For example, functions like bottleneck.nanmean() don't accept a keepdims argument.
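
A sketch of what option 2 could look like at the duck-array level (an assumption, not xarray's eventual implementation): reduce first, then reinsert the aggregated axes with length 1:

```python
import numpy as np


def reduce_keepdims(func, data, axis=None):
    # Emulate keepdims=True around an aggregation that may not support it
    # natively (e.g., bottleneck.nanmean has no keepdims argument).
    if axis is None:
        axis = tuple(range(data.ndim))
    elif np.isscalar(axis):
        axis = (axis,)
    reduced = func(data, axis=axis)
    # Re-insert each aggregated axis, in ascending order, with length 1.
    for ax in sorted(axis):
        reduced = np.expand_dims(reduced, ax)
    return reduced


reduce_keepdims(np.mean, np.arange(6).reshape(2, 3), axis=0).shape  # (1, 3)
```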

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2170/reactions",
    "total_count": 10,
    "+1": 9,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
  completed xarray 13221727 issue
430203605 MDU6SXNzdWU0MzAyMDM2MDU= 2876 Custom fill value for align, reindex and reindex_like shoyer 1217238 closed 0     2 2019-04-07T23:08:17Z 2019-05-05T00:20:55Z 2019-05-05T00:20:55Z MEMBER      

It would be nice to be able to specify a custom fill value other than NaN for alignment/reindexing, e.g.,

```
>>> xr.DataArray([0, 0], [('x', [1, 2])]).reindex(x=[0, 1, 2, 3], fill_value=-1)
<xarray.DataArray (x: 4)>
array([-1,  0,  0, -1])
Coordinates:
  * x        (x) int64 0 1 2 3
```

This should be pretty straightforward, simply a matter of adding a fill_value keyword argument to the various interfaces and passing it on to Variable._getitem_with_mask inside xarray.core.alignment.reindex_variables().

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2876/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
435876863 MDU6SXNzdWU0MzU4NzY4NjM= 2914 Behavior of da.expand_dims(da.coords) changed in 0.12.1 shoyer 1217238 closed 0     1 2019-04-22T20:23:47Z 2019-04-22T20:26:32Z 2019-04-22T20:25:34Z MEMBER      
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2914/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
29453809 MDU6SXNzdWUyOTQ1MzgwOQ== 66 HDF5 backend for xray shoyer 1217238 closed 0     15 2014-03-14T17:17:47Z 2019-04-21T23:55:02Z 2017-10-22T01:01:54Z MEMBER      

The obvious libraries to wrap are pytables or h5py: http://www.pytables.org http://h5py.org/

Both provide at least some support for in-memory operations (though I'm not sure if they can pass around HDF5 file objects without dumping them to disk).

From a cursory look at the documentation for both projects, h5py appears to offer a simpler API that would be easier to map to our existing data model.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/66/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
430189759 MDU6SXNzdWU0MzAxODk3NTk= 2874 xarray/tests/test_cftimeindex_resample.py::test_resampler is way too slow shoyer 1217238 closed 0     1 2019-04-07T20:38:55Z 2019-04-11T11:42:09Z 2019-04-11T11:42:09Z MEMBER      

Some profiling results from pytest:

$ pytest -k cftime --durations=50
...
============================================= slowest 50 test durations ==============================================
7.92s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-700T]
7.45s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-700T]
7.17s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-700T]
7.12s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-700T]
7.12s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-700T]
7.03s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-700T]
6.88s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-700T]
6.70s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-700T]
5.88s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-12H]
5.69s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-12H]
5.55s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-12H]
5.44s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-12H]
5.44s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-12H]
5.32s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-12H]
5.21s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-12H]
5.08s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-12H]
1.56s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-None-700T]
1.36s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-right-700T]
1.22s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-right-700T]
1.19s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-None-700T]
1.16s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-None-700T]
1.15s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-right-700T]
1.11s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-None-700T]
1.09s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-right-700T]
0.96s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-None-12H]
0.93s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-None-right-12H]
0.93s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-None-12H]
0.91s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-None-12H]
0.91s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-31-right-right-12H]
0.89s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-None-12H]
0.88s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-None-right-12H]
0.86s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3Q_AUG-24-right-right-12H]
0.69s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-8001T]
0.69s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-8001T]
0.66s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-8001T]
0.65s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-8001T]
0.64s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-8D]
0.62s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-right-8D]
0.62s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-right-8001T]
0.60s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-8001T]
0.59s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-8D]
0.59s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-right-8001T]
0.57s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-right-8D]
0.57s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-8001T]
0.38s call xarray/tests/test_cftimeindex_resample.py::test_resampler[41987T-31-None-None-700T]
0.36s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-None-None-8D]
0.36s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-None-None-8D]
0.34s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-24-right-None-8D]
0.33s call xarray/tests/test_cftimeindex_resample.py::test_resampler[3AS_JUN-31-right-None-8D]
0.33s call xarray/tests/test_cftimeindex_resample.py::test_resampler[41987T-31-right-right-700T]

This is a heavily parametrized test, and many of these test cases take 5+ seconds to run! Are there ways we could simplify these tests to make them faster?

On my laptop, this test alone roughly doubles the runtime of our entire test suite, increasing it from about 2 minutes to 4 minutes.

@jwenfai @spencerkclark Any ideas?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2874/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
220278600 MDU6SXNzdWUyMjAyNzg2MDA= 1360 Document that aggregation functions like .mean() pass on **kwargs to dask shoyer 1217238 closed 0     2 2017-04-07T17:29:47Z 2019-04-07T19:58:58Z 2019-04-07T19:15:49Z MEMBER      

We should also add tests to verify that invocations like ds.mean(split_every=2) work.
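
Such a test could look roughly like this (a sketch, assuming the kwarg is forwarded to dask's reduction as the issue title describes):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({'a': ('x', np.arange(16))}).chunk({'x': 4})

# split_every controls dask's tree-reduction fan-in; the result must be
# unchanged regardless of its value.
result = ds.mean(split_every=2).compute()
assert float(result['a']) == 7.5
```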

xref https://github.com/dask/dask/issues/874#issuecomment-292597973

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1360/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
278713328 MDU6SXNzdWUyNzg3MTMzMjg= 1756 Deprecate inplace methods shoyer 1217238 closed 0   0.11 2856429 6 2017-12-02T20:09:00Z 2019-03-25T19:19:10Z 2018-11-03T21:24:13Z MEMBER      

The following methods have an inplace argument:
- DataArray.reset_coords
- DataArray.set_index
- DataArray.reset_index
- DataArray.reorder_levels
- Dataset.set_coords
- Dataset.reset_coords
- Dataset.rename
- Dataset.swap_dims
- Dataset.set_index
- Dataset.reset_index
- Dataset.reorder_levels
- Dataset.update
- Dataset.merge

As proposed in https://github.com/pydata/xarray/issues/1755#issuecomment-348682403, let's deprecate all of these at the next major release (v0.11). They add unnecessary complexity to methods and promote confusion about xarray's data model.

Practically, we would change all of the default values to inplace=None and issue either a DeprecationWarning or FutureWarning (see PEP 565 for more details on that choice).
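
A minimal sketch of that pattern (an assumption, not xarray's actual code):

```python
import warnings


class Dataset:  # sketch only; stands in for xarray.Dataset
    def rename(self, name_dict, inplace=None):
        # inplace=None means "not passed"; any explicit value triggers
        # the warning while keeping the old behavior for now.
        if inplace is not None:
            warnings.warn(
                'the `inplace` argument is deprecated and will be '
                'removed in a future version of xarray',
                FutureWarning, stacklevel=2)
        else:
            inplace = False
        ...  # perform the rename, in place or not, as before
```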

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1756/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);