issues

206 rows where repo = 13221727, state = "closed" and user = 2443309 sorted by updated_at descending


type

  • pull 142
  • issue 64

state

  • closed · 206

repo

  • xarray · 206
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2089084562 PR_kwDOAMm_X85kd6jT 8622 Update min deps in docs jhamman 2443309 closed 0     0 2024-01-18T21:35:49Z 2024-01-19T00:12:08Z 2024-01-19T00:12:07Z MEMBER   0 pydata/xarray/pulls/8622

Follow up to https://github.com/pydata/xarray/pull/8586

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8622/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1953088785 PR_kwDOAMm_X85dUY1- 8346 Bump minimum numpy version jhamman 2443309 closed 0     3 2023-10-19T21:31:58Z 2023-10-19T22:16:23Z 2023-10-19T22:16:22Z MEMBER   0 pydata/xarray/pulls/8346

I believe this was missed in v2023.08.0 (Aug 18, 2023).

xref: https://github.com/conda-forge/xarray-feedstock/pull/97

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8346/reactions",
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
    xarray 13221727 pull
33637243 MDU6SXNzdWUzMzYzNzI0Mw== 131 Dataset summary methods jhamman 2443309 closed 0   0.2 650893 10 2014-05-16T00:17:56Z 2023-09-28T12:42:34Z 2014-05-21T21:47:29Z MEMBER      

Add summary methods to the Dataset object. For example, it would be great if you could summarize an entire dataset in a single line.

(1) Mean of all variables in dataset:

```python
mean_ds = ds.mean()
```

(2) Mean of all variables in dataset along a dimension:

```python
time_mean_ds = ds.mean(dim='time')
```

In the case where a dimension is specified and there are variables that don't use that dimension, I'd imagine you would just pass that variable through unchanged.

Related to #122.
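The pass-through rule proposed above (variables lacking the reduced dimension are copied unchanged) can be sketched without xarray itself. This is a plain-NumPy illustration with a hypothetical `dataset_mean` helper, not xarray's implementation:

```python
import numpy as np

def dataset_mean(variables, dim=None):
    """variables maps name -> (dims tuple, ndarray); average over `dim` when present."""
    out = {}
    for name, (dims, values) in variables.items():
        if dim is None:
            out[name] = ((), values.mean())            # reduce over everything
        elif dim in dims:
            axis = dims.index(dim)
            new_dims = tuple(d for d in dims if d != dim)
            out[name] = (new_dims, values.mean(axis=axis))
        else:
            out[name] = (dims, values)                 # pass through unchanged
    return out

variables = {
    "temp": (("time", "x"), np.ones((2, 3))),
    "elevation": (("x",), np.array([10.0, 20.0, 30.0])),  # no "time" dimension
}
time_mean = dataset_mean(variables, dim="time")
```

Here `temp` is reduced along `time` while `elevation`, which has no `time` dimension, comes through untouched.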

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/131/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1562712670 PR_kwDOAMm_X85I1FYF 7488 Attempt to reproduce #7079 in CI jhamman 2443309 closed 0     1 2023-01-30T15:57:44Z 2023-09-20T00:11:39Z 2023-09-19T23:52:20Z MEMBER   0 pydata/xarray/pulls/7488
  • [x] towards understanding #7079
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7488/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1822860755 PR_kwDOAMm_X85Wd1dG 8022 (chore) min versions bump jhamman 2443309 closed 0     1 2023-07-26T17:31:12Z 2023-07-27T04:27:44Z 2023-07-27T04:27:40Z MEMBER   0 pydata/xarray/pulls/8022
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8022/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1705857851 PR_kwDOAMm_X85QS3VM 7836 Fix link to xarray twitter page jhamman 2443309 closed 0     0 2023-05-11T13:53:14Z 2023-05-11T23:00:36Z 2023-05-11T23:00:35Z MEMBER   0 pydata/xarray/pulls/7836
  • [x] Closes #7835

Thanks @pierre-manchon for the report!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7836/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
    xarray 13221727 pull
1699112787 PR_kwDOAMm_X85P8LbF 7825 test: Fix test_write_read_select_write for Zarr V3 jhamman 2443309 closed 0     1 2023-05-07T15:26:56Z 2023-05-10T02:43:22Z 2023-05-10T02:43:22Z MEMBER   0 pydata/xarray/pulls/7825

Previously, the first context manager in this test was closed before accessing the data. This resulted in key errors when trying to access the opened dataset.

  • [x] Fixes the Zarr V3 parts of https://github.com/pydata/xarray/issues/7707
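The bug pattern described above — holding a dataset reference after its context manager has exited — can be illustrated with a toy store (all names here are hypothetical stand-ins, not xarray or Zarr API):

```python
from contextlib import contextmanager

@contextmanager
def open_store(data):
    """Toy stand-in for a dataset opened from a store: unreadable once closed."""
    handle = {"data": data, "open": True}
    try:
        yield handle
    finally:
        handle["open"] = False

def read(handle):
    if not handle["open"]:
        raise KeyError("store is closed")
    return handle["data"]

# Buggy pattern the test exercised: the reference outlives its context,
# so reads after the `with` block raise KeyError.
with open_store([1, 2, 3]) as ds:
    pass
# read(ds) would now raise KeyError.

# Fixed pattern: access the data while the context is still open.
with open_store([1, 2, 3]) as ds:
    values = read(ds)
```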
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7825/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1550109629 PR_kwDOAMm_X85ILNM- 7461 bump minimum versions, drop py38 jhamman 2443309 closed 0     18 2023-01-19T23:38:42Z 2023-04-21T14:07:09Z 2023-01-26T16:57:10Z MEMBER   0 pydata/xarray/pulls/7461

This updates our minimum versions based on our 24/18/12 month policy.

Details are shown below.

  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
```
❯ ./ci/min_deps_check.py ./ci/requirements/min-all-deps.yml
...
Package           Required             Policy               Status
----------------- -------------------- -------------------- ------
python            3.9    (2020-10-07)  3.9    (2020-10-07)  =
boto3             1.20   (2021-11-08)  1.20   (2021-11-08)  =
bottleneck        1.3    (2021-01-20)  1.3    (2021-01-20)  =
cartopy           0.20   (2021-09-17)  0.20   (2021-09-17)  =
cdms2             3.1    (-         )  -      (-         )  (!)
cfgrib            0.9    (2019-02-25)  0.9    (2019-02-25)  =
cftime            1.5    (2021-05-20)  1.5    (2021-05-20)  =
dask-core         2022.1 (2022-01-14)  2022.1 (2022-01-14)  =
distributed       2022.1 (2022-01-14)  2022.1 (2022-01-14)  =
flox              0.5    (2022-05-02)  0.3    (2021-12-28)  > (!)
h5netcdf          0.13   (2022-01-12)  0.13   (2022-01-12)  =
h5py              3.6    (2021-11-17)  3.6    (2021-11-17)  =
hdf5              1.12   (2021-01-01)  1.12   (2021-01-01)  =
iris              3.1    (2021-11-23)  3.1    (2021-11-23)  =
lxml              4.7    (2021-12-14)  4.7    (2021-12-14)  =
matplotlib-base   3.5    (2021-11-17)  3.5    (2021-11-17)  =
nc-time-axis      1.4    (2021-10-23)  1.4    (2021-10-23)  =
netcdf4           1.5.7  (2021-04-19)  1.5    (2021-04-19)  = (w)
numba             0.55   (2022-01-14)  0.55   (2022-01-14)  =
numpy             1.21   (2021-06-22)  1.21   (2021-06-22)  =
packaging         21.3   (2021-11-18)  21.3   (2021-11-18)  =
pandas            1.3    (2021-07-02)  1.3    (2021-07-02)  =
pint              0.18   (2021-10-26)  0.18   (2021-10-26)  =
pseudonetcdf      3.2    (2021-10-16)  3.2    (2021-10-16)  =
pydap             3.2    (2020-10-13)  3.2    (2020-10-13)  =
rasterio          1.2    (2021-09-02)  1.2    (2021-09-02)  =
scipy             1.7    (2021-06-27)  1.7    (2021-06-27)  =
seaborn           0.11   (2020-09-19)  0.11   (2020-09-19)  =
sparse            0.13   (2021-08-28)  0.13   (2021-08-28)  =
toolz             0.11   (2020-09-23)  0.11   (2020-09-23)  =
typing_extensions 4.0    (2021-11-17)  4.0    (2021-11-17)  =
zarr              2.10   (2021-09-19)  2.10   (2021-09-19)  =

Errors:
-------
1. not found in conda: cdms2
```
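The 24/18/12-month policy mentioned above (roughly: Python, NumPy, everything else) reduces to simple date arithmetic. A minimal sketch with a hypothetical helper — not the actual `min_deps_check.py` logic:

```python
from datetime import date

def policy_cutoff(today, months):
    """First month still inside an N-month support window ending at `today`."""
    year = today.year - months // 12
    month = today.month - months % 12
    if month <= 0:
        month += 12
        year -= 1
    return date(year, month, 1)

# On the PR date, the 24-month window reaches back to January 2021:
cutoff = policy_cutoff(date(2023, 1, 19), 24)  # date(2021, 1, 1)
```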
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7461/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1644566201 PR_kwDOAMm_X85NGfRt 7693 add to_zarr method to dataarray jhamman 2443309 closed 0     0 2023-03-28T19:49:00Z 2023-04-03T15:53:39Z 2023-04-03T15:53:35Z MEMBER   0 pydata/xarray/pulls/7693

This PR adds the to_zarr method to Xarray's DataArray objects. This allows users to roundtrip named and unnamed DataArrays to Zarr without having to first convert to a Dataset.

  • [x] Closes #7692
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
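Supporting unnamed DataArrays implies choosing a placeholder variable name when wrapping the array in a temporary Dataset. A toy sketch of that fallback — the sentinel string and function name are hypothetical, not xarray's actual constants:

```python
DATAARRAY_SENTINEL = "__dataarray_placeholder__"  # hypothetical sentinel name

def dataset_variable_name(array_name):
    """Variable name to use when wrapping a DataArray in a temporary Dataset."""
    return array_name if array_name is not None else DATAARRAY_SENTINEL
```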
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7693/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1644429340 I_kwDOAMm_X85iBAAc 7692 Feature proposal: DataArray.to_zarr() jhamman 2443309 closed 0     5 2023-03-28T18:00:24Z 2023-04-03T15:53:37Z 2023-04-03T15:53:37Z MEMBER      

Is your feature request related to a problem?

It would be nice to mimic the behavior of DataArray.to_netcdf for the Zarr backend.

Describe the solution you'd like

This should be possible:

```python
xr.open_dataarray('file.nc').to_zarr('store.zarr')
```

Describe alternatives you've considered

None.

Additional context

xref DataArray.to_netcdf issue/PR: #915 / #990

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7692/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1642922680 PR_kwDOAMm_X85NA9uq 7689 add reset_encoding to dataset/dataarray/variable jhamman 2443309 closed 0     6 2023-03-27T22:34:27Z 2023-03-30T21:28:53Z 2023-03-30T21:09:16Z MEMBER   0 pydata/xarray/pulls/7689
  • [x] Closes #7686
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7689/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1642635191 I_kwDOAMm_X85h6J-3 7686 Add reset_encoding to Dataset and DataArray objects jhamman 2443309 closed 0     2 2023-03-27T18:51:39Z 2023-03-30T21:09:17Z 2023-03-30T21:09:17Z MEMBER      

Is your feature request related to a problem?

Xarray maintains the encoding of datasets read from most of its supported backend formats (e.g. NetCDF, Zarr). This is very useful when you want a perfect roundtrip, but it often gets in the way, causing conflicts when writing a modified dataset or appending to another dataset. Most of the time, the solution is simply to remove the encoding from the dataset and continue on. The following code sample appears in a number of issues that reference this problem.

```python
for v in list(ds.coords.keys()):
    if ds.coords[v].dtype == object:
        ds[v].encoding.clear()

for v in list(ds.variables.keys()):
    if ds[v].dtype == object:
        ds[v].encoding.clear()
```

A sample of issues that show variants of this problem.

  • https://github.com/pydata/xarray/issues/3476
  • https://github.com/pydata/xarray/issues/3739
  • https://github.com/pydata/xarray/issues/4380
  • https://github.com/pydata/xarray/issues/5219
  • https://github.com/pydata/xarray/issues/5969
  • https://github.com/pydata/xarray/issues/6329
  • https://github.com/pydata/xarray/issues/6352

Describe the solution you'd like

In many cases, the solution to these problems is to leave the original dataset encoding behind and either use Xarray's default encoding (or the backend's default) or specify one's own encoding options. Both cases would benefit from a convenience method to reset the original encoding. Something like the following would serve this purpose:

```python
ds = xr.open_dataset(...).reset_encoding()
```

Describe alternatives you've considered

Variations on the API above could also be considered:

```python
xr.open_dataset(..., keep_encoding=False)
```

or even:

```python
with xr.set_options(keep_encoding=False):
    ds = xr.open_dataset(...)
```

We can/should also do a better job of surfacing inconsistent encoding in our backends (e.g. to_netcdf).
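The core operation — emptying each variable's encoding while leaving everything else alone — can be sketched with plain dicts standing in for variables (names here are hypothetical, not xarray API):

```python
def clear_encodings(variables):
    """Return a copy of each variable mapping with its 'encoding' emptied."""
    return {name: {**var, "encoding": {}} for name, var in variables.items()}

vars_in = {"temp": {"dtype": "float64", "encoding": {"_FillValue": -9999}}}
vars_out = clear_encodings(vars_in)
# vars_out["temp"]["encoding"] is now {}; the input is left untouched
```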

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7686/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1624835973 PR_kwDOAMm_X85MEd7D 7631 Remove incomplete sentence in IO docs jhamman 2443309 closed 0     0 2023-03-15T06:22:21Z 2023-03-15T12:04:08Z 2023-03-15T12:04:06Z MEMBER   0 pydata/xarray/pulls/7631
  • [x] Closes #7624
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7631/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1558497871 I_kwDOAMm_X85c5MpP 7479 Use NumPy's SupportsDType jhamman 2443309 closed 0     0 2023-01-26T17:21:32Z 2023-02-28T23:23:47Z 2023-02-28T23:23:47Z MEMBER      

What is your issue?

Now that we've bumped our minimum NumPy version to 1.21, we can address this comment:

https://github.com/pydata/xarray/blob/b21f62ee37eea3650a58e9ffa3a7c9f4ae83006b/xarray/core/types.py#L57-L62

I decided not to tackle this as part of #7461 but we may be able to do something like this:

```python
from numpy.typing._dtype_like import _DTypeLikeNested, _ShapeLike, _SupportsDType
```

xref: #6834 cc @headtr1ck

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7479/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1549639421 PR_kwDOAMm_X85IJnRV 7458 Lint with ruff jhamman 2443309 closed 0     1 2023-01-19T17:40:47Z 2023-01-30T18:12:18Z 2023-01-30T18:12:13Z MEMBER   0 pydata/xarray/pulls/7458

This switches our primary linter to Ruff. As advertised, Ruff is very fast. Plus, we get the benefit of using a single tool that combines the previous functionality of pyflakes, isort, and pyupgrade.

  • [x] Closes https://twitter.com/TEGNicholasCode/status/1613226956887056385
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

cc @max-sixty, @TomNicholas

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7458/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 1
}
    xarray 13221727 pull
1532648441 PR_kwDOAMm_X85HWTes 7436 pin scipy version in doc environment jhamman 2443309 closed 0     1 2023-01-13T17:08:50Z 2023-01-13T17:37:59Z 2023-01-13T17:37:59Z MEMBER   0 pydata/xarray/pulls/7436

This should fix our doc build.

  • [x] Closes #7434
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7436/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1456026667 PR_kwDOAMm_X85DQfj3 7301 deprecate pynio backend jhamman 2443309 closed 0     3 2022-11-19T00:15:11Z 2022-11-26T15:41:07Z 2022-11-26T15:40:36Z MEMBER   0 pydata/xarray/pulls/7301

This PR finally deprecates the PyNIO backend. PyNIO is technically in maintenance mode but it hasn't had any maintenance in 4+ years. Its conda packages cannot be installed in any of our test environments. I have added a future warning to the NioDataStore.__init__ method and noted the deprecation in the IO docs.

  • [x] Closes #4491
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7301/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1455786576 PR_kwDOAMm_X85DPqH_ 7300 bump min deps jhamman 2443309 closed 0     2 2022-11-18T20:53:45Z 2022-11-19T04:15:23Z 2022-11-19T04:15:23Z MEMBER   0 pydata/xarray/pulls/7300

The min versions checks are failing in #6475. This hopefully fixes those failures.

  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7300/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1217821452 PR_kwDOAMm_X8425iyT 6530 Doc index update jhamman 2443309 closed 0     2 2022-04-27T20:00:10Z 2022-05-31T18:28:13Z 2022-05-31T18:28:13Z MEMBER   0 pydata/xarray/pulls/6530

In light of the new splash page site (https://xarray.dev), this PR updates the documentation site's index page to simply provide pointers to key parts of Xarray's documentation.

TODOs:
  • [x] Get feedback on the content and layout
  • [x] Update the icon SVGs (these, along with the layout, were borrowed in part from Pandas)

cc @andersy005, @rabernat

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6530/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1247083449 PR_kwDOAMm_X844ZETT 6635 Feature/to dict encoding jhamman 2443309 closed 0     0 2022-05-24T20:21:24Z 2022-05-26T19:50:53Z 2022-05-26T19:17:35Z MEMBER   0 pydata/xarray/pulls/6635

This adds an encoding option to Xarray's to_dict methods.

  • [x] Closes #6634
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6635/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1247014308 I_kwDOAMm_X85KU-2k 6634 Optionally include encoding in Dataset to_dict jhamman 2443309 closed 0     0 2022-05-24T19:10:01Z 2022-05-26T19:17:35Z 2022-05-26T19:17:35Z MEMBER      

Is your feature request related to a problem?

When using Xarray's to_dict methods to record a Dataset's schema, it would be useful to (optionally) include encoding in the output.

Describe the solution you'd like

The feature request may be resolved by simply adding an encoding keyword argument, which might look like this:

```python
ds = xr.Dataset(...)
ds.to_dict(data=False, encoding=True)
```
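The shape of such an export can be sketched with plain dicts standing in for variables; `schema_to_dict` is a hypothetical helper, not xarray's `to_dict`:

```python
def schema_to_dict(variables, data=True, encoding=False):
    """Export a schema dict; include per-variable encoding only on request."""
    out = {}
    for name, var in variables.items():
        entry = {"dims": var["dims"], "attrs": var.get("attrs", {})}
        if data:
            entry["data"] = var.get("data")
        if encoding:
            entry["encoding"] = var.get("encoding", {})
        out[name] = entry
    return out

d = schema_to_dict(
    {"temp": {"dims": ("x",), "data": [1, 2], "encoding": {"dtype": "int32"}}},
    data=False,
    encoding=True,
)
# d["temp"] carries dims, attrs, and encoding, but no data
```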

Describe alternatives you've considered

It is currently possible to manually extract encoding attributes but this is a less desirable solution.

xref: https://github.com/pangeo-forge/pangeo-forge-recipes/issues/256

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6634/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
636449225 MDU6SXNzdWU2MzY0NDkyMjU= 4139 [Feature request] Support file-like objects in open_rasterio jhamman 2443309 closed 0     2 2020-06-10T18:11:26Z 2022-04-19T17:15:21Z 2022-04-19T17:15:20Z MEMBER      

With some acrobatics, it is possible to open file-like objects with rasterio. It would be useful if xarray supported this workflow, particularly for working with cloud-optimized GeoTIFFs and fsspec.

MCVE Code Sample

```python
with open('my_data.tif', 'rb') as f:
    da = xr.open_rasterio(f)
```

Expected Output

A DataArray equivalent to xr.open_rasterio('my_data.tif').

Problem Description

We only currently allow str, rasterio.DatasetReader, or rasterio.WarpedVRT as inputs to open_rasterio.
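Accepting both paths and already-open file-like objects usually comes down to a small normalization step. A sketch under that assumption — `open_input` is a hypothetical helper, not xarray's code:

```python
import io

def open_input(source):
    """Normalize a path or binary file-like object into an open handle."""
    if isinstance(source, str):
        return open(source, "rb"), True   # opened here; caller must close it
    if hasattr(source, "read"):
        return source, False              # already open; caller owns it
    raise TypeError(f"unsupported input type: {type(source)!r}")

buf = io.BytesIO(b"not really a GeoTIFF")
handle, owned = open_input(buf)   # handle is buf; owned is False
```

The `owned` flag records who is responsible for closing the handle, which matters for lazy readers that keep files open past the initial call.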

Versions

Output of `xr.show_versions()`

```
INSTALLED VERSIONS
------------------
commit: 2a288f6ed4286910fcf3ab9895e1e9cbd44d30b4
python: 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 07:56:27) [Clang 9.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 18.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: None
libnetcdf: None

xarray: 0.15.2.dev68+gb896a68f
pandas: 1.0.4
numpy: 1.18.5
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.1.5
cfgrib: None
iris: None
bottleneck: None
dask: 2.18.1
distributed: 2.18.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 46.1.3.post20200325
pip: 20.1
conda: None
pytest: 5.4.3
IPython: 7.13.0
sphinx: 3.0.3
```

xref: https://github.com/pangeo-data/pangeo-datastore/issues/109

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4139/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1118974427 PR_kwDOAMm_X84x0GoS 6214 update HOW_TO_RELEASE.md jhamman 2443309 closed 0     2 2022-01-31T05:01:14Z 2022-03-03T13:05:04Z 2022-01-31T18:35:27Z MEMBER   0 pydata/xarray/pulls/6214

This PR updates our step-by-step guide for releasing Xarray. It makes a few minor changes to account for #6206 and officially documents the switch to CalVer. This should be clearly documented in whats-new.rst as part of the first release utilizing CalVer.

Also, note that this should probably wait until we make the 0.20.1 patch release.

  • [x] Closes #6176, #6206
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6214/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1108564253 I_kwDOAMm_X85CE1kd 6176 Xarray versioning to switch to CalVer jhamman 2443309 closed 0     10 2022-01-19T21:09:45Z 2022-03-03T04:32:10Z 2022-01-31T18:35:27Z MEMBER      

Xarray is planning to switch to Calendar versioning (calver). This issue serves as a general announcement.

The idea has come up in multiple developer meetings (#4001) and is part of a larger effort to increase our release cadence (#5927). Today's developer meeting included unanimous consent for the change. Other projects in Xarray's ecosystem have also made this change recently (e.g. https://github.com/dask/community/issues/100). While it is likely we will make this change in the next release or two, users and developers should feel free to voice objections here.

The proposed CalVer implementation follows the same schema as the Dask project, that is: YYYY.MM.X (4-digit year, two-digit month, one-digit zero-indexed micro version). For example, the code block below compares the current and proposed version tags:

```python
In [1]: import xarray as xr

# current
In [2]: xr.__version__
Out[2]: '0.19.1'

# proposed
In [2]: xr.__version__
Out[2]: '2022.01.0'
```
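The scheme is simple enough to express as a one-line helper; this is a sketch with a hypothetical function name, just to pin down the formatting:

```python
from datetime import date

def calver_tag(release_date, micro=0):
    """Build a YYYY.MM.X tag in the Dask-style scheme described above."""
    return f"{release_date.year:04d}.{release_date.month:02d}.{micro}"

tag = calver_tag(date(2022, 1, 19))  # '2022.01.0'
```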

cc @pydata/xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6176/reactions",
    "total_count": 6,
    "+1": 6,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1129263296 PR_kwDOAMm_X84yVrKT 6262 [docs] update urls throughout documentation jhamman 2443309 closed 0     0 2022-02-10T00:41:54Z 2022-02-10T19:44:57Z 2022-02-10T19:44:52Z MEMBER   0 pydata/xarray/pulls/6262

We are in the process of moving our documentation url from https://xarray.pydata.org to https://docs.xarray.dev. This PR makes that change throughout the documentation. Additionally, I corrected some broken links and fixed some missing https urls in the process.

cc @andersy005

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6262/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
636451398 MDExOlB1bGxSZXF1ZXN0NDMyNjIxMjgy 4140 support file-like objects in xarray.open_rasterio jhamman 2443309 closed 0     6 2020-06-10T18:15:18Z 2021-12-03T19:22:14Z 2021-11-15T16:17:59Z MEMBER   0 pydata/xarray/pulls/4140
  • [x] Closes #4139
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

cc @scottyhq and @martindurant

xref: https://github.com/pangeo-data/pangeo-datastore/issues/109

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4140/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1047795001 PR_kwDOAMm_X84uPpLm 5956 Create CITATION.cff jhamman 2443309 closed 0     1 2021-11-08T18:40:15Z 2021-11-09T20:56:25Z 2021-11-09T18:15:01Z MEMBER   0 pydata/xarray/pulls/5956

This adds a new file to the root of the Xarray repository, CITATION.cff. GitHub recently added support for citation files and adding this file will add a UI feature to the Xarray GitHub repo.

The author list is based on the latest Zenodo release (0.20.1) and I did my best to find everyone's ORCIDs.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5956/reactions",
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 3,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
139064764 MDU6SXNzdWUxMzkwNjQ3NjQ= 787 Add Groupby and Rolling methods to docs jhamman 2443309 closed 0     2 2016-03-07T19:10:26Z 2021-11-08T19:51:00Z 2021-11-08T19:51:00Z MEMBER      

The injected apply/reduce methods for the Groupby and Rolling objects are not shown in the API documentation page. While there is obviously a fair bit of overlap with the similar DataArray/Dataset methods, it would help users to know what methods are available on the Groupby and Rolling objects if we explicitly listed them in the documentation. Suggestions on the best format to show these methods (e.g. Rolling.mean) are welcome.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/787/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
985498976 MDExOlB1bGxSZXF1ZXN0NzI0Nzg1NjIz 5759 update development roadmap jhamman 2443309 closed 0     1 2021-09-01T18:50:15Z 2021-09-07T15:30:49Z 2021-09-07T15:03:06Z MEMBER   0 pydata/xarray/pulls/5759
  • [x] Passes pre-commit run --all-files

cc @pydata/xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5759/reactions",
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
663968779 MDU6SXNzdWU2NjM5Njg3Nzk= 4253 [community] Backends refactor meeting jhamman 2443309 closed 0     13 2020-07-22T18:39:19Z 2021-03-11T20:42:33Z 2021-03-11T20:42:33Z MEMBER      

In today's dev call, we opted to schedule a separate meeting to discuss the backends refactor that BOpen (@alexamici and his team) is beginning to work on. This issue is meant to coordinate the scheduling of this meeting. To that end, I've created the following Doodle Poll to help choose a time: https://doodle.com/poll/4mtzxncka7gee4mq

Anyone from @pydata/xarray should feel free to join if there is interest. At a minimum, I'm hoping to have @alexamici, @aurghs, @shoyer, and @rabernat there.

Please respond to the poll by COB tomorrow so I can quickly get the meeting on the books. Thanks!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4253/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
473795509 MDExOlB1bGxSZXF1ZXN0MzAxODY2NzAx 3166 [Feature] Backend entrypoint jhamman 2443309 closed 0     3 2019-07-28T23:01:47Z 2021-01-12T16:41:23Z 2021-01-12T16:41:23Z MEMBER   0 pydata/xarray/pulls/3166

In this PR, I'm experimenting with using the entrypoints package to support 3rd party backends. This does not attempt to solidify the API for what the store is, I feel like that should happen in a second PR. Here's how it would work...

In @rabernat's xmitgcm package, there is a _MDSDataStore that inherits from xarray.backends.common.AbstractDataStore. To allow reading mds datasets directly in xarray.open_dataset, xmitgcm would add the following lines to its setup.py file:

```python
setup(
    ...
    entry_points={
        'xarray.backends': [
            'mds = xmitgcm.mds_store:_MDSDataStore',
            ...
        ]
    }
)
```

Xarray would then be able to discover this backend at runtime and users could use the store directly in open_dataset calls like this:

```python
ds = xr.open_dataset('./path/to/file.mds', engine='mds', backend_kwargs={...})
```

Note: I recognize that xmitgcm.open_mdsdataset has a bunch of other user options that I'm likely ignoring here but this is meant just as an illustration.

Now a list of caveats and things to consider:

  1. I have only done this for open_dataset, not for to_netcdf. We may want to consider a more generic serialization method that allows for plug-able writers.
  2. open_dataset has some special handling for some readers (lock and group selection, file-like objects, etc.). We should work toward moving as much of that logic into the Store objects as possible.
  3. We should decide what to do when a 3rd party plugin conflicts with an existing backend. For example, someone could include an entrypoint with the key of netcdf4.

  • [x] Partially closes #1970
  • [ ] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3166/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
287223508 MDU6SXNzdWUyODcyMjM1MDg= 1815 apply_ufunc(dask='parallelized') with multiple outputs jhamman 2443309 closed 0     17 2018-01-09T20:40:52Z 2020-08-19T06:57:55Z 2020-08-19T06:57:55Z MEMBER      

I have an application where I'd like to use apply_ufunc with dask on a function that requires multiple inputs and outputs. This was left as a TODO item in #1517. However, it's not clear to me, looking at the code, how this can be done given the current form of dask's atop. I'm hoping @shoyer has already thought of a clever solution here...

Code Sample, a copy-pastable example if possible

```python
def func(foo, bar):
    assert foo.shape == bar.shape
    spam = np.zeros_like(bar)
    spam2 = np.full_like(bar, 2)
    return spam, spam2

foo = xr.DataArray(np.zeros((10, 10))).chunk()
bar = xr.DataArray(np.zeros((10, 10))).chunk() + 5

xrfunc = xr.apply_ufunc(func, foo, bar,
                        output_core_dims=[[], []],
                        dask='parallelized')
```

Problem description

This currently raises a NotImplementedError.

Expected Output

Multiple dask arrays. In my example above, two dask arrays.

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.86+
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.0+dev.c92020a
pandas: 0.22.0
numpy: 1.13.3
scipy: 1.0.0
netCDF4: 1.3.1
h5netcdf: 0.5.0
Nio: None
zarr: 2.2.0a2.dev176
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.0
distributed: 1.20.2+36.g7387410
matplotlib: 2.1.1
cartopy: None
seaborn: None
setuptools: 38.4.0
pip: 9.0.1
conda: 4.3.29
pytest: 3.3.2
IPython: 6.2.1
sphinx: None
```

cc @mrocklin, @arbennett

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1815/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
588165025 MDExOlB1bGxSZXF1ZXN0MzkzOTY0MzE4 3897 expose a few zarr backend functions as semi-public api jhamman 2443309 closed 0     3 2020-03-26T05:24:22Z 2020-08-10T15:20:31Z 2020-03-27T22:37:26Z MEMBER   0 pydata/xarray/pulls/3897
  • [x] Fixes #3851
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3897/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
663962183 MDExOlB1bGxSZXF1ZXN0NDU1MjgyNTI2 4252 update docs to point to xarray-contrib and xarray-tutorial jhamman 2443309 closed 0     1 2020-07-22T18:27:29Z 2020-07-23T16:34:18Z 2020-07-23T16:34:10Z MEMBER   0 pydata/xarray/pulls/4252
  • [x] Closes #1850
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4252/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
318988669 MDU6SXNzdWUzMTg5ODg2Njk= 2094 Drop win-32 platform CI from appveyor matrix? jhamman 2443309 closed 0     3 2018-04-30T18:29:17Z 2020-03-30T20:30:58Z 2020-03-24T03:41:24Z MEMBER      

Conda-forge has dropped support for 32-bit windows builds (https://github.com/conda-forge/cftime-feedstock/issues/2#issuecomment-385485144). Do we want to continue testing against this environment? The point becomes moot after #1876 gets wrapped up in ~7 months.

xref: https://github.com/pydata/xarray/pull/1252

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2094/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
578017585 MDU6SXNzdWU1NzgwMTc1ODU= 3851 Exposing Zarr backend internals as semi-public API jhamman 2443309 closed 0     3 2020-03-09T16:04:49Z 2020-03-27T22:37:26Z 2020-03-27T22:37:26Z MEMBER      

We recently built a prototype REST API for serving xarray datasets via a Fast-API application (see #3850 for more details). In the process of doing this, we needed to use a few internal functions in Xarray's Zarr backend:

```python
from xarray.backends.zarr import (
    _DIMENSION_KEY,
    _encode_zarr_attr_value,
    _extract_zarr_variable_encoding,
    encode_zarr_variable,
)
from xarray.core.pycompat import dask_array_type
from xarray.util.print_versions import get_sys_info, netcdf_and_hdf5_versions
```

Obviously, none of these imports are really meant for use outside of Xarray's backends so I'd like to discuss how we may go about exposing these functions (or variables) as semi-public (advanced use) API features. Thoughts?

cc @rabernat

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3851/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
197920258 MDU6SXNzdWUxOTc5MjAyNTg= 1188 Should we deprecate the compat and encoding constructor arguments? jhamman 2443309 closed 0     5 2016-12-28T21:41:26Z 2020-03-24T14:34:37Z 2020-03-24T14:34:37Z MEMBER      

In https://github.com/pydata/xarray/pull/1170#discussion_r94078121, @shoyer writes:

> ...I would consider deprecating the encoding argument to DataArray instead. It would also make sense to get rid of the compat argument to Dataset.
>
> These extra arguments are not part of the fundamental xarray data model and thus are a little distracting, especially to new users.

@pydata/xarray and others, what do we think about deprecating the compat argument to the Dataset constructor and the encoding argument to the DataArray constructor (and Dataset via #1170)?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1188/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
578005145 MDExOlB1bGxSZXF1ZXN0Mzg1NjY1Nzk1 3850 Add xpublish to related projects jhamman 2443309 closed 0     0 2020-03-09T15:46:14Z 2020-03-10T06:06:08Z 2020-03-10T06:06:08Z MEMBER   0 pydata/xarray/pulls/3850

We've recently released Xpublish. This PR adds the project to the related-projects page in the Xarray documentation. To find out more about Xpublish, check out the docs or the release announcement blogpost.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3850/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
508743579 MDU6SXNzdWU1MDg3NDM1Nzk= 3413 Can apply_ufunc be used on arrays with different dimension sizes jhamman 2443309 closed 0     2 2019-10-17T22:04:00Z 2019-12-11T22:32:23Z 2019-12-11T22:32:23Z MEMBER      

We have an application where we want to use apply_ufunc to apply a function that takes two 1-D arrays and returns a scalar value (basically a reduction over the only axis). We start with two DataArrays that share all the same dimensions - except for the lengths of the dimension we'll be reducing along (t in this case):

```python
def diff_mean(X, y):
    '''a function that only works on 1d arrays that are different lengths'''
    assert X.ndim == 1, X.ndim
    assert y.ndim == 1, y.ndim
    assert len(X) != len(y), X
    return X.mean() - y.mean()

X = np.random.random((10, 4, 5))
y = np.random.random((6, 4, 5))

Xda = xr.DataArray(X, dims=('t', 'x', 'y')).chunk({'t': -1, 'x': 2, 'y': 2})
yda = xr.DataArray(y, dims=('t', 'x', 'y')).chunk({'t': -1, 'x': 2, 'y': 2})
```

Then, we'd like to use apply_ufunc to apply our function (e.g. diff_mean):

```python
out = xr.apply_ufunc(
    diff_mean,
    Xda,
    yda,
    vectorize=True,
    dask="parallelized",
    output_dtypes=[np.float],
    input_core_dims=[['t'], ['t']],
)
```

This fails with an error when aligning the t dimensions:

```python-traceback
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-e90cf6fba482> in <module>
      9     dask="parallelized",
     10     output_dtypes=[np.float],
---> 11     input_core_dims=[['t'], ['t']],
     12 )

~/miniconda3/envs/xarray-ml/lib/python3.7/site-packages/xarray/core/computation.py in apply_ufunc(func, input_core_dims, output_core_dims, exclude_dims, vectorize, join, dataset_join, dataset_fill_value, keep_attrs, kwargs, dask, output_dtypes, output_sizes, *args)
   1042             join=join,
   1043             exclude_dims=exclude_dims,
-> 1044             keep_attrs=keep_attrs
   1045         )
   1046     elif any(isinstance(a, Variable) for a in args):

~/miniconda3/envs/xarray-ml/lib/python3.7/site-packages/xarray/core/computation.py in apply_dataarray_vfunc(func, signature, join, exclude_dims, keep_attrs, *args)
    222     if len(args) > 1:
    223         args = deep_align(
--> 224             args, join=join, copy=False, exclude=exclude_dims, raise_on_invalid=False
    225         )
    226

~/miniconda3/envs/xarray-ml/lib/python3.7/site-packages/xarray/core/alignment.py in deep_align(objects, join, copy, indexes, exclude, raise_on_invalid, fill_value)
    403         indexes=indexes,
    404         exclude=exclude,
--> 405         fill_value=fill_value
    406     )
    407

~/miniconda3/envs/xarray-ml/lib/python3.7/site-packages/xarray/core/alignment.py in align(join, copy, indexes, exclude, fill_value, *objects)
    321                 "arguments without labels along dimension %r cannot be "
    322                 "aligned because they have different dimension sizes: %r"
--> 323                 % (dim, sizes)
    324             )
    325

ValueError: arguments without labels along dimension 't' cannot be aligned because they have different dimension sizes: {10, 6}
```

https://nbviewer.jupyter.org/gist/jhamman/0e52d9bb29f679e26b0878c58bb813d2

I'm curious if this can be made to work with apply_ufunc or if we should pursue other options here. Advice and suggestions appreciated.
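For what it's worth, one way this can be made to work is apply_ufunc's exclude_dims argument, which skips alignment and broadcasting over the named core dims so their lengths may differ between inputs (a sketch against a recent xarray, unchunked here for brevity):

```python
import numpy as np
import xarray as xr

def diff_mean(X, y):
    # reduce two 1-d arrays of different lengths to a scalar
    assert X.ndim == 1 and y.ndim == 1
    return X.mean() - y.mean()

Xda = xr.DataArray(np.random.random((10, 4, 5)), dims=('t', 'x', 'y'))
yda = xr.DataArray(np.random.random((6, 4, 5)), dims=('t', 'x', 'y'))

out = xr.apply_ufunc(
    diff_mean,
    Xda,
    yda,
    vectorize=True,
    input_core_dims=[['t'], ['t']],
    exclude_dims={'t'},  # don't align over 't', so sizes 10 and 6 are fine
)
```

Note that every dim in exclude_dims must also appear in input_core_dims (or output_core_dims), since excluded dims cannot survive into the broadcast result.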

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56) [Clang 4.0.1 (tags/RELEASE_401/final)] python-bits: 64 OS: Darwin OS-release: 18.7.0 machine: x86_64 processor: i386 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: None libnetcdf: None xarray: 0.14.0 pandas: 0.25.1 numpy: 1.17.1 scipy: 1.3.1 netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.3.2 cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.3.0 distributed: 2.3.2 matplotlib: 3.1.1 cartopy: None seaborn: None numbagg: None setuptools: 41.2.0 pip: 19.2.3 conda: None pytest: 5.0.1 IPython: 7.8.0 sphinx: 2.2.0
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3413/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
527830145 MDExOlB1bGxSZXF1ZXN0MzQ1MDAzOTU4 3568 add environment file for binderized examples jhamman 2443309 closed 0     1 2019-11-25T04:00:59Z 2019-11-25T15:57:19Z 2019-11-25T15:57:19Z MEMBER   0 pydata/xarray/pulls/3568
  • [x] Closes #3563
  • [ ] Tests added
  • [ ] Passes black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3568/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
505409694 MDExOlB1bGxSZXF1ZXN0MzI2ODQ4ODk1 3389 OrderedDict --> dict, some python3.5 cleanup too jhamman 2443309 closed 0     9 2019-10-10T17:30:43Z 2019-10-23T07:07:10Z 2019-10-12T21:33:34Z MEMBER   0 pydata/xarray/pulls/3389
  • [x] Toward https://github.com/pydata/xarray/issues/3380#issuecomment-539224341
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

See below for inline comments where I could use some input from @shoyer and @crusaderky

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3389/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
503700649 MDU6SXNzdWU1MDM3MDA2NDk= 3380 [Release] 0.14 jhamman 2443309 closed 0     19 2019-10-07T21:28:28Z 2019-10-15T01:08:11Z 2019-10-14T21:26:59Z MEMBER      

#3358 is going to make some fairly major changes to the minimum supported versions of required and optional dependencies. We also have a few bug fixes that have landed since releasing 0.13 that would be good to get out.

From what I can tell, the following pending PRs are close enough to get into this release.

- [ ] ~tests for arrays with units #3238~
- [x] map_blocks #3276
- [x] Rolling minimum dependency versions policy #3358
- [x] Remove all OrderedDict's (#3389)
- [x] Speed up isel and __getitem__ #3375
- [x] Fix concat bug when concatenating unlabeled dimensions. #3362
- [ ] ~Add hypothesis test for netCDF4 roundtrip #3283~
- [x] Fix groupby reduce for dataarray #3338
- [x] Need a fix for https://github.com/pydata/xarray/issues/3377

Am I missing anything else that needs to get in?

I think we should aim to wrap this release up soon (this week). I can volunteer to go through the release steps once we're ready.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3380/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
505617351 MDExOlB1bGxSZXF1ZXN0MzI3MDEzMDQx 3392 fix for #3377 jhamman 2443309 closed 0     1 2019-10-11T03:32:19Z 2019-10-11T11:30:52Z 2019-10-11T11:30:51Z MEMBER   0 pydata/xarray/pulls/3392
  • [x] Closes #3377
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3392/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
406035264 MDExOlB1bGxSZXF1ZXN0MjQ5ODQ1MTAz 2737 add h5netcdf+dask tests jhamman 2443309 closed 0     7 2019-02-02T23:50:20Z 2019-02-12T06:31:01Z 2019-02-12T05:39:19Z MEMBER   0 pydata/xarray/pulls/2737
  • [x] Closes #1571
  • [x] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2737/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
407548101 MDExOlB1bGxSZXF1ZXN0MjUwOTk3NTYx 2750 remove references to cyordereddict jhamman 2443309 closed 0     0 2019-02-07T05:32:27Z 2019-02-07T18:30:01Z 2019-02-07T18:30:01Z MEMBER   0 pydata/xarray/pulls/2750
  • [x] Closes #2744
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2750/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
406049155 MDExOlB1bGxSZXF1ZXN0MjQ5ODUzNTA1 2738 reintroduce pynio/rasterio/iris to py36 test env jhamman 2443309 closed 0     1 2019-02-03T03:43:31Z 2019-02-07T00:08:49Z 2019-02-07T00:08:17Z MEMBER   0 pydata/xarray/pulls/2738
  • [x] Closes #1910
  • [x] Tests added

xref: #2683

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2738/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
297227247 MDU6SXNzdWUyOTcyMjcyNDc= 1910 Pynio tests are being skipped on TravisCI jhamman 2443309 closed 0     3 2018-02-14T20:03:31Z 2019-02-07T00:08:17Z 2019-02-07T00:08:17Z MEMBER      

Problem description

Currently on Travis, the Pynio tests are being skipped. The py27-cdat+iris+pynio build is supposed to run tests for each of these packages, but it is not.

https://travis-ci.org/pydata/xarray/jobs/341426116#L2429-L2518

I can't look at this right now in depth but I'm wondering if this is related to #1531.

reported by @WeatherGod

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1910/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
406187700 MDExOlB1bGxSZXF1ZXN0MjQ5OTQyODM1 2741 remove xfail from test_cross_engine_read_write_netcdf4 jhamman 2443309 closed 0     0 2019-02-04T05:35:18Z 2019-02-06T22:49:19Z 2019-02-04T14:50:16Z MEMBER   0 pydata/xarray/pulls/2741

This is passing in my local test environment. We'll see on CI...

  • [x] Closes #535
  • [x] Tests added
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2741/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
400841236 MDExOlB1bGxSZXF1ZXN0MjQ1OTM1OTA4 2691 try no rasterio in py36 env jhamman 2443309 closed 0     4 2019-01-18T18:35:58Z 2019-02-03T03:44:11Z 2019-01-18T21:47:44Z MEMBER   0 pydata/xarray/pulls/2691

As described in #2683, our test suite is failing on Travis with an unfortunate segfault. For now, I've just taken rasterio (and therefore GDAL) out of the offending environment. I'll use this PR to test a few other options.

cc @max-sixty

  • [x] Closes #2683
  • [ ] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2691/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
406023579 MDExOlB1bGxSZXF1ZXN0MjQ5ODM4MTA3 2736 remove bottleneck dev build from travis jhamman 2443309 closed 0     0 2019-02-02T21:18:29Z 2019-02-03T03:32:38Z 2019-02-03T03:32:21Z MEMBER   0 pydata/xarray/pulls/2736

This dev build is failing due to problems with bottleneck's setup script. Generally, the bottleneck package seems to be short on maintenance effort, so until a new release is issued, I don't think we need to be testing against its development state.

  • [x] Closes #1109

xref: #2661

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2736/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
405955807 MDExOlB1bGxSZXF1ZXN0MjQ5Nzk2MzQx 2735 add tests for handling of empty pandas objects in constructors jhamman 2443309 closed 0     3 2019-02-02T06:54:42Z 2019-02-02T23:18:21Z 2019-02-02T07:47:58Z MEMBER   0 pydata/xarray/pulls/2735
  • [x] Closes #697
  • [x] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2735/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
405038519 MDExOlB1bGxSZXF1ZXN0MjQ5MDg2NjYx 2730 improve error message for invalid encoding jhamman 2443309 closed 0     1 2019-01-31T01:20:49Z 2019-01-31T17:27:03Z 2019-01-31T17:26:54Z MEMBER   0 pydata/xarray/pulls/2730

Improved error message for invalid encodings.

  • [x] Closes #2728
  • [ ] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2730/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
395431629 MDExOlB1bGxSZXF1ZXN0MjQxODg3MjU2 2645 Remove py2 compat jhamman 2443309 closed 0     14 2019-01-03T01:20:51Z 2019-01-25T16:46:22Z 2019-01-25T16:38:45Z MEMBER   0 pydata/xarray/pulls/2645

I was feeling particularly zealous today, so I decided to see what it would take to strip out all the Python 2 compatibility code in xarray. I expect some will feel it's too soon to merge this, so I'm mostly putting this up for show-and-tell and to highlight some of the knots we've tied ourselves into over the years.

  • [x] Closes #1876
  • [ ] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2645/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
302930480 MDU6SXNzdWUzMDI5MzA0ODA= 1971 Should we be testing against multiple dask schedulers? jhamman 2443309 closed 0     5 2018-03-07T01:25:37Z 2019-01-13T20:58:21Z 2019-01-13T20:58:20Z MEMBER      

Almost all of our unit tests run against dask's default scheduler (usually dask.threaded). While the beauty of dask is that the scheduler can be separated from the logical implementation, there are a few idiosyncrasies to consider, particularly in xarray's backends. To that end, we have a few tests covering the integration of the distributed scheduler with xarray's backends, but the test coverage is not particularly complete.

If nothing more, I think it is worth considering tests that use the threaded, multiprocessing, and distributed schedulers for a larger subset of the backends tests (those that use dask).

Note: I'm bringing this up because I'm seeing some failing tests in #1793 that are unrelated to my code change but do appear to be related to dask and possibly a different default scheduler (example failure).
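As an aside for later readers: with modern dask the scheduler can be swapped per block of code via dask.config.set (which replaced the older dask.set_options), so running a test suite under several schedulers is mostly a matter of parameterizing over this context manager. A sketch, assuming dask and xarray are installed:

```python
import dask
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0).reshape(2, 3)).chunk(1)

# run the same graph under the single-threaded scheduler, which is the
# easiest one to step through in a debugger
with dask.config.set(scheduler="synchronous"):
    result = da.mean().compute()
```

The same pattern works with scheduler="threads", "processes", or a distributed Client.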

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1971/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
395004129 MDExOlB1bGxSZXF1ZXN0MjQxNTgxMjY0 2637 DEP: drop python 2 support and associated ci mods jhamman 2443309 closed 0     3 2018-12-31T16:35:59Z 2019-01-02T04:52:18Z 2019-01-02T04:52:04Z MEMBER   0 pydata/xarray/pulls/2637

This is a WIP. I expect the CI changes to take a few iterations.

  • [x] Closes #1876
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2637/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
293414745 MDU6SXNzdWUyOTM0MTQ3NDU= 1876 DEP: drop Python 2.7 support jhamman 2443309 closed 0     2 2018-02-01T06:11:07Z 2019-01-02T04:52:04Z 2019-01-02T04:52:04Z MEMBER      

The timeline for dropping Python 2.7 support for new Xarray releases is the end of 2018.

This issue can be used to track the necessary documentation and code changes to make that happen.

xref: #1830

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1876/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
377423603 MDExOlB1bGxSZXF1ZXN0MjI4MzcwMzUz 2545 Expand test environment for Python 3.7 jhamman 2443309 closed 0     2 2018-11-05T14:27:50Z 2018-11-06T16:29:35Z 2018-11-06T16:22:46Z MEMBER   0 pydata/xarray/pulls/2545

Just adding a full environment for python 3.7.

  • [x] Extends #2271
  • [x] Tests added
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2545/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
377075253 MDExOlB1bGxSZXF1ZXN0MjI4MTMwMzQx 2538 Stop loading tutorial data by default jhamman 2443309 closed 0     6 2018-11-03T17:24:26Z 2018-11-05T15:36:17Z 2018-11-05T15:36:17Z MEMBER   0 pydata/xarray/pulls/2538
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

In working on an xarray/dask tutorial, I've come to realize we eagerly load the tutorial datasets in xarray.tutorial.load_dataset. I'm going to just say that I don't think we should do that, but I could be missing some rationale. I didn't open an issue, so please feel free to share thoughts here.

One option would be to create a new function (xr.tutorial.open_dataset) that does what I'm suggesting and then slowly deprecate tutorial.load_dataset. Thoughts?

xref: https://github.com/dask/dask-examples/pull/51

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2538/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
362913084 MDExOlB1bGxSZXF1ZXN0MjE3NDkyNDIy 2432 switch travis language to generic jhamman 2443309 closed 0     3 2018-09-23T04:37:38Z 2018-09-26T23:27:55Z 2018-09-26T23:27:54Z MEMBER   0 pydata/xarray/pulls/2432

Following up on #2271. This switches the language setting in our Travis-CI config from "python" to "generic". Since we don't use any of the Travis Python utilities, we didn't really need the python setting, and the generic setting gives a few benefits:

  • smaller base image which should give a bit faster spin-up time
  • a build matrix without reliance on the Python version; instead we just point to the conda environment file

  • [x] Tests passed (for all non-documentation changes)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2432/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
339197312 MDExOlB1bGxSZXF1ZXN0MTk5OTI1NDg3 2271 dev/test build for python 3.7 jhamman 2443309 closed 0     3 2018-07-08T05:02:19Z 2018-09-22T23:09:43Z 2018-09-22T20:13:28Z MEMBER   0 pydata/xarray/pulls/2271
  • [x] Tests added
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2271/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
323765896 MDU6SXNzdWUzMjM3NjU4OTY= 2142 add CFTimeIndex enabled date_range function jhamman 2443309 closed 0     1 2018-05-16T20:02:08Z 2018-09-19T20:24:40Z 2018-09-19T20:24:40Z MEMBER      

Pandas' date_range function is a fast and flexible way to create DateTimeIndex objects. Now that we have a functioning CFTimeIndex, it would be great to add a version of the date_range function that supports other calendars and dates out of range for Pandas.

Code Sample and expected output

```python
In [1]: import xarray as xr

In [2]: xr.date_range('2000-02-26', '2000-03-02')
Out[2]:
DatetimeIndex(['2000-02-26', '2000-02-27', '2000-02-28', '2000-02-29',
               '2000-03-01', '2000-03-02'],
              dtype='datetime64[ns]', freq='D')

In [3]: xr.date_range('2000-02-26', '2000-03-02', calendar='noleap')
Out[3]:
CFTimeIndex(['2000-02-26', '2000-02-27', '2000-02-28', '2000-03-01',
             '2000-03-02'],
            dtype='cftime.datetime', freq='D')
```
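This functionality eventually landed as xr.cftime_range. For comparison, the standard-calendar behaviour sketched in Out[2] above matches plain pandas; the noleap calendar would drop the leap day 2000-02-29, leaving five dates instead of six:

```python
import pandas as pd

# pandas' date_range uses daily frequency by default when given start/end
idx = pd.date_range('2000-02-26', '2000-03-02')

# the standard (proleptic Gregorian) calendar includes the leap day in 2000
assert len(idx) == 6
assert idx[3].strftime('%Y-%m-%d') == '2000-02-29'
```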

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2142/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
361453268 MDExOlB1bGxSZXF1ZXN0MjE2NDIxMTE3 2421 Update NumFOCUS donate link jhamman 2443309 closed 0     1 2018-09-18T19:40:53Z 2018-09-19T05:59:28Z 2018-09-19T05:59:28Z MEMBER   0 pydata/xarray/pulls/2421
  • [ ] Closes #xxxx (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [ ] Tests added (for all bug fixes or enhancements)
  • [ ] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2421/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
357720579 MDExOlB1bGxSZXF1ZXN0MjEzNjY4MTgz 2403 add some blurbs about numfocus sponsorship to docs jhamman 2443309 closed 0     3 2018-09-06T15:54:06Z 2018-09-19T05:37:34Z 2018-09-11T02:14:18Z MEMBER   0 pydata/xarray/pulls/2403

Xarray is now a fiscally sponsored project of NumFOCUS. This PR adds a few blurbs of text highlighting that on the main readme and index page of the docs.

TODO:

- Update flipcause to an xarray-specific donation page

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2403/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
358870903 MDExOlB1bGxSZXF1ZXN0MjE0NTAwNjk5 2409 Numfocus jhamman 2443309 closed 0     0 2018-09-11T03:15:52Z 2018-09-11T05:13:51Z 2018-09-11T05:13:51Z MEMBER   0 pydata/xarray/pulls/2409

Follow-up PR fixing two small typos in my previous PR.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2409/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
345300237 MDExOlB1bGxSZXF1ZXN0MjA0NDg4NDI2 2320 Fix for zarr encoding bug jhamman 2443309 closed 0     1 2018-07-27T17:05:27Z 2018-08-14T03:46:37Z 2018-08-14T03:46:34Z MEMBER   0 pydata/xarray/pulls/2320
  • [x] Closes #2278
  • [x] Tests added
  • [ ] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2320/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
340489812 MDExOlB1bGxSZXF1ZXN0MjAwODg4Mzc0 2282 fix dask get_scheduler warning jhamman 2443309 closed 0     1 2018-07-12T05:01:02Z 2018-07-14T16:19:58Z 2018-07-14T16:19:53Z MEMBER   0 pydata/xarray/pulls/2282
  • [x] Closes #2238
  • [ ] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2282/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
327905732 MDExOlB1bGxSZXF1ZXN0MTkxNTg1ODU4 2204 update minimum versions and associated code cleanup jhamman 2443309 closed 0   0.11 2856429 6 2018-05-30T21:27:14Z 2018-07-08T00:55:36Z 2018-07-08T00:55:32Z MEMBER   0 pydata/xarray/pulls/2204
  • [x] closes #2200, closes #1829, closes #2203
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

This updates the following minimum versions:

  • numpy: 1.11 (Mar 27, 2016) --> 1.12 (Jan 15, 2017)
  • pandas: 0.18 (Mar 11, 2016) --> 0.19 (Oct 2, 2016)
  • dask: 0.9 (May 10, 2016) --> 0.16

and drops our tests for python 3.4.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2204/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
288465429 MDU6SXNzdWUyODg0NjU0Mjk= 1829 Drop support for Python 3.4 jhamman 2443309 closed 0   0.11 2856429 13 2018-01-15T02:38:19Z 2018-07-08T00:55:32Z 2018-07-08T00:55:32Z MEMBER      

Python 3.7-final is due out in June (PEP 537). When do we want to deprecate 3.4, and when should we drop support altogether? @maxim-lian brought this up in a PR he's working on: https://github.com/pydata/xarray/pull/1828#issuecomment-357562144.

For reference, we dropped Python 3.3 in #1175 (12/20/2016).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1829/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
327893262 MDU6SXNzdWUzMjc4OTMyNjI= 2203 Update minimum version of dask jhamman 2443309 closed 0     6 2018-05-30T20:47:57Z 2018-07-08T00:55:32Z 2018-07-08T00:55:32Z MEMBER      

Xarray currently states that it supports dask version 0.9 and later. However, (1) I don't think this is true (a quick test shows that some of our tests fail using dask 0.9), and (2) we have a growing number of tests that are being skipped for older dask versions:

```
$ grep -irn "dask.__version__" xarray/tests/*py
xarray/tests/__init__.py:90: if LooseVersion(dask.__version__) < '0.18':
xarray/tests/test_computation.py:755: if LooseVersion(dask.__version__) < LooseVersion('0.17.3'):
xarray/tests/test_computation.py:841: if not use_dask or LooseVersion(dask.__version__) > LooseVersion('0.17.4'):
xarray/tests/test_dask.py:211: @pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_dask.py:223: @pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_dask.py:284: @pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_dask.py:296: @pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_dask.py:387: if LooseVersion(dask.__version__) == LooseVersion('0.15.3'):
xarray/tests/test_dask.py:784: pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_dask.py:802: pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_dask.py:818: @pytest.mark.skipif(LooseVersion(dask.__version__) <= '0.15.4',
xarray/tests/test_variable.py:1664: if LooseVersion(dask.__version__) <= LooseVersion('0.15.1'):
xarray/tests/test_variable.py:1670: if LooseVersion(dask.__version__) <= LooseVersion('0.15.1'):
```
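The LooseVersion gating above compares dotted version strings component-wise. The same idea can be sketched without distutils using a small hypothetical helper (sufficient for plain numeric versions like the dask releases compared in these tests, though not for pre-release suffixes):

```python
def version_tuple(v):
    # "0.15.4" -> (0, 15, 4)
    return tuple(int(part) for part in v.split('.'))

# tuple comparison is element-wise, so 0.15.4 < 0.16 and 0.17.3 < 0.17.10,
# which a plain string comparison would get wrong
assert version_tuple('0.15.4') < version_tuple('0.16')
assert version_tuple('0.17.3') < version_tuple('0.17.10')
```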

I'd like to see xarray bump the minimum version number of dask to something around 0.15.4 (Oct. 2017) or 0.16 (Nov. 2017).

cc @mrocklin, @pydata/xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2203/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
327875183 MDU6SXNzdWUzMjc4NzUxODM= 2200 DEPS: drop numpy < 1.12 jhamman 2443309 closed 0     0 2018-05-30T19:52:40Z 2018-07-08T00:55:31Z 2018-07-08T00:55:31Z MEMBER      

Pandas is dropping support for Numpy 1.11 and earlier in its 0.24 release. It is probably easiest for xarray to follow suit.

xref: https://github.com/pandas-dev/pandas/issues/21242

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2200/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
331752926 MDExOlB1bGxSZXF1ZXN0MTk0NDA3MzU5 2228 fix zarr chunking bug jhamman 2443309 closed 0     2 2018-06-12T21:04:10Z 2018-06-13T13:07:58Z 2018-06-13T05:51:36Z MEMBER   0 pydata/xarray/pulls/2228
  • [x] Closes #2225
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2228/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
331415995 MDU6SXNzdWUzMzE0MTU5OTU= 2225 Zarr Backend: check for non-uniform chunks is too strict jhamman 2443309 closed 0     3 2018-06-12T02:36:05Z 2018-06-13T05:51:36Z 2018-06-13T05:51:36Z MEMBER      

I think the following block of code is more strict than either dask or zarr requires:

https://github.com/pydata/xarray/blob/6c3abedf906482111b06207b9016ea8493c42713/xarray/backends/zarr.py#L80-L89

It should be possible to have uneven chunks in the last position of multiple dimensions in a zarr dataset.

Code Sample, a copy-pastable example if possible

```python
In [1]: import xarray as xr

In [2]: import dask.array as dsa

In [3]: da = xr.DataArray(dsa.random.random((8, 7, 11), chunks=(3, 3, 3)), dims=('x', 'y', 't'))

In [4]: da
Out[4]:
<xarray.DataArray 'da.random.random_sample-1aed3ea2f9dd784ec947cb119459fa56' (x: 8, y: 7, t: 11)>
dask.array<shape=(8, 7, 11), dtype=float64, chunksize=(3, 3, 3)>
Dimensions without coordinates: x, y, t

In [5]: da.data.chunks
Out[5]: ((3, 3, 2), (3, 3, 1), (3, 3, 3, 2))

In [6]: da.to_dataset('varname').to_zarr('/Users/jhamman/workdir/test_chunks.zarr')
/Users/jhamman/anaconda/bin/ipython:1: FutureWarning: the order of the arguments on DataArray.to_dataset has changed; you now need to supply name as a keyword argument
  #!/Users/jhamman/anaconda/bin/python
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-32fa9a7d0276> in <module>()
----> 1 da.to_dataset('varname').to_zarr('/Users/jhamman/workdir/test_chunks.zarr')

~/anaconda/lib/python3.6/site-packages/xarray/core/dataset.py in to_zarr(self, store, mode, synchronizer, group, encoding, compute)
   1185         from ..backends.api import to_zarr
   1186         return to_zarr(self, store=store, mode=mode, synchronizer=synchronizer,
-> 1187                        group=group, encoding=encoding, compute=compute)
   1188
   1189     def __unicode__(self):

~/anaconda/lib/python3.6/site-packages/xarray/backends/api.py in to_zarr(dataset, store, mode, synchronizer, group, encoding, compute)
    856     # I think zarr stores should always be sync'd immediately
    857     # TODO: figure out how to properly handle unlimited_dims
--> 858     dataset.dump_to_store(store, sync=True, encoding=encoding, compute=compute)
    859
    860     if not compute:

~/anaconda/lib/python3.6/site-packages/xarray/core/dataset.py in dump_to_store(self, store, encoder, sync, encoding, unlimited_dims, compute)
   1073
   1074         store.store(variables, attrs, check_encoding,
-> 1075                     unlimited_dims=unlimited_dims)
   1076         if sync:
   1077             store.sync(compute=compute)

~/anaconda/lib/python3.6/site-packages/xarray/backends/zarr.py in store(self, variables, attributes, *args, **kwargs)
    341     def store(self, variables, attributes, *args, **kwargs):
    342         AbstractWritableDataStore.store(self, variables, attributes,
--> 343                                         *args, **kwargs)
    344
    345     def sync(self, compute=True):

~/anaconda/lib/python3.6/site-packages/xarray/backends/common.py in store(self, variables, attributes, check_encoding_set, unlimited_dims)
    366         self.set_dimensions(variables, unlimited_dims=unlimited_dims)
    367         self.set_variables(variables, check_encoding_set,
--> 368                            unlimited_dims=unlimited_dims)
    369
    370     def set_attributes(self, attributes):

~/anaconda/lib/python3.6/site-packages/xarray/backends/common.py in set_variables(self, variables, check_encoding_set, unlimited_dims)
    403             check = vn in check_encoding_set
    404             target, source = self.prepare_variable(
--> 405                 name, v, check, unlimited_dims=unlimited_dims)
    406
    407             self.writer.add(source, target)

~/anaconda/lib/python3.6/site-packages/xarray/backends/zarr.py in prepare_variable(self, name, variable, check_encoding, unlimited_dims)
    325
    326         encoding = _extract_zarr_variable_encoding(
--> 327             variable, raise_on_invalid=check_encoding)
    328
    329         encoded_attrs = OrderedDict()

~/anaconda/lib/python3.6/site-packages/xarray/backends/zarr.py in _extract_zarr_variable_encoding(variable, raise_on_invalid)
    181
    182     chunks = _determine_zarr_chunks(encoding.get('chunks'), variable.chunks,
--> 183                                     variable.ndim)
    184     encoding['chunks'] = chunks
    185     return encoding

~/anaconda/lib/python3.6/site-packages/xarray/backends/zarr.py in _determine_zarr_chunks(enc_chunks, var_chunks, ndim)
     87             "Zarr requires uniform chunk sizes excpet for final chunk."
     88             " Variable %r has incompatible chunks. Consider "
---> 89             "rechunking using chunk()." % (var_chunks,))
     90     # last chunk is allowed to be smaller
     91     last_var_chunk = all_var_chunks[-1]

ValueError: Zarr requires uniform chunk sizes excpet for final chunk. Variable ((3, 3, 2), (3, 3, 1), (3, 3, 3, 2)) has incompatible chunks. Consider rechunking using chunk().
```

Problem description

The check rejects chunk layouts that zarr can represent without loss: as shown above, a dask array may carry a smaller final chunk along several dimensions at once, yet `_determine_zarr_chunks` raises and tells the user to rechunk. Applying the uniform-chunk check per dimension would let these arrays be written directly.

Expected Output

IIUC, Zarr allows multiple dims to have uneven chunks, so long as they are all in the last position:

```python
In [9]: import zarr

In [10]: z = zarr.zeros((8, 7, 11), chunks=(3, 3, 3), dtype='i4')

In [11]: z.chunks
Out[11]: (3, 3, 3)
```
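The relaxed per-dimension rule can be sketched in a few lines of plain Python (`chunks_compatible_with_zarr` is a hypothetical helper for illustration, not xarray code): within each dimension, every chunk except the last must have the same size, and the last chunk may only be smaller.

```python
def chunks_compatible_with_zarr(var_chunks):
    """Sketch of the relaxed rule: check uniformity per dimension."""
    for dim_chunks in var_chunks:
        first = dim_chunks[0]
        # all chunks but the last must equal the first chunk size
        if any(c != first for c in dim_chunks[:-1]):
            return False
        # the final chunk may only be smaller, never larger
        if dim_chunks[-1] > first:
            return False
    return True

# the chunking from the example above: smaller final chunks in several dims
print(chunks_compatible_with_zarr(((3, 3, 2), (3, 3, 1), (3, 3, 3, 2))))  # True
print(chunks_compatible_with_zarr(((3, 2, 3),)))  # False: uneven mid-dimension chunk
```

Under this rule the `((3, 3, 2), (3, 3, 1), (3, 3, 3, 2))` layout from the example would be accepted.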

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Darwin
OS-release: 17.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.7
pandas: 0.22.0
numpy: 1.14.3
scipy: 1.1.0
netCDF4: 1.3.1
h5netcdf: 0.5.1
h5py: 2.7.1
Nio: None
zarr: 2.2.0
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.2
distributed: 1.21.6
matplotlib: 2.2.2
cartopy: 0.16.0
seaborn: 0.8.1
setuptools: 39.0.1
pip: 9.0.3
conda: 4.5.4
pytest: 3.5.1
IPython: 6.3.1
sphinx: 1.7.4
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2225/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
323017930 MDExOlB1bGxSZXF1ZXN0MTg3OTc4ODg2 2131 Feature/pickle rasterio jhamman 2443309 closed 0     13 2018-05-14T23:38:59Z 2018-06-08T05:00:59Z 2018-06-07T18:02:56Z MEMBER   0 pydata/xarray/pulls/2131
  • [x] Closes #2121
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

cc @rsignell-usgs

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2131/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
322445312 MDU6SXNzdWUzMjI0NDUzMTI= 2121 rasterio backend should use DataStorePickleMixin (or something similar) jhamman 2443309 closed 0     2 2018-05-11T21:51:59Z 2018-06-07T18:02:56Z 2018-06-07T18:02:56Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
In [1]: import xarray as xr

In [2]: ds = xr.open_rasterio('RGB.byte.tif')

In [3]: ds
Out[3]:
<xarray.DataArray (band: 3, y: 718, x: 791)>
[1703814 values with dtype=uint8]
Coordinates:
  * band     (band) int64 1 2 3
  * y        (y) float64 2.827e+06 2.826e+06 2.826e+06 2.826e+06 2.826e+06 ...
  * x        (x) float64 1.021e+05 1.024e+05 1.027e+05 1.03e+05 1.033e+05 ...
Attributes:
    transform:   (101985.0, 300.0379266750948, 0.0, 2826915.0, 0.0, -300.0417...
    crs:         +init=epsg:32618
    res:         (300.0379266750948, 300.041782729805)
    is_tiled:    0
    nodatavals:  (0.0, 0.0, 0.0)

In [4]: import pickle

In [5]: pickle.dumps(ds)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-5-a165c2473431> in <module>()
----> 1 pickle.dumps(ds)

TypeError: can't pickle rasterio._io.RasterReader objects
```

Problem description

Originally reported by @rsignell-usgs in https://github.com/pangeo-data/pangeo/issues/249#issuecomment-388445370, the rasterio backend is not pickle-able. This obviously causes problems when using dask-distributed. We probably need to use DataStorePickleMixin or something similar on rasterio datasets to allow multiple readers of the same dataset.
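The idea behind `DataStorePickleMixin` can be sketched with a toy store (`LazyReopenMixin`, `FakeRasterStore`, and `fake_open` are hypothetical names for illustration, not xarray code): drop the unpicklable handle in `__getstate__` and reopen it from the saved arguments in `__setstate__`.

```python
import pickle

class LazyReopenMixin:
    """Hypothetical sketch of the DataStorePickleMixin idea: drop the
    unpicklable file handle when pickling, reopen it after unpickling."""

    def __getstate__(self):
        state = self.__dict__.copy()
        state['_handle'] = None  # the open handle itself cannot be pickled
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # reopen from the saved opener and arguments instead of the handle
        self._handle = self._opener(*self._args)

def fake_open(path):
    # stand-in for an unpicklable reader such as rasterio.open
    return {'path': path, 'open': True}

class FakeRasterStore(LazyReopenMixin):
    def __init__(self, path):
        self._args = (path,)
        self._opener = fake_open
        self._handle = fake_open(path)

store = FakeRasterStore('RGB.byte.tif')
restored = pickle.loads(pickle.dumps(store))
print(restored._handle['path'])  # handle was reopened: RGB.byte.tif
```

Each unpickled copy gets its own freshly opened handle, which is what allows multiple readers of the same dataset across processes.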

Expected Output

```python
pickle.dumps(ds)
```

returns a pickled dataset.

Output of xr.show_versions()

```
xr.show_versions()
/Users/jhamman/anaconda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Darwin
OS-release: 17.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.3
pandas: 0.22.0
numpy: 1.14.2
scipy: 1.0.1
netCDF4: 1.3.1
h5netcdf: 0.5.1
h5py: 2.7.1
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.2
distributed: 1.21.6
matplotlib: 2.2.2
cartopy: 0.16.0
seaborn: 0.8.1
setuptools: 39.0.1
pip: 9.0.3
conda: 4.5.1
pytest: 3.5.1
IPython: 6.3.1
sphinx: 1.7.4
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2121/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
324204749 MDExOlB1bGxSZXF1ZXN0MTg4ODc1NDU3 2154 fix unlimited dims bug jhamman 2443309 closed 0     1 2018-05-17T22:13:51Z 2018-05-25T00:32:02Z 2018-05-18T14:48:11Z MEMBER   0 pydata/xarray/pulls/2154
  • [x] Closes #2134
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2154/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
324544072 MDExOlB1bGxSZXF1ZXN0MTg5MTI4NzY0 2163 Versioneer jhamman 2443309 closed 0     2 2018-05-18T20:35:39Z 2018-05-20T23:14:03Z 2018-05-20T23:14:03Z MEMBER   0 pydata/xarray/pulls/2163
  • [x] Closes #1300 (in a more portable way)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

This eliminates the need to edit setup.py before / after release and is a nice step towards simplifying xarray's release process.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2163/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
323732892 MDExOlB1bGxSZXF1ZXN0MTg4NTE4Nzg2 2141 expose CFTimeIndex to public API jhamman 2443309 closed 0     0 2018-05-16T18:19:59Z 2018-05-16T19:48:00Z 2018-05-16T19:48:00Z MEMBER   0 pydata/xarray/pulls/2141
  • [x] Closes #2140 ~- [ ] Tests added (for all bug fixes or enhancements)~
  • [ ] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

cc @spencerkclark and @shoyer

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2141/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
286542795 MDExOlB1bGxSZXF1ZXN0MTYxNTA4MzMx 1811 WIP: Compute==False for to_zarr and to_netcdf jhamman 2443309 closed 0     17 2018-01-07T05:01:42Z 2018-05-16T15:06:51Z 2018-05-16T15:05:03Z MEMBER   0 pydata/xarray/pulls/1811

review of this can wait until after #1800 is merged.

  • [x] Closes #1784
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

cc @mrocklin

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1811/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
304589831 MDExOlB1bGxSZXF1ZXN0MTc0NTMxNTcy 1983 Parallel open_mfdataset jhamman 2443309 closed 0     18 2018-03-13T00:44:35Z 2018-04-20T12:04:31Z 2018-04-20T12:04:23Z MEMBER   0 pydata/xarray/pulls/1983
  • [x] Closes #1981
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

I'm sharing this in the hopes of getting comments from @mrocklin and @pydata/xarray.

What this does:

  • implements a dask.bag map/apply on the xarray open_dataset and preprocess steps in open_mfdataset
  • adds a new parallel option to open_mfdataset
  • provides about a 40% speedup in opening a multifile dataset when using the distributed scheduler (I tested on 1000 netcdf files that took about 9 seconds to open/concatenate in the default configuration)

What it does not do (yet):

  • check that autoclose=True when multiple processes are being used (multiprocessing/distributed scheduler)
  • provide any speedup with the multiprocessing backend (I do not understand why this is)

Benchmark Example

```python
In [1]: import xarray as xr
   ...: import dask
   ...: import dask.threaded
   ...: import dask.multiprocessing
   ...: from dask.distributed import Client

In [2]: c = Client()
   ...: c
Out[2]: <Client: scheduler='tcp://127.0.0.1:59576' processes=4 cores=4>

In [4]: %%time
   ...: with dask.set_options(get=dask.multiprocessing.get):
   ...:     ds = xr.open_mfdataset('../test_files/test_netcdf_*nc', autoclose=True, parallel=True)
CPU times: user 4.76 s, sys: 201 ms, total: 4.96 s
Wall time: 7.74 s

In [5]: %%time
   ...: with dask.set_options(get=c.get):
   ...:     ds = xr.open_mfdataset('../test_files/test_netcdf_*nc', autoclose=True, parallel=True)
CPU times: user 1.88 s, sys: 60.6 ms, total: 1.94 s
Wall time: 4.41 s

In [6]: %%time
   ...: with dask.set_options(get=dask.threaded.get):
   ...:     ds = xr.open_mfdataset('../test_files/test_netcdf_*nc')
CPU times: user 7.77 s, sys: 247 ms, total: 8.02 s
Wall time: 8.17 s

In [7]: %%time
   ...: with dask.set_options(get=dask.threaded.get):
   ...:     ds = xr.open_mfdataset('../test_files/test_netcdf_*nc', autoclose=True)
CPU times: user 7.89 s, sys: 202 ms, total: 8.09 s
Wall time: 8.21 s

In [8]: ds
Out[8]:
<xarray.Dataset>
Dimensions:  (lat: 45, lon: 90, time: 1000)
Coordinates:
  * lon      (lon) float64 0.0 4.045 8.09 12.13 16.18 20.22 24.27 28.31 ...
  * lat      (lat) float64 -90.0 -85.91 -81.82 -77.73 -73.64 -69.55 -65.45 ...
  * time     (time) datetime64[ns] 1970-01-01 1970-01-02 1970-01-11 ...
Data variables:
    foo      (time, lon, lat) float64 dask.array<shape=(1000, 90, 45), chunksize=(1, 90, 45)>
    bar      (time, lon, lat) float64 dask.array<shape=(1000, 90, 45), chunksize=(1, 90, 45)>
    baz      (time, lon, lat) float32 dask.array<shape=(1000, 90, 45), chunksize=(1, 90, 45)>
Attributes:
    history:  created for xarray benchmarking
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1983/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
304201107 MDU6SXNzdWUzMDQyMDExMDc= 1981 use dask to open datasets in parallel jhamman 2443309 closed 0     5 2018-03-11T22:33:52Z 2018-04-20T12:04:23Z 2018-04-20T12:04:23Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
xr.open_mfdataset('path/to/many/files*.nc', method='parallel')
```

Problem description

We have many issues describing the less-than-stellar performance of open_mfdataset (e.g. #511, #893, #1385, #1788, #1823). The problem can be broken into three pieces: 1) open each file, 2) decode/preprocess each dataset, and 3) merge/combine/concat the collection of datasets. We can perform (1) and (2) in parallel (performance improvements to (3) would be a separate task). Lately, I'm finding that for large numbers of files, it can take many seconds to many minutes just to open all the files in a multi-file dataset of mine.

I'm proposing that we use something like dask.bag to parallelize steps (1) and (2). I've played around with this a bit and it "works" almost right out of the box, provided you are using the "autoclose=True" option. A concrete example:

We could change the line:

```python
datasets = [open_dataset(p, **open_kwargs) for p in paths]
```

to

```python
import dask.bag as db

paths_bag = db.from_sequence(paths)
datasets = paths_bag.map(open_dataset, **open_kwargs).compute()
```

I'm curious what others think of this idea and what the potential pitfalls may be.
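The shape of the proposal (map steps (1) and (2) over the paths in parallel, then combine serially) can be mimicked with the standard library alone; this is a sketch with stand-in functions, not real xarray/dask calls:

```python
from concurrent.futures import ThreadPoolExecutor

def open_one(path):
    # stand-in for xr.open_dataset(path, autoclose=True)
    return {'path': path}

def preprocess(ds):
    # stand-in for a user-supplied preprocess function
    ds['processed'] = True
    return ds

paths = ['file_%d.nc' % i for i in range(4)]

# steps (1) and (2) run in parallel; step (3), the combine, stays serial
with ThreadPoolExecutor(max_workers=4) as pool:
    datasets = list(pool.map(lambda p: preprocess(open_one(p)), paths))

print(len(datasets))  # 4
```

`pool.map` preserves input order, so the combine step sees the datasets in the same order as the paths, just as the list comprehension does today.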

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1981/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
283388962 MDExOlB1bGxSZXF1ZXN0MTU5Mjg2OTk0 1793 fix distributed writes jhamman 2443309 closed 0   0.10.3 3008859 35 2017-12-19T22:24:41Z 2018-03-13T15:32:54Z 2018-03-10T15:43:18Z MEMBER   0 pydata/xarray/pulls/1793
  • [x] Closes #1464
  • [x] Tests added
  • [x] Tests passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Right now, I've just modified the dask distributed integration tests so we can all see the failing tests.

I'm happy to push this further but I thought I'd see if either @shoyer or @mrocklin have an idea where to start?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1793/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
304097233 MDExOlB1bGxSZXF1ZXN0MTc0MTg1NDI5 1980 Fix for failing zarr test jhamman 2443309 closed 0     2 2018-03-10T19:26:37Z 2018-03-12T05:37:09Z 2018-03-12T05:37:02Z MEMBER   0 pydata/xarray/pulls/1980
  • [x] Closes #1979 and #1955
  • [x] Tests added
  • [x] Tests passed
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1980/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
298854863 MDExOlB1bGxSZXF1ZXN0MTcwMzg1ODI4 1933 Use conda-forge netcdftime wherever netcdf4 was tested jhamman 2443309 closed 0     8 2018-02-21T06:22:08Z 2018-03-09T19:22:34Z 2018-03-09T19:22:20Z MEMBER   0 pydata/xarray/pulls/1933
  • [x] Closes #1920
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented: see #1920
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1933/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
295621576 MDU6SXNzdWUyOTU2MjE1NzY= 1897 Vectorized indexing with cache=False jhamman 2443309 closed 0     5 2018-02-08T18:38:18Z 2018-03-06T22:00:57Z 2018-03-06T22:00:57Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import xarray as xr

n_times = 4; n_lats = 10; n_lons = 15
n_points = 4

ds = xr.Dataset({'test_var': (['time', 'latitude', 'longitude'],
                              np.random.random((n_times, n_lats, n_lons)))})
ds.to_netcdf('test.nc')

rand_lons = xr.Variable('points', np.random.randint(0, high=n_lons, size=n_points))
rand_lats = xr.Variable('points', np.random.randint(0, high=n_lats, size=n_points))

ds = xr.open_dataset('test.nc', cache=False)
points = ds['test_var'][:, rand_lats, rand_lons]
```

yields:

```
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-7-f16e4cae9456> in <module>()
     12
     13 ds = xr.open_dataset('test.nc', cache=False)
---> 14 points = ds['test_var'][:, rand_lats, rand_lons]

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/dataarray.py in __getitem__(self, key)
    478         else:
    479             # xarray-style array indexing
--> 480             return self.isel(**self._item_key_to_dict(key))
    481
    482     def __setitem__(self, key, value):

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/dataarray.py in isel(self, drop, **indexers)
    759         DataArray.sel
    760         """
--> 761         ds = self._to_temp_dataset().isel(drop=drop, **indexers)
    762         return self._from_temp_dataset(ds)
    763

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/dataset.py in isel(self, drop, **indexers)
   1390         for name, var in iteritems(self._variables):
   1391             var_indexers = {k: v for k, v in indexers_list if k in var.dims}
-> 1392             new_var = var.isel(**var_indexers)
   1393             if not (drop and name in var_indexers):
   1394                 variables[name] = new_var

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/variable.py in isel(self, **indexers)
    851             if dim in indexers:
    852                 key[i] = indexers[dim]
--> 853         return self[tuple(key)]
    854
    855     def squeeze(self, dim=None):

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/variable.py in __getitem__(self, key)
    620         """
    621         dims, indexer, new_order = self._broadcast_indexes(key)
--> 622         data = as_indexable(self._data)[indexer]
    623         if new_order:
    624             data = np.moveaxis(data, range(len(new_order)), new_order)

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/indexing.py in __getitem__(self, key)
    554
    555     def __getitem__(self, key):
--> 556         return type(self)(_wrap_numpy_scalars(self.array[key]))
    557
    558     def __setitem__(self, key, value):

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/indexing.py in __getitem__(self, indexer)
    521
    522     def __getitem__(self, indexer):
--> 523         return type(self)(self.array, self._updated_key(indexer))
    524
    525     def __setitem__(self, key, value):

~/anaconda/envs/pangeo/lib/python3.6/site-packages/xarray/core/indexing.py in _updated_key(self, new_key)
    491             'Vectorized indexing for {} is not implemented. Load your '
    492             'data first with .load() or .compute(), or disable caching by '
--> 493             'setting cache=False in open_dataset.'.format(type(self)))
    494
    495         iter_new_key = iter(expanded_indexer(new_key.tuple, self.ndim))

NotImplementedError: Vectorized indexing for <class 'xarray.core.indexing.LazilyIndexedArray'> is not implemented. Load your data first with .load() or .compute(), or disable caching by setting cache=False in open_dataset.
```

Problem description

Raising a NotImplementedError here is fine, but the message instructs the user to "disable caching by setting cache=False in open_dataset", which I've already done. So my questions are: 1) should we expect this to work with cache=False, and 2) if not, the error message should not suggest an option that is already in effect.

Expected Output

Ideally, we can get the same behavior as:

```python
ds = xr.open_dataset('test2.nc', cache=False).load()
points = ds['test_var'][:, rand_lats, rand_lons]
```

```
<xarray.DataArray 'test_var' (time: 4, points: 4)>
array([[0.939469, 0.406885, 0.939469, 0.759075],
       [0.470116, 0.585546, 0.470116, 0.37833 ],
       [0.274321, 0.648218, 0.274321, 0.383391],
       [0.754121, 0.078878, 0.754121, 0.903788]])
Dimensions without coordinates: time, points
```

without needing to use .load()
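For reference, the pointwise ("vectorized") selection being requested works out of the box on a plain in-memory NumPy array; this toy example is independent of xarray and uses fixed index values for illustration:

```python
import numpy as np

# what "vectorized" indexing means here: the two index arrays share the
# 'points' dimension and select pointwise, giving (time, points) rather
# than an outer product of shape (time, 4, 4)
data = np.arange(4 * 10 * 15).reshape(4, 10, 15)   # (time, lat, lon)
rand_lats = np.array([1, 5, 1, 7])
rand_lons = np.array([2, 3, 2, 9])

points = data[:, rand_lats, rand_lons]
print(points.shape)  # (4, 4)
```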

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-693.5.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.0+dev55.g1d32399
pandas: 0.22.0
numpy: 1.14.0
scipy: 1.0.0
netCDF4: 1.3.1
h5netcdf: 0.5.0
h5py: 2.7.1
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.1
distributed: 1.20.2
matplotlib: 2.1.2
cartopy: 0.15.1
seaborn: 0.8.1
setuptools: 38.4.0
pip: 9.0.1
conda: None
pytest: 3.4.0
IPython: 6.2.1
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1897/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
287852184 MDU6SXNzdWUyODc4NTIxODQ= 1821 v0.10.1 Release jhamman 2443309 closed 0   0.10.3 3008859 11 2018-01-11T16:56:08Z 2018-02-26T23:20:45Z 2018-02-26T01:48:32Z MEMBER      

We're close to a minor/bug-fix release (0.10.1). What do we need to get done before that can happen?

  • [x] #1800 Performance improvements to Zarr (@jhamman)
  • [ ] #1793 Fix for to_netcdf writes with dask-distributed (@jhamman, could use help)
  • [x] #1819 Normalisation for RGB imshow

Help wanted / bugs that no-one is working on:

  • [ ] #1792 Comparison to masked numpy arrays
  • [ ] #1764 groupby_bins fails for empty bins

What else?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1821/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
300039859 MDExOlB1bGxSZXF1ZXN0MTcxMjM4Mzk3 1939 Fix/dask isnull jhamman 2443309 closed 0     0 2018-02-25T16:32:47Z 2018-02-25T20:52:17Z 2018-02-25T20:52:16Z MEMBER   0 pydata/xarray/pulls/1939
  • [x] Closes #1937
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Thanks @fujiisoup for the report.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1939/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
293047671 MDExOlB1bGxSZXF1ZXN0MTY2MTc3ODM5 1872 added contributing guide jhamman 2443309 closed 0     5 2018-01-31T06:41:35Z 2018-02-23T06:16:00Z 2018-02-05T21:00:02Z MEMBER   0 pydata/xarray/pulls/1872
  • [x] Closes #640
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

This is something we've talked about for a while and I'm capitalizing on a moment of inspiration. Full disclosure, I've taken most of this from Pandas and edited it just where it makes sense for Xarray.

If others would like specific changes to this, please comment only on doc/contributing.rst; ~CONTRIBUTING.md is auto-generated with pandoc~. @pydata/xarray, feel free to push directly to this branch if there are larger edits you'd like to add.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1872/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
297935814 MDExOlB1bGxSZXF1ZXN0MTY5NzM3ODg0 1920 Add netcdftime as an optional dependency. jhamman 2443309 closed 0     1 2018-02-16T22:12:01Z 2018-02-22T03:23:25Z 2018-02-19T21:25:57Z MEMBER   0 pydata/xarray/pulls/1920
  • [x] Helps with #1084
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

I've added a temporary travis build with the master branch of netcdftime. After a while, we can probably remove that.

This is helping us move towards https://github.com/Unidata/netcdf4-python/issues/601 and #1252

cc @jswhit and @spencerkclark

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1920/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
296847687 MDExOlB1bGxSZXF1ZXN0MTY4OTI3NDcz 1907 drop zarr variable name from the dask chunk name jhamman 2443309 closed 0     1 2018-02-13T18:55:33Z 2018-02-17T04:40:18Z 2018-02-17T04:40:15Z MEMBER   0 pydata/xarray/pulls/1907
  • [x] Closes #1894
  • [ ] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

cc @mrocklin

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1907/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
296867544 MDExOlB1bGxSZXF1ZXN0MTY4OTQyNTg4 1908 Build documentation on TravisCI jhamman 2443309 closed 0     8 2018-02-13T20:04:07Z 2018-02-15T23:20:34Z 2018-02-15T23:20:31Z MEMBER   0 pydata/xarray/pulls/1908
  • [x] Closes #1898 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Tests added (for all bug fixes or enhancements)
  • [ ] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1908/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
113497063 MDU6SXNzdWUxMTM0OTcwNjM= 640 Use pytest to simplify unit tests jhamman 2443309 closed 0     2 2015-10-27T03:06:48Z 2018-02-05T21:00:02Z 2018-02-05T21:00:02Z MEMBER      

xray's unit testing system uses Python's standard unittest framework. pytest offers a more flexible framework requiring less boilerplate code. I recently (#638) introduced pytest into xray's CI builds. This issue proposes incrementally migrating and simplifying xray's unit testing framework to pytest.
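The boilerplate difference can be shown with a toy test written both ways (a sketch; the test names are illustrative):

```python
import unittest

# unittest style: a TestCase subclass and camelCase assert methods
class TestMeanUnittest(unittest.TestCase):
    def test_mean(self):
        self.assertEqual(sum([1, 2, 3]) / 3, 2)

# pytest style: a plain function and a bare assert
def test_mean():
    assert sum([1, 2, 3]) / 3 == 2
```

Beyond the smaller surface area, bare asserts give pytest room to produce rich failure messages, and fixtures/parametrize replace much of the setUp machinery.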

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/640/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
293027121 MDExOlB1bGxSZXF1ZXN0MTY2MTYzMjQ5 1871 add warning stating that xarray will drop python 2 support at the end of 2018 jhamman 2443309 closed 0     1 2018-01-31T04:25:14Z 2018-02-01T06:04:12Z 2018-02-01T06:04:08Z MEMBER   0 pydata/xarray/pulls/1871
  • [x] Closes #1830 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
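The change amounts to an import-time check. A minimal sketch of the idea (the exact message text and mechanism in the PR may differ; `warn_if_python2` is a hypothetical helper for illustration):

```python
import sys
import warnings

def warn_if_python2(version_info=sys.version_info):
    """Emit a FutureWarning when running on Python 2 (illustrative sketch)."""
    if version_info[0] == 2:
        warnings.warn(
            "xarray will drop support for Python 2 at the end of 2018; "
            "please migrate to Python 3.",
            FutureWarning,
        )
        return True
    return False

# On Python 3 nothing is emitted:
print(warn_if_python2())  # False
```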
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1871/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
288466108 MDU6SXNzdWUyODg0NjYxMDg= 1830 Drop support for Python 2 jhamman 2443309 closed 0     7 2018-01-15T02:44:15Z 2018-02-01T06:04:08Z 2018-02-01T06:04:08Z MEMBER      

When do we want to drop Python 2 support for xarray? For reference, pandas has a stated drop date of the end of 2018 (this year), and NumPy's is slightly later, with an incremental deprecation that becomes final on Jan. 1, 2020.

We may also consider signing this pledge to help make it clear when/why we're dropping Python 2 support: http://www.python3statement.org/

xref: https://github.com/pandas-dev/pandas/issues/18894, https://github.com/numpy/numpy/pull/10006, https://github.com/python3statement/python3statement.github.io/issues/11

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1830/reactions",
    "total_count": 5,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
292640727 MDExOlB1bGxSZXF1ZXN0MTY1ODc3NDI1 1868 add h5py to show_versions() jhamman 2443309 closed 0     0 2018-01-30T03:25:13Z 2018-01-30T15:33:14Z 2018-01-30T06:21:15Z MEMBER   0 pydata/xarray/pulls/1868
  • [x] Closes #1867
  • [x] Tests passed (for all non-documentation changes)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1868/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
284607311 MDExOlB1bGxSZXF1ZXN0MTYwMTY1NjI3 1800 WIP: Performance improvements for zarr backend jhamman 2443309 closed 0   0.10.3 3008859 6 2017-12-26T20:37:45Z 2018-01-24T14:56:57Z 2018-01-24T14:55:52Z MEMBER   0 pydata/xarray/pulls/1800
  • [x] Closes https://github.com/pangeo-data/pangeo/issues/48
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Passes git diff upstream/master **/*py | flake8 --diff (remove if you did not edit any Python files)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

This is building on top of #1799. Based on the suggestion from @alimanfoo in https://github.com/pangeo-data/pangeo/issues/48#issuecomment-353807691, I have reworked the handling of attributes in the zarr backend. There is more to do here, particularly around set_dimensions, but this is already giving almost a 2x speedup when writing to GCP.

cc @rabernat, @mrocklin and @alimanfoo
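The underlying bottleneck is round trips to a high-latency object store. A toy sketch of why batching attribute writes helps (not the PR's actual code; the mock store and key names are invented for illustration):

```python
import json

class MockRemoteStore:
    """Stand-in for a high-latency object store (e.g. GCS); counts writes."""
    def __init__(self):
        self.requests = 0
        self.data = {}

    def put(self, key, value):
        self.requests += 1  # each call would be one network round trip
        self.data[key] = value

attrs = {"units": "K", "long_name": "temperature", "_ARRAY_DIMENSIONS": ["x"]}

# Naive approach: one request per attribute -> N round trips
slow = MockRemoteStore()
for k, v in attrs.items():
    slow.put(".zattrs/" + k, v)

# Batched approach: serialize all attributes into one document -> one round trip
fast = MockRemoteStore()
fast.put(".zattrs", json.dumps(attrs))

print(slow.requests, fast.requests)  # 3 1
```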

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1800/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
287186057 MDU6SXNzdWUyODcxODYwNTc= 1813 Test Failure: test_datetime_line_plot jhamman 2443309 closed 0     3 2018-01-09T18:29:35Z 2018-01-10T07:13:53Z 2018-01-10T07:13:53Z MEMBER      

We're getting a single test failure in the plot tests on master (link to Travis failure). I haven't been able to reproduce this locally yet, so I'm posting here to see if anyone has any ideas.

Code Sample

```python
___ TestDatetimePlot.test_datetime_line_plot _____

self = <xarray.tests.test_plot.TestDatetimePlot testMethod=test_datetime_line_plot>

    def test_datetime_line_plot(self):
        # test if line plot raises no Exception
>       self.darray.plot.line()

xarray/tests/test_plot.py:1333:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/plot/plot.py:328: in line
    return line(self._da, *args, **kwargs)
xarray/plot/plot.py:223: in line
    _ensure_plottable(x)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (<xarray.DataArray 'time' (time: 12)>
        array([datetime.datetime(2017, 1, 1, 0, 0), datetime.datetime(2017, 2, 1, ... 12, 1, 0, 0)], dtype=object)
        Coordinates:
          * time     (time) object 2017-01-01 2017-02-01 2017-03-01 2017-04-01 ...,)
numpy_types = [<class 'numpy.floating'>, <class 'numpy.integer'>,
               <class 'numpy.timedelta64'>, <class 'numpy.datetime64'>]
other_types = [<class 'datetime.datetime'>]
x = <xarray.DataArray 'time' (time: 12)>
    array([datetime.datetime(2017, 1, 1, 0, 0), datetime.datetime(2017, 2, 1, ...7, 12, 1, 0, 0)], dtype=object)
    Coordinates:
      * time     (time) object 2017-01-01 2017-02-01 2017-03-01 2017-04-01 ...

    def _ensure_plottable(*args):
        """
        Raise exception if there is anything in args that can't be plotted on
        an axis.
        """
        numpy_types = [np.floating, np.integer, np.timedelta64, np.datetime64]
        other_types = [datetime]

        for x in args:
            if not (_valid_numpy_subdtype(np.array(x), numpy_types)
                    or _valid_other_type(np.array(x), other_types)):
>               raise TypeError('Plotting requires coordinates to be numeric '
                                'or dates.')
E               TypeError: Plotting requires coordinates to be numeric or dates.

xarray/plot/plot.py:57: TypeError
```
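The failure comes down to how NumPy represents arrays of `datetime.datetime`: they get dtype `object`, so the numpy-subdtype check fails and only the fallback element-type check can pass. A standalone reproduction of the two checks (the helpers below mirror xarray's private `_valid_numpy_subdtype` / `_valid_other_type` but are reimplemented here for illustration):

```python
import datetime
import numpy as np

def valid_numpy_subdtype(x, numpy_types):
    # True if the array's dtype is a subtype of any listed abstract numpy type
    return any(np.issubdtype(x.dtype, t) for t in numpy_types)

def valid_other_type(x, types):
    # Fallback: True if every element is an instance of one of the listed types
    return all(isinstance(el, tuple(types)) for el in x.ravel())

times = np.array([datetime.datetime(2017, m, 1) for m in range(1, 13)])
print(times.dtype)  # object
print(valid_numpy_subdtype(times, [np.floating, np.integer,
                                   np.timedelta64, np.datetime64]))  # False
print(valid_other_type(times, [datetime.datetime]))  # True
```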

Expected Output

This test was previously passing

Output of xr.show_versions()

https://travis-ci.org/pydata/xarray/jobs/326640013#L1262

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1813/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
287199483 MDExOlB1bGxSZXF1ZXN0MTYxOTUzNTAy 1814 Fix/plot error and warning jhamman 2443309 closed 0     0 2018-01-09T19:16:31Z 2018-01-10T07:13:53Z 2018-01-10T07:13:53Z MEMBER   0 pydata/xarray/pulls/1814
  • [x] Closes #1813
  • ~~[ ] Tests added~~
  • [x] Tests passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1814/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
267028954 MDExOlB1bGxSZXF1ZXN0MTQ3Njk1MzEx 1640 WIP: Feature/interpolate jhamman 2443309 closed 0     8 2017-10-20T00:26:25Z 2017-12-30T06:58:52Z 2017-12-30T06:21:42Z MEMBER   0 pydata/xarray/pulls/1640
  • [x] Closes #1631
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Rough draft of interpolate method for filling of arbitrary nans.

cc @darothen
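For reference, the core of filling NaNs by interpolation can be sketched with plain NumPy. This is a simplified 1-D stand-in for the idea, not the PR's implementation:

```python
import numpy as np

def interpolate_na(values):
    """Fill NaNs by 1-D linear interpolation over the array's index
    (illustrative sketch only)."""
    values = np.asarray(values, dtype=float)
    idx = np.arange(len(values))
    mask = np.isnan(values)
    filled = values.copy()
    # Interpolate the missing positions from the known ones
    filled[mask] = np.interp(idx[mask], idx[~mask], values[~mask])
    return filled

print(interpolate_na([0.0, np.nan, 2.0, np.nan, np.nan, 5.0]))
# [0. 1. 2. 3. 4. 5.]
```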

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1640/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
265056503 MDU6SXNzdWUyNjUwNTY1MDM= 1631 Resample / upsample behavior diverges from pandas jhamman 2443309 closed 0     5 2017-10-12T19:22:44Z 2017-12-30T06:21:42Z 2017-12-30T06:21:42Z MEMBER      

I've found a few issues where xarray's new resample / upsample functionality diverges from pandas. I think they mostly come down to how NaNs are treated. Thoughts from @shoyer, @darothen, and others are welcome.

Gist with all the juicy details: https://gist.github.com/jhamman/354f0e5ff32a39550ffd25800e7214fc#file-xarray_resample-ipynb

xref: #1608, #1272
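For comparison, pandas' upsampling behavior (the reference point here) inserts NaNs for new timestamps unless a fill method is requested; the series below is illustrative:

```python
import pandas as pd

# A sparse daily series upsampled to daily frequency
s = pd.Series([1.0, 2.0], index=pd.to_datetime(["2017-01-01", "2017-01-03"]))

# asfreq leaves the newly created 2017-01-02 slot as NaN
upsampled = s.resample("D").asfreq()
print(int(upsampled.isna().sum()))  # 1

# ffill propagates the previous value into the new slot instead
filled = s.resample("D").ffill()
print(filled.tolist())  # [1.0, 1.0, 2.0]
```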

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1631/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
283985001 MDExOlB1bGxSZXF1ZXN0MTU5NzM1NzI3 1799 move backend append logic to the prepare_variable methods jhamman 2443309 closed 0     2 2017-12-21T19:44:54Z 2017-12-28T05:40:21Z 2017-12-28T05:40:17Z MEMBER   0 pydata/xarray/pulls/1799
  • [x] Closes #1798
  • [ ] Tests added (ideas for how to test that load is not called? Regression tests from #1609 are passing)
  • [x] Tests passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff (remove if you did not edit any Python files)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1799/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
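The page's filter ("repo = 13221727, state = 'closed' and user = 2443309 sorted by updated_at descending") can be reproduced against this schema with Python's sqlite3. A minimal sketch using a simplified subset of the columns and two rows taken from the table above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (
   id INTEGER PRIMARY KEY, number INTEGER, title TEXT, user INTEGER,
   state TEXT, updated_at TEXT, repo INTEGER, type TEXT
);
INSERT INTO issues VALUES
  (8622, 8622, 'Update min deps in docs', 2443309,
   'closed', '2024-01-19', 13221727, 'pull'),
  (640, 640, 'Use pytest to simplify unit tests', 2443309,
   'closed', '2018-02-05', 13221727, 'issue');
""")

# The same predicate and ordering as the page's query
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = 13221727 AND state = 'closed' AND user = 2443309 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)
```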
Powered by Datasette · Queries took 1001.023ms · About: xarray-datasette