issues

78 rows where type = "pull" and user = 6815844 sorted by updated_at descending


state 2

  • closed 77
  • open 1

type 1

  • pull · 78

repo 1

  • xarray 78
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
818583834 MDExOlB1bGxSZXF1ZXN0NTgxODIxNTI0 4974 implemented pad with new-indexes fujiisoup 6815844 closed 0     8 2021-03-01T07:50:08Z 2023-09-14T02:47:24Z 2023-09-14T02:47:24Z MEMBER   0 pydata/xarray/pulls/4974
  • [x] Closes #3868
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

Now we use a tuple of indexes for DataArray.pad and Dataset.pad.
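
The tuple-of-widths form of `pad` is easiest to see directly. A minimal sketch (default NaN fill assumed):

```python
import numpy as np
import xarray as xr

# Pad one element on each side of the 'x' dimension; by default the
# padded entries are filled with NaN.
da = xr.DataArray([1.0, 2.0, 3.0], dims="x")
padded = da.pad(x=(1, 1))
print(padded.sizes["x"])  # 5
```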

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4974/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
527553050 MDExOlB1bGxSZXF1ZXN0MzQ0ODA1NzQ3 3566 Make 0d-DataArray compatible for indexing. fujiisoup 6815844 closed 0     6 2019-11-23T12:43:32Z 2023-08-31T02:06:21Z 2023-08-31T02:06:21Z MEMBER   0 pydata/xarray/pulls/3566
  • [x] Closes #3562
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Now 0d-DataArray can be used for indexing.
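
A minimal sketch of what this enables (plain `isel` with a 0d indexer):

```python
import xarray as xr

da = xr.DataArray([10, 20, 30], dims="x")
idx = xr.DataArray(1)      # 0d DataArray used as an indexer
selected = da.isel(x=idx)
print(int(selected))       # 20
```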

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3566/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
531087939 MDExOlB1bGxSZXF1ZXN0MzQ3NTkyNzE1 3587 boundary options for rolling.construct fujiisoup 6815844 open 0     4 2019-12-02T12:11:44Z 2022-06-09T14:50:17Z   MEMBER   0 pydata/xarray/pulls/3587
  • [x] Closes #2007, #2011
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Added some boundary options for rolling.construct. Currently, the option names are inherited from np.pad, ['edge' | 'reflect' | 'symmetric' | 'wrap']. Do we want a more intuitive name, such as periodic?
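
The inherited mode names behave as in np.pad, which can be demonstrated with numpy alone (the proposed rolling.construct option itself is not assumed here):

```python
import numpy as np

a = np.array([1, 2, 3])
print(np.pad(a, 1, mode="edge"))       # repeat edge values: [1 1 2 3 3]
print(np.pad(a, 1, mode="reflect"))    # mirror, excluding the edge: [2 1 2 3 2]
print(np.pad(a, 1, mode="symmetric"))  # mirror, including the edge: [1 1 2 3 3]
print(np.pad(a, 1, mode="wrap"))       # periodic wrap-around: [3 1 2 3 1]
```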

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3587/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
675604714 MDExOlB1bGxSZXF1ZXN0NDY1MDg1Njg1 4329 ndrolling repr fix fujiisoup 6815844 closed 0     6 2020-08-08T23:34:37Z 2020-08-09T13:15:50Z 2020-08-09T11:57:38Z MEMBER   0 pydata/xarray/pulls/4329
  • [x] Closes #4328
  • [x] Tests added
  • [x] Passes isort . && black . && mypy . && flake8

There was a bug in rolling.__repr__, but it was not covered by tests. Fixed, and tests are added.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4329/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
655389649 MDExOlB1bGxSZXF1ZXN0NDQ3ODkyNjE3 4219 nd-rolling fujiisoup 6815844 closed 0     16 2020-07-12T12:19:19Z 2020-08-08T07:23:51Z 2020-08-08T04:16:27Z MEMBER   0 pydata/xarray/pulls/4219
  • [x] Closes #4196
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

I noticed that the implementation of nd-rolling is straightforward. The core part is implemented, but I am wondering what the best API would be while keeping it backward-compatible.

Obviously, it should basically look like:

```python
da.rolling(x=3, y=3).mean()
```

A problem is the other parameters, center and min_periods. In principle, they can depend on the dimension. For example, we can have center=True only for x but not for y.

So, maybe we allow a dictionary for them?

```python
da.rolling(x=3, y=3, center={'x': True, 'y': False}, min_periods={'x': 1, 'y': None}).mean()
```

The same thing happens for the .construct method:

```python
da.rolling(x=3, y=3).construct(x='x_window', y='y_window', stride={'x': 2, 'y': 1})
```

I'm afraid this dictionary argument is a bit too redundant.

Does anyone have another idea?
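
The base form (without the dict-valued options proposed above) already operates on multiple dimensions at once; a minimal sketch:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(16.0).reshape(4, 4), dims=("x", "y"))
# Positions without a full 2x2 window are NaN by default.
mean = da.rolling(x=2, y=2).mean()
print(mean.values[1, 1])  # mean of the block [[0, 1], [4, 5]] -> 2.5
```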

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4219/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
239918314 MDExOlB1bGxSZXF1ZXN0MTI4NDcxOTk4 1469 Argmin indexes fujiisoup 6815844 closed 0     6 2017-07-01T01:23:31Z 2020-06-29T19:36:25Z 2020-06-29T19:36:25Z MEMBER   0 pydata/xarray/pulls/1469
  • [x] Closes #1388
  • [x] Tests added / passed
  • [x] Passes git diff master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

With this PR, a ValueError is raised if argmin() is called on a multi-dimensional array.

An argmin_indexes() method is also added to xr.DataArray. The current API design has argmin_indexes() return the argmin indexes as an OrderedDict of DataArrays.

Example:

```python
In [1]: import xarray as xr
   ...: da = xr.DataArray([[1, 2], [-1, 40], [5, 6]],
   ...:                   [('x', ['c', 'b', 'a']), ('y', [1, 0])])
   ...: da.argmin_indexes()
Out[1]: OrderedDict([('x', <xarray.DataArray 'x' ()> array(1)),
                     ('y', <xarray.DataArray 'y' ()> array(0))])

In [2]: da.argmin_indexes(dims='y')
Out[2]: OrderedDict([('y', <xarray.DataArray 'y' (x: 3)> array([0, 0, 0])
        Coordinates:
          * x        (x) <U1 'c' 'b' 'a')])
```

(Because the returned object is an OrderedDict, it is not beautifully printed. The return type could be an xr.Dataset if we want.)

Although in #1388 argmin_indexes() was originally suggested so that we can pass the result into isel_points,

```python
da.isel_points(**da.argmin_indexes())
```

the current implementation of isel_points does NOT work for this case. This is mainly because:

  1. isel_points currently does not work for 0-dimensional or multi-dimensional input.
  2. Even for 1-dimensional input (the second one in the above examples), we should also pass x as an indexer rather than as the coordinate of the indexer.

For 1, I have prepared a modification of isel_points to accept multi-dimensional arrays, but I guess it should be in another PR after the API decision. (It is related to #475 and #974.)

For 2, we should either

  • change the API of argmin_indexes to return not only the indicated dimension but also all the dimensions, like

```python
In [2]: da.argmin_indexes(dims='y')
Out[2]: OrderedDict([('y', array([0, 0, 0])),
                     ('x', array(['c', 'b', 'a']))])
```

  • or change the API of isel_points so that it takes care of the indexer's coordinate if an xr.DataArray is passed as indexers.

I originally worked on the second option (the modification of isel_points), but it breaks backward compatibility and is somewhat magical.

Another alternative is to

  • change the API of argmin_indexes to return an xr.Dataset rather than an OrderedDict, and also change the API of isel_points to accept an xr.Dataset.

This keeps backward compatibility.

Any comments are welcome.
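
For context, argmin_indexes was never merged under this name; in current xarray (0.16 and later), passing a list of dimensions to argmin returns a dict of index DataArrays, which is close in spirit to the proposal above:

```python
import xarray as xr

da = xr.DataArray([[1, 2], [-1, 40], [5, 6]],
                  coords=[("x", ["c", "b", "a"]), ("y", [1, 0])])
# A sequence of dims returns a dict mapping dim name -> index array.
indexes = da.argmin(dim=["x", "y"])
print(int(indexes["x"]), int(indexes["y"]))  # 1 0
```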

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1469/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
619374891 MDExOlB1bGxSZXF1ZXN0NDE4OTEyODc3 4069 Improve interp performance fujiisoup 6815844 closed 0     2 2020-05-16T04:23:47Z 2020-05-25T20:02:41Z 2020-05-25T20:02:37Z MEMBER   0 pydata/xarray/pulls/4069
  • [x] Closes #2223
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Now n-dimensional interp works sequentially if possible. It may speed up some cases.
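
The idea of decomposing an n-dimensional interpolation into sequential 1-D passes can be sketched with plain numpy (an illustration of the principle only, not the actual implementation):

```python
import numpy as np

# Values on a 3x3 grid with x = y = [0, 1, 2].
z = np.arange(9.0).reshape(3, 3)
x = y = np.array([0.0, 1.0, 2.0])

# Interpolate at (0.5, 0.5): first along x within each y column, then along y.
zx = np.array([np.interp(0.5, x, z[:, j]) for j in range(z.shape[1])])
result = np.interp(0.5, y, zx)
print(result)  # 2.0, the bilinear value at the cell center
```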

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4069/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
613044689 MDExOlB1bGxSZXF1ZXN0NDEzODcyODQy 4036 support darkmode fujiisoup 6815844 closed 0     5 2020-05-06T04:39:07Z 2020-05-21T21:06:15Z 2020-05-07T20:36:32Z MEMBER   0 pydata/xarray/pulls/4036
  • [x] Closes #4024
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Now it looks like: [screenshot]

I'm pretty sure that this workaround is not the best (maybe the second worst), as it only supports the dark mode of vscode but not other environments.

I couldn't find a good way to make a workaround for the general dark-mode. Any advice is welcome.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4036/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
596163034 MDExOlB1bGxSZXF1ZXN0NDAwNTExNjkz 3953 Fix wrong order of coordinate converted from pd.series with MultiIndex fujiisoup 6815844 closed 0     2 2020-04-07T21:28:04Z 2020-04-08T05:49:46Z 2020-04-08T02:19:11Z MEMBER   0 pydata/xarray/pulls/3953
  • [x] Closes #3951
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

It looks like dataframe.set_index(index).index == index is not always true.

Added a workaround for this...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3953/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
    xarray 13221727 pull
546784890 MDExOlB1bGxSZXF1ZXN0MzYwMzk1OTY4 3670 sel with categorical index fujiisoup 6815844 closed 0     7 2020-01-08T10:51:06Z 2020-01-25T22:38:28Z 2020-01-25T22:38:21Z MEMBER   0 pydata/xarray/pulls/3670
  • [x] Closes #3669, #3674
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

It is a bit surprising that no members have used xarray with a CategoricalIndex... If anything else is missing, please feel free to point it out.
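
A minimal sketch of the selection this PR enables (a coordinate backed by a pandas CategoricalIndex):

```python
import pandas as pd
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x",
                  coords={"x": pd.CategoricalIndex(["a", "b", "c"])})
# Label-based selection on the categorical coordinate.
print(da.sel(x="a").item())  # 1
```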

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3670/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
523853001 MDExOlB1bGxSZXF1ZXN0MzQxNzYxNTg1 3542 sparse option to reindex and unstack fujiisoup 6815844 closed 0     2 2019-11-16T14:41:00Z 2019-11-19T22:40:34Z 2019-11-19T16:23:34Z MEMBER   0 pydata/xarray/pulls/3542
  • [x] Closes #3518
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Added a sparse option to reindex and unstack. I just added the minimal code necessary to unstack and reindex.

There is still a lot of space to complete the sparse support as discussed in #3245.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3542/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
523831612 MDExOlB1bGxSZXF1ZXN0MzQxNzQ2NDA4 3541 Added fill_value for unstack fujiisoup 6815844 closed 0     3 2019-11-16T11:10:56Z 2019-11-16T14:42:31Z 2019-11-16T14:36:44Z MEMBER   0 pydata/xarray/pulls/3541
  • [x] Closes #3518
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Added an option fill_value for unstack. I am trying to add a sparse option too, but it may take longer. Probably better to do that in a separate PR?
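
A minimal sketch of the new option (the array values here are made up for illustration):

```python
import xarray as xr

arr = xr.DataArray([[1, 2], [3, 4]], dims=("x", "y"),
                   coords={"x": ["a", "b"], "y": [0, 1]})
# Drop one (x, y) combination, then unstack with an explicit fill
# value instead of the default NaN.
stacked = arr.stack(z=("x", "y")).isel(z=[0, 1, 2])
restored = stacked.unstack("z", fill_value=-1)
print(restored.sel(x="b", y=1).item())  # -1, the filled hole
```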

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3541/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
522319360 MDExOlB1bGxSZXF1ZXN0MzQwNTQxNzMz 3520 Fix set_index when an existing dimension becomes a level fujiisoup 6815844 closed 0     2 2019-11-13T16:06:50Z 2019-11-14T11:56:25Z 2019-11-14T11:56:18Z MEMBER   0 pydata/xarray/pulls/3520
  • [x] Closes #3512
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

There was a bug in set_index, where an old dimension was not updated if it became a level of a MultiIndex.
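
For reference, the basic set_index operation the fix concerns looks like this (the coordinate names are hypothetical):

```python
import xarray as xr

da = xr.DataArray([10, 20], dims="x",
                  coords={"a": ("x", ["p", "q"]), "b": ("x", [0, 1])})
# Build a MultiIndex over dim 'x' from the coordinates 'a' and 'b'.
indexed = da.set_index(x=["a", "b"])
print(indexed.sel(a="q").item())  # 20
```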

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3520/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
281423161 MDExOlB1bGxSZXF1ZXN0MTU3ODU2NTEx 1776 [WIP] Fix pydap array wrapper fujiisoup 6815844 closed 0   0.10.3 3008859 6 2017-12-12T15:22:07Z 2019-09-25T15:44:19Z 2018-01-09T01:48:13Z MEMBER   0 pydata/xarray/pulls/1776
  • [x] Closes #1775 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Passes git diff upstream/master **/*py | flake8 --diff (remove if you did not edit any Python files)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

I am trying to fix #1775, but tests are still failing. Any help would be appreciated.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1776/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
440900618 MDExOlB1bGxSZXF1ZXN0Mjc2MzQ2MTQ3 2942 Fix rolling operation with dask and bottleneck fujiisoup 6815844 closed 0     7 2019-05-06T21:23:41Z 2019-06-30T00:34:57Z 2019-06-30T00:34:57Z MEMBER   0 pydata/xarray/pulls/2942
  • [x] Closes #2940
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fix for #2940. It looks like there was a bug in the previous logic, but I am not sure why it was working...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2942/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
398468139 MDExOlB1bGxSZXF1ZXN0MjQ0MTYyMTgx 2668 fix datetime_to_numeric and Variable._to_numeric fujiisoup 6815844 closed 0     14 2019-01-11T22:02:07Z 2019-02-11T11:58:22Z 2019-02-11T09:47:09Z MEMBER   0 pydata/xarray/pulls/2668
  • [x] Closes #2667
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Started fixing #2667.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2668/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
396157243 MDExOlB1bGxSZXF1ZXN0MjQyNDM1MjAz 2653 Implement integrate fujiisoup 6815844 closed 0     2 2019-01-05T11:22:10Z 2019-01-31T17:31:31Z 2019-01-31T17:30:31Z MEMBER   0 pydata/xarray/pulls/2653
  • [x] Closes #1288
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

I would like to add integrate, which is essentially an xarray version of np.trapz. I know there was a variety of discussion in #1288, but I think it would be nice to limit ourselves to what numpy provides via np.trapz, i.e.:

  1. only the trapezoidal rule, not the rectangle rule or Simpson's rule
  2. no special handling of np.nan
  3. no support for bounds

Most of them (except for 1) can be solved by combining several existing methods.
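
A minimal sketch of the resulting behavior (trapezoidal rule along a coordinate):

```python
import xarray as xr

da = xr.DataArray([0.0, 1.0, 2.0], dims="x", coords={"x": [0.0, 1.0, 2.0]})
# Trapezoid areas: (0 + 1) / 2 + (1 + 2) / 2 = 2.0
print(da.integrate("x").item())  # 2.0
```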

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2653/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
231308952 MDExOlB1bGxSZXF1ZXN0MTIyNDE4MjA3 1426 scalar_level in MultiIndex fujiisoup 6815844 closed 0     10 2017-05-25T11:03:05Z 2019-01-14T21:20:28Z 2019-01-14T21:20:27Z MEMBER   0 pydata/xarray/pulls/1426
  • [x] Closes #1408
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

[Edit for more clarity] I restarted a new branch to fix #1408 (I closed the older one, #1412).

Because the changes I made are relatively large, here I summarize this PR.

Summary

In this PR, I newly added two kinds of levels in MultiIndex: index-level and scalar-level. index-level is an ordinary level in a MultiIndex (as in the current implementation), while scalar-level indicates a dropped level (newly added in this PR).

Changes in behaviors.

  1. Indexing a scalar at a particular level changes that level to a scalar-level instead of dropping that level (changed from #767).
  2. Indexing a scalar from a MultiIndex, the selected value now becomes a MultiIndex-scalar rather than a scalar tuple.
  3. Enabled indexing along an index-level if the MultiIndex has only a single index-level.

Examples of the output are shown below. Any suggestions for these behaviors are welcome.

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'})
   ...: ds2 = xr.Dataset({'foo': (('x',), [4, 5, 6])}, {'x': [1, 2, 3], 'y': 'b'})
   ...: # example data
   ...: ds = xr.concat([ds1, ds2], dim='y').stack(yx=['y', 'x'])
   ...: ds
Out[1]:
<xarray.Dataset>
Dimensions:  (yx: 6)
Coordinates:
  * yx       (yx) MultiIndex
  - y        (yx) object 'a' 'a' 'a' 'b' 'b' 'b'  # <--- this is index-level
  - x        (yx) int64 1 2 3 1 2 3               # <--- this is also index-level
Data variables:
    foo      (yx) int64 1 2 3 4 5 6

In [2]: # 1. indexing a scalar converts index-level x to scalar-level.
   ...: ds.sel(x=1)
Out[2]:
<xarray.Dataset>
Dimensions:  (yx: 2)
Coordinates:
  * yx       (yx) MultiIndex
  - y        (yx) object 'a' 'b'  # <--- this is index-level
  - x        int64 1              # <--- this is scalar-level
Data variables:
    foo      (yx) int64 1 4

In [3]: # 2. indexing a single element from a MultiIndex makes a MultiIndex-scalar
   ...: ds.isel(yx=0)
Out[3]:
<xarray.Dataset>
Dimensions:  ()
Coordinates:
    yx       MultiIndex  # <--- this is MultiIndex-scalar
  - y        <U1 'a'
  - x        int64 1
Data variables:
    foo      int64 1

In [6]: # 3. enables selecting along an index-level if only one index-level exists in the MultiIndex
   ...: ds.sel(x=1).isel(y=[0, 1])
Out[6]:
<xarray.Dataset>
Dimensions:  (yx: 2)
Coordinates:
  * yx       (yx) MultiIndex
  - y        (yx) object 'a' 'b'
  - x        int64 1
Data variables:
    foo      (yx) int64 1 4
```

Changes in the public APIs

Some changes were necessary to the public APIs, though I tried to minimize them.

  • level_names and get_level_values methods were moved from IndexVariable to Variable. This is because IndexVariable cannot handle 0-d arrays, which I want to support in 2.

  • scalar_level_names and all_level_names properties were added to Variable

  • reset_levels method was added to Variable class to control scalar-level and index-level.

Implementation summary

The main change in the implementation is the addition of our own wrapper of pd.MultiIndex, PandasMultiIndexAdapter. This does most of the MultiIndex-related operations, such as indexing, concatenation, and conversion between scalar-level and index-level.

What we can do now

The main merit of this proposal is that it enables us to handle a MultiIndex in a way more consistent with a normal Variable. Now we can

  • recover the MultiIndex with a dropped level:

```python
In [5]: ds.sel(x=1).expand_dims('x')
Out[5]:
<xarray.Dataset>
Dimensions:  (yx: 2)
Coordinates:
  * yx       (yx) MultiIndex
  - y        (yx) object 'a' 'b'
  - x        (yx) int64 1 1
Data variables:
    foo      (yx) int64 1 4
```

  • construct a MultiIndex by concatenation of MultiIndex-scalars:

```python
In [8]: xr.concat([ds.isel(yx=i) for i in range(len(ds['yx']))], dim='yx')
Out[8]:
<xarray.Dataset>
Dimensions:  (yx: 6)
Coordinates:
  * yx       (yx) MultiIndex
  - y        (yx) object 'a' 'a' 'a' 'b' 'b' 'b'
  - x        (yx) int64 1 2 3 1 2 3
Data variables:
    foo      (yx) int64 1 2 3 4 5 6
```

What we cannot do now

With the current implementation, we can do ds.sel(y='a').rolling(x=2), but with this PR we cannot, because x is not yet an ordinary coordinate, but a MultiIndex with a single index-level. I think it is better if we can handle such a MultiIndex with a single index-level in a way very similar to an ordinary coordinate.

Similarly, we can do neither ds.sel(y='a').mean(dim='x') nor ds.sel(y='a').to_netcdf('file') (#719).

What are to be decided

  • How to repr these new levels (the current formatting is shown in Out[2] and Out[3] above).
  • Are terminologies such as index-level, scalar-level, and MultiIndex-scalar clear enough?
  • How many operations should we support for a single-index-level MultiIndex? Do we support ds.sel(y='a').rolling(x=2) and ds.sel(y='a').mean(dim='x')?

TODOs

  • [ ] Support indexing with a DataArray, ds.sel(x=ds.x[0])
  • [ ] Support stack, unstack, set_index, reset_index methods with scalar-level MultiIndex.
  • [ ] Add a full document
  • [ ] Clean up the code related to MultiIndex
  • [ ] Fix issues (#1428, #1430, #1431) related to MultiIndex
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1426/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
391477755 MDExOlB1bGxSZXF1ZXN0MjM4OTcyNzU5 2612 Added Coarsen fujiisoup 6815844 closed 0     16 2018-12-16T15:28:31Z 2019-01-06T09:13:56Z 2019-01-06T09:13:46Z MEMBER   0 pydata/xarray/pulls/2612
  • [x] Closes #2525
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Started to implement coarsen. The API is currently something like:

```python
actual = ds.coarsen(time=2, x=3, side='right', coordinate_func={'time': np.max}).max()
```

Currently, it is not working for a datetime coordinate, since mean does not work for this dtype, e.g.:

```python
da = xr.DataArray(np.linspace(0, 365, num=365), dims='time',
                  coords={'time': pd.date_range('15/12/1999', periods=365)})
da['time'].mean()
# -> TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('<M8[ns]')
```

I am not familiar with datetime things. Any advice will be appreciated.
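
The numeric part of coarsen (setting aside the datetime issue above) can be sketched as:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(4.0), dims="x")
# Down-sample by taking the mean of each block of 2.
coarse = da.coarsen(x=2).mean()
print(coarse.values)  # [0.5 2.5]
```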

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2612/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
392535505 MDExOlB1bGxSZXF1ZXN0MjM5Nzg0ODE1 2621 Fix multiindex selection fujiisoup 6815844 closed 0     7 2018-12-19T10:30:15Z 2018-12-24T15:37:27Z 2018-12-24T15:37:27Z MEMBER   0 pydata/xarray/pulls/2621
  • [x] Closes #2619
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixed by using MultiIndex.remove_unused_levels().

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2621/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
368045263 MDExOlB1bGxSZXF1ZXN0MjIxMzExNzcw 2477 Inhouse LooseVersion fujiisoup 6815844 closed 0     2 2018-10-09T05:23:56Z 2018-10-10T13:47:31Z 2018-10-10T13:47:23Z MEMBER   0 pydata/xarray/pulls/2477
  • [x] Closes #2468
  • [x] Tests added
  • [N.A.] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

A fix for #2468.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2477/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
366653476 MDExOlB1bGxSZXF1ZXN0MjIwMjcyODMz 2462 pep8speaks fujiisoup 6815844 closed 0     14 2018-10-04T07:17:34Z 2018-10-07T22:40:15Z 2018-10-07T22:40:08Z MEMBER   0 pydata/xarray/pulls/2462
  • [x] Closes #2428

I installed pep8speaks as suggested in #2428. It looks like they do not need a yml file, but it may be safer to add one (just renamed from .stickler.yml).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2462/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
364565122 MDExOlB1bGxSZXF1ZXN0MjE4NzIxNDUy 2447 restore ddof support in std fujiisoup 6815844 closed 0     3 2018-09-27T16:51:44Z 2018-10-03T12:44:55Z 2018-09-28T13:44:29Z MEMBER   0 pydata/xarray/pulls/2447
  • [x] Closes #2440
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes

It looks like I wrongly removed the ddof option for nanstd in #2236. This PR fixes that.
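
A minimal check of the restored option (sample vs. population standard deviation):

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0], dims="x")
# ddof=1 gives the unbiased sample standard deviation.
print(da.std(ddof=1).item())  # 1.0
```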

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2447/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
364545910 MDExOlB1bGxSZXF1ZXN0MjE4NzA2NzQ1 2446 fix:2445 fujiisoup 6815844 closed 0     0 2018-09-27T16:00:17Z 2018-09-28T18:24:42Z 2018-09-28T18:24:36Z MEMBER   0 pydata/xarray/pulls/2446
  • [x] Closes #2445
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes

It is a regression after #2360.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2446/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
350247452 MDExOlB1bGxSZXF1ZXN0MjA4MTQ0ODQx 2366 Future warning for default reduction dimension of groupby fujiisoup 6815844 closed 0     1 2018-08-14T01:16:34Z 2018-09-28T06:54:30Z 2018-09-28T06:54:30Z MEMBER   0 pydata/xarray/pulls/2366
  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Started to fix #2363. groupby now emits a FutureWarning if the default reduction dimension is not specified. As a side effect, I added xarray.ALL_DIMS; dim=ALL_DIMS always reduces along all the dimensions.
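
For reference, the ALL_DIMS sentinel was later superseded by the Ellipsis literal in current xarray; the explicit all-dimension reduction looks like:

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0, 4.0], dims="t",
                  coords={"label": ("t", ["a", "a", "b", "b"])})
# Reduce each group over all of its dimensions explicitly.
means = da.groupby("label").mean(...)
print(means.values)  # [1.5 3.5]
```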

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2366/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
333248242 MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3 2236 Refactor nanops fujiisoup 6815844 closed 0     19 2018-06-18T12:27:31Z 2018-09-26T12:42:55Z 2018-08-16T06:59:33Z MEMBER   0 pydata/xarray/pulls/2236
  • [x] Closes #2230
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

In #2230, the addition of min_count keywords for our reduction methods was discussed, but our duck_array_ops module is becoming messy (mainly due to the nan-aggregation methods for dask, bottleneck, and numpy), and it looks a little hard to maintain.

I tried to refactor them by moving nan-aggregation methods to nanops module.

I think I still need to take care of more edge cases, but I appreciate any comment for the current implementation.

Note: In my implementation, bottleneck is not used when skipna=False. bottleneck would be advantageous when skipna=True, as numpy needs to copy the entire array once, but I think numpy's method is still OK if skipna=False.
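
The min_count keyword discussed in #2230 behaves like this (a sketch assuming current xarray, where sum and prod accept it):

```python
import numpy as np
import xarray as xr

da = xr.DataArray([np.nan, np.nan, 1.0], dims="x")
# The nan-skipping sum returns NaN unless at least min_count
# valid values are present.
print(da.sum(min_count=1).item())  # 1.0
print(da.sum(min_count=3).item())  # nan
```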

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2236/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
356698348 MDExOlB1bGxSZXF1ZXN0MjEyODg5NzMy 2398 implement Gradient fujiisoup 6815844 closed 0     19 2018-09-04T08:11:52Z 2018-09-21T20:02:43Z 2018-09-21T20:02:43Z MEMBER   0 pydata/xarray/pulls/2398
  • [x] Closes #1332
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Added xr.gradient, xr.DataArray.gradient, and xr.Dataset.gradient according to #1332.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2398/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
351502921 MDExOlB1bGxSZXF1ZXN0MjA5MDc4NDQ4 2372 [MAINT] Avoid using duck typing fujiisoup 6815844 closed 0     1 2018-08-17T08:26:31Z 2018-08-20T01:13:26Z 2018-08-20T01:13:16Z MEMBER   0 pydata/xarray/pulls/2372
  • [x] Closes #2179
  • [x] Tests passed
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2372/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
351591072 MDExOlB1bGxSZXF1ZXN0MjA5MTQ1NDcy 2373 More support of non-string dimension names fujiisoup 6815844 closed 0     2 2018-08-17T13:18:18Z 2018-08-20T01:13:02Z 2018-08-20T01:12:37Z MEMBER   0 pydata/xarray/pulls/2373
  • [x] Tests passed (for all non-documentation changes)

Following to #2174

In some methods, consistency between the dictionary arguments and keyword arguments is checked twice, in Dataset and Variable. Can we change the API of Variable so that it does not take kwargs-type arguments for dimension names?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2373/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
348536270 MDExOlB1bGxSZXF1ZXN0MjA2ODY0NzU4 2353 Raises a ValueError for a confliction between dimension names and level names fujiisoup 6815844 closed 0     0 2018-08-08T00:52:29Z 2018-08-13T22:16:36Z 2018-08-13T22:16:31Z MEMBER   0 pydata/xarray/pulls/2353
  • [x] Closes #2299
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API.

Now it raises an error when assigning a new dimension whose name conflicts with an existing level name. Therefore, the following is not allowed:

```python
b = xr.Dataset(coords={'dim0': ['a', 'b'], 'dim1': [0, 1]})
b = b.stack(dim_stacked=['dim0', 'dim1'])

# This should raise an error even though its length is consistent with b['dim0']
b['c'] = (('dim0',), [10, 11, 12, 13])

# This is OK
b['c'] = (('dim_stacked',), [10, 11, 12, 13])
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2353/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
348539667 MDExOlB1bGxSZXF1ZXN0MjA2ODY3MjMw 2354 Mark some tests related to cdat-lite as xfail fujiisoup 6815844 closed 0     2 2018-08-08T01:13:25Z 2018-08-10T16:09:30Z 2018-08-10T16:09:30Z MEMBER   0 pydata/xarray/pulls/2354

I just marked some to_cdms2 tests as xfail. See #2332 for the details. It is a temporary workaround and we may need to keep #2332 open until it is solved.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2354/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
348108577 MDExOlB1bGxSZXF1ZXN0MjA2NTM3NDc0 2349 dask.ghost -> dask.overlap fujiisoup 6815844 closed 0     0 2018-08-06T22:54:46Z 2018-08-08T01:14:04Z 2018-08-08T01:14:02Z MEMBER   0 pydata/xarray/pulls/2349

Dask renamed dask.ghost -> dask.overlap in dask/dask#3830. This PR follows up on that rename.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2349/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
347672994 MDExOlB1bGxSZXF1ZXN0MjA2MjI0Mjcz 2342 apply_ufunc now raises a ValueError when the size of input_core_dims is inconsistent with number of argument fujiisoup 6815844 closed 0     0 2018-08-05T06:20:03Z 2018-08-06T22:38:57Z 2018-08-06T22:38:53Z MEMBER   0 pydata/xarray/pulls/2342
  • [x] Closes #2341
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Now raises a ValueError when the size of input_core_dims is inconsistent with the number of arguments.
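The added validation can be sketched roughly as follows (an illustrative sketch with a hypothetical helper name, not xarray's actual code):

```python
import numpy as np

def check_input_core_dims(args, input_core_dims):
    """Hypothetical sketch: require one core-dims entry per argument."""
    if len(input_core_dims) != len(args):
        raise ValueError(
            "input_core_dims must have the same length as args: "
            "%d vs %d" % (len(input_core_dims), len(args))
        )

# one argument but two core-dims entries -> should now be rejected
ok = True
try:
    check_input_core_dims((np.ones(3),), [["x"], ["y"]])
except ValueError:
    ok = False
```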

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2342/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
347677525 MDExOlB1bGxSZXF1ZXN0MjA2MjI2ODU0 2343 local flake8 fujiisoup 6815844 closed 0     0 2018-08-05T07:47:38Z 2018-08-05T23:47:00Z 2018-08-05T23:47:00Z MEMBER   0 pydata/xarray/pulls/2343

Trivial changes to pass local flake8 tests.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2343/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
345434195 MDExOlB1bGxSZXF1ZXN0MjA0NTg1MDU5 2326 fix doc build error after #2312 fujiisoup 6815844 closed 0     0 2018-07-28T09:15:20Z 2018-07-28T10:05:53Z 2018-07-28T10:05:50Z MEMBER   0 pydata/xarray/pulls/2326

I merged #2312 without making sure the docs build test passed, but there was a typo. This PR fixes it.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2326/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
289556132 MDExOlB1bGxSZXF1ZXN0MTYzNjU3NDI0 1837 Rolling window with `as_strided` fujiisoup 6815844 closed 0     14 2018-01-18T09:18:19Z 2018-06-22T22:27:11Z 2018-03-01T03:39:19Z MEMBER   0 pydata/xarray/pulls/1837
  • [x] Closes #1831, #1142, #819
  • [x] Tests added
  • [x] Tests passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

I started working on refactoring the rollings. As suggested in the #1831 comment, I implemented rolling_window methods based on as_strided.

I got more than 1,000 times speed up! yey!

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray(np.random.randn(10000, 3), dims=['x', 'y'])
```

with the master

```python
%timeit da.rolling(x=5).reduce(np.mean)
1 loop, best of 3: 9.68 s per loop
```

with the current implementation

```python
%timeit da.rolling(x=5).reduce(np.mean)
100 loops, best of 3: 5.29 ms per loop
```

and with bottleneck

```python
%timeit da.rolling(x=5).mean()
100 loops, best of 3: 2.62 ms per loop
```

My current concerns are:

  • Can we expose the new rolling_window method of DataArray and Dataset to the public? I think this method itself is useful for many use cases, such as short-term FFT and convolution. It also enables more flexible rolling operations, such as windowed moving averages, strided rolling, and ND-rolling.

  • Is there any dask's equivalence to numpy's as_strided? Currently, I just use a slice->concatenate path, but I don't think it is very efficient. (Is it already efficient, as dask utilizes out-of-core computation?)

Any thoughts are welcome.
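The core idea can be sketched with plain NumPy; `rolling_window` below is an illustrative helper (same name as the proposed method, but not the actual implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rolling_window(a, window):
    """Stack `window`-sized views of the last axis as a new trailing axis.

    Returns a view, so no data is copied; the result has shape
    a.shape[:-1] + (a.shape[-1] - window + 1, window).
    """
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return as_strided(a, shape=shape, strides=strides)

a = np.arange(6.0)
windows = rolling_window(a, 3)   # shape (4, 3), zero-copy view
means = windows.mean(axis=-1)    # rolling mean without bottleneck
```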

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1837/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
330859619 MDExOlB1bGxSZXF1ZXN0MTkzNzYyMjMx 2222 implement interp_like fujiisoup 6815844 closed 0     4 2018-06-09T06:46:48Z 2018-06-20T01:39:40Z 2018-06-20T01:39:24Z MEMBER   0 pydata/xarray/pulls/2222
  • [x] Closes #2218
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API.

This adds interp_like, which behaves like reindex_like but uses interpolation.
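In one dimension, the underlying operation reduces to evaluating one array's values at another array's coordinate, which can be sketched with plain NumPy (np.interp as a stand-in for the linear interpolator; not xarray's actual code path):

```python
import numpy as np

# values of `a` defined on coordinate xa; we want them on xb,
# the coordinate of another (hypothetical) array, via linear interpolation
xa = np.array([0.0, 1.0, 2.0, 3.0])
a = np.array([0.0, 0.1, 0.2, 0.1])
xb = np.array([0.5, 1.5, 2.5])

a_on_xb = np.interp(xb, xa, a)
```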

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2222/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
320275317 MDExOlB1bGxSZXF1ZXN0MTg1OTgzOTc3 2104 implement interp() fujiisoup 6815844 closed 0     51 2018-05-04T13:28:38Z 2018-06-11T13:01:21Z 2018-06-08T00:33:52Z MEMBER   0 pydata/xarray/pulls/2104
  • [x] Closes #2079 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

I started working on adding interpolate_at to xarray, as discussed in issue #2079 (but without caching).

I think I need to take care of more edge cases, but before finishing up this PR, I want to discuss what the best API is.

I would like this method to work similarly to isel, which may support vectorized interpolation. Currently, this works as follows:

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]})

In [2]: # simple linear interpolation
   ...: da.interpolate_at(x=[0.5, 1.5])
Out[2]:
<xarray.DataArray (x: 2)>
array([0.05, 0.15])
Coordinates:
  * x        (x) float64 0.5 1.5

In [3]: # with cubic spline interpolation
   ...: da.interpolate_at(x=[0.5, 1.5], method='cubic')
Out[3]:
<xarray.DataArray (x: 2)>
array([0.0375, 0.1625])
Coordinates:
  * x        (x) float64 0.5 1.5

In [4]: # interpolation at one single position
   ...: da.interpolate_at(x=0.5)
Out[4]:
<xarray.DataArray ()>
array(0.05)
Coordinates:
    x        float64 0.5

In [5]: # interpolation with broadcasting
   ...: da.interpolate_at(x=xr.DataArray([[0.5, 1.0], [1.5, 2.0]], dims=['y', 'z']))
Out[5]:
<xarray.DataArray (y: 2, z: 2)>
array([[0.05, 0.1 ],
       [0.15, 0.2 ]])
Coordinates:
    x        (y, z) float64 0.5 1.0 1.5 2.0
Dimensions without coordinates: y, z

In [6]: da = xr.DataArray([[0, 0.1, 0.2], [1.0, 1.1, 1.2]],
   ...:                   dims=['x', 'y'],
   ...:                   coords={'x': [0, 1], 'y': [0, 10, 20]})

In [7]: # multidimensional interpolation
   ...: da.interpolate_at(x=[0.5, 1.5], y=[5, 15])
Out[7]:
<xarray.DataArray (x: 2, y: 2)>
array([[0.55, 0.65],
       [ nan,  nan]])
Coordinates:
  * x        (x) float64 0.5 1.5
  * y        (y) int64 5 15

In [8]: # multidimensional interpolation with broadcasting
   ...: da.interpolate_at(x=xr.DataArray([0.5, 1.5], dims='z'),
   ...:                   y=xr.DataArray([5, 15], dims='z'))
Out[8]:
<xarray.DataArray (z: 2)>
array([0.55,  nan])
Coordinates:
    x        (z) float64 0.5 1.5
    y        (z) int64 5 15
Dimensions without coordinates: z
```

Design question

  1. How many interpolation methods should we support? Currently, I only implemented scipy.interpolate.interp1d for one-dimensional interpolation and scipy.interpolate.RegularGridInterpolator for multidimensional interpolation. I think 90% of use cases are linear, but there are more methods in scipy.

  2. How do we handle nan? Currently this raises a ValueError if nan is present. It may be possible to carry out the interpolation while skipping nan, but in that case the performance would drop significantly because it cannot be vectorized.

  3. Do we support interpolation along dimension without coordinate? In that case, do we attach new coordinate to the object?

  4. What should we do if the new coordinate has a dimensional coordinate for the dimension being interpolated? e.g. in the following case,

```python
da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]})
rslt = da.interpolate_at(x=xr.DataArray([0.5, 1.5], dims=['x'], coords={'x': [1, 3]}))
```

what would rslt['x'] be?

I appreciate any comments.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2104/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
330487989 MDExOlB1bGxSZXF1ZXN0MTkzNDg2NzYz 2220 Reduce memory usage in doc.interpolation.rst fujiisoup 6815844 closed 0     0 2018-06-08T01:23:13Z 2018-06-08T01:45:11Z 2018-06-08T01:31:19Z MEMBER   0 pydata/xarray/pulls/2220

I noticed that an example I added to the docs in #2104 consumes more than 1 GB of memory, which caused the readthedocs build to fail.

This PR changes this to a much lighter example.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2220/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
300268334 MDExOlB1bGxSZXF1ZXN0MTcxMzk2NjUw 1942 Fix precision drop when indexing a datetime64 arrays. fujiisoup 6815844 closed 0     2 2018-02-26T14:53:57Z 2018-06-08T01:21:07Z 2018-02-27T01:13:45Z MEMBER   0 pydata/xarray/pulls/1942
  • [x] Closes #1932
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

This precision drop was caused when converting pd.Timestamp to np.array:

```python
In [7]: ts = pd.Timestamp(np.datetime64('2018-02-12 06:59:59.999986560'))

In [11]: np.asarray(ts, 'datetime64[ns]')
Out[11]: array('2018-02-12T06:59:59.999986000', dtype='datetime64[ns]')
```

We need to call to_datetime64 explicitly.
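The truncation mechanism can be illustrated with NumPy alone (a sketch of the same kind of sub-microsecond precision loss, not the pandas code path itself):

```python
import numpy as np

t = np.datetime64('2018-02-12T06:59:59.999986560', 'ns')

# Round-tripping through microsecond resolution silently truncates the
# sub-microsecond part -- analogous to the lossy conversion that calling
# Timestamp.to_datetime64() explicitly avoids.
t_us = t.astype('datetime64[us]').astype('datetime64[ns]')
lost = bool(t != t_us)
```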

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1942/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
295838143 MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1 1899 Vectorized lazy indexing fujiisoup 6815844 closed 0     37 2018-02-09T11:22:02Z 2018-06-08T01:21:06Z 2018-03-06T22:00:57Z MEMBER   0 pydata/xarray/pulls/1899
  • [x] Closes #1897
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

I tried to support lazy vectorised indexing, inspired by #1897. More tests would be necessary, but I want to decide whether it is worth continuing.

My current implementation is:

  • For outer/basic indexers, we combine successive indexers (as we are doing now).
  • For vectorised indexers, we just store them as-is and index sequentially at evaluation time.

The implementation was simpler than I thought, but it has a clear limitation. It requires loading the array before the vectorised indexing (I mean, at evaluation time). If we perform vectorised indexing on a large array, the performance drops significantly, and this is not noticeable until evaluation time.

I appreciate any suggestions.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1899/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
328006764 MDExOlB1bGxSZXF1ZXN0MTkxNjUzMjk3 2205 Support dot with older dask fujiisoup 6815844 closed 0     0 2018-05-31T06:13:48Z 2018-06-01T01:01:37Z 2018-06-01T01:01:34Z MEMBER   0 pydata/xarray/pulls/2205
  • [x] Related with #2203
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented

Related to #2203, I think it is better if xr.DataArray.dot() works even with older dask, at least in the simpler cases (as this is a very basic operation).

The cost is a slight complication of the code. Any comments are welcome.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2205/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
326420749 MDExOlB1bGxSZXF1ZXN0MTkwNTA5OTk5 2185 weighted rolling mean -> weighted rolling sum fujiisoup 6815844 closed 0     0 2018-05-25T08:03:59Z 2018-05-25T10:38:52Z 2018-05-25T10:38:48Z MEMBER   0 pydata/xarray/pulls/2185

The example of a weighted rolling mean in the docs is actually a weighted rolling sum. It is a little bit misleading, so I propose to change

weighted rolling mean -> weighted rolling sum
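The distinction can be sketched with plain NumPy (illustrative only; the docs example uses xarray's rolling construct):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
weights = np.array([1.0, 2.0, 1.0])  # symmetric, so the convolve flip is harmless

# Convolving with raw weights gives a *weighted rolling sum*;
# dividing by weights.sum() turns it into a weighted rolling mean.
rolling_sum = np.convolve(a, weights[::-1], mode='valid')
rolling_mean = rolling_sum / weights.sum()
```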

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2185/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
322572723 MDExOlB1bGxSZXF1ZXN0MTg3NjU3MTg4 2124 Raise an Error if a coordinate with wrong size is assigned to a dataarray fujiisoup 6815844 closed 0     1 2018-05-13T07:50:15Z 2018-05-16T02:10:48Z 2018-05-15T16:39:22Z MEMBER   0 pydata/xarray/pulls/2124
  • [x] Closes #2112
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API.

Now uses dataset_merge_method when a new coordinate is assigned to an xr.DataArray.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2124/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
322572858 MDExOlB1bGxSZXF1ZXN0MTg3NjU3MjY0 2125 Reduce pad size in rolling fujiisoup 6815844 closed 0     2 2018-05-13T07:52:50Z 2018-05-14T22:43:24Z 2018-05-13T22:37:48Z MEMBER   0 pydata/xarray/pulls/2125
  • [ ] Closes #N.A.
  • [x] Tests added (for all bug fixes or enhancements)
  • [ ] Tests N.A.
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

I noticed that rolling with a dask array and bottleneck can be slightly improved by reducing the padding depth in da.ghost.ghost(a, depth=depth, boundary=boundary).

@jhamman , can you kindly review this?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2125/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
319420201 MDExOlB1bGxSZXF1ZXN0MTg1MzQzMTgw 2100 Fix a bug introduced in #2087 fujiisoup 6815844 closed 0     1 2018-05-02T06:07:01Z 2018-05-14T00:01:15Z 2018-05-02T21:59:34Z MEMBER   0 pydata/xarray/pulls/2100
  • [x] Closes #2099
  • [x] Tests added
  • [x] Tests passed

A quick fix for #2099

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2100/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
322475569 MDExOlB1bGxSZXF1ZXN0MTg3NjAwMzQy 2122 Fixes centerized rolling with bottleneck fujiisoup 6815844 closed 0     2 2018-05-12T02:28:21Z 2018-05-13T00:27:56Z 2018-05-12T06:15:55Z MEMBER   0 pydata/xarray/pulls/2122
  • [x] Closes #2113
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Two bugs were found and fixed.

  1. rolling a dask-array with center=True and bottleneck
  2. rolling an integer dask-array with bottleneck
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2122/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
322314146 MDExOlB1bGxSZXF1ZXN0MTg3NDc3Mzgz 2119 Support keep_attrs for apply_ufunc for xr.Variable fujiisoup 6815844 closed 0     0 2018-05-11T14:18:51Z 2018-05-11T22:54:48Z 2018-05-11T22:54:44Z MEMBER   0 pydata/xarray/pulls/2119
  • [x] Closes #2114
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixes #2114.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2119/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
318237397 MDExOlB1bGxSZXF1ZXN0MTg0NDk1MDI4 2087 Drop conflicted coordinate when assignment. fujiisoup 6815844 closed 0     1 2018-04-27T00:12:43Z 2018-05-02T05:58:41Z 2018-05-02T02:31:02Z MEMBER   0 pydata/xarray/pulls/2087
  • [x] Closes #2068
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

After this, when assigning a dataarray to a dataset, non-dimensional and conflicted coordinates of the dataarray are dropped.

example:

```python
In [2]: ds = xr.Dataset({'da': ('x', [0, 1, 2])},
   ...:                 coords={'y': (('x',), [0.1, 0.2, 0.3])})
   ...: ds
Out[2]:
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
    y        (x) float64 0.1 0.2 0.3
Dimensions without coordinates: x
Data variables:
    da       (x) int64 0 1 2

In [3]: other = ds['da']
   ...: other['y'] = 'x', [0, 1, 2]  # conflicted non-dimensional coordinate
   ...: ds['da'] = other
   ...: ds
Out[3]:
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
    y        (x) float64 0.1 0.2 0.3  # 'y' is not overwritten
Dimensions without coordinates: x
Data variables:
    da       (x) int64 0 1 2
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2087/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
297794911 MDExOlB1bGxSZXF1ZXN0MTY5NjMxNTU3 1919 Remove flake8 from travis fujiisoup 6815844 closed 0     10 2018-02-16T14:03:46Z 2018-05-01T07:24:04Z 2018-05-01T07:24:00Z MEMBER   0 pydata/xarray/pulls/1919
  • [x] Closes #1912

Removing flake8 from travis gives a clearer separation between style issues and test failures.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1919/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
305751269 MDExOlB1bGxSZXF1ZXN0MTc1NDAzMzE4 1994 Make constructing slices lazily. fujiisoup 6815844 closed 0     1 2018-03-15T23:15:26Z 2018-03-18T08:56:31Z 2018-03-18T08:56:27Z MEMBER   0 pydata/xarray/pulls/1994
  • [x] Closes #1993
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes.

Quick fix of #1993.

With this fix, the script shown in #1993 runs in:

```
Bottleneck: 0.08317923545837402 s
Pandas:     1.3338768482208252 s
Xarray:     1.1349339485168457 s
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1994/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
302718231 MDExOlB1bGxSZXF1ZXN0MTczMTcwNjc1 1968 einsum for xarray fujiisoup 6815844 closed 0     5 2018-03-06T14:18:22Z 2018-03-12T06:42:12Z 2018-03-12T06:42:08Z MEMBER   0 pydata/xarray/pulls/1968
  • [x] Closes #1951
  • [x] Tests added
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

Currently, lazy-einsum for dask is not yet working.

@shoyer I think apply_ufunc supports lazy computation, but I have not yet figured out how to do this. Could you give me a hand?
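For reference, the eager operation being wrapped corresponds to a plain np.einsum call (illustrative, with hypothetical dimension labels standing in for xarray dims):

```python
import numpy as np

# xr.dot is essentially a labeled einsum: summing over the shared
# dimension 'y' of two arrays corresponds to the subscripts below.
a = np.arange(6.0).reshape(2, 3)    # dims ('x', 'y')
b = np.arange(12.0).reshape(3, 4)   # dims ('y', 'z')
result = np.einsum('xy,yz->xz', a, b)
```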

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1968/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
302003819 MDExOlB1bGxSZXF1ZXN0MTcyNjcwNTI4 1957 Numpy 1.13 for rtd fujiisoup 6815844 closed 0     4 2018-03-03T14:51:21Z 2018-03-03T22:22:54Z 2018-03-03T22:22:49Z MEMBER   0 pydata/xarray/pulls/1957
  • [x] Partly closes #1944

I noticed this is due to the use of old numpy on rtd.

xref #1956

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1957/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
301613959 MDExOlB1bGxSZXF1ZXN0MTcyMzk0OTEz 1950 Fix doc for missing values. fujiisoup 6815844 closed 0     4 2018-03-02T00:47:23Z 2018-03-03T06:58:33Z 2018-03-02T20:17:29Z MEMBER   0 pydata/xarray/pulls/1950
  • [x] Closes #1944
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1950/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
300484822 MDExOlB1bGxSZXF1ZXN0MTcxNTU3Mjc5 1943 Fix rtd link on readme fujiisoup 6815844 closed 0     1 2018-02-27T03:52:56Z 2018-02-27T04:31:59Z 2018-02-27T04:27:24Z MEMBER   0 pydata/xarray/pulls/1943

Typo in url.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1943/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
298054181 MDExOlB1bGxSZXF1ZXN0MTY5ODEyMTA1 1922 Support indexing with 0d-np.ndarray fujiisoup 6815844 closed 0     0 2018-02-18T02:46:27Z 2018-02-18T07:26:33Z 2018-02-18T07:26:30Z MEMBER   0 pydata/xarray/pulls/1922
  • [x] Closes #1921
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

Now Variable accepts a 0d np.ndarray indexer.
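For comparison, NumPy itself already treats a 0d integer array like a scalar index, which is the behaviour this PR extends to Variable:

```python
import numpy as np

a = np.arange(5) * 10
idx = np.array(2)   # zero-dimensional integer array
value = a[idx]      # indexes like the scalar 2
```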

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1922/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
294052591 MDExOlB1bGxSZXF1ZXN0MTY2OTI1MzU5 1883 Support nan-ops for object-typed arrays fujiisoup 6815844 closed 0     0 2018-02-02T23:16:39Z 2018-02-15T22:03:06Z 2018-02-15T22:03:01Z MEMBER   0 pydata/xarray/pulls/1883
  • [x] Closes #1866
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

I am working on adding aggregation ops for object-typed arrays, which may make #1837 cleaner. I added some tests, but they may not be sufficient. Are there any other cases that should be considered, e.g. [True, 3.0, np.nan] etc.?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1883/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
291544932 MDExOlB1bGxSZXF1ZXN0MTY1MDk5Mzk2 1858 Adding a link to asv benchmark. fujiisoup 6815844 closed 0     2 2018-01-25T11:56:56Z 2018-01-25T21:55:24Z 2018-01-25T17:46:12Z MEMBER   0 pydata/xarray/pulls/1858

As discussed in #1851, I added a link in doc/installing.rst and a badge on README.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1858/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
290666013 MDExOlB1bGxSZXF1ZXN0MTY0NDUyOTg4 1851 Indexing benchmarking fujiisoup 6815844 closed 0     3 2018-01-23T00:27:29Z 2018-01-24T08:10:19Z 2018-01-24T08:10:19Z MEMBER   0 pydata/xarray/pulls/1851
  • [x] Relates to #1771

Just added some benchmarks for basic, outer, and vectorized indexing and assignments.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1851/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
289877082 MDExOlB1bGxSZXF1ZXN0MTYzODk2MjYw 1841 Add dtype support for reduce methods. fujiisoup 6815844 closed 0     0 2018-01-19T06:40:41Z 2018-01-20T18:29:02Z 2018-01-20T18:29:02Z MEMBER   0 pydata/xarray/pulls/1841
  • [x] Closes #1838
  • [x] Tests added
  • [x] Tests passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixes #1838. The new rule for reduce is:

  • If dtype is not None and different from the array's dtype, use numpy's aggregation function instead of bottleneck's.
  • If out is not None, raise an error.

as suggested in the comments.
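The dispatch rule can be sketched in plain Python (an illustrative sketch with a hypothetical helper name, not xarray's actual code):

```python
import numpy as np

def use_bottleneck(values, dtype=None, out=None):
    """Hypothetical sketch of the rule: bottleneck only when dtype matches."""
    if out is not None:
        raise TypeError("'out' is not supported")
    if dtype is not None and np.dtype(dtype) != values.dtype:
        return False  # fall back to numpy's aggregation function
    return True

vals = np.ones(3, dtype='f8')
fast = use_bottleneck(vals)               # same dtype -> bottleneck
slow = use_bottleneck(vals, dtype='f4')   # differing dtype -> numpy
```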

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1841/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
277589143 MDExOlB1bGxSZXF1ZXN0MTU1MjIxMjQ3 1746 Fix in vectorized item assignment fujiisoup 6815844 closed 0     4 2017-11-29T00:37:41Z 2017-12-09T03:29:35Z 2017-12-09T03:29:35Z MEMBER   0 pydata/xarray/pulls/1746
  • [x] Closes #1743, #1744
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Found bugs in nputils.NumpyVindexAdapter.__setitem__ and DataArray.__setitem__.

I will add more tests later. Test case suggestions would be appreciated.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1746/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
274763120 MDExOlB1bGxSZXF1ZXN0MTUzMjIzMjMy 1724 Fix unexpected loading after ``print`` fujiisoup 6815844 closed 0     1 2017-11-17T06:20:28Z 2017-11-17T16:44:40Z 2017-11-17T16:44:40Z MEMBER   0 pydata/xarray/pulls/1724
  • [x] Closes #1720
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

A single missing underscore caused this issue :) Added tests.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1724/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
269996138 MDExOlB1bGxSZXF1ZXN0MTQ5ODE0MTU2 1676 Support orthogonal indexing in MemoryCachedArray (Fix for #1429) fujiisoup 6815844 closed 0     7 2017-10-31T15:10:59Z 2017-11-09T13:47:38Z 2017-11-06T17:21:56Z MEMBER   0 pydata/xarray/pulls/1676
  • [x] Closes #1429
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

This bug originates from the complicated structure around the array wrappers and their indexing, i.e. different array wrappers support different indexing types and, moreover, some can store another array wrapper inside them.

I made some cleanups:

  • Now every array wrapper is a subclass of NDArrayIndexable.
  • Every array wrapper should implement its own __getitem__ or just store another NDArrayIndexable.

I think I added enough tests, but I am not yet fully accustomed to xarray's backend. There might be many combinations of these hierarchical relations.

I would appreciate any comments.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1676/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
272164108 MDExOlB1bGxSZXF1ZXN0MTUxMzU2ODQz 1700 Add dropna test. fujiisoup 6815844 closed 0     3 2017-11-08T11:25:18Z 2017-11-09T07:56:19Z 2017-11-09T07:56:13Z MEMBER   0 pydata/xarray/pulls/1700
  • [x] Closes #1694
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

This PR simply adds a particular test pointed out in #1694 .

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1700/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
271180559 MDExOlB1bGxSZXF1ZXN0MTUwNjcwMTI4 1693 Bugfix in broadcast_indexes fujiisoup 6815844 closed 0     8 2017-11-04T09:58:43Z 2017-11-07T20:41:53Z 2017-11-07T20:41:44Z MEMBER   0 pydata/xarray/pulls/1693
  • [x] Closes #1688, #1694
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixes #1688. It was caused by Variable._broadcast_indexes returning the wrong type of indexer. Now orthogonal indexing with LazilyIndexedArray is supported.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1693/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
265344609 MDExOlB1bGxSZXF1ZXN0MTQ2NDk4Mzg4 1632 Support autocompletion dictionary access in ipython. fujiisoup 6815844 closed 0     6 2017-10-13T16:19:35Z 2017-11-04T16:05:02Z 2017-10-22T17:49:21Z MEMBER   0 pydata/xarray/pulls/1632
  • [x] Closes #1628
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Support #1628.
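IPython completes keys inside `obj["` by calling the object's `_ipython_key_completions_` method. A minimal sketch of that protocol on a hypothetical container class (not the actual xarray code):

```python
class Container:
    """Minimal sketch of dict-style access with IPython key completion."""

    def __init__(self, variables):
        self._variables = dict(variables)

    def __getitem__(self, key):
        return self._variables[key]

    def _ipython_key_completions_(self):
        # IPython calls this method to offer completions after `obj["`.
        return list(self._variables)


ds = Container({"temperature": 21.5, "pressure": 1013.0})
```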

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1632/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
253277979 MDExOlB1bGxSZXF1ZXN0MTM3OTA3NDIx 1530 Deprecate old pandas support fujiisoup 6815844 closed 0   0.10 2415632 1 2017-08-28T09:40:02Z 2017-11-04T09:51:51Z 2017-08-31T17:25:10Z MEMBER   0 pydata/xarray/pulls/1530
  • [x] Closes #1512
  • [x] Tests passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Explicitly deprecated support for old pandas (< 0.18) and old numpy (< 1.11). Some backported functions in npcompat were removed because numpy 1.11 already provides them.
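As an illustration only (hypothetical helper names, not the actual xarray code), a minimal version gate of the kind such a deprecation introduces:

```python
def parse_version(version):
    """Parse an 'X.Y.Z' version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split(".")[:3])


def check_minimum_versions(pandas_version, numpy_version):
    """Raise ImportError for pandas < 0.18 or numpy < 1.11."""
    if parse_version(pandas_version) < (0, 18):
        raise ImportError("xarray requires pandas >= 0.18")
    if parse_version(numpy_version) < (1, 11):
        raise ImportError("xarray requires numpy >= 1.11")


check_minimum_versions("0.20.3", "1.13.1")  # recent enough: no error
```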

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1530/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
256261536 MDExOlB1bGxSZXF1ZXN0MTQwMDQzMjAx 1564 Uint support in reduce methods with skipna fujiisoup 6815844 closed 0     3 2017-09-08T13:54:54Z 2017-11-04T09:51:49Z 2017-09-08T16:12:23Z MEMBER   0 pydata/xarray/pulls/1564
  • [x] Closes #1562
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixes #1562

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1564/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
260619568 MDExOlB1bGxSZXF1ZXN0MTQzMTMxNjcz 1594 Remove unused version check for pandas. fujiisoup 6815844 closed 0     2 2017-09-26T13:16:42Z 2017-11-04T09:51:45Z 2017-09-27T02:10:58Z MEMBER   0 pydata/xarray/pulls/1594
  • [x] Closes #1593
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [n.a.] Fully documented, including whats-new.rst for all changes and api.rst for new API

Currently some tests fail due to the dask bug reported in #1591.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1594/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
271180056 MDExOlB1bGxSZXF1ZXN0MTUwNjY5ODY1 1692 Bugfix in broadcast indexes fujiisoup 6815844 closed 0     1 2017-11-04T09:49:11Z 2017-11-04T09:51:37Z 2017-11-04T09:49:22Z MEMBER   0 pydata/xarray/pulls/1692
  • [x] Closes #1688
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixes #1688. The bug was caused by Variable._broadcast_indexes returning the wrong type of Indexer. Orthogonal indexing with LazilyIndexedArray is now supported.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1692/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
270152596 MDExOlB1bGxSZXF1ZXN0MTQ5OTMzMzI1 1677 Removed `.T` from __dir__ explicitly fujiisoup 6815844 closed 0     1 2017-10-31T23:43:42Z 2017-11-04T09:51:21Z 2017-11-01T00:48:42Z MEMBER   0 pydata/xarray/pulls/1677
  • [x] Closes #1675
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Removed .T from xr.Dataset.__dir__ to suppress a deprecation warning during IPython autocompletion.
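A minimal sketch (hypothetical class, not the actual xarray code) of hiding a deprecated attribute from tab completion by overriding __dir__:

```python
class Dataset:
    """Sketch: hide a deprecated attribute from tab completion."""

    @property
    def T(self):
        # Deprecated alias kept for backward compatibility.
        return self

    def __dir__(self):
        # dir() drives attribute completion in IPython; dropping "T" here
        # keeps the completer from touching the deprecated property.
        return [name for name in super().__dir__() if name != "T"]


ds = Dataset()
```

The attribute itself still works; it is merely invisible to `dir()` and therefore to the completer.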

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1677/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
267019149 MDExOlB1bGxSZXF1ZXN0MTQ3Njg4MzE5 1639 indexing with broadcasting fujiisoup 6815844 closed 0     2 2017-10-19T23:22:14Z 2017-11-04T08:29:55Z 2017-10-19T23:52:50Z MEMBER   0 pydata/xarray/pulls/1639
  • [x] Closes #1444, #1436
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

This is a duplicate of #1473 originally opened by @shoyer

Thanks, @shoyer, for giving me github's credit.

I enjoyed this PR. I really appreciate your help to finish up this PR.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1639/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
208713614 MDExOlB1bGxSZXF1ZXN0MTA2ODk5MDM1 1277 Restored dim order in DataArray.rolling().reduce() fujiisoup 6815844 closed 0     5 2017-02-19T12:14:55Z 2017-07-09T23:53:15Z 2017-02-27T17:11:02Z MEMBER   0 pydata/xarray/pulls/1277
  • [x] closes #1125
  • [x] tests added / passed
  • [x] passes git diff upstream/master | flake8 --diff
  • [x] whatsnew

Added 1 line to fix #1125.

I hope this is enough. If anything else is needed, please let me know.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1277/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
211323643 MDExOlB1bGxSZXF1ZXN0MTA4NzEwNjg5 1289 Added a support for Dataset.rolling. fujiisoup 6815844 closed 0     9 2017-03-02T08:40:03Z 2017-07-09T23:53:13Z 2017-03-31T03:10:45Z MEMBER   0 pydata/xarray/pulls/1289
  • [x] closes #859
  • [x] tests added / passed
  • [x] passes git diff upstream/master | flake8 --diff
  • [x] whatsnew entry

There seem to be two approaches to implementing Dataset.rolling: 1. Apply rolling to each DataArray and then combine the results. 2. Apply rolling to the Dataset directly, keeping aside any DataArrays that do not depend on dim, then merge them back later.

I chose the latter approach to reuse the existing Rolling object as much as possible, but it results in some duplication in ImplementsRollingDatasetReduce. Any feedback and comments are very welcome.
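The second approach can be sketched in plain Python (hypothetical helper names and data layout, not xarray's implementation): variables that do not depend on the rolling dimension pass through unchanged, while the rest get the rolling reduction applied and everything is merged back into one result:

```python
def rolling_sum(values, window):
    """Rolling sum over a 1-D list; positions with too few values get None."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]))
    return out


def dataset_rolling_sum(variables, dim, window):
    """Apply a rolling sum only to variables that depend on `dim`.

    `variables` maps name -> (dims, values), loosely mimicking data_vars.
    """
    result = {}
    for name, (dims, values) in variables.items():
        if dim in dims:
            result[name] = (dims, rolling_sum(values, window))
        else:
            result[name] = (dims, values)  # kept aside unchanged
    return result


ds = {
    "a": (("x",), [1, 2, 3, 4]),
    "b": (("y",), [10, 20]),  # does not depend on "x"
}
result = dataset_rolling_sum(ds, "x", window=2)
```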

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1289/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
226778103 MDExOlB1bGxSZXF1ZXN0MTE5MzAzNzk3 1400 Patch isel points fujiisoup 6815844 closed 0     1 2017-05-06T14:59:51Z 2017-07-09T23:53:06Z 2017-05-09T02:31:52Z MEMBER   0 pydata/xarray/pulls/1400
  • [x] closes #1337
  • [x] tests added / passed
  • [x] passes git diff upstream/master | flake8 --diff
  • [x] whatsnew entry

A small fix for the bug reported in #1337, where unselected coords were wrongly assigned as data_vars by sel_points. I hope I did not forget anything.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1400/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
220520879 MDExOlB1bGxSZXF1ZXN0MTE1MDE0NTkw 1364 Fix a typo fujiisoup 6815844 closed 0     4 2017-04-10T02:14:56Z 2017-07-09T23:53:03Z 2017-04-10T02:24:00Z MEMBER   0 pydata/xarray/pulls/1364
  • [x] closes #1363

Fixes typos in reshaping.rst. Is there a good way to check the docs before merging?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1364/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
229370997 MDExOlB1bGxSZXF1ZXN0MTIxMDcxMTA3 1412 Multiindex scalar coords, fixes #1408 fujiisoup 6815844 closed 0     9 2017-05-17T14:25:50Z 2017-05-25T11:04:55Z 2017-05-25T11:04:55Z MEMBER   0 pydata/xarray/pulls/1412
  • [x] Closes #1408
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

This modification fixes #1408 and works, but I am not fully satisfied with it yet: there are if statements in many places.

The major changes I made are: 1. variable.__getitem__ now returns an OrderedDict if a single element is selected from a MultiIndex. 2. indexing.remap_level_indexers also returns selected_dims, a mapping from each original dimension to the selected dims that will become scalar coordinates.

Change 1 keeps level coordinates even after ds.isel(yx=0). Change 2 makes it possible to track which levels are selected, so that the selected levels can be converted to scalar coordinates.
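Change 1 can be illustrated with a pure-Python sketch (hypothetical helper, not the actual xarray code): selecting a single element from a multi-index yields a mapping from level names to the selected level values, so the levels survive as scalar coordinates instead of being dropped:

```python
from collections import OrderedDict


def select_from_multiindex(level_names, index_tuples, position):
    """Return the level values at `position` as an OrderedDict of scalars."""
    return OrderedDict(zip(level_names, index_tuples[position]))


yx = [("a", 0), ("a", 1), ("b", 0)]  # tuples of (y, x) level values
selected = select_from_multiindex(("y", "x"), yx, 0)
```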

I suspect a much smarter solution exists. I would be happy if anyone could give me a comment.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1412/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
218745734 MDExOlB1bGxSZXF1ZXN0MTEzODA3NDE4 1347 Support for DataArray.expand_dims() fujiisoup 6815844 closed 0     9 2017-04-02T06:36:37Z 2017-04-10T02:05:38Z 2017-04-10T01:01:54Z MEMBER   0 pydata/xarray/pulls/1347
  • [x] closes #1326
  • [x] tests added / passed
  • [x] passes git diff upstream/master | flake8 --diff
  • [x] whatsnew entry

I added a DataArray method, expand_dims, based on the discussion in #1326. The proposed API is similar to numpy.expand_dims and slightly different from Variable.expand_dims, which requires the whole sequence of dims of the resulting array.
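The numpy-style API takes the position of the new axis rather than the full dims sequence of the result. A pure-Python sketch of the equivalent reshape on nested lists (an illustration only, not the PR's implementation):

```python
def expand_dims(data, axis):
    """Insert a new length-1 axis at position `axis` of a nested-list array."""
    if axis == 0:
        return [data]
    return [expand_dims(sub, axis - 1) for sub in data]


vec = [1, 2, 3]
row = expand_dims(vec, 0)  # shape (1, 3)
col = expand_dims(vec, 1)  # shape (3, 1)
```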

My concern is that I do not yet fully understand the lazy data manipulation in xarray. Does Variable.expand_dims do it?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1347/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);