home / github

Menu
  • GraphQL API
  • Search all tables

pull_requests

Table actions
  • GraphQL API for pull_requests

78 rows where user = 6815844

✎ View and edit SQL

This data as json, CSV (advanced)

Suggested facets: state, created_at (date), updated_at (date), closed_at (date), merged_at (date)

id ▼ node_id number state locked title user body created_at updated_at closed_at merged_at merge_commit_sha assignee milestone draft head base author_association auto_merge repo url merged_by
106899035 MDExOlB1bGxSZXF1ZXN0MTA2ODk5MDM1 1277 closed 0 Restored dim order in DataArray.rolling().reduce() fujiisoup 6815844 - [x] closes #1125 - [x] tests added / passed - [x] passes ``git diff upstream/master | flake8 --diff`` - [x] whatsnew Added 1 line to fix #1125. I hope this is enough. If another care is necessary, please let me know. 2017-02-19T12:14:55Z 2017-07-09T23:53:15Z 2017-02-27T17:11:02Z 2017-02-27T17:11:02Z 5e50c0dc4d0e8238437963cd79d31daaddd41cd8     0 8aa40159f34464fc561bcd189f0f7c418fdabba0 1cafb14cb4726da14abfb8976d22e6e2b5f3ae24 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1277  
108710689 MDExOlB1bGxSZXF1ZXN0MTA4NzEwNjg5 1289 closed 0 Added a support for Dataset.rolling. fujiisoup 6815844 - [x] closes #859 - [x] tests added / passed - [x] passes ``git diff upstream/master | flake8 --diff`` - [x] whatsnew entry There seems to be two approaches to realize Dataset.rolling, 1. Apply rolling in each DataArrays and then combine them. 2. Apply Dataset directoly with some DataArrays that do not depend on `dim` kept aside, then merge them later. I chose the latter approach to reuse existing `Rolling` object as much as possible, but it results in some duplicates in `ImplementsRollingDatasetReduce`. Any feedbacks and comments are very welcome. 2017-03-02T08:40:03Z 2017-07-09T23:53:13Z 2017-03-31T03:10:45Z 2017-03-31T03:10:45Z 09ef2c280677c45593d4f93a67962afc42abacf1     0 37c58f4c84f0d8743e3e175d0d2ca982bedb4425 371d034372bc7522098a142a0debf93916c49102 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1289  
113807418 MDExOlB1bGxSZXF1ZXN0MTEzODA3NDE4 1347 closed 0 Support for DataArray.expand_dims() fujiisoup 6815844 - [x] closes #1326 - [x] tests added / passed - [x] passes ``git diff upstream/master | flake8 --diff`` - [x] whatsnew entry I added a DataArray's method `expand_dims` based on the discussion in #1326 . The proposed API is similar to `numpy.expand_dims` and slightly different from `Variables.expand_dims`, which requires whole sequences of `dims` of the result array. My concern is that I do not yet fully understand the lazy data manipulation in xarray. Does Variable.expand_dims do it? 2017-04-02T06:36:37Z 2017-04-10T02:05:38Z 2017-04-10T01:01:54Z 2017-04-10T01:01:54Z 444fce8a7ae26546e283a6876f003aafb84b7552     0 dd9f573112f376ad5ff061756c2fa599058899d9 79b61ccdd0c9c1822fbec52d1dc488a4dfd0c8af MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1347  
115014590 MDExOlB1bGxSZXF1ZXN0MTE1MDE0NTkw 1364 closed 0 Fix a typo fujiisoup 6815844 - [x] closes #1363 Fixes typos in reshaping.rst. Is there a good way to check docs before merge? 2017-04-10T02:14:56Z 2017-07-09T23:53:03Z 2017-04-10T02:24:00Z 2017-04-10T02:24:00Z f87bb0beadd937e3e9657e6d686a20b2bb288d2b     0 e65faf553d6c2a61d847aade9f4399eb536734ae 444fce8a7ae26546e283a6876f003aafb84b7552 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1364  
119303797 MDExOlB1bGxSZXF1ZXN0MTE5MzAzNzk3 1400 closed 0 Patch isel points fujiisoup 6815844 - [x] closes #1337 - [x] tests added / passed - [x] passes ``git diff upstream/master | flake8 --diff`` - [x] whatsnew entry A small fix for the bug reported in #1337, where unselected coords were wrongly assigned as `data_vars` by `sel_points`. I hope I did not forget anything. 2017-05-06T14:59:51Z 2017-07-09T23:53:06Z 2017-05-09T02:31:52Z 2017-05-09T02:31:52Z e1982faf8e906ccdcb16b07462ffa77fd13bf69c     0 a5e9e62125f2681d668fab1a6b1d420481b6109e a9a12b0aca862d5ab19180594f616b8efab13308 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1400  
121071107 MDExOlB1bGxSZXF1ZXN0MTIxMDcxMTA3 1412 closed 0 Multiindex scalar coords, fixes #1408 fujiisoup 6815844 - [x] Closes #1408 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API To fix #1408, This modification works, but actually I do not fully satisfied yet. There are `if` statements in many places. The major changes I made are 1. `variable.__getitem__` now returns an OrderedDict if a single element is selected from MultiIndex. 2. `indexing.remap_level_indexers` also returns `selected_dims` which is a map from the original dimension to the selected dims which will be a scalar coordinate. Change 1 keeps level-coordinates even after `ds.isel(yx=0)`. Change 2 enables to track which levels are selected, then the selected levels are changed to a scalar coordinate. I guess much smarter solution should exist. I would be happy if anyone gives me a comment. 2017-05-17T14:25:50Z 2017-05-25T11:04:55Z 2017-05-25T11:04:55Z   aace43cba61ca4f45e2ec4e53571d604f77dd0a1     0 185abd0ed70996fafea5ad23f36d867703b81203 d5c7e0612e8243c0a716460da0b74315f719f2df MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1412  
122418207 MDExOlB1bGxSZXF1ZXN0MTIyNDE4MjA3 1426 closed 0 scalar_level in MultiIndex fujiisoup 6815844 - [x] Closes #1408 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API [Edit for more clarity] I restarted a new branch to fix #1408 (I closed the older one #1412). ![my_proposal](https://cloud.githubusercontent.com/assets/6815844/26553065/f6562366-44c4-11e7-8c9c-3ef7facfe056.png) Because the changes I made is relatively large, here I summarize this PR. # Sumamry In this PR, I newly added two kinds of levels in MultiIndex, `index-level` and `scalar-level`. `index-level` is an ordinary level in MultiIndex (as in current implementation), while `scalar-level` indicates dropped level (which is newly added in this PR). # Changes in behaviors. 1. Indexing a scalar at a particular level changes that level to `scalar-level` instead of dropping that level (changed from #767). 2. Indexing a scalar from a MultiIndex, the selected value now becomes a `MultiIndex-scalar` rather than a scalar of tuple. 3. Enabled indexing along a `index-level` if the MultiIndex has only a single `index-level`. Examples of the output are shown below. Any suggestions for these behaviors are welcome. ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'}) ...: ds2 = xr.Dataset({'foo': (('x',), [4, 5, 6])}, {'x': [1, 2, 3], 'y': 'b'}) ...: # example data ...: ds = xr.concat([ds1, ds2], dim='y').stack(yx=['y', 'x']) ...: ds Out[1]: <xarray.Dataset> Dimensions: (yx: 6) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'a' 'a' 'b' 'b' 'b' # <--- this is index-level - x (yx) int64 1 2 3 1 2 3 # <--- this is also index-level Data variables: foo (yx) int64 1 2 3 4 5 6 In [2]: # 1. indexing a scalar converts `index-level` x to `scalar-level`. ...: ds.sel(x=1) Out[2]: <xarray.Dataset> Dimensions: (yx: … 2017-05-25T11:03:05Z 2019-01-14T21:20:28Z 2019-01-14T21:20:27Z   5821b1de3713a3513bdce890e77999fd4c4b0688     0 38dbbbca748b0f22d1c49d63e5e5524ac093295f bb87a9441d22b390e069d0fde58f297a054fd98a MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1426  
128471998 MDExOlB1bGxSZXF1ZXN0MTI4NDcxOTk4 1469 closed 0 Argmin indexes fujiisoup 6815844 - [x] Closes #1388 - [x] Tests added / passed - [x] Passes ``git diff master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API With this PR, ValueError raises if `argmin()` is called by a multi-dimensional array. `argmin_indexes()` method is also added for `xr.DataArray`. Current API design for `argmin_indexes()` returns the argmin-indexes as an `OrderedDict` of `DataArray`s. Example: ```python In [1]: import xarray as xr ...: da = xr.DataArray([[1, 2], [-1, 40], [5, 6]], ...: [('x', ['c', 'b', 'a']), ('y', [1, 0])]) ...: ...: da.argmin_indexes() ...: Out[1]: OrderedDict([('x', <xarray.DataArray 'x' ()> array(1)), ('y', <xarray.DataArray 'y' ()> array(0))]) In [2]: da.argmin_indexes(dims='y') Out[2]: OrderedDict([('y', <xarray.DataArray 'y' (x: 3)> array([0, 0, 0]) Coordinates: * x (x) <U1 'c' 'b' 'a')]) ``` (Because the returned object is an `OrderedDict`, it is not beautifully printed. The returned type can be a `xr.Dataset` if we want.) Although in #1388 `argmin_indexes()` was originally suggested so that we can pass the result into `isel_point`, ```python da.isel_points(**da.argmin_indexes()) ``` current implementation of `isel_points` does **NOT** work for this case. This is mainly because 1. `isel_points` currently does not work for 0-dimensional or multi-dimensional input. 2. Even for 1-dimensional input (the second one in the above examples), we should also pass `x` as an indexer rather than the coordinate of indexer. For 1, I have prepared modification of `isel_points` to accept multi-dimensional arrays, but I guess it should be in another PR after the API decision. (It is related in #475, and #974.) For 2, we should either + change API of `argmin_indexes` to return not only the indicated dimension but also all the dimensions, like ```python In [2]: da.argmin_i… 2017-07-01T01:23:31Z 2020-06-29T19:36:25Z 2020-06-29T19:36:25Z   ea61e19d4afcb3988eecbafdad28e1320995ce2c     0 81c61b733ea8892464c69b5c75aabb57b5e60989 bb87a9441d22b390e069d0fde58f297a054fd98a MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1469  
137907421 MDExOlB1bGxSZXF1ZXN0MTM3OTA3NDIx 1530 closed 0 Deprecate old pandas support fujiisoup 6815844 - [x] Closes #1512 - [x] Tests passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Explicitly deprecated old pandas (< 0.18) and old numpy (< 1.11) supports. Some backported functions in `npcompat` are removed because numpy == 1.11 already has them. 2017-08-28T09:40:02Z 2017-11-04T09:51:51Z 2017-08-31T17:25:10Z 2017-08-31T17:25:10Z 0b2424a1813bf1af712780c360a94a5588523adf   0.10 2415632 0 da5c16e98193addd1d856e6772b0f521a66ef209 b190501a011f3427ae6a3220d72a8d972cb7c203 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1530  
140043201 MDExOlB1bGxSZXF1ZXN0MTQwMDQzMjAx 1564 closed 0 Uint support in reduce methods with skipna fujiisoup 6815844 - [x] Closes #1562 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fixes #1562 2017-09-08T13:54:54Z 2017-11-04T09:51:49Z 2017-09-08T16:12:23Z 2017-09-08T16:12:23Z a993317be46e6cba96424faa9fbcc54d3753d571     0 f15ae04686f592038b1a8672b403181ae5758595 3a81942eb0cc38129208a52c391f7150af6f2538 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1564  
143131673 MDExOlB1bGxSZXF1ZXN0MTQzMTMxNjcz 1594 closed 0 Remove unused version check for pandas. fujiisoup 6815844 - [x] Closes #1593 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [n.a.] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Currently some tests fail due to dask bug in #1591 2017-09-26T13:16:42Z 2017-11-04T09:51:45Z 2017-09-27T02:10:58Z 2017-09-27T02:10:58Z 25d1855e737444c156f50d1f37a67d9674a8bac5     0 a4625c678913a3908a841f2774202037bd16a73d 3a91442afe2a805b6aea5a3b9be3f72eb7245354 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1594  
146498388 MDExOlB1bGxSZXF1ZXN0MTQ2NDk4Mzg4 1632 closed 0 Support autocompletion dictionary access in ipython. fujiisoup 6815844 - [x] Closes #1628 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Support #1628. 2017-10-13T16:19:35Z 2017-11-04T16:05:02Z 2017-10-22T17:49:21Z 2017-10-22T17:49:21Z 9763a66e0e4675e7adc3fff3830c62f0e31a2bb3     0 93130463edc5e27e0b63bad4fa8e1fbc69ac7f6d 2949558b75a65404a500a237ec54834fd6946d07 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1632  
147688319 MDExOlB1bGxSZXF1ZXN0MTQ3Njg4MzE5 1639 closed 0 indexing with broadcasting fujiisoup 6815844 - [x] Closes #1444, #1436 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This is a duplicate of #1473 originally opened by @shoyer Thanks, @shoyer, for giving me github's credit. I enjoyed this PR. I really appreciate your help to finish up this PR. 2017-10-19T23:22:14Z 2017-11-04T08:29:55Z 2017-10-19T23:52:50Z 2017-10-19T23:52:50Z 9a0c744c8015345a6e892039d73eff40119bb66b     0 170abc515bfc7112c212032ab8cecd50804acdb6 4c3c3328a7ea8269e1411c5119dd0b3d4d972cc4 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1639  
149814156 MDExOlB1bGxSZXF1ZXN0MTQ5ODE0MTU2 1676 closed 0 Support orthogonal indexing in MemoryCachedArray (Fix for #1429) fujiisoup 6815844 - [x] Closes #1429 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This bug originates from the complicated structure around the array wrappers and their indexing, i.e. different array wrappers support different indexing types, and moreover, some can store another array wrapper in it. I made some cleanups. + Now every array wrapper is a subclass of `NDArrayIndexable` + Every array wrapper should implement its own `__getitem__` or just store another `NDArrayIndexable`. I think I added enough test for it, but I am not yet fully accustomed with xarray's backend. There might be many combinations of their hierarchical relation. I would appreciate any comments. 2017-10-31T15:10:59Z 2017-11-09T13:47:38Z 2017-11-06T17:21:56Z 2017-11-06T17:21:55Z 2a1d3928a0aa0e66fe0a2211a6c9f1d079404dff     0 7bba356573e778692b397f0d0a095fcc04a40819 acae757d869af776a4b2bd980fb77a1873f4c510 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1676  
149933325 MDExOlB1bGxSZXF1ZXN0MTQ5OTMzMzI1 1677 closed 0 Removed `.T` from __dir__ explicitly fujiisoup 6815844 - [x] Closes #1675 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Remved `T` from `xr.Dataset.__dir__` to suppress a deprecation warning in Ipython autocompletion. 2017-10-31T23:43:42Z 2017-11-04T09:51:21Z 2017-11-01T00:48:42Z 2017-11-01T00:48:42Z f83361c76b6aa8cdba8923080bb6b98560cf3a96     0 41755910dead906ab0011b298bb56e8945e045ef 17956ea5de2cf5029992e8f83460fcc878e3d024 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1677  
150669865 MDExOlB1bGxSZXF1ZXN0MTUwNjY5ODY1 1692 closed 0 Bugfix in broadcast indexes fujiisoup 6815844 - [x] Closes #1688 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fixes #1688. It is caused that `Variable._broadcast_indexes` returns a wrong type of `Indexer`. Now it supports the orthogonal-indexing with `LazilyIndexedArray`. 2017-11-04T09:49:11Z 2017-11-04T09:51:37Z 2017-11-04T09:49:22Z   fa2d863c009507f58ec608091e1e68e0ceb9c961     0 6615838eef90f1bf9bd46976842fab37c68bf942 f83361c76b6aa8cdba8923080bb6b98560cf3a96 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1692  
150670128 MDExOlB1bGxSZXF1ZXN0MTUwNjcwMTI4 1693 closed 0 Bugfix in broadcast_indexes fujiisoup 6815844 - [x] Closes #1688, #1694 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fixes #1688. It is caused that `Variable._broadcast_indexes` returns a wrong type of `Indexer`. Now it supports the orthogonal-indexing with `LazilyIndexedArray`. 2017-11-04T09:58:43Z 2017-11-07T20:41:53Z 2017-11-07T20:41:44Z 2017-11-07T20:41:44Z fb6e13ec15e85bbeceedbcd754e063f6e5696bf7     0 307c84ad3c7c3b1bf52d246061e5cc06f3e9a97e 2a1d3928a0aa0e66fe0a2211a6c9f1d079404dff MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1693  
151356843 MDExOlB1bGxSZXF1ZXN0MTUxMzU2ODQz 1700 closed 0 Add dropna test. fujiisoup 6815844 - [x] Closes #1694 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This PR simply adds a particular test pointed out in #1694 . 2017-11-08T11:25:18Z 2017-11-09T07:56:19Z 2017-11-09T07:56:13Z 2017-11-09T07:56:13Z dbf7b01cb4a4d9fb00882e0457523e4bb806820c     0 eb2dd717cea9fb35bee72408c144c23c13b96884 fb6e13ec15e85bbeceedbcd754e063f6e5696bf7 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1700  
153223232 MDExOlB1bGxSZXF1ZXN0MTUzMjIzMjMy 1724 closed 0 Fix unexpected loading after ``print`` fujiisoup 6815844 - [x] Closes #1720 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Only a single missing underscore causes this issue :) Added tests. 2017-11-17T06:20:28Z 2017-11-17T16:44:40Z 2017-11-17T16:44:40Z 2017-11-17T16:44:40Z 6463504ae7c6fd0c2250237a2a74baf1b707723a     0 0fb6a01376abd84ad9e9b6802f980a4e4013a53c 1a012080e0910f3295d0fc26806ae18885f56751 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1724  
155221247 MDExOlB1bGxSZXF1ZXN0MTU1MjIxMjQ3 1746 closed 0 Fix in vectorized item assignment fujiisoup 6815844 - [x] Closes #1743, #1744 - [x] Tests added / passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Found bugs in `nputils.NumpyVindexAdapter.__setitem__` and `DataArray.__setitem__`. I will add more tests later. Test case suggestions would be appreciated. 2017-11-29T00:37:41Z 2017-12-09T03:29:35Z 2017-12-09T03:29:35Z 2017-12-09T03:29:35Z 5e801894886b2060efa8b28798780a91561a29fd     0 6906eebfc7645d06ee807773f5df9215634addef 4b8339b53f1b9dcd79f2a9060933713328a13b90 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1746  
157856511 MDExOlB1bGxSZXF1ZXN0MTU3ODU2NTEx 1776 closed 0 [WIP] Fix pydap array wrapper fujiisoup 6815844 - [x] Closes #1775 (remove if there is no corresponding issue, which should only be the case for minor changes) - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` (remove if you did not edit any Python files) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I am trying to fix #1775, but tests are still failing. Any help would be appreciated. 2017-12-12T15:22:07Z 2019-09-25T15:44:19Z 2018-01-09T01:48:13Z 2018-01-09T01:48:13Z ab0db05a58fd47fe895d1a85c09c37d96263d3b7   0.10.3 3008859 0 e27c043adb96e027ac51e9d1abdf88e20db8dd7b c368ee734945bbc736c33463ea561311bbdc1e9b MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1776  
163657424 MDExOlB1bGxSZXF1ZXN0MTYzNjU3NDI0 1837 closed 0 Rolling window with `as_strided` fujiisoup 6815844 - [x] Closes #1831, #1142, #819 - [x] Tests added - [x] Tests passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I started to work for refactoring rollings. As suggested in [#1831 comment](https://github.com/pydata/xarray/issues/1831#issuecomment-357828636), I implemented `rolling_window` methods based on `as_strided`. I got more than 1,000 times speed up! yey! ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray(np.random.randn(10000, 3), dims=['x', 'y']) ``` with the master ```python %timeit da.rolling(x=5).reduce(np.mean) 1 loop, best of 3: 9.68 s per loop ``` with the current implementation ```python %timeit da.rolling(x=5).reduce(np.mean) 100 loops, best of 3: 5.29 ms per loop ``` and with the bottleneck ```python %timeit da.rolling(x=5).mean() 100 loops, best of 3: 2.62 ms per loop ``` My current concerns are + Can we expose the new `rolling_window` method of `DataArray` and `Dataset` to the public? I think this method itself is useful for many usecases, such as short-term-FFT and convolution. This also gives more flexible rolling operation, such as windowed moving average, strided rolling, and ND-rolling. + Is there any dask's equivalence to numpy's `as_strided`? Currently, I just use a slice->concatenate path, but I don't think it is very efficient. (Is it already efficient, as dask utilizes out-of-core computation?) Any thoughts are welcome. 2018-01-18T09:18:19Z 2018-06-22T22:27:11Z 2018-03-01T03:39:19Z 2018-03-01T03:39:19Z dc3eebf3a514cfdc1039b63f2a542121d1328ba9     0 aeabdf5fc7ead2f2ae24b59045cc987f6feb5033 f3bbb3ef6badcfe5d1f3b77c231846f0e79a93ea MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1837  
163896260 MDExOlB1bGxSZXF1ZXN0MTYzODk2MjYw 1841 closed 0 Add dtype support for reduce methods. fujiisoup 6815844 - [x] Closes #1838changes) - [x] Tests added - [x] Tests passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fixes #1838. The new rule for reduce is + If dtype is not None and different from array's dtype, use numpy's aggregation function instead of bottleneck's. + If out is not None, raise an error. as suggested in [this comments](https://github.com/pydata/xarray/issues/1838#issuecomment-358851474). 2018-01-19T06:40:41Z 2018-01-20T18:29:02Z 2018-01-20T18:29:02Z 2018-01-20T18:29:02Z 3bd704a4815ad2281e61eedcee3c7935789d410b     0 0da3a635e2996098dbb35969001c6033a11b26a8 74d8318c68be884134d449afad18dfe731d48b72 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1841  
164452988 MDExOlB1bGxSZXF1ZXN0MTY0NDUyOTg4 1851 closed 0 Indexing benchmarking fujiisoup 6815844 - [x] Relates to #1771 Just added some benchmarks for basic, outer, and vectorized indexing and assignments. 2018-01-23T00:27:29Z 2018-01-24T08:10:19Z 2018-01-24T08:10:19Z 2018-01-24T08:10:19Z 04974b99113d3f449c5592abc01a5701ba2382e4     0 57da6e5c3da679ca7528ec29b570122d20e0727e e31cf43e8d183c63474b2898a0776fda72abc82c MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1851  
165099396 MDExOlB1bGxSZXF1ZXN0MTY1MDk5Mzk2 1858 closed 0 Adding a link to asv benchmark. fujiisoup 6815844 As discussed in #1851, I added a link in doc/installing.rst and a badge on README. 2018-01-25T11:56:56Z 2018-01-25T21:55:24Z 2018-01-25T17:46:12Z 2018-01-25T17:46:12Z 009291139fde0c859ee565141cdb3b6a3d28cba0     0 416ddfacdca2e1946823a4292db41e1d4f2c1aec 0a0593d78fad6c0b776d4c3c6b32a24b2bdfba35 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1858  
166925359 MDExOlB1bGxSZXF1ZXN0MTY2OTI1MzU5 1883 closed 0 Support nan-ops for object-typed arrays fujiisoup 6815844 - [x] Closes #1866 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I am working to add aggregation ops for object-typed arrays, which may make #1837 cleaner. I added some tests but maybe not sufficient. Any other cases which should be considered? e.g. `[True, 3.0, np.nan]` etc... 2018-02-02T23:16:39Z 2018-02-15T22:03:06Z 2018-02-15T22:03:01Z 2018-02-15T22:03:01Z b6a0d60e720f5a19d6e00b11fc7f3d485e52a80c     0 e46d07de2dcaf7df1bf12e94c8ad70aa8a7cb10b 2aa5b8a5c094593569f5bd9ae220d1f2fc0ecda0 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1883  
168214895 MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1 1899 closed 0 Vectorized lazy indexing fujiisoup 6815844 - [x] Closes #1897 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I tried to support lazy vectorised indexing inspired by #1897. More tests would be necessary but I want to decide whether it is worth to continue. My current implementation is + For outer/basic indexers, we combine successive indexers (as we are doing now). + For vectorised indexers, we just store them as is and index sequentially when the evaluation. The implementation was simpler than I thought, but it has a clear limitation. It requires to load array before the vectorised indexing (I mean, the evaluation time). If we make a vectorised indexing for a large array, the performance significantly drops and it is not noticeable until the evaluation time. I appreciate any suggestions. 2018-02-09T11:22:01Z 2018-06-08T01:21:06Z 2018-03-06T22:00:57Z 2018-03-06T22:00:57Z 54468e1924174a03e7ead3be8545f687f084f4dd     0 8e967105194d7b4208bcac22127cd0cb01a7a484 dc3eebf3a514cfdc1039b63f2a542121d1328ba9 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1899  
169631557 MDExOlB1bGxSZXF1ZXN0MTY5NjMxNTU3 1919 closed 0 Remove flake8 from travis fujiisoup 6815844 - [x] Closes #1912 The removal of flake8 from travis would increase the clearer separation between style-issue and test failure. 2018-02-16T14:03:46Z 2018-05-01T07:24:04Z 2018-05-01T07:24:00Z 2018-05-01T07:24:00Z 39b2a37207fc8e6c5199ba9386831ba7eb06d82b     0 04c55cd0a92ae8d274fe4d60f41389ec8e91642e d191352b6c1e15a2b6105b4b76552fe974231396 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1919  
169812105 MDExOlB1bGxSZXF1ZXN0MTY5ODEyMTA1 1922 closed 0 Support indexing with 0d-np.ndarray fujiisoup 6815844 - [x] Closes #1921 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) Now Variable accepts 0d-np.ndarray indexer. 2018-02-18T02:46:27Z 2018-02-18T07:26:33Z 2018-02-18T07:26:30Z 2018-02-18T07:26:30Z 2ff7b4c4e394bfe73445f8cf471f0df8b79417bf     0 ebfc09681441da6b6278a50f5db91350a8308859 e0621c7d66c13b486b1890f67a126caec2990da7 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1922  
171396650 MDExOlB1bGxSZXF1ZXN0MTcxMzk2NjUw 1942 closed 0 Fix precision drop when indexing a datetime64 arrays. fujiisoup 6815844 - [x] Closes #1932 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This precision drop was caused when converting `pd.Timestamp` to `np.array` ```python In [7]: ts = pd.Timestamp(np.datetime64('2018-02-12 06:59:59.999986560')) In [11]: np.asarray(ts, 'datetime64[ns]') Out[11]: array('2018-02-12T06:59:59.999986000', dtype='datetime64[ns]') ``` We need to call `to_datetime64` explicitly. 2018-02-26T14:53:57Z 2018-06-08T01:21:07Z 2018-02-27T01:13:45Z 2018-02-27T01:13:45Z d8ccc7a999dce1a9ac205452e327bab5aa5f99f0     0 f58aaa046d64315ae231fa77d7aa9e6713628742 f530e668fa50665245988be2a00748b9b3ccc0a8 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1942  
171557279 MDExOlB1bGxSZXF1ZXN0MTcxNTU3Mjc5 1943 closed 0 Fix rtd link on readme fujiisoup 6815844 Typo in url. 2018-02-27T03:52:56Z 2018-02-27T04:31:59Z 2018-02-27T04:27:24Z 2018-02-27T04:27:24Z 243093cf814ffaae2a9ce08215632500fbebcf52     0 d31024bb4b25b4ab581bae5718d7015b9686e74f d8ccc7a999dce1a9ac205452e327bab5aa5f99f0 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1943  
172394913 MDExOlB1bGxSZXF1ZXN0MTcyMzk0OTEz 1950 closed 0 Fix doc for missing values. fujiisoup 6815844 - [x] Closes #1944 2018-03-02T00:47:23Z 2018-03-03T06:58:33Z 2018-03-02T20:17:29Z 2018-03-02T20:17:29Z 350e97793f89ddd4097b97e0c4af735a5144be24     0 0ca721f51e812eacd04a166238e1a4d72979fd8c dc3eebf3a514cfdc1039b63f2a542121d1328ba9 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1950  
172670528 MDExOlB1bGxSZXF1ZXN0MTcyNjcwNTI4 1957 closed 0 Numpy 1.13 for rtd fujiisoup 6815844 - [x] Partly closes #1944 I noticed [this](https://github.com/pydata/xarray/pull/1950#issuecomment-370125253) is due to the use of old numpy on rtd. xref #1956 2018-03-03T14:51:21Z 2018-03-03T22:22:54Z 2018-03-03T22:22:49Z   7d63d9b43c9f4ebf02c3af846bd09a3150fdea73     0 b035efc79252545015a5b0be9ea9667d91c7a664 350e97793f89ddd4097b97e0c4af735a5144be24 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1957  
173170675 MDExOlB1bGxSZXF1ZXN0MTczMTcwNjc1 1968 closed 0 einsum for xarray fujiisoup 6815844 - [x] Closes #1951 - [x] Tests added - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) Currently, lazy-einsum for dask is not yet working. @shoyer I think `apply_ufunc` supports lazy computation, but I did not yet figure out how to do this. Can you give me a help? 2018-03-06T14:18:22Z 2018-03-12T06:42:12Z 2018-03-12T06:42:08Z 2018-03-12T06:42:08Z 8271dffc63ec2b12fa81b11381981c9f900449e7     0 2bd06ef56d6a4aca4fc742fd9a6ad85d9f3e25bd aa83d0ec5a0da9e8880d3194864ff212d5990d6b MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1968  
175403318 MDExOlB1bGxSZXF1ZXN0MTc1NDAzMzE4 1994 closed 0 Make constructing slices lazily. fujiisoup 6815844 - [x] Closes #1993 - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes. Quick fix of #1993. With this fix, the script shown in #1993 runs Bottleneck: 0.08317923545837402 s Pandas: 1.3338768482208252 s Xarray: 1.1349339485168457 s 2018-03-15T23:15:26Z 2018-03-18T08:56:31Z 2018-03-18T08:56:27Z 2018-03-18T08:56:27Z 1d0fbe6fe36d5e8a650d416cce85e7994b32e796     0 3f8aad371b8b06ebe2e620952954e6568b345fb2 e1dc51572e971567fd3562db0e9f662e3de80898 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/1994  
184495028 MDExOlB1bGxSZXF1ZXN0MTg0NDk1MDI4 2087 closed 0 Drop conflicted coordinate when assignment. fujiisoup 6815844 - [x] Closes #2068 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) After this, when assigning a dataarray to a dataset, non-dimensional and conflicted coordinates of the dataarray are dropped. example ``` In [2]: ds = xr.Dataset({'da': ('x', [0, 1, 2])}, ...: coords={'y': (('x',), [0.1, 0.2, 0.3])}) ...: ds ...: Out[2]: <xarray.Dataset> Dimensions: (x: 3) Coordinates: y (x) float64 0.1 0.2 0.3 Dimensions without coordinates: x Data variables: da (x) int64 0 1 2 In [3]: other = ds['da'] ...: other['y'] = 'x', [0, 1, 2] # conflicted non-dimensional coordinate ...: ds['da'] = other ...: ds ...: Out[3]: <xarray.Dataset> Dimensions: (x: 3) Coordinates: y (x) float64 0.1 0.2 0.3 # 'y' is not overwritten Dimensions without coordinates: x Data variables: da (x) int64 0 1 2 ``` 2018-04-27T00:12:43Z 2018-05-02T05:58:41Z 2018-05-02T02:31:02Z 2018-05-02T02:31:02Z 0cc64a08c672e6361d05acea3fea9f34308b62ed     0 6e26ad1df7ba1309cd547896b3c571e2dd5b2a40 d1e1440dc5d0bc9c341da20fde85b56f2a3c1b5b MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2087  
185343180 MDExOlB1bGxSZXF1ZXN0MTg1MzQzMTgw 2100 closed 0 Fix a bug introduced in #2087 fujiisoup 6815844 - [x] Closes #2099 - [x] Tests added - [x] Tests passed A quick fix for #2099 2018-05-02T06:07:01Z 2018-05-14T00:01:15Z 2018-05-02T21:59:34Z 2018-05-02T21:59:34Z b9f40cc1da9c45b3dd33a3434b69c3d8fce57138     0 3a830bf8aeb97a25c40517a54efa4ca66b7e42dd 0cc64a08c672e6361d05acea3fea9f34308b62ed MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2100  
185983977 MDExOlB1bGxSZXF1ZXN0MTg1OTgzOTc3 2104 closed 0 implement interp() fujiisoup 6815844 - [x] Closes #2079 (remove if there is no corresponding issue, which should only be the case for minor changes) - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I started working to add `interpolate_at` to xarray, as discussed in issue #2079 (but without caching). I think I need to take care of more edge cases, but before finishing up this PR, I want to discuss what the best API is. I would like to this method working similar to `isel`, which may support *vectorized* interpolation. Currently, this works as follwos ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]}) ...: In [2]: # simple linear interpolation ...: da.interpolate_at(x=[0.5, 1.5]) ...: Out[2]: <xarray.DataArray (x: 2)> array([0.05, 0.15]) Coordinates: * x (x) float64 0.5 1.5 In [3]: # with cubic spline interpolation ...: da.interpolate_at(x=[0.5, 1.5], method='cubic') ...: Out[3]: <xarray.DataArray (x: 2)> array([0.0375, 0.1625]) Coordinates: * x (x) float64 0.5 1.5 In [4]: # interpolation at one single position ...: da.interpolate_at(x=0.5) ...: Out[4]: <xarray.DataArray ()> array(0.05) Coordinates: x float64 0.5 In [5]: # interpolation with broadcasting ...: da.interpolate_at(x=xr.DataArray([[0.5, 1.0], [1.5, 2.0]], dims=['y', 'z'])) ...: Out[5]: <xarray.DataArray (y: 2, z: 2)> array([[0.05, 0.1 ], [0.15, 0.2 ]]) Coordinates: x (y, z) float64 0.5 1.0 1.5 2.0 Dimensions without coordinates: y, z In [6]: da = xr.DataArray([[0, 0.1, 0.2], [1.0, 1.1, 1.2]], ...: dims=… 2018-05-04T13:28:38Z 2018-06-11T13:01:21Z 2018-06-08T00:33:52Z 2018-06-08T00:33:52Z e39729928544204894e65c187d66c1a2b1900fea     0 60e2ca3b056a623b1e35042f7fc3d13668c11fa5 21a9f3d7e3a5dd729aeafd08dda966c365520965 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2104  
187477383 MDExOlB1bGxSZXF1ZXN0MTg3NDc3Mzgz 2119 closed 0 Support keep_attrs for apply_ufunc for xr.Variable fujiisoup 6815844 - [x] Closes #2114 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fixes 2114. 2018-05-11T14:18:51Z 2018-05-11T22:54:48Z 2018-05-11T22:54:44Z 2018-05-11T22:54:43Z d63001cdbc3bd84f4d6d90bd570a2215ea9e5c2e     0 6f9094f0741403c021774b605eadbc8315dc2630 6d8ac11ca0a785a6fe176eeca9b735c321a35527 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2119  
187600342 MDExOlB1bGxSZXF1ZXN0MTg3NjAwMzQy 2122 closed 0 Fixes centerized rolling with bottleneck fujiisoup 6815844 - [x] Closes #2113 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Two bugs were found and fixed. 1. rolling a dask-array with center=True and bottleneck 2. rolling an integer dask-array with bottleneck 2018-05-12T02:28:21Z 2018-05-13T00:27:56Z 2018-05-12T06:15:55Z 2018-05-12T06:15:55Z a52540505f606bd7619536d82d43f19f2cbe58b5     0 fc1c2f1987079c6e63fabd6b771693f7bd79894f d63001cdbc3bd84f4d6d90bd570a2215ea9e5c2e MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2122  
187657188 MDExOlB1bGxSZXF1ZXN0MTg3NjU3MTg4 2124 closed 0 Raise an Error if a coordinate with wrong size is assigned to a dataarray fujiisoup 6815844 - [x] Closes #2112 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API. Now uses `dataset_merge_method` when a new coordinate is assigned to a xr.DataArray 2018-05-13T07:50:15Z 2018-05-16T02:10:48Z 2018-05-15T16:39:22Z 2018-05-15T16:39:22Z 9a48157b525d9e346e73f358a99ceb52717fd3ea     0 923cd1363d424c1904fbe0b6deac051d81361551 ebe0dd03187a5c3138ea12ca4beb13643679fe21 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2124  
187657264 MDExOlB1bGxSZXF1ZXN0MTg3NjU3MjY0 2125 closed 0 Reduce pad size in rolling fujiisoup 6815844 - [ ] Closes #N.A. - [x] Tests added (for all bug fixes or enhancements) - [ ] Tests N.A. - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I noticed `rolling` with dask array and with bottleneck can be slightly improved by reducing the padding depth in `da.ghost.ghost(a, depth=depth, boundary=boundary)`. @jhamman , can you kindly review this? 2018-05-13T07:52:50Z 2018-05-14T22:43:24Z 2018-05-13T22:37:48Z 2018-05-13T22:37:48Z f861186cbd11bdbfb2aab8289118a59283a2d7af     0 a7adc7e20dd8909220a4bee79e15c7e1aeb95733 ebe0dd03187a5c3138ea12ca4beb13643679fe21 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2125  
190509999 MDExOlB1bGxSZXF1ZXN0MTkwNTA5OTk5 2185 closed 0 weighted rolling mean -> weighted rolling sum fujiisoup 6815844 An example of weighted rolling mean in doc is actually weighted rolling *sum*. It is a little bit misleading [SO](https://stackoverflow.com/questions/50520835/xarray-simple-weighted-rolling-mean-example-using-construct/50524093#50524093), so I propose to change `weighted rolling mean` -> `weighted rolling sum` 2018-05-25T08:03:59Z 2018-05-25T10:38:52Z 2018-05-25T10:38:48Z 2018-05-25T10:38:48Z 04df50efefecaea729133c14082eb5e24491633e     0 428b970fb28e11ebc17a1d1780a107307ab00daa b48e0969670f17857a314b5a755b1a1bf7ee38df MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2185  
191653297 MDExOlB1bGxSZXF1ZXN0MTkxNjUzMjk3 2205 closed 0 Support dot with older dask fujiisoup 6815844 - [x] Related with #2203 - [x] Tests added - [x] Tests passed - [x] Fully documented Related with #2203, I think it is better if `xr.DataArray.dot()` is working even with older dask, at least in the simpler case (as this is a very primary operation). The cost is a slight complication of the code. Any comments are welcome. 2018-05-31T06:13:48Z 2018-06-01T01:01:37Z 2018-06-01T01:01:34Z 2018-06-01T01:01:34Z 9d60897a6544d3a2d4b9b3b64008b2bc316d8b98     0 ea5d4e90e286b807f0289fee9b7605f08b1b5e55 7036eb5b629f2112da9aa13538aecb07f0f83f5a MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2205  
193486763 MDExOlB1bGxSZXF1ZXN0MTkzNDg2NzYz 2220 closed 0 Reduce memory usage in doc.interpolation.rst fujiisoup 6815844 I noticed an example I added to doc in #2104 consumes more than 1 GB memory, and it results in the failing in readthedocs build. This PR changes this to a much lighter example. 2018-06-08T01:23:13Z 2018-06-08T01:45:11Z 2018-06-08T01:31:19Z 2018-06-08T01:31:19Z 98e6a4b84dd2cf4296a3e0aa9710bb79411354e4     0 843893c4167eb85bfe2b70db33fd38b56b6743b4 e39729928544204894e65c187d66c1a2b1900fea MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2220  
193762231 MDExOlB1bGxSZXF1ZXN0MTkzNzYyMjMx 2222 closed 0 implement interp_like fujiisoup 6815844 - [x] Closes #2218 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API. This adds `interp_like`, that behaves like `reindex_like` but using interpolation. 2018-06-09T06:46:48Z 2018-06-20T01:39:40Z 2018-06-20T01:39:24Z 2018-06-20T01:39:23Z 59ad782f29a0f4766bac7802be6650be61f018b8     0 134bf835f010cc86e59b299f23914d013565d1f9 66be9c5db7d86ea385c3a4cd4295bfce67e3f25b MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2222  
195508617 MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3 2236 closed 0 Refactor nanops fujiisoup 6815844 - [x] Closes #2230 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) In #2230, the addition of `min_count` keywords for our reduction methods was discussed, but our `duck_array_ops` module is becoming messy (mainly due to nan-aggregation methods for dask, bottleneck and numpy) and it looks a little hard to maintain them. I tried to refactor them by moving nan-aggregation methods to `nanops` module. I think I still need to take care of more edge cases, but I appreciate any comment for the current implementation. Note: In my implementation, **bottleneck is not used when `skipna=False`**. bottleneck would be advantageous when `skipna=True` as numpy needs to copy the entire array once, but I think numpy's method is still OK if `skipna=False`. 2018-06-18T12:27:31Z 2018-09-26T12:42:55Z 2018-08-16T06:59:33Z 2018-08-16T06:59:33Z 0b9ab2d12ae866a27050724d94facae6e56f5927     0 b72a1c852add254a4cdd49408fe4e9c934ceece6 4df048c146b8da7093faf96b3e59fb4d56945ec5 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2236  
204585059 MDExOlB1bGxSZXF1ZXN0MjA0NTg1MDU5 2326 closed 0 fix doc build error after #2312 fujiisoup 6815844 I merged #2312 without making sure the building test passing, but there was a typo. Ths PR fixes it. 2018-07-28T09:15:20Z 2018-07-28T10:05:53Z 2018-07-28T10:05:50Z 2018-07-28T10:05:50Z ded0a684136540962bcc409e6272b1cebb5af30a     0 326dc820ea662972599d522f36bdbf1b7565f21c 2fa9dded34e06104379ad1a12c6967913998889b MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2326  
206224273 MDExOlB1bGxSZXF1ZXN0MjA2MjI0Mjcz 2342 closed 0 apply_ufunc now raises a ValueError when the size of input_core_dims is inconsistent with number of argument fujiisoup 6815844 - [x] Closes #2341 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now raises a ValueError when the size of input_core_dims is inconsistent with number of argument. 2018-08-05T06:20:03Z 2018-08-06T22:38:57Z 2018-08-06T22:38:53Z 2018-08-06T22:38:53Z 0b181226bbb1c26adfdd5d47d567fb78d0a450fa     0 1e64344507c9db30ba746e29369d299fda39e61d 56381ef444c5e699443e8b4e08611060ad5c9507 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2342  
206226854 MDExOlB1bGxSZXF1ZXN0MjA2MjI2ODU0 2343 closed 0 local flake8 fujiisoup 6815844 Trivial changes to pass local flake8 tests. 2018-08-05T07:47:38Z 2018-08-05T23:47:00Z 2018-08-05T23:47:00Z 2018-08-05T23:47:00Z f217a7d8675062aff14f3dc6fb008af0cba8da49     0 1b275b5737287860d6c68614d32f891150bf1f11 56381ef444c5e699443e8b4e08611060ad5c9507 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2343  
206537474 MDExOlB1bGxSZXF1ZXN0MjA2NTM3NDc0 2349 closed 0 dask.ghost -> dask.overlap fujiisoup 6815844 Dask renamed `dask.ghost` -> `dask.overlap` in dask/dask#3830. This PR follows up this. 2018-08-06T22:54:46Z 2018-08-08T01:14:04Z 2018-08-08T01:14:02Z 2018-08-08T01:14:02Z 04458670782c0b6fdba7e7021055155b2a6f284a     0 108381f9c88526e676fff193a4a7f70e7c9204ec 0b181226bbb1c26adfdd5d47d567fb78d0a450fa MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2349  
206864758 MDExOlB1bGxSZXF1ZXN0MjA2ODY0NzU4 2353 closed 0 Raises a ValueError for a confliction between dimension names and level names fujiisoup 6815844 - [x] Closes #2299 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API. Now it raises an Error to assign new dimension with the name conflicting with an existing level name. Therefore, it is not allowed ```python b = xr.Dataset(coords={'dim0': ['a', 'b'], 'dim1': [0, 1]}) b = b.stack(dim_stacked=['dim0', 'dim1']) # This should raise an errors even though its length is consistent with `b['dim0']` b['c'] = (('dim0',), [10, 11, 12, 13]) # This is OK b['c'] = (('dim_stacked',), [10, 11, 12, 13]) ``` 2018-08-08T00:52:29Z 2018-08-13T22:16:36Z 2018-08-13T22:16:31Z 2018-08-13T22:16:31Z e3350fd724c30bb3695f755316f9b840445a0af6     0 82475aff193036c4b1493081414fc66befbfc150 846e28f8862b150352512f8e3d05bcb9db57a1a3 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2353  
206867230 MDExOlB1bGxSZXF1ZXN0MjA2ODY3MjMw 2354 closed 0 Mark some tests related to cdat-lite as xfail fujiisoup 6815844 I just mark some to_cdms2 tests xfail. See #2332 for the details. It is a temporal workaround and we may need to keep #2332 open until it is solved. 2018-08-08T01:13:25Z 2018-08-10T16:09:30Z 2018-08-10T16:09:30Z 2018-08-10T16:09:30Z fe99a22ca7bcb1f854c22f5f6894d3c5d40774a6     0 b81ca434f6b93c8eca5d44b16f5a03ec060db382 04458670782c0b6fdba7e7021055155b2a6f284a MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2354  
208144841 MDExOlB1bGxSZXF1ZXN0MjA4MTQ0ODQx 2366 closed 0 Future warning for default reduction dimension of groupby fujiisoup 6815844 - [ ] Closes #xxxx - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Started to fix #2363. Now warns a futurewarning in groupby if default reduction dimension is not specified. As a side effect, I added `xarray.ALL_DIMS`. With `dim=ALL_DIMS` always reduces along all the dimensions. 2018-08-14T01:16:34Z 2018-09-28T06:54:30Z 2018-09-28T06:54:30Z 2018-09-28T06:54:30Z 638b251c622359b665208276a2cb23b0fbc5141b     0 68d7c04e5ebf1e6c7c34a71ee73da6a7f30ca4a2 04253f271c66a12366a82d357c2a889dd3eea42f MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2366  
209078448 MDExOlB1bGxSZXF1ZXN0MjA5MDc4NDQ4 2372 closed 0 [MAINT] Avoid using duck typing fujiisoup 6815844 - [x] Closes #2179 - [x] Tests passed - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) 2018-08-17T08:26:31Z 2018-08-20T01:13:26Z 2018-08-20T01:13:16Z 2018-08-20T01:13:16Z 8378d3af259d7d1907359fc087dd0a6ca7e5ef17     0 6b206c771b7ebe1bf6eeed7ef0cb50fffbf8df9e 0b9ab2d12ae866a27050724d94facae6e56f5927 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2372  
209145472 MDExOlB1bGxSZXF1ZXN0MjA5MTQ1NDcy 2373 closed 0 More support of non-string dimension names fujiisoup 6815844 - [x] Tests passed (for all non-documentation changes) Following to #2174 In some methods, consistency of the dictionary arguments and keyword arguments are checked twice in `Dataset` and `Variable`. Can we change the API of Variable so that it does not take kwargs-type argument for dimension names? 2018-08-17T13:18:18Z 2018-08-20T01:13:02Z 2018-08-20T01:12:37Z 2018-08-20T01:12:37Z 725bd57ffa64d7e391ceef2b056fa8122ec09e8d     0 48a2f3170b907f2e2253fd484d1b323a1f1b51ad 0b9ab2d12ae866a27050724d94facae6e56f5927 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2373  
212889732 MDExOlB1bGxSZXF1ZXN0MjEyODg5NzMy 2398 closed 0 implement Gradient fujiisoup 6815844 - [x] Closes #1332 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added `xr.gradient`, `xr.DataArray.gradient`, and `xr.Dataset.gradient` according to #1332. 2018-09-04T08:11:52Z 2018-09-21T20:02:43Z 2018-09-21T20:02:43Z 2018-09-21T20:02:43Z ab96954883200f764a0dd50870e4db240c119265     0 528bcab00920a40a49643d412ea0d9c8a2d2102c 66a8f8dd7f5a2997ff614f3966d1951587915e7e MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2398  
218706745 MDExOlB1bGxSZXF1ZXN0MjE4NzA2NzQ1 2446 closed 0 fix:2445 fujiisoup 6815844 - [x] Closes #2445 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes It is a regression after #2360. 2018-09-27T16:00:17Z 2018-09-28T18:24:42Z 2018-09-28T18:24:36Z 2018-09-28T18:24:35Z 23d1cda3b7da5c73a5f561a5c953b50beaa2bfe6     0 a49e5d7afa60b53a4c4ee1f65443231577fecbcd c2b09d697c741b5d6ddede0ba01076c0cb09cf19 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2446  
218721452 MDExOlB1bGxSZXF1ZXN0MjE4NzIxNDUy 2447 closed 0 restore ddof support in std fujiisoup 6815844 - [x] Closes #2440 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes It looks that I wrongly remove `ddof` option for `nanstd` in #2236. This PR fixes this. 2018-09-27T16:51:44Z 2018-10-03T12:44:55Z 2018-09-28T13:44:29Z 2018-09-28T13:44:29Z 458cf51ce20e8d924b38b59c8fbc3bb10f39148e     0 a50d8ac2f39ed996b25793c50946ccef90ce5974 78058e2c1f39cbfae6eddb30e3b7d4a81b54ad8b MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2447  
220272833 MDExOlB1bGxSZXF1ZXN0MjIwMjcyODMz 2462 closed 0 pep8speaks fujiisoup 6815844 - [x] Closes #2428 I installed pep8speaks as suggested in #2428. It looks they do not need a yml file, but it may be safer to add this (just renamed from `.stickler.yml`) 2018-10-04T07:17:34Z 2018-10-07T22:40:15Z 2018-10-07T22:40:08Z 2018-10-07T22:40:08Z cf1e6c73d0366124485c1d767b89ac1cc301705b     0 9b620892593672e881cb91f22431179ddde05508 bb87a9441d22b390e069d0fde58f297a054fd98a MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2462  
221311770 MDExOlB1bGxSZXF1ZXN0MjIxMzExNzcw 2477 closed 0 Inhouse LooseVersion fujiisoup 6815844 - [x] Closes #2468 - [x] Tests added - [N.A.] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) A fix for #2468. 2018-10-09T05:23:56Z 2018-10-10T13:47:31Z 2018-10-10T13:47:23Z 2018-10-10T13:47:23Z 7f20a20aa278d2bb056403d665c10e29968755cd     0 7aec9fe1517c74f2711289d073b40d46fff0e233 289b377129b18e7dc6da8336e958a85be868acbe MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2477  
238972759 MDExOlB1bGxSZXF1ZXN0MjM4OTcyNzU5 2612 closed 0 Added Coarsen fujiisoup 6815844 - [x] Closes #2525 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Started to implement `corsen`. The API is currently something like ```python actual = ds.coarsen(time=2, x=3, side='right', coordinate_func={'time': np.max}).max() ``` Currently, it is not working for a datetime coordinate, since `mean` does not work for this dtype. e.g. ```python da = xr.DataArray(np.linspace(0, 365, num=365), dims='time', coords={'time': pd.date_range('15/12/1999', periods=365)}) da['time'].mean() # -> TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('<M8[ns]') ``` I am not familiar with datetime things. Any advice will be appreciated. 2018-12-16T15:28:31Z 2019-01-06T09:13:56Z 2019-01-06T09:13:46Z 2019-01-06T09:13:46Z ede3e0101bae2f45c3f4634a1e1ecb8e2ccd0258     0 1523292b876ec5578b806c9c2cc43ce80d73a061 dba299befbdf19b02612573b218bcc1e97d4e010 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2612  
239784815 MDExOlB1bGxSZXF1ZXN0MjM5Nzg0ODE1 2621 closed 0 Fix multiindex selection fujiisoup 6815844 - [x] Closes #2619 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fix using ` MultiIndex.remove_unused_levels()` 2018-12-19T10:30:15Z 2018-12-24T15:37:27Z 2018-12-24T15:37:27Z 2018-12-24T15:37:27Z b5059a538ee2efda4d753cc9a49f8c09cd026c19     0 61d1d494ab370126519fbb9285014f947f2dfe2b c2ce5ea83b5924302653c8dfba7de68c7d98c942 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2621  
242435203 MDExOlB1bGxSZXF1ZXN0MjQyNDM1MjAz 2653 closed 0 Implement integrate fujiisoup 6815844 - [x] Closes #1288 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I would like to add `integrate`, which is essentially an xarray-version of `np.trapz`. I know there was variety of discussions in #1288, but I think it would be nice to limit us within that numpy provides by `np.trapz`, i.e., 1. only for `trapz` not `rectangle` or `simps` 2. do not care `np.nan` 3. do not support `bounds` Most of them (except for 1) can be solved by combining several existing methods. 2019-01-05T11:22:10Z 2019-01-31T17:31:31Z 2019-01-31T17:30:31Z 2019-01-31T17:30:31Z 492303924f4573173029aa9cf5a785413ee9d2ed     0 056111372d4c26cefe7d3bb9a40df86c406ec037 ede3e0101bae2f45c3f4634a1e1ecb8e2ccd0258 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2653  
244162181 MDExOlB1bGxSZXF1ZXN0MjQ0MTYyMTgx 2668 closed 0 fix datetime_to_numeric and Variable._to_numeric fujiisoup 6815844 - [x] Closes #2667 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Started to fixing #2667 2019-01-11T22:02:07Z 2019-02-11T11:58:22Z 2019-02-11T09:47:09Z 2019-02-11T09:47:09Z 4cd56a9edb083a3eb8d11e7a367dfb9bda76fc2e     0 0b266156c839a66166a06c110ffdc5e18fbe7571 6d2076688d4f5466cf77ace2b196e910c1c0fbb8 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2668  
276346147 MDExOlB1bGxSZXF1ZXN0Mjc2MzQ2MTQ3 2942 closed 0 Fix rolling operation with dask and bottleneck fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #2940 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fix for #2940 It looks that there was a bug in the previous logic, but I am not sure why it was working... 2019-05-06T21:23:41Z 2019-06-30T00:34:57Z 2019-06-30T00:34:57Z   7ba929d15cc77c718c8dbb4f96582820fa98a861     0 ca96cc3b709ef043a7fa54030c6ddf26da8b4089 5aaa6547cd14a713f89dfc7c22643d86fce87916 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/2942  
340541733 MDExOlB1bGxSZXF1ZXN0MzQwNTQxNzMz 3520 closed 0 Fix set_index when an existing dimension becomes a level fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3512 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API There was a bug in `set_index`, where an old dimension was not updated if it becomes a level of MultiIndex. 2019-11-13T16:06:50Z 2019-11-14T11:56:25Z 2019-11-14T11:56:18Z 2019-11-14T11:56:18Z c0ef2f616e87e9f924425bcd373ac265f14203cb     0 18fa5ec46da318d76488ea2994e9654e9683bce9 8b240376fd91352a80b068af606850e8d57d1090 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3520  
341746408 MDExOlB1bGxSZXF1ZXN0MzQxNzQ2NDA4 3541 closed 0 Added fill_value for unstack fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3518 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added an option `fill_value` for `unstack`. I am trying to add `sparse` option too, but it may take longer. Probably better to do in a separate PR? 2019-11-16T11:10:56Z 2019-11-16T14:42:31Z 2019-11-16T14:36:44Z 2019-11-16T14:36:43Z 56c16e4bf45a3771fd9acba76d802c0199c14519     0 5c574ecebc76df7f1f55811acd7b7531ed8dba86 52d48450f6291716a90f4f7e93e15847942e0da0 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3541  
341761585 MDExOlB1bGxSZXF1ZXN0MzQxNzYxNTg1 3542 closed 0 sparse option to reindex and unstack fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3518 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added `sparse` option to `reindex` and `unstack`. I just added a minimal set of codes necessary to `unstack` and `reindex`. There is still a lot of space to complete the sparse support as discussed in #3245. 2019-11-16T14:41:00Z 2019-11-19T22:40:34Z 2019-11-19T16:23:34Z 2019-11-19T16:23:34Z 220adbc65e0b8c46feddaa6984df4a3a1ce0af6b     0 92ce6cdbfc0ff59a1963933bdb46612908ab4de2 56c16e4bf45a3771fd9acba76d802c0199c14519 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3542  
344805747 MDExOlB1bGxSZXF1ZXN0MzQ0ODA1NzQ3 3566 closed 0 Make 0d-DataArray compatible for indexing. fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3562 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now 0d-DataArray can be used for indexing. 2019-11-23T12:43:32Z 2023-08-31T02:06:21Z 2023-08-31T02:06:21Z   ee41a090c44d58d89d2761d92e3ce84ecae3aacb     0 2d738536efcbcbac3ff75aeb5bf680900cd0f886 d1e4164f3961d7bbb3eb79037e96cae14f7182f8 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3566  
347592715 MDExOlB1bGxSZXF1ZXN0MzQ3NTkyNzE1 3587 open 0 boundary options for rolling.construct fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #2007, #2011 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added some boundary options for rolling.construct. Currently, the option names are inherited from `np.pad`, `['edge' | 'reflect' | 'symmetric' | 'wrap']`. Do we want a more intuitive name, such as `periodic`? 2019-12-02T12:11:44Z 2022-06-09T14:50:17Z     ad596b643eb6e63870b222debe1067821002460f     0 56760379d32efc104541c9f3a0f0133e0fa916a4 d1e4164f3961d7bbb3eb79037e96cae14f7182f8 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3587  
360395968 MDExOlB1bGxSZXF1ZXN0MzYwMzk1OTY4 3670 closed 0 sel with categorical index fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3669, #3674 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API It is a bit surprising that no members have used xarray with CategoricalIndex... If there is anything missing additionally, please feel free to point it out. 2020-01-08T10:51:06Z 2020-01-25T22:38:28Z 2020-01-25T22:38:21Z 2020-01-25T22:38:20Z cc142f430f9f468c990b6607ddf3424b0facf054     0 27f35059038c6ab74e6352932ac58759f2aca5b0 9c7286639136f52aee877f44de8c89d7c8f41068 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3670  
400511693 MDExOlB1bGxSZXF1ZXN0NDAwNTExNjkz 3953 closed 0 Fix wrong order of coordinate converted from pd.series with MultiIndex fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3951 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API It looks `dataframe.set_index(index).index == index` is not always true. Added a workaround for this... 2020-04-07T21:28:04Z 2020-04-08T05:49:46Z 2020-04-08T02:19:11Z 2020-04-08T02:19:10Z 1eedc5c146d9e6ebd46ab2cc8b271e51b3a25959     0 b79a96e506e02e549255c6afdd8eeefe6c37b102 f07adb293e67ae01d305fd1c8fb42f5bad2238e7 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/3953  
413872842 MDExOlB1bGxSZXF1ZXN0NDEzODcyODQy 4036 closed 0 support darkmode fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #4024 - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now it looks like ![image](https://user-images.githubusercontent.com/6815844/81138965-e3f04300-8f9e-11ea-9e5d-7b5b680932d7.png) I'm pretty sure that this workaround is not the best (maybe the second worst), as it only supports the dark mode of vscode but not other environments. I couldn't find a good way to make a workaround for the general dark-mode. Any advice is welcome. 2020-05-06T04:39:07Z 2020-05-21T21:06:15Z 2020-05-07T20:36:32Z 2020-05-07T20:36:32Z 69548df9826cde9df6cbdae9c033c9fb1e62d493     0 6cd140ba5924d067c77a30552c524a3f88206b4d 9ec3f7b44d50ffa2298a9796847e69953ae96cbd MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/4036  
418912877 MDExOlB1bGxSZXF1ZXN0NDE4OTEyODc3 4069 closed 0 Improve interp performance fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #2223 - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now n-dimensional interp works sequentially if possible. It may speed up some cases. 2020-05-16T04:23:47Z 2020-05-25T20:02:41Z 2020-05-25T20:02:37Z 2020-05-25T20:02:36Z d1f7cb8fd95d588d3f7a7e90916c25747b90ad5a     0 1a7d738ea82cf714a28b4b2f8dcdc711d5c39fc6 2542a63f6ebed1a464af7fc74b9f3bf302925803 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/4069  
447892617 MDExOlB1bGxSZXF1ZXN0NDQ3ODkyNjE3 4219 closed 0 nd-rolling fujiisoup 6815844 - [x] Closes #4196 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` I noticed that the implementation of nd-rolling is straightforward. The core part is implemented but I am wondering what the best API is, with keeping it backward-compatible. Obviously, it is basically should look like ```python da.rolling(x=3, y=3).mean() ``` A problem is other parameters, `centers` and `min_periods`. In principle, they can depend on dimension. For example, we can have `center=True` only for `x` but not for `y`. So, maybe we allow dictionary for them? ```python da.rolling(x=3, y=3, center={'x': True, 'y': False}, min_periods={'x': 1, 'y': None}).mean() ``` The same thing happens for `.construct` method. ```python da.rolling(x=3, y=3).construct(x='x_window', y='y_window', stride={'x': 2, 'y': 1}) ``` I'm afraid if this dictionary argument was a bit too redundant. Does anyone have another idea? 2020-07-12T12:19:19Z 2020-08-08T07:23:51Z 2020-08-08T04:16:27Z 2020-08-08T04:16:27Z 1d3dee08291c83d13c46c9b4ede99020942df2f1     0 f44dd5db5ee54cb01f1c6cb6a3d662f93932cd1d e04e21d6160f43bc44e999b6f54f9fe4682f9b81 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/4219  
465085685 MDExOlB1bGxSZXF1ZXN0NDY1MDg1Njg1 4329 closed 0 ndrolling repr fix fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #4328 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` There was a bug in `rolling.__repr__` but it was not tested. Fixed and tests are added. 2020-08-08T23:34:37Z 2020-08-09T13:15:50Z 2020-08-09T11:57:38Z 2020-08-09T11:57:37Z df7b2eae3a26c1e86bd5f1dd7dab9cc8c4e53914     0 3b9cf9819679a9080a26ba469b78563981a3a9d1 f02ca53714de06a4fc035f9dbc75b55be6fa3297 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/4329  
581821524 MDExOlB1bGxSZXF1ZXN0NTgxODIxNTI0 4974 closed 0 implemented pad with new-indexes fujiisoup 6815844 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #3868 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Now we use a tuple of indexes for `DataArray.pad` and `Dataset.pad`. 2021-03-01T07:50:08Z 2023-09-14T02:47:24Z 2023-09-14T02:47:24Z   1c150b58f2d05749bcec5de1a10889289e390b85     0 30391c64c809686bfefd3bb878ca66eaf86016a5 d1e4164f3961d7bbb3eb79037e96cae14f7182f8 MEMBER   xarray 13221727 https://github.com/pydata/xarray/pull/4974  

Advanced export

JSON shape: default, array, newline-delimited, object

CSV options:

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
Powered by Datasette · Queries took 29.387ms · About: xarray-datasette