id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type 274797981,MDU6SXNzdWUyNzQ3OTc5ODE=,1725,Switch our lazy array classes to use Dask instead?,6815844,open,0,,,9,2017-11-17T09:12:34Z,2023-09-15T15:51:41Z,,MEMBER,,,,"Ported from #1724, [comment](https://github.com/pydata/xarray/pull/1724#pullrequestreview-77354985) by @shoyer > In the long term, it would be nice to get ride of these uses of `_data`, maybe by switching entirely from our lazy array classes to Dask. The subtleties of checking `_data` vs `data` are undesirable, e.g., consider the bug on these lines: https://github.com/pydata/xarray/blob/1a012080e0910f3295d0fc26806ae18885f56751/xarray/core/formatting.py#L212-L213 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1725/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 818583834,MDExOlB1bGxSZXF1ZXN0NTgxODIxNTI0,4974,implemented pad with new-indexes,6815844,closed,0,,,8,2021-03-01T07:50:08Z,2023-09-14T02:47:24Z,2023-09-14T02:47:24Z,MEMBER,,0,pydata/xarray/pulls/4974," - [x] Closes #3868 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Now we use a tuple of indexes for `DataArray.pad` and `Dataset.pad`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4974/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 527553050,MDExOlB1bGxSZXF1ZXN0MzQ0ODA1NzQ3,3566,Make 0d-DataArray compatible for indexing.,6815844,closed,0,,,6,2019-11-23T12:43:32Z,2023-08-31T02:06:21Z,2023-08-31T02:06:21Z,MEMBER,,0,pydata/xarray/pulls/3566," - [x] Closes #3562 - [x] Tests 
added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now 0d-DataArray can be used for indexing.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3566/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 527237590,MDU6SXNzdWU1MjcyMzc1OTA=,3562,Minimize `.item()` call,6815844,open,0,,,1,2019-11-22T14:44:43Z,2023-06-08T04:48:50Z,,MEMBER,,,,"#### MCVE Code Sample I want to minimize the number of calls `.item()` within my data analysis. It often happens 1. when putting a 0d-DataArray into a slice ```python da = xr.DataArray([0.5, 4.5, 2.5], dims=['x'], coords={'x': [0, 1, 2]}) da[: da.argmax()] ``` -> `TypeError: 'DataArray' object cannot be interpreted as an integer` 2. when using a 0d-DataArray for selecting ```python da = xr.DataArray([0.5, 4.5, 2.5], dims=['x'], coords={'x': [0, 0, 2]}) da.sel(x=da['x'][0]) ``` -> `IndexError: arrays used as indices must be of integer (or boolean) type` Both cases, I need to call '.item()'. It is not a big issue, but I think it would be nice if xarray becomes more self-contained.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3562/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 675482176,MDU6SXNzdWU2NzU0ODIxNzY=,4325,Optimize ndrolling nanreduce,6815844,open,0,,,5,2020-08-08T07:46:53Z,2023-04-13T15:56:52Z,,MEMBER,,,,"In #4219 we added ndrolling. However, nanreduce, such as `ds.rolling(x=3, y=2).mean()` calls `np.nanmean` which copies the strided-array into a full-array. This is memory-inefficient. We can implement inhouse-nanreduce methods for the strided array. 
For example, our `.nansum` currently does make a strided array -> copy the array -> replace nan by 0 -> sum but we can do instead replace nan by 0 -> make a strided array -> sum This is much more memory efficient. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4325/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 262642978,MDU6SXNzdWUyNjI2NDI5Nzg=,1603,Explicit indexes in xarray's data-model (Future of MultiIndex),6815844,closed,0,,741199,68,2017-10-04T01:51:47Z,2022-09-28T09:24:20Z,2022-09-28T09:24:20Z,MEMBER,,,,"I think we can continue the discussion we have in #1426 about `MultiIndex` here. In [comment](https://github.com/pydata/xarray/pull/1426#issuecomment-304778433) , @shoyer recommended to remove `MultiIndex` from public API. I agree with this, as long as my codes work with this improvement. I think if we could have a list of possible `MultiIndex` use cases here, it would be easier to deeply discuss and arrive at a consensus of the future API. Current limitations of `MultiIndex` are + It drops scalar coordinate after selection #1408, #1491 + It does not support to serialize to NetCDF #1077 + Stack/unstack behaviors are inconsistent #1431","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1603/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 531087939,MDExOlB1bGxSZXF1ZXN0MzQ3NTkyNzE1,3587,boundary options for rolling.construct,6815844,open,0,,,4,2019-12-02T12:11:44Z,2022-06-09T14:50:17Z,,MEMBER,,0,pydata/xarray/pulls/3587," - [x] Closes #2007, #2011 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added some boundary options for rolling.construct. 
Currently, the option names are inherited from `np.pad`, `['edge' | 'reflect' | 'symmetric' | 'wrap']`. Do we want a more intuitive name, such as `periodic`?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3587/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 655382009,MDU6SXNzdWU2NTUzODIwMDk=,4218,what is the best way to reset an unintentional direct push to the master,6815844,closed,0,,,16,2020-07-12T11:30:45Z,2022-04-17T20:34:32Z,2022-04-17T20:34:32Z,MEMBER,,,,"I am sorry but I unintentionally pushed my working scripts to xarray.master. (I thought it is not allowed and I was not careful.) What is the best way to reset this? I'm thinking to do in my local, and force push again, but I'm afraid that I do another wrong thing... I apologize for my mistake.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 280875330,MDU6SXNzdWUyODA4NzUzMzA=,1772,nonzero method for xr.DataArray,6815844,open,0,,,5,2017-12-11T02:25:11Z,2022-04-01T10:42:20Z,,MEMBER,,,,"`np.nonzero` to `DataArray` returns a wrong result, ```python In [4]: da = xr.DataArray(np.arange(12).reshape(4, 3), dims=['x', 'y'], ...: coords={'x': [0, 1, 2, 3], 'y': ['a', 'b', 'c']}) ...: np.nonzero(da) ...: Out[4]: array([[0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3], [1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]]) Coordinates: * x (x) int64 0 1 2 3 * y (y) # Paste the output here xr.show_versions() here INSTALLED VERSIONS ------------------ commit: None python: 3.5.2.final.0 python-bits: 64 OS: Linux OS-release: 4.4.0-101-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.9.6-172-gc58d142 pandas: 0.21.0 numpy: 1.13.1 scipy: 0.19.1 netCDF4: None h5netcdf: None 
Nio: None bottleneck: 1.2.1 cyordereddict: None dask: 0.16.0 matplotlib: 2.0.2 cartopy: None seaborn: 0.7.1 setuptools: 36.5.0 pip: 9.0.1 conda: 4.3.30 pytest: 3.2.3 IPython: 6.0.0 sphinx: 1.6.3 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1772/reactions"", ""total_count"": 6, ""+1"": 6, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 898657012,MDU6SXNzdWU4OTg2NTcwMTI=,5361,Inconsistent behavior in grouby depending on the dimension order,6815844,open,0,,,1,2021-05-21T23:11:37Z,2022-03-29T11:45:32Z,,MEMBER,,,," `groupby` works inconsistently depending on the dimension order of a `DataArray`. Furthermore, in some cases, this causes a corrupted object. ```python In [4]: data = xr.DataArray( ...: np.random.randn(4, 2), ...: dims=['x', 'z'], ...: coords={'x': ['a', 'b', 'a', 'c'], 'y': ('x', [0, 1, 0, 2])} ...: ) ...: ...: data.groupby('x').mean() Out[4]: array([[ 0.95447186, -1.14467028], [ 0.76294958, 0.3751244 ], [-0.41030223, -1.35344548]]) Coordinates: * x (x) object 'a' 'b' 'c' Dimensions without coordinates: z ``` `groupby` works fine (although this drops nondimensional coordinate `y`, related to #3745). However, `groupby` does not give a correct result if we work on the second dimension, ```python In [5]: data.T.groupby('x').mean() # <--- change the dimension order, and do the same thing Out[5]: array([[ 0.95447186, 0.76294958, -0.41030223], [-1.14467028, 0.3751244 , -1.35344548]]) Coordinates: * x (x) object 'a' 'b' 'c' y (x) int64 0 1 0 2 # <-- the size must be 3!! Dimensions without coordinates: z ``` The bug has been discussed in #2944 and solved, but I found this is still there.
Output of xr.show_versions() INSTALLED VERSIONS ------------------ commit: 09d8a4a785fa6521314924fd785740f2d13fb8ee python: 3.7.7 (default, Mar 23 2020, 22:36:06) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 5.4.0-72-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.16.1.dev30+g1d3dee08.d20200808 pandas: 1.1.3 numpy: 1.18.1 scipy: 1.5.2 netCDF4: 1.4.2 pydap: None h5netcdf: 0.8.0 h5py: 2.10.0 Nio: None zarr: None cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.6.0 distributed: 2.7.0 matplotlib: 3.2.2 cartopy: None seaborn: 0.10.1 numbagg: None pint: None setuptools: 46.1.1.post20200323 pip: 20.0.2 conda: None pytest: 5.2.1 IPython: 7.13.0 sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5361/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 228295383,MDU6SXNzdWUyMjgyOTUzODM=,1408,.sel does not keep selected coordinate value in case with MultiIndex,6815844,closed,0,,,8,2017-05-12T13:40:34Z,2022-03-17T17:11:41Z,2022-03-17T17:11:41Z,MEMBER,,,,"`.sel` method usually keeps selected coordinate value as a scalar coordinate ```python In[4] ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'}) Out[4]: Dimensions: () Coordinates: y Dimensions: (y: 2) Coordinates: * y (y) object 'a' 'b' Data variables: foo (y) int64 2 5 ``` x is gone. Is it a desired behavior? ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1408/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 359240638,MDU6SXNzdWUzNTkyNDA2Mzg=,2410,Updated text for indexing page,6815844,open,0,,,11,2018-09-11T22:01:39Z,2021-11-15T21:17:14Z,,MEMBER,,,,"We have a bunch of terms to describe the xarray structure, such as *dimension*, *coordinate*, *dimension coordinate*, etc.. Although it has been discussed in #1295 and we tried to use the consistent terminology in our docs, it looks still not easy for users to understand our functionalities. In #2399, @horta wrote a list of definitions (https://drive.google.com/file/d/1uJ_U6nedkNe916SMViuVKlkGwPX-mGK7/view?usp=sharing). I think it would be nice to have something like this in our docs. 
Any thought?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2410/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 441088452,MDU6SXNzdWU0NDEwODg0NTI=,2944,`groupby` does not correctly handle non-dimensional coordinate,6815844,closed,0,,,3,2019-05-07T07:47:17Z,2021-05-21T23:12:21Z,2021-05-21T23:12:21Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible ```python >>> import numpy as np >>> import xarray as xr >>> >>> da = xr.DataArray(np.arange(12).reshape(3, 4), dims=['x', 'y'], ... coords={'x': [0, 1, 1], 'x2': ('x', ['a', 'b', 'c'])}) >>> grouped = da.groupby('x').mean('x') >>> grouped array([[0., 1., 2., 3.], [6., 7., 8., 9.]]) Coordinates: * x (x) int64 0 1 x2 (x) array([[0., 1., 2., 3.], [6., 7., 8., 9.]]) Coordinates: * x (x) int64 0 1 Dimensions without coordinates: y ``` Compute mean of the coordinate. If not possible, drop it.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2944/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 254927382,MDU6SXNzdWUyNTQ5MjczODI=,1553,Multidimensional reindex,6815844,open,0,,,2,2017-09-04T03:29:39Z,2020-12-19T16:00:00Z,,MEMBER,,,,"From a discussion in #1473 [comment](https://github.com/pydata/xarray/pull/1473#issuecomment-326776669) It would be convenient if we have multi-dimensional `reindex` method, where we consider dimensions and coordinates of indexers. The proposed outline by @shoyer is + Given `reindex` arguments of the form `dim=array` where `array` is a 1D unlabeled array/list, convert them into `DataArray(array, [(dim, array)])`. + Do multi-dimensional indexing with broadcasting like `sel`, but fill in `NaN` for missing values (we could allow for customizing this with a `fill_value` argument). 
+ Join coordinates like for `sel`, but coordinates from the indexers take precedence over coordinates from the object being indexed. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1553/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 216621142,MDU6SXNzdWUyMTY2MjExNDI=,1323,Image related methods,6815844,closed,0,,,9,2017-03-24T01:39:52Z,2020-10-08T16:00:18Z,2020-06-21T19:25:18Z,MEMBER,,,,"Currently I'm using xarray to handle multiple images (typically, a sequence of images), and I feel it would be convenient if xarray supports image related functions. There may be many possibilities, but particular methods I want to have in xarray are 1. xr.open_image(File) Currently, I open image by PILLOW, convert to np.ndarray, extract its attributes, then construct xr.DataArray from them. If I can do it by 1 line, it would be very great. 2. xr.DataArray.expand_dims(dim) I want to add additional channel dimension to grey scale images (size [W x H] -> [W x H x 1]), in order to pass them into convolutional neural network, which usually accepts 4-dimensional tensor [Batch x W x H x channel]. Image (possibly also video?) is naturally high-dimensional and I guess it would fit xarray's concept. Is this sufficiently broad interest?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1323/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 675604714,MDExOlB1bGxSZXF1ZXN0NDY1MDg1Njg1,4329,ndrolling repr fix,6815844,closed,0,,,6,2020-08-08T23:34:37Z,2020-08-09T13:15:50Z,2020-08-09T11:57:38Z,MEMBER,,0,pydata/xarray/pulls/4329," - [x] Closes #4328 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` There was a bug in `rolling.__repr__` but it was not tested. 
Fixed and tests are added.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4329/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 655389649,MDExOlB1bGxSZXF1ZXN0NDQ3ODkyNjE3,4219,nd-rolling,6815844,closed,0,,,16,2020-07-12T12:19:19Z,2020-08-08T07:23:51Z,2020-08-08T04:16:27Z,MEMBER,,0,pydata/xarray/pulls/4219," - [x] Closes #4196 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` I noticed that the implementation of nd-rolling is straightforward. The core part is implemented but I am wondering what the best API is, with keeping it backward-compatible. Obviously, it is basically should look like ```python da.rolling(x=3, y=3).mean() ``` A problem is other parameters, `centers` and `min_periods`. In principle, they can depend on dimension. For example, we can have `center=True` only for `x` but not for `y`. So, maybe we allow dictionary for them? ```python da.rolling(x=3, y=3, center={'x': True, 'y': False}, min_periods={'x': 1, 'y': None}).mean() ``` The same thing happens for `.construct` method. ```python da.rolling(x=3, y=3).construct(x='x_window', y='y_window', stride={'x': 2, 'y': 1}) ``` I'm afraid if this dictionary argument was a bit too redundant. Does anyone have another idea? 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4219/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 338662554,MDU6SXNzdWUzMzg2NjI1NTQ=,2269,A special function for unpickling old xarray object?,6815844,closed,0,,,6,2018-07-05T17:27:28Z,2020-07-11T02:55:38Z,2020-07-11T02:55:38Z,MEMBER,,,,"I noticed that some users experiencing troubles to restore xarray objects that is created xarray < 0.8. Is there any possibility to add a function to support unpickling old object, such as `xr.unpickle_legacy(file, version)`? We previously recommended using pickle to store xarray objects (at least for the short term use, maybe). xref (private repo) gafusion/OMFIT-source#2652","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2269/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 239918314,MDExOlB1bGxSZXF1ZXN0MTI4NDcxOTk4,1469,Argmin indexes,6815844,closed,0,,,6,2017-07-01T01:23:31Z,2020-06-29T19:36:25Z,2020-06-29T19:36:25Z,MEMBER,,0,pydata/xarray/pulls/1469," - [x] Closes #1388 - [x] Tests added / passed - [x] Passes ``git diff master | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API With this PR, ValueError raises if `argmin()` is called by a multi-dimensional array. `argmin_indexes()` method is also added for `xr.DataArray`. Current API design for `argmin_indexes()` returns the argmin-indexes as an `OrderedDict` of `DataArray`s. 
Example: ```python In [1]: import xarray as xr ...: da = xr.DataArray([[1, 2], [-1, 40], [5, 6]], ...: [('x', ['c', 'b', 'a']), ('y', [1, 0])]) ...: ...: da.argmin_indexes() ...: Out[1]: OrderedDict([('x', array(1)), ('y', array(0))]) In [2]: da.argmin_indexes(dims='y') Out[2]: OrderedDict([('y', array([0, 0, 0]) Coordinates: * x (x) - [x] Closes #2223 - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now n-dimensional interp works sequentially if possible. It may speed up some cases.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4069/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 619347681,MDU6SXNzdWU2MTkzNDc2ODE=,4068,utility function to save complex values as a netCDF file,6815844,closed,0,,,3,2020-05-16T01:19:16Z,2020-05-25T08:36:59Z,2020-05-25T08:36:58Z,MEMBER,,,,"Currently, we disallow to save complex values to a netCDF file. Maybe netCDF itself does not support complex values, but there may be some workarounds. It would be very handy for me. The most naive workaround may be to split each complex value into a real and imaginary part, add some flags, and restore it when loading them from the file. Maybe we may add a special suffix to the variable name? ```python >>> ds = xr.Dataset({'a': ('x': [1+2j, 2+3j])}, coords={'x': [0, 1]}) >>> ds.to_netcdf('tmp.nc', encode_complex=True) >>> xr.load_netcdf('tmp.nc') Dimensions: (x: 2) Coordinates: * x (x) int64 0 1 Data variables: a__real__ (x) int64 1 2 a__imag__ (x) int64 2 3 >>> xr.load_netcdf('tmp.nc', decode_complex=True) Dimensions: (x: 2) Coordinates: * x (x) int64 0 1 Data variables: a (x) complex128 (1+2j) (2+3j) ``` I think there may be a better way. Any thoughts are welcome :) p.s. 
I just found that `engine=h5netcdf` can save complex values, but the file becomes an invalid netcdf file. I'm not sure if it worth the trouble just to make a valid netCDF file.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4068/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 613044689,MDExOlB1bGxSZXF1ZXN0NDEzODcyODQy,4036,support darkmode,6815844,closed,0,,,5,2020-05-06T04:39:07Z,2020-05-21T21:06:15Z,2020-05-07T20:36:32Z,MEMBER,,0,pydata/xarray/pulls/4036," - [x] Closes #4024 - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now it looks like ![image](https://user-images.githubusercontent.com/6815844/81138965-e3f04300-8f9e-11ea-9e5d-7b5b680932d7.png) I'm pretty sure that this workaround is not the best (maybe the second worst), as it only supports the dark mode of vscode but not other environments. I couldn't find a good way to make a workaround for the general dark-mode. Any advice is welcome. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4036/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 611643130,MDU6SXNzdWU2MTE2NDMxMzA=,4024,small contrast of html view in VScode darkmode,6815844,closed,0,,,6,2020-05-04T06:53:32Z,2020-05-07T20:36:32Z,2020-05-07T20:36:32Z,MEMBER,,,," If using xarray inside VScode with darkmode, the new html repr has a small contrast of the text color and background. ![image](https://user-images.githubusercontent.com/6815844/80942121-fa6f9080-8e1e-11ea-90e1-a9091b678eee.png) Maybe the text color comes from the default setting, but the background color is not. In light mode, it looks nice. #### Versions
Output of xr.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 3.7.5 (default, Oct 25 2019, 15:51:11) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.15.0-1080-oem machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.15.1 pandas: 0.25.3 numpy: 1.17.4 scipy: 1.3.2 netCDF4: 1.4.2 pydap: None h5netcdf: 0.8.0 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.1 dask: 2.9.0 distributed: 2.9.0 matplotlib: 3.1.1 cartopy: None seaborn: 0.9.0 numbagg: None setuptools: 42.0.2.post20191203 pip: 19.3.1 conda: None pytest: 5.3.2 IPython: 7.10.2 sphinx: 2.3.0
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4024/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 596163034,MDExOlB1bGxSZXF1ZXN0NDAwNTExNjkz,3953,Fix wrong order of coordinate converted from pd.series with MultiIndex,6815844,closed,0,,,2,2020-04-07T21:28:04Z,2020-04-08T05:49:46Z,2020-04-08T02:19:11Z,MEMBER,,0,pydata/xarray/pulls/3953," - [x] Closes #3951 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API It looks `dataframe.set_index(index).index == index` is not always true. Added a workaround for this...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3953/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",,,13221727,pull 207962322,MDU6SXNzdWUyMDc5NjIzMjI=,1271,Attrs are lost in mathematical computation,6815844,closed,0,,,7,2017-02-15T23:27:51Z,2020-04-05T19:00:14Z,2017-02-18T11:03:42Z,MEMBER,,,,"Related to #138 Why is `keep_attrs` option in reduce method set to FALSE by default? I feel it is more natural to keep all the attrs after some computation that returns xr.DaraArray Such as `data*100`. (By it is not possible to set this option TRUE when using an operator.) 
Is it an option to move this option to __init__ method, in case of TRUE all the attrs are tracked after computations of the object and also the object generated from this object?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1271/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 546784890,MDExOlB1bGxSZXF1ZXN0MzYwMzk1OTY4,3670,sel with categorical index,6815844,closed,0,,,7,2020-01-08T10:51:06Z,2020-01-25T22:38:28Z,2020-01-25T22:38:21Z,MEMBER,,0,pydata/xarray/pulls/3670," - [x] Closes #3669, #3674 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API It is a bit surprising that no members have used xarray with CategoricalIndex... If there is anything missing additionally, please feel free to point it out.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3670/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 523853001,MDExOlB1bGxSZXF1ZXN0MzQxNzYxNTg1,3542,sparse option to reindex and unstack,6815844,closed,0,,,2,2019-11-16T14:41:00Z,2019-11-19T22:40:34Z,2019-11-19T16:23:34Z,MEMBER,,0,pydata/xarray/pulls/3542," - [x] Closes #3518 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added `sparse` option to `reindex` and `unstack`. I just added a minimal set of codes necessary to `unstack` and `reindex`. There is still a lot of space to complete the sparse support as discussed in #3245. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3542/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 523831612,MDExOlB1bGxSZXF1ZXN0MzQxNzQ2NDA4,3541,Added fill_value for unstack,6815844,closed,0,,,3,2019-11-16T11:10:56Z,2019-11-16T14:42:31Z,2019-11-16T14:36:44Z,MEMBER,,0,pydata/xarray/pulls/3541," - [x] Closes #3518 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added an option `fill_value` for `unstack`. I am trying to add `sparse` option too, but it may take longer. Probably better to do in a separate PR? ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3541/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 522319360,MDExOlB1bGxSZXF1ZXN0MzQwNTQxNzMz,3520,Fix set_index when an existing dimension becomes a level,6815844,closed,0,,,2,2019-11-13T16:06:50Z,2019-11-14T11:56:25Z,2019-11-14T11:56:18Z,MEMBER,,0,pydata/xarray/pulls/3520," - [x] Closes #3512 - [x] Tests added - [x] Passes `black . && mypy . 
&& flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API There was a bug in `set_index`, where an old dimension was not updated if it becomes a level of MultiIndex.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3520/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 521317260,MDU6SXNzdWU1MjEzMTcyNjA=,3512,selection from MultiIndex does not work properly,6815844,closed,0,,,0,2019-11-12T04:12:12Z,2019-11-14T11:56:18Z,2019-11-14T11:56:18Z,MEMBER,,,,"#### MCVE Code Sample ```python da = xr.DataArray([0, 1], dims=['x'], coords={'x': [0, 1], 'y': 'a'}) db = xr.DataArray([2, 3], dims=['x'], coords={'x': [0, 1], 'y': 'b'}) data = xr.concat([da, db], dim='x').set_index(xy=['x', 'y']) data.sel(y='a') >>> >>> array([0, 1, 2, 3]) >>> Coordinates: >>> * x (x) int64 0 1 ``` #### Expected Output ```python >>> >>> array([0, 1]) >>> Coordinates: >>> * x (x) int64 0 1 ``` #### Problem Description Should select the array #### Output of ``xr.show_versions()``
INSTALLED VERSIONS ------------------ commit: None python: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 3.10.0-957.10.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.14.0 pandas: 0.24.2 numpy: 1.15.4 scipy: 1.2.1 netCDF4: 1.4.2 pydap: None h5netcdf: None h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.3.4 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.2.1 dask: None distributed: None matplotlib: 3.0.2 cartopy: None seaborn: 0.9.0 numbagg: None setuptools: 40.8.0 pip: 19.0.3 conda: None pytest: 5.0.0 IPython: 7.3.0 sphinx: None
Sorry for being quiet for a long time. I hope I could send a fix for this in a few days...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3512/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 345090013,MDU6SXNzdWUzNDUwOTAwMTM=,2318,Failing test by dask==0.18.2,6815844,closed,0,,,2,2018-07-27T04:52:08Z,2019-11-10T04:37:15Z,2019-11-10T04:37:15Z,MEMBER,,,,"Tests are failing, which is caused by new release of dask==0.18.2. xref: dask/dask#3822 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2318/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 280673215,MDU6SXNzdWUyODA2NzMyMTU=,1771,Needs performance check / improvements in value assignment of DataArray,6815844,open,0,,,1,2017-12-09T03:42:41Z,2019-10-28T14:53:24Z,,MEMBER,,,,"https://github.com/pydata/xarray/blob/5e801894886b2060efa8b28798780a91561a29fd/xarray/core/dataarray.py#L482-L489 In #1746, we added a validation in `xr.DataArray.__setitem__` whether the coordinates consistency of array, key, and values are checked. In the current implementation, we call `xr.DataArray.__getitem__` to use the existing coordinate validation logic, but it does unnecessary indexing and it may decrease the `__setitem__` performance if the arrray is multidimensional. We may need to optimize the logic here. Is it reasonable to constantly monitor the performance of basic operations, such as `Dataset` construction, alignment, indexing, and assignment? (or are these operations too light to make a performance monitor?) 
cc @jhamman @shoyer ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1771/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 349855157,MDU6SXNzdWUzNDk4NTUxNTc=,2362,Wrong behavior of DataArray.resample,6815844,closed,0,,,0,2018-08-13T00:02:47Z,2019-10-22T19:42:08Z,2019-10-22T19:42:08Z,MEMBER,,,,"From #2356, I noticed resample and groupby works nice for Dataset but not for DataArray #### Code Sample, a copy-pastable example if possible ```python In [14]: import numpy as np ...: import xarray as xr ...: import pandas as pd ...: ...: time = pd.date_range('2000-01-01', freq='6H', periods=365 * 4) ...: ds = xr.Dataset({'foo': (('time', 'x'), np.random.randn(365 * 4, 5)), 'time': time, ...: 'x': np.arange(5)}) In [15]: ds Out[15]: Dimensions: (time: 1460, x: 5) Coordinates: * time (time) datetime64[ns] 2000-01-01 ... 2000-12-30T18:00:00 * x (x) int64 0 1 2 3 4 Data variables: foo (time, x) float64 -0.6916 -1.247 0.5376 ... 
-0.2197 -0.8479 -0.6719 ``` `ds.resample(time='M').mean()['foo']` and `ds['foo'].resample(time='M').mean()['foo']` should be the same, but currently not ```python In [16]: ds.resample(time='M').mean()['foo'] Out[16]: array([[-0.005705, 0.018112, 0.22818 , -0.11093 , -0.031283], [-0.007595, 0.040065, -0.099885, -0.123539, -0.013808], [ 0.112108, -0.040783, -0.023187, -0.107504, 0.082927], [-0.007728, 0.031719, 0.155191, -0.030439, 0.095658], [ 0.140944, -0.050645, 0.116619, -0.044866, -0.242026], [ 0.029198, -0.002858, 0.13024 , -0.096648, -0.170336], [-0.062954, 0.116073, 0.111285, -0.009656, -0.164599], [ 0.030806, 0.051327, -0.031282, 0.129056, -0.085851], [ 0.099617, -0.021049, 0.137962, -0.04432 , 0.050743], [ 0.117366, 0.24129 , -0.086894, 0.066012, 0.004789], [ 0.063861, -0.015472, 0.069508, 0.026725, -0.124712], [-0.058683, 0.154761, 0.028861, -0.139571, -0.037268]]) Coordinates: * time (time) datetime64[ns] 2000-01-31 2000-02-29 ... 2000-12-31 * x (x) int64 0 1 2 3 4 ``` ```python In [17]: ds['foo'].resample(time='M').mean() # dimension x is gone Out[17]: array([ 0.019675, -0.040952, 0.004712, 0.04888 , -0.015995, -0.022081, -0.00197 , 0.018811, 0.044591, 0.068512, 0.003982, -0.01038 ]) Coordinates: * time (time) datetime64[ns] 2000-01-31 2000-02-29 ... 
2000-12-31 ``` #### Problem description resample should work identically for DataArray and Dataset #### Expected Output ```python ds.resample(time='M').mean()['foo'] == ds['foo'].resample(time='M').mean() ``` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2362/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 281423161,MDExOlB1bGxSZXF1ZXN0MTU3ODU2NTEx,1776,[WIP] Fix pydap array wrapper,6815844,closed,0,,3008859,6,2017-12-12T15:22:07Z,2019-09-25T15:44:19Z,2018-01-09T01:48:13Z,MEMBER,,0,pydata/xarray/pulls/1776," - [x] Closes #1775 (remove if there is no corresponding issue, which should only be the case for minor changes) - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` (remove if you did not edit any Python files) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I am trying to fix #1775, but tests are still failing. 
Any help would be appreciated.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1776/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 440900618,MDExOlB1bGxSZXF1ZXN0Mjc2MzQ2MTQ3,2942,Fix rolling operation with dask and bottleneck,6815844,closed,0,,,7,2019-05-06T21:23:41Z,2019-06-30T00:34:57Z,2019-06-30T00:34:57Z,MEMBER,,0,pydata/xarray/pulls/2942," - [x] Closes #2940 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fix for #2940 It looks like there was a bug in the previous logic, but I am not sure why it was working...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2942/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 432019600,MDU6SXNzdWU0MzIwMTk2MDA=,2887,Safely open / close netCDF files without resource locking,6815844,closed,0,,,9,2019-04-11T13:19:45Z,2019-05-16T15:28:30Z,2019-05-16T15:28:30Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible (essentially the same as #1629) Opening a netCDF file via `xr.open_dataset` locks a resource, preventing other programs from writing a file with the same name (as pointed out and answered as expected behavior in #1629). ```python import xarray as xr ds = xr.Dataset({'var': ('x', [0, 1, 2])}) ds.to_netcdf('test.nc') ds_read = xr.open_dataset('test.nc') ds.to_netcdf('test.nc') # -> PermissionError ``` ```python ds_read = xr.open_dataset('test.nc').load() ds.to_netcdf('test.nc') # -> PermissionError ``` ```python ds_read = xr.open_dataset('test.nc').load() ds_read.close() ds.to_netcdf('test.nc') # no error ``` #### Problem description Another program cannot write the same netCDF file that xarray has opened until the `close` method is called. 
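One possible workaround, sketched below with a hypothetical `load_and_close` helper (not an xarray API), is to load the data and release the file handle in a single chainable call:

```python
def load_and_close(ds):
    # Load the dataset into memory, then close the underlying file,
    # returning the loaded object so the call can be chained, e.g.
    # some_function(load_and_close(xr.open_dataset('test.nc'))).
    try:
        return ds.load()
    finally:
        ds.close()
```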
----- -- EDIT -- `close()` method does not return the object, thus it cannot be put in the chain call, such as ```python some_function(xr.open_dataset('test.nc').close()) ``` ----- It is understandable when we do not want to load the entire file into the memory. However, sometimes I want to read the file that will be updated soon by another program. Also, I think that many users who are not accustomed to netCDF may expect this behavior (as `np.loadtxt` does) and will be surprised after getting `PermissionError`. I think it would be nice to have an option such as `load_all=True` or even make it a default? #### Expected Output No error #### Output of ``xr.show_versions()``
# Paste the output here xr.show_versions() here INSTALLED VERSIONS ------------------ commit: None python: 3.7.1 (default, Oct 23 2018, 19:19:42) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.15.0-1035-oem machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.2 libnetcdf: 4.6.1 xarray: 0.12.0+11.g7d0e895f.dirty pandas: 0.23.4 numpy: 1.15.4 scipy: 1.2.0 netCDF4: 1.4.2 pydap: None h5netcdf: None h5py: 2.8.0 Nio: None zarr: None cftime: 1.0.2.1 nc_time_axis: None PseudonetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.2.1 dask: 1.0.0 distributed: 1.25.0 matplotlib: 2.2.2 cartopy: None seaborn: 0.9.0 setuptools: 40.5.0 pip: 18.1 conda: None pytest: 4.0.1 IPython: 7.1.1 sphinx: 1.8.2
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2887/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 398468139,MDExOlB1bGxSZXF1ZXN0MjQ0MTYyMTgx,2668,fix datetime_to_numeric and Variable._to_numeric,6815844,closed,0,,,14,2019-01-11T22:02:07Z,2019-02-11T11:58:22Z,2019-02-11T09:47:09Z,MEMBER,,0,pydata/xarray/pulls/2668," - [x] Closes #2667 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Started to fixing #2667","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2668/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 396157243,MDExOlB1bGxSZXF1ZXN0MjQyNDM1MjAz,2653,Implement integrate,6815844,closed,0,,,2,2019-01-05T11:22:10Z,2019-01-31T17:31:31Z,2019-01-31T17:30:31Z,MEMBER,,0,pydata/xarray/pulls/2653," - [x] Closes #1288 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I would like to add `integrate`, which is essentially an xarray-version of `np.trapz`. I know there was variety of discussions in #1288, but I think it would be nice to limit us within that numpy provides by `np.trapz`, i.e., 1. only for `trapz` not `rectangle` or `simps` 2. do not care `np.nan` 3. do not support `bounds` Most of them (except for 1) can be solved by combining several existing methods. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2653/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 231308952,MDExOlB1bGxSZXF1ZXN0MTIyNDE4MjA3,1426,scalar_level in MultiIndex,6815844,closed,0,,,10,2017-05-25T11:03:05Z,2019-01-14T21:20:28Z,2019-01-14T21:20:27Z,MEMBER,,0,pydata/xarray/pulls/1426," - [x] Closes #1408 - [x] Tests added / passed - [x] Passes ``git diff upstream/master | flake8 --diff`` - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API [Edit for more clarity] I restarted a new branch to fix #1408 (I closed the older one, #1412). ![my_proposal](https://cloud.githubusercontent.com/assets/6815844/26553065/f6562366-44c4-11e7-8c9c-3ef7facfe056.png) Because the changes I made are relatively large, I summarize this PR here. # Summary In this PR, I added two new kinds of levels to MultiIndex, `index-level` and `scalar-level`. `index-level` is an ordinary level in a MultiIndex (as in the current implementation), while `scalar-level` indicates a dropped level (newly added in this PR). # Changes in behavior 1. Indexing a scalar at a particular level changes that level to a `scalar-level` instead of dropping it (changed from #767). 2. When indexing a scalar from a MultiIndex, the selected value now becomes a `MultiIndex-scalar` rather than a scalar tuple. 3. Enabled indexing along an `index-level` if the MultiIndex has only a single `index-level`. Examples of the output are shown below. Any suggestions for these behaviors are welcome. 
```python In [1]: import numpy as np ...: import xarray as xr ...: ...: ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'}) ...: ds2 = xr.Dataset({'foo': (('x',), [4, 5, 6])}, {'x': [1, 2, 3], 'y': 'b'}) ...: # example data ...: ds = xr.concat([ds1, ds2], dim='y').stack(yx=['y', 'x']) ...: ds Out[1]: Dimensions: (yx: 6) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'a' 'a' 'b' 'b' 'b' # <--- this is index-level - x (yx) int64 1 2 3 1 2 3 # <--- this is also index-level Data variables: foo (yx) int64 1 2 3 4 5 6 In [2]: # 1. indexing a scalar converts `index-level` x to `scalar-level`. ...: ds.sel(x=1) Out[2]: Dimensions: (yx: 2) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'b' # <--- this is index-level - x int64 1 # <--- this is scalar-level Data variables: foo (yx) int64 1 4 In [3]: # 2. indexing a single element from MultiIndex makes a `MultiIndex-scalar` ...: ds.isel(yx=0) Out[3]: Dimensions: () Coordinates: yx MultiIndex # <--- this is MultiIndex-scalar - y Dimensions: (yx: 2) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'b' - x int64 1 Data variables: foo (yx) int64 1 4 ``` # Changes in the public APIs Some changes were necessary to the public APIs, though I tried to minimize them. + `level_names`, `get_level_values` methods were moved from `IndexVariable` to `Variable`. This is because `IndexVariable` cannnot handle 0-d array, which I want to support in 2. + `scalar_level_names` and `all_level_names` properties were added to `Variable` + `reset_levels` method was added to `Variable` class to control `scalar-level` and `index-level`. # Implementation summary The main changes in the implementation is the addition of our own wrapper of `pd.MultiIndex`, `PandasMultiIndexAdapter`. This does most of `MultiIndex`-related operations, such as indexing, concatenation, conversion between 'scalar-level` and `index-level`. 
# What we can do now The main merit of this proposal is that it enables us to handle `MultiIndex` more consistent way to the normal `Variable`. Now we can + recover the MultiIndex with dropped level. ```python In [5]: ds.sel(x=1).expand_dims('x') Out[5]: Dimensions: (yx: 2) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'b' - x (yx) int64 1 1 Data variables: foo (yx) int64 1 4 ``` + construct a MultiIndex by concatenation of MultiIndex-scalar. ```python In [8]: xr.concat([ds.isel(yx=i) for i in range(len(ds['yx']))], dim='yx') Out[8]: Dimensions: (yx: 6) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'a' 'a' 'b' 'b' 'b' - x (yx) int64 1 2 3 1 2 3 Data variables: foo (yx) int64 1 2 3 4 5 6 ``` # What we cannot do now With the current implementation, we can do ```python ds.sel(y='a').rolling(x=2) ``` but with this PR we cannot, because `x` is not yet an ordinary coordinate, but a MultiIndex with a single `index-level`. I think it is better if we can handle such a MultiIndex with a single `index-level` as very similar way to an ordinary coordinate. Similary, we can neither do `ds.sel(y='a').mean(dim='x')`. Also, `ds.sel(y='a').to_netcdf('file')` (#719) # What are to be decided + How to `repr` these new levels (Current formatting is shown in Out[2] and Out[3] above.) + Terminologies such as `index-level`, `scalar-level`, `MultiIndex-scalar` are clear enough? + How much operations should we support for a single `index-level` MultiIndex? Do we support `ds.sel(y='a').rolling(x=2)` and `ds.sel(y='a').mean(dim='x')`? # TODOs - [ ] Support indexing with DataAarray, `ds.sel(x=ds.x[0])` - [ ] Support `stack`, `unstack`, `set_index`, `reset_index` methods with `scalar-level` MultiIndex. 
- [ ] Add a full document - [ ] Clean up the code related to MultiIndex - [ ] Fix issues (#1428, #1430, #1431) related to MultiIndex ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1426/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 391477755,MDExOlB1bGxSZXF1ZXN0MjM4OTcyNzU5,2612,Added Coarsen,6815844,closed,0,,,16,2018-12-16T15:28:31Z,2019-01-06T09:13:56Z,2019-01-06T09:13:46Z,MEMBER,,0,pydata/xarray/pulls/2612," - [x] Closes #2525 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Started to implement `corsen`. The API is currently something like ```python actual = ds.coarsen(time=2, x=3, side='right', coordinate_func={'time': np.max}).max() ``` Currently, it is not working for a datetime coordinate, since `mean` does not work for this dtype. e.g. ```python da = xr.DataArray(np.linspace(0, 365, num=365), dims='time', coords={'time': pd.date_range('15/12/1999', periods=365)}) da['time'].mean() # -> TypeError: ufunc add cannot use operands with types dtype(' array([[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan]]) Coordinates: * x (x) int64 0 1 2 3 4 5 6 7 * y (y) int64 0 1 2 3 4 ``` #### Problem description After unstack, there are still values that are not selected by the previous `isel`. Probably the upstream bug? #### Expected Output ```python Out[1]: array([[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.]]) Coordinates: * x (x) int64 0 1 2 3 * y (y) int64 0 1 2 3 4 ``` #### Output of ``xr.show_versions()``
# Paste the output here xr.show_versions() here INSTALLED VERSIONS ------------------ commit: None python: 3.7.1.final.0 python-bits: 64 OS: Linux OS-release: 4.15.0-42-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.9 pandas: 0.23.4 numpy: 1.15.4 scipy: 1.1.0 netCDF4: 1.4.2 h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.0.2.1 PseudonetCDF: None rasterio: None iris: None bottleneck: None cyordereddict: None dask: 1.0.0 distributed: 1.25.0 matplotlib: 3.0.1 cartopy: None seaborn: None setuptools: 40.5.0 pip: 18.1 conda: None pytest: 4.0.1 IPython: 7.1.1 sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2619/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 392535505,MDExOlB1bGxSZXF1ZXN0MjM5Nzg0ODE1,2621,Fix multiindex selection,6815844,closed,0,,,7,2018-12-19T10:30:15Z,2018-12-24T15:37:27Z,2018-12-24T15:37:27Z,MEMBER,,0,pydata/xarray/pulls/2621," - [x] Closes #2619 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fix using ` MultiIndex.remove_unused_levels()`","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2621/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 368045263,MDExOlB1bGxSZXF1ZXN0MjIxMzExNzcw,2477,Inhouse LooseVersion,6815844,closed,0,,,2,2018-10-09T05:23:56Z,2018-10-10T13:47:31Z,2018-10-10T13:47:23Z,MEMBER,,0,pydata/xarray/pulls/2477," - [x] Closes #2468 - [x] Tests added - [N.A.] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) A fix for #2468.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2477/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 366653476,MDExOlB1bGxSZXF1ZXN0MjIwMjcyODMz,2462,pep8speaks,6815844,closed,0,,,14,2018-10-04T07:17:34Z,2018-10-07T22:40:15Z,2018-10-07T22:40:08Z,MEMBER,,0,pydata/xarray/pulls/2462," - [x] Closes #2428 I installed pep8speaks as suggested in #2428. 
It looks like they do not need a yml file, but it may be safer to add this (just renamed from `.stickler.yml`).","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2462/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 364565122,MDExOlB1bGxSZXF1ZXN0MjE4NzIxNDUy,2447,restore ddof support in std,6815844,closed,0,,,3,2018-09-27T16:51:44Z,2018-10-03T12:44:55Z,2018-09-28T13:44:29Z,MEMBER,,0,pydata/xarray/pulls/2447," - [x] Closes #2440 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes It looks like I wrongly removed the `ddof` option for `nanstd` in #2236. This PR restores it. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2447/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 364545910,MDExOlB1bGxSZXF1ZXN0MjE4NzA2NzQ1,2446,fix:2445,6815844,closed,0,,,0,2018-09-27T16:00:17Z,2018-09-28T18:24:42Z,2018-09-28T18:24:36Z,MEMBER,,0,pydata/xarray/pulls/2446," - [x] Closes #2445 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes It is a regression after #2360.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2446/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 364008818,MDU6SXNzdWUzNjQwMDg4MTg=,2440,ddof does not working with 0.10.9,6815844,closed,0,,,0,2018-09-26T12:42:18Z,2018-09-28T13:44:29Z,2018-09-28T13:44:29Z,MEMBER,,,,"Copied from issue #2236 [comments](https://github.com/pydata/xarray/pull/2236#issuecomment-424697772), by @st-bender Hi, just to let you know that .std() does not accept the ddof keyword anymore (it worked in 0.10.8). Should I open a new bug report? 
Edit: It fails with: ```python ~/Work/miniconda3/envs/stats/lib/python3.6/site-packages/xarray/core/duck_array_ops.py in f(values, axis, skipna, **kwargs) 234 235 try: --> 236 return func(values, axis=axis, **kwargs) 237 except AttributeError: 238 if isinstance(values, dask_array_type): TypeError: nanstd() got an unexpected keyword argument 'ddof' ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2440/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 349857086,MDU6SXNzdWUzNDk4NTcwODY=,2363,"Reduction APIs for groupby, groupby_bins, resample, rolling",6815844,closed,0,,,1,2018-08-13T00:30:10Z,2018-09-28T06:54:30Z,2018-09-28T06:54:30Z,MEMBER,,,,"From #2356 The APIs for `groupby`, `groupby_bins`, `resample`, and `rolling` are different, especially for multi-dimensional arrays. ```python import numpy as np import xarray as xr import pandas as pd time = pd.date_range('2000-01-01', freq='6H', periods=365 * 4) ds = xr.Dataset({'foo': (('time', 'x'), np.random.randn(365 * 4, 5)), 'time': time, 'x': [0, 1, 2, 1, 0]}) ds.rolling(time=2).mean() # result dims : ('time', 'x') ds.resample(time='M').mean() # result dims : ('time', 'x') ds['foo'].resample(time='M').mean() # result dims : ('time', ) maybe a bug #2362 ds.groupby('time.month').mean() # result dims : ('month', ) ds.groupby_bins('time', 3).mean() # result dims : ('time_bins', ) ``` + In `rolling` and `resample` (for Dataset), reduction without an argument is carried out along the grouped dimension + In `rolling`, reduction along other dimensions is not possible + In `groupby` and `groupby_bins`, reduction is applied to the *grouped* objects; without an argument, it reduces along all the dimensions of each grouped object. I think `rolling`'s API is the cleanest, but I am not sure it is worth changing these APIs. The possible options would be 1. 
Change APIs of `groupby` and `groupby_bins` so that they share similar API with `rolling`. 2. Document clearly how to perform `resample` or `groupby` with multidimensional arrays.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2363/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 350247452,MDExOlB1bGxSZXF1ZXN0MjA4MTQ0ODQx,2366,Future warning for default reduction dimension of groupby,6815844,closed,0,,,1,2018-08-14T01:16:34Z,2018-09-28T06:54:30Z,2018-09-28T06:54:30Z,MEMBER,,0,pydata/xarray/pulls/2366," - [ ] Closes #xxxx - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Started to fix #2363. Now warns a futurewarning in groupby if default reduction dimension is not specified. As a side effect, I added `xarray.ALL_DIMS`. With `dim=ALL_DIMS` always reduces along all the dimensions.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2366/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 333248242,MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3,2236,Refactor nanops,6815844,closed,0,,,19,2018-06-18T12:27:31Z,2018-09-26T12:42:55Z,2018-08-16T06:59:33Z,MEMBER,,0,pydata/xarray/pulls/2236," - [x] Closes #2230 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) In #2230, the addition of `min_count` keywords for our reduction methods was discussed, but our `duck_array_ops` module is becoming messy (mainly due to nan-aggregation methods for dask, bottleneck and numpy) and it looks a little hard to maintain them. 
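To illustrate the `min_count` semantics discussed in #2230 — a pure-Python sketch, not the actual `nanops` implementation:

```python
import math

def nansum_min_count(values, min_count=0):
    # Sum the non-NaN entries; if fewer than min_count valid values
    # remain, return NaN instead of a potentially misleading sum.
    valid = [v for v in values if not math.isnan(v)]
    if len(valid) < min_count:
        return float('nan')
    return float(sum(valid))
```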
I tried to refactor them by moving nan-aggregation methods to `nanops` module. I think I still need to take care of more edge cases, but I appreciate any comment for the current implementation. Note: In my implementation, **bottleneck is not used when `skipna=False`**. bottleneck would be advantageous when `skipna=True` as numpy needs to copy the entire array once, but I think numpy's method is still OK if `skipna=False`. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2236/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 356698348,MDExOlB1bGxSZXF1ZXN0MjEyODg5NzMy,2398,implement Gradient,6815844,closed,0,,,19,2018-09-04T08:11:52Z,2018-09-21T20:02:43Z,2018-09-21T20:02:43Z,MEMBER,,0,pydata/xarray/pulls/2398," - [x] Closes #1332 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Added `xr.gradient`, `xr.DataArray.gradient`, and `xr.Dataset.gradient` according to #1332.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2398/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 351502921,MDExOlB1bGxSZXF1ZXN0MjA5MDc4NDQ4,2372,[MAINT] Avoid using duck typing,6815844,closed,0,,,1,2018-08-17T08:26:31Z,2018-08-20T01:13:26Z,2018-08-20T01:13:16Z,MEMBER,,0,pydata/xarray/pulls/2372," - [x] Closes #2179 - [x] Tests passed - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2372/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, 
""rocket"": 0, ""eyes"": 0}",,,13221727,pull 351591072,MDExOlB1bGxSZXF1ZXN0MjA5MTQ1NDcy,2373,More support of non-string dimension names,6815844,closed,0,,,2,2018-08-17T13:18:18Z,2018-08-20T01:13:02Z,2018-08-20T01:12:37Z,MEMBER,,0,pydata/xarray/pulls/2373," - [x] Tests passed (for all non-documentation changes) Following to #2174 In some methods, consistency of the dictionary arguments and keyword arguments are checked twice in `Dataset` and `Variable`. Can we change the API of Variable so that it does not take kwargs-type argument for dimension names?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2373/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 348536270,MDExOlB1bGxSZXF1ZXN0MjA2ODY0NzU4,2353,Raises a ValueError for a confliction between dimension names and level names,6815844,closed,0,,,0,2018-08-08T00:52:29Z,2018-08-13T22:16:36Z,2018-08-13T22:16:31Z,MEMBER,,0,pydata/xarray/pulls/2353," - [x] Closes #2299 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API. Now it raises an Error to assign new dimension with the name conflicting with an existing level name. 
Therefore, the following is no longer allowed: ```python b = xr.Dataset(coords={'dim0': ['a', 'b'], 'dim1': [0, 1]}) b = b.stack(dim_stacked=['dim0', 'dim1']) # This should raise an error even though its length is consistent with `b['dim0']` b['c'] = (('dim0',), [10, 11, 12, 13]) # This is OK b['c'] = (('dim_stacked',), [10, 11, 12, 13]) ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2353/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 348539667,MDExOlB1bGxSZXF1ZXN0MjA2ODY3MjMw,2354,Mark some tests related to cdat-lite as xfail,6815844,closed,0,,,2,2018-08-08T01:13:25Z,2018-08-10T16:09:30Z,2018-08-10T16:09:30Z,MEMBER,,0,pydata/xarray/pulls/2354,"I just marked some to_cdms2 tests as xfail. See #2332 for the details. It is a temporary workaround and we may need to keep #2332 open until it is solved.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2354/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 348516727,MDU6SXNzdWUzNDg1MTY3Mjc=,2352,Failing test for python=3.6 dask-dev,6815844,closed,0,,,3,2018-08-07T23:02:02Z,2018-08-08T01:45:45Z,2018-08-08T01:45:45Z,MEMBER,,,,"Recently, dask renamed `dask.ghost` to `dask.overlap`. We use it around `rolling`. For the patch, see #2349. BTW, there is another failing test in the python=2.7 dev build, claiming that > iris and pynio which gives various errors of arrays not being equal for test_to_and_from_cdms2_sgrid and test_to_and_from_cdms2_ugrid Is anyone working on this? 
If not, I think we can temporarily skip these tests for python 2.7.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2352/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 348108577,MDExOlB1bGxSZXF1ZXN0MjA2NTM3NDc0,2349,dask.ghost -> dask.overlap,6815844,closed,0,,,0,2018-08-06T22:54:46Z,2018-08-08T01:14:04Z,2018-08-08T01:14:02Z,MEMBER,,0,pydata/xarray/pulls/2349,"Dask renamed `dask.ghost` -> `dask.overlap` in dask/dask#3830. This PR follows up on that change.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2349/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 347672994,MDExOlB1bGxSZXF1ZXN0MjA2MjI0Mjcz,2342,apply_ufunc now raises a ValueError when the size of input_core_dims is inconsistent with number of argument,6815844,closed,0,,,0,2018-08-05T06:20:03Z,2018-08-06T22:38:57Z,2018-08-06T22:38:53Z,MEMBER,,0,pydata/xarray/pulls/2342," - [x] Closes #2341 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Now raises a ValueError when the size of input_core_dims is inconsistent with the number of arguments. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2342/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 347662610,MDU6SXNzdWUzNDc2NjI2MTA=,2341,apply_ufunc silently neglects arguments if `len(input_core_dims) < args`,6815844,closed,0,,,1,2018-08-05T02:16:00Z,2018-08-06T22:38:53Z,2018-08-06T22:38:53Z,MEMBER,,,,"From [SO](https://stackoverflow.com/questions/51680659/disparity-between-result-of-numpy-gradient-applied-directly-and-applied-using-xa/51690873#51690873) In the following script, the second argument is silently neglected: ```python da = xr.DataArray(np.random.randn(4, 3), coords={'x': [5, 7, 9, 11]}, dims=('x', 'y')) xr.apply_ufunc(np.gradient, da, da.coords['x'].values, kwargs={'axis': -1}, input_core_dims=[['x']], output_core_dims=[['x']], output_dtypes=[da.dtype]) ``` This is because we need the same number of entries in `input_core_dims` as arguments, https://github.com/pydata/xarray/blob/56381ef444c5e699443e8b4e08611060ad5c9507/xarray/core/computation.py#L535-L538 The correct script might change `input_core_dims=[['x']]` -> `input_core_dims=[['x'], []]`. I think we can raise a friendlier error if the size of `input_core_dims` is wrong. EDIT: Or we can automatically insert an empty tuple or `None` for non-xarray objects? `input_core_dims` for a non-xarray object sounds a little strange. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2341/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 347677525,MDExOlB1bGxSZXF1ZXN0MjA2MjI2ODU0,2343,local flake8,6815844,closed,0,,,0,2018-08-05T07:47:38Z,2018-08-05T23:47:00Z,2018-08-05T23:47:00Z,MEMBER,,0,pydata/xarray/pulls/2343,Trivial changes to pass local flake8 tests.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2343/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 345434195,MDExOlB1bGxSZXF1ZXN0MjA0NTg1MDU5,2326,fix doc build error after #2312,6815844,closed,0,,,0,2018-07-28T09:15:20Z,2018-07-28T10:05:53Z,2018-07-28T10:05:50Z,MEMBER,,0,pydata/xarray/pulls/2326,"I merged #2312 without making sure the doc build test passed, but there was a typo. This PR fixes it. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2326/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 333480301,MDU6SXNzdWUzMzM0ODAzMDE=,2238,Failing test with dask_distributed,6815844,closed,0,2443309,,5,2018-06-19T00:34:45Z,2018-07-14T16:19:53Z,2018-07-14T16:19:53Z,MEMBER,,,,"Some tests related to dask/distributed are failing on Travis. They are raising a `TypeError: can't pickle thread.lock objects`. Could anyone help look into this? 
See the travis's log for the current master: https://travis-ci.org/pydata/xarray/builds/392530577","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2238/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 289556132,MDExOlB1bGxSZXF1ZXN0MTYzNjU3NDI0,1837,Rolling window with `as_strided`,6815844,closed,0,,,14,2018-01-18T09:18:19Z,2018-06-22T22:27:11Z,2018-03-01T03:39:19Z,MEMBER,,0,pydata/xarray/pulls/1837," - [x] Closes #1831, #1142, #819 - [x] Tests added - [x] Tests passed - [x] Passes ``git diff upstream/master **/*py | flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I started to work for refactoring rollings. As suggested in [#1831 comment](https://github.com/pydata/xarray/issues/1831#issuecomment-357828636), I implemented `rolling_window` methods based on `as_strided`. I got more than 1,000 times speed up! yey! ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray(np.random.randn(10000, 3), dims=['x', 'y']) ``` with the master ```python %timeit da.rolling(x=5).reduce(np.mean) 1 loop, best of 3: 9.68 s per loop ``` with the current implementation ```python %timeit da.rolling(x=5).reduce(np.mean) 100 loops, best of 3: 5.29 ms per loop ``` and with the bottleneck ```python %timeit da.rolling(x=5).mean() 100 loops, best of 3: 2.62 ms per loop ``` My current concerns are + Can we expose the new `rolling_window` method of `DataArray` and `Dataset` to the public? I think this method itself is useful for many usecases, such as short-term-FFT and convolution. This also gives more flexible rolling operation, such as windowed moving average, strided rolling, and ND-rolling. + Is there any dask's equivalence to numpy's `as_strided`? Currently, I just use a slice->concatenate path, but I don't think it is very efficient. 
(Is it already efficient, as dask utilizes out-of-core computation?) Any thoughts are welcome. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1837/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 333510121,MDU6SXNzdWUzMzM1MTAxMjE=,2239,Error in docs/plottings,6815844,closed,0,,,1,2018-06-19T03:50:51Z,2018-06-20T16:26:37Z,2018-06-20T16:26:37Z,MEMBER,,,,"There is an error on rtd. http://xarray.pydata.org/en/stable/plotting.html#id4","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2239/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 330859619,MDExOlB1bGxSZXF1ZXN0MTkzNzYyMjMx,2222,implement interp_like,6815844,closed,0,,,4,2018-06-09T06:46:48Z,2018-06-20T01:39:40Z,2018-06-20T01:39:24Z,MEMBER,,0,pydata/xarray/pulls/2222," - [x] Closes #2218 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API. This adds `interp_like`, that behaves like `reindex_like` but using interpolation. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2222/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 330469406,MDU6SXNzdWUzMzA0Njk0MDY=,2218,interp_like,6815844,closed,0,,,0,2018-06-07T23:24:48Z,2018-06-20T01:39:24Z,2018-06-20T01:39:24Z,MEMBER,,,,"Just as a reminder of the remaining extension of #2104 . 
We might add `interp_like`, which behaves like `reindex_like` but uses `interp()`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 320275317,MDExOlB1bGxSZXF1ZXN0MTg1OTgzOTc3,2104,implement interp(),6815844,closed,0,,,51,2018-05-04T13:28:38Z,2018-06-11T13:01:21Z,2018-06-08T00:33:52Z,MEMBER,,0,pydata/xarray/pulls/2104," - [x] Closes #2079 (remove if there is no corresponding issue, which should only be the case for minor changes) - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I started working to add `interpolate_at` to xarray, as discussed in issue #2079 (but without caching). I think I need to take care of more edge cases, but before finishing up this PR, I want to discuss what the best API is. I would like this method to work similarly to `isel`, which may support *vectorized* interpolation. 
Currently, this works as follows ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]}) ...: In [2]: # simple linear interpolation ...: da.interpolate_at(x=[0.5, 1.5]) ...: Out[2]: array([0.05, 0.15]) Coordinates: * x (x) float64 0.5 1.5 In [3]: # with cubic spline interpolation ...: da.interpolate_at(x=[0.5, 1.5], method='cubic') ...: Out[3]: array([0.0375, 0.1625]) Coordinates: * x (x) float64 0.5 1.5 In [4]: # interpolation at one single position ...: da.interpolate_at(x=0.5) ...: Out[4]: array(0.05) Coordinates: x float64 0.5 In [5]: # interpolation with broadcasting ...: da.interpolate_at(x=xr.DataArray([[0.5, 1.0], [1.5, 2.0]], dims=['y', 'z'])) ...: Out[5]: array([[0.05, 0.1 ], [0.15, 0.2 ]]) Coordinates: x (y, z) float64 0.5 1.0 1.5 2.0 Dimensions without coordinates: y, z In [6]: da = xr.DataArray([[0, 0.1, 0.2], [1.0, 1.1, 1.2]], ...: dims=['x', 'y'], ...: coords={'x': [0, 1], 'y': [0, 10, 20]}) ...: In [7]: # multidimensional interpolation ...: da.interpolate_at(x=[0.5, 1.5], y=[5, 15]) ...: Out[7]: array([[0.55, 0.65], [ nan, nan]]) Coordinates: * x (x) float64 0.5 1.5 * y (y) int64 5 15 In [8]: # multidimensional interpolation with broadcasting ...: da.interpolate_at(x=xr.DataArray([0.5, 1.5], dims='z'), ...: y=xr.DataArray([5, 15], dims='z')) ...: Out[8]: array([0.55, nan]) Coordinates: x (z) float64 0.5 1.5 y (z) int64 5 15 Dimensions without coordinates: z ``` #### Design questions 1. How many interpolation methods should we support? Currently, I have only implemented [scipy.interpolate.interp1d](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html) for one-dimensional interpolation and [scipy.interpolate.RegularGridInterpolator](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html) for multidimensional interpolation. I think 90% of use cases are linear, but there are more methods in scipy. 2. 
How do we handle `nan`? Currently this raises a ValueError if nan is present. It may be possible to carry out the interpolation while skipping nan, but in this case the performance would drop significantly because it cannot be vectorized. 3. Do we support interpolation along a *dimension without a coordinate*? In that case, do we attach a new coordinate to the object? 4. What should we do if the new coordinate has a dimensional coordinate for the dimension to be interpolated? e.g. in the following case, ```python da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]}) rslt = da.interpolate_at(x=xr.DataArray([0.5, 1.5], dims=['x'], coords={'x': [1, 3]})) ``` what would be `rslt['x']`? I appreciate any comments.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2104/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 330487989,MDExOlB1bGxSZXF1ZXN0MTkzNDg2NzYz,2220,Reduce memory usage in doc.interpolation.rst,6815844,closed,0,,,0,2018-06-08T01:23:13Z,2018-06-08T01:45:11Z,2018-06-08T01:31:19Z,MEMBER,,0,pydata/xarray/pulls/2220,"I noticed that an example I added to the docs in #2104 consumes more than 1 GB of memory, which results in the readthedocs build failing. 
This PR changes it to a much lighter example.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2220/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 300268334,MDExOlB1bGxSZXF1ZXN0MTcxMzk2NjUw,1942,Fix precision drop when indexing datetime64 arrays.,6815844,closed,0,,,2,2018-02-26T14:53:57Z,2018-06-08T01:21:07Z,2018-02-27T01:13:45Z,MEMBER,,0,pydata/xarray/pulls/1942," - [x] Closes #1932 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This precision drop occurred when converting `pd.Timestamp` to `np.array` ```python In [7]: ts = pd.Timestamp(np.datetime64('2018-02-12 06:59:59.999986560')) In [11]: np.asarray(ts, 'datetime64[ns]') Out[11]: array('2018-02-12T06:59:59.999986000', dtype='datetime64[ns]') ``` We need to call `to_datetime64` explicitly. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1942/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 295838143,MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1,1899,Vectorized lazy indexing,6815844,closed,0,,,37,2018-02-09T11:22:02Z,2018-06-08T01:21:06Z,2018-03-06T22:00:57Z,MEMBER,,0,pydata/xarray/pulls/1899," - [x] Closes #1897 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I tried to support lazy vectorised indexing inspired by #1897. More tests would be necessary, but I want to decide whether it is worth continuing. 
My current implementation is: + For outer/basic indexers, we combine successive indexers (as we are doing now). + For vectorised indexers, we just store them as-is and index sequentially at evaluation time. The implementation was simpler than I thought, but it has a clear limitation. It requires loading the array before the vectorised indexing (i.e., at evaluation time). If we do vectorised indexing on a large array, the performance drops significantly, and this is not noticeable until evaluation time. I appreciate any suggestions.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1899/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 328006764,MDExOlB1bGxSZXF1ZXN0MTkxNjUzMjk3,2205,Support dot with older dask,6815844,closed,0,,,0,2018-05-31T06:13:48Z,2018-06-01T01:01:37Z,2018-06-01T01:01:34Z,MEMBER,,0,pydata/xarray/pulls/2205," - [x] Related to #2203 - [x] Tests added - [x] Tests passed - [x] Fully documented Related to #2203, I think it is better if `xr.DataArray.dot()` works even with older dask, at least in the simpler cases (as this is a very basic operation). The cost is a slight complication of the code. Any comments are welcome. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2205/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 326352018,MDU6SXNzdWUzMjYzNTIwMTg=,2184,Alignment is not working in Dataset.__setitem__ and Dataset.update,6815844,closed,0,,,1,2018-05-25T01:38:25Z,2018-05-26T09:32:50Z,2018-05-26T09:32:50Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible from #2180 , [comment](https://github.com/pydata/xarray/issues/2180#issuecomment-391914654) ```python a = Dataset({ 'x': [10, 20], 'd1': ('x', [100, 200]), 'd2': ('x', [300, 400]) }) b = Dataset({ 'x': [15], 'd1': ('x', [500]), }) a.update(b) ``` ```python Dataset({ 'x': [10, 20, 15], 'd1': ('x', [nan, nan, 500]), 'd2': ('x', [300, 400, nan]) }) ``` In the above, with anything but an outer join you're destroying d2 - which doesn't even exist in the rhs dataset! A sane, desirable outcome is shown under Expected Output below. #### Problem description Alignment should work #### Expected Output ```python Dataset({ 'x': [10, 20, 15], 'd1': ('x', [100, 200, 500]), 'd2': ('x', [300, 400, nan]) }) ``` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2184/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 326420749,MDExOlB1bGxSZXF1ZXN0MTkwNTA5OTk5,2185,weighted rolling mean -> weighted rolling sum,6815844,closed,0,,,0,2018-05-25T08:03:59Z,2018-05-25T10:38:52Z,2018-05-25T10:38:48Z,MEMBER,,0,pydata/xarray/pulls/2185,"The example of a weighted rolling mean in the docs is actually a weighted rolling *sum*. 
It is a little bit misleading [SO](https://stackoverflow.com/questions/50520835/xarray-simple-weighted-rolling-mean-example-using-construct/50524093#50524093), so I propose to change `weighted rolling mean` -> `weighted rolling sum` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2185/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 322572723,MDExOlB1bGxSZXF1ZXN0MTg3NjU3MTg4,2124,Raise an Error if a coordinate with wrong size is assigned to a dataarray,6815844,closed,0,,,1,2018-05-13T07:50:15Z,2018-05-16T02:10:48Z,2018-05-15T16:39:22Z,MEMBER,,0,pydata/xarray/pulls/2124," - [x] Closes #2112 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API. Now uses `dataset_merge_method` when a new coordinate is assigned to a xr.DataArray ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2124/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 321796423,MDU6SXNzdWUzMjE3OTY0MjM=,2112,Sanity check when assigning a coordinate to DataArray,6815844,closed,0,,,0,2018-05-10T03:22:18Z,2018-05-15T16:39:22Z,2018-05-15T16:39:22Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible I think we can raise an Error if the newly assigned coordinate to a DataArray has an invalid shape. ```python In [1]: import xarray as xr ...: ...: da = xr.DataArray([0, 1, 2], dims='x') ...: da['x'] = [0, 1, 2, 3] # no error ...: da ...: Out[1]: array([0, 1, 2]) Coordinates: * x (x) int64 0 1 2 3 ``` #### Problem description It is more user-friendly if we make some sanity checks when a new coordinate is assigned to a xr.DataArray. 
Dataset raises an appropriate error, ```python In [2]: ds = xr.Dataset({'da': ('x', [0, 1, 2])}) ...: ds['x'] = [0, 1, 2, 3] # -> raises ValueError ``` #### Expected Output ValueError ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2112/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 322572858,MDExOlB1bGxSZXF1ZXN0MTg3NjU3MjY0,2125,Reduce pad size in rolling,6815844,closed,0,,,2,2018-05-13T07:52:50Z,2018-05-14T22:43:24Z,2018-05-13T22:37:48Z,MEMBER,,0,pydata/xarray/pulls/2125," - [ ] Closes #N.A. - [x] Tests added (for all bug fixes or enhancements) - [ ] Tests N.A. - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I noticed `rolling` with dask array and with bottleneck can be slightly improved by reducing the padding depth in `da.ghost.ghost(a, depth=depth, boundary=boundary)`. @jhamman , can you kindly review this?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2125/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 319420201,MDExOlB1bGxSZXF1ZXN0MTg1MzQzMTgw,2100,Fix a bug introduced in #2087,6815844,closed,0,,,1,2018-05-02T06:07:01Z,2018-05-14T00:01:15Z,2018-05-02T21:59:34Z,MEMBER,,0,pydata/xarray/pulls/2100," - [x] Closes #2099 - [x] Tests added - [x] Tests passed A quick fix for #2099","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2100/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 322475569,MDExOlB1bGxSZXF1ZXN0MTg3NjAwMzQy,2122,Fixes centerized rolling with bottleneck,6815844,closed,0,,,2,2018-05-12T02:28:21Z,2018-05-13T00:27:56Z,2018-05-12T06:15:55Z,MEMBER,,0,pydata/xarray/pulls/2122," - [x] Closes #2113 - [x] Tests added (for all 
bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Two bugs were found and fixed: 1. rolling a dask array with center=True and bottleneck 2. rolling an integer dask array with bottleneck","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2122/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 322314146,MDExOlB1bGxSZXF1ZXN0MTg3NDc3Mzgz,2119,Support keep_attrs for apply_ufunc for xr.Variable,6815844,closed,0,,,0,2018-05-11T14:18:51Z,2018-05-11T22:54:48Z,2018-05-11T22:54:44Z,MEMBER,,0,pydata/xarray/pulls/2119," - [x] Closes #2114 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API Fixes #2114.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2119/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 321928898,MDU6SXNzdWUzMjE5Mjg4OTg=,2114,keep_attrs=True does not work in `apply_ufunc` with xr.Variable,6815844,closed,0,,,2,2018-05-10T13:21:07Z,2018-05-11T22:54:44Z,2018-05-11T22:54:44Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible `keep_attrs=True` works nicely for xr.DataArray, but is ignored for `xr.Variable` ```python In [2]: import numpy as np In [3]: import xarray as xr In [4]: da = xr.DataArray([0, 1, 2], dims='x', attrs={'foo': 'var'}) In [5]: func = lambda x: x*2 In [6]: xr.apply_ufunc(func, da, keep_attrs=True, input_core_dims=[['x']], outpu ...: t_core_dims=[['z']]) Out[6]: # attrs are tracked for xr.DataArray array([0, 2, 4]) Dimensions without coordinates: z Attributes: foo: var In [7]: xr.apply_ufunc(func, da.variable, keep_attrs=True, input_core_dims=[['x' ...: ]], output_core_dims=[['z']]) Out[7]: # 
attrs are dropped array([0, 2, 4]) ``` #### Problem description `keep_attrs=True` should also work with `xr.Variable` #### Expected Output ```python # attrs are kept array([0, 2, 4]) Attributes: foo: var ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2114/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 319419699,MDU6SXNzdWUzMTk0MTk2OTk=,2099,Dataset.update wrongly handles coordinates,6815844,closed,0,,,0,2018-05-02T06:04:02Z,2018-05-02T21:59:34Z,2018-05-02T21:59:34Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible I noticed a bug introduced by #2087 (my PR) ```python import xarray as xr ds = xr.Dataset({'var': ('x', [1, 2, 3])}, coords={'x': [0, 1, 2], 'z1': ('x', [1, 2, 3]), 'z2': ('x', [1, 2, 3])}) ds['var'] = ds['var'] * 2 ``` It raises a ValueError. #### Problem description https://github.com/pydata/xarray/blob/0cc64a08c672e6361d05acea3fea9f34308b62ed/xarray/core/merge.py#L564 This should be ```python other[k] = obj.drop(coord_names) ``` not ```python other[k] = obj.drop(*coord_names) ``` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2099/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 318237397,MDExOlB1bGxSZXF1ZXN0MTg0NDk1MDI4,2087,Drop conflicting coordinates on assignment.,6815844,closed,0,,,1,2018-04-27T00:12:43Z,2018-05-02T05:58:41Z,2018-05-02T02:31:02Z,MEMBER,,0,pydata/xarray/pulls/2087," - [x] Closes #2068 - [x] Tests added - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) After this, when assigning a dataarray to a dataset, 
non-dimensional and conflicting coordinates of the dataarray are dropped. Example: ``` In [2]: ds = xr.Dataset({'da': ('x', [0, 1, 2])}, ...: coords={'y': (('x',), [0.1, 0.2, 0.3])}) ...: ds ...: Out[2]: Dimensions: (x: 3) Coordinates: y (x) float64 0.1 0.2 0.3 Dimensions without coordinates: x Data variables: da (x) int64 0 1 2 In [3]: other = ds['da'] ...: other['y'] = 'x', [0, 1, 2] # conflicting non-dimensional coordinate ...: ds['da'] = other ...: ds ...: Out[3]: Dimensions: (x: 3) Coordinates: y (x) float64 0.1 0.2 0.3 # 'y' is not overwritten Dimensions without coordinates: x Data variables: da (x) int64 0 1 2 ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2087/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 297794911,MDExOlB1bGxSZXF1ZXN0MTY5NjMxNTU3,1919,Remove flake8 from travis,6815844,closed,0,,,10,2018-02-16T14:03:46Z,2018-05-01T07:24:04Z,2018-05-01T07:24:00Z,MEMBER,,0,pydata/xarray/pulls/1919," - [x] Closes #1912 Removing flake8 from Travis gives a clearer separation between style issues and test failures.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1919/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 316660970,MDU6SXNzdWUzMTY2NjA5NzA=,2075,apply_ufunc can generate an invalid object.,6815844,closed,0,,,2,2018-04-23T04:52:25Z,2018-04-23T05:08:02Z,2018-04-23T05:08:02Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible `apply_ufunc` can generate an invalid object if the size of the array is changed by the ufunc, ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray([1, 2, 3], dims=['x'], coords={'x': [0, 0.5, 1]}) ...: ...: def tile(x): ...: return np.tile(x, [2]) ...: ...: tiled = xr.apply_ufunc(tile, da, input_core_dims=[['x']], ...: 
output_core_dims=[['x']]) ...: tiled ...: Out[1]: array([1, 2, 3, 1, 2, 3]) Coordinates: * x (x) float64 0.0 0.5 1.0 ``` In the above example, `tiled.shape = (6, )` but `tiled['x'].shape = (3,)`. I think we need a sanity check that drops the coordinate if necessary. #### Problem description None of our functions should generate invalid xarray objects. #### Expected Output ```python Out[1]: array([1, 2, 3, 1, 2, 3]) Dimensions without coordinates: x ``` or raise an Error. #### Output of ``xr.show_versions()``
INSTALLED VERSIONS ------------------ commit: None python: 3.6.4.final.0 python-bits: 64 OS: Linux OS-release: 4.4.0-119-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.3+dev2.ga0bdbfb pandas: 0.22.0 numpy: 1.14.0 scipy: 1.0.0 netCDF4: 1.3.1 h5netcdf: None h5py: None Nio: None zarr: 2.1.4 bottleneck: 1.2.1 cyordereddict: None dask: 0.17.1 distributed: 1.21.1 matplotlib: 2.1.2 cartopy: 0.16.0 seaborn: 0.8.1 setuptools: 38.4.0 pip: 9.0.1 conda: None pytest: 3.3.2 IPython: 6.2.1 sphinx: 1.7.1
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2075/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 314653502,MDU6SXNzdWUzMTQ2NTM1MDI=,2062,__contains__ does not work with DataArray,6815844,closed,0,,,2,2018-04-16T13:34:30Z,2018-04-16T15:51:30Z,2018-04-16T15:51:29Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible ```python >>> da = xr.DataArray([0, 1, 2], dims='x') >>> 0 in da ( warning omitted ) False >>> 0 in da.values True ``` #### Problem description `__contains__` should work as np.ndarray does. #### Expected Output ```python >>> 0 in da True ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2062/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 305751269,MDExOlB1bGxSZXF1ZXN0MTc1NDAzMzE4,1994,Make constructing slices lazily.,6815844,closed,0,,,1,2018-03-15T23:15:26Z,2018-03-18T08:56:31Z,2018-03-18T08:56:27Z,MEMBER,,0,pydata/xarray/pulls/1994," - [x] Closes #1993 - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes. Quick fix of #1993. 
With this fix, the script shown in #1993 runs: Bottleneck: 0.08317923545837402 s; Pandas: 1.3338768482208252 s; Xarray: 1.1349339485168457 s","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1994/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 300486064,MDU6SXNzdWUzMDA0ODYwNjQ=,1944,building doc is failing for the release 0.10.1,6815844,closed,0,,,9,2018-02-27T04:01:28Z,2018-03-12T20:36:58Z,2018-03-12T20:35:31Z,MEMBER,,,,"I found that the following page fails: http://xarray.pydata.org/en/stable/examples/weather-data.html","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1944/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 302718231,MDExOlB1bGxSZXF1ZXN0MTczMTcwNjc1,1968,einsum for xarray,6815844,closed,0,,,5,2018-03-06T14:18:22Z,2018-03-12T06:42:12Z,2018-03-12T06:42:08Z,MEMBER,,0,pydata/xarray/pulls/1968," - [x] Closes #1951 - [x] Tests added - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) Currently, lazy einsum for dask is not yet working. @shoyer I think `apply_ufunc` supports lazy computation, but I have not yet figured out how to do this. 
Can you give me some help?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1968/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 301657312,MDU6SXNzdWUzMDE2NTczMTI=,1951,einsum for xarray,6815844,closed,0,,,1,2018-03-02T05:25:23Z,2018-03-12T06:42:08Z,2018-03-12T06:42:08Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible I sometimes want a more flexible dot product of two data arrays, where we sum up along only part of the common dimensions. ```python da_vals = np.arange(6 * 5 * 4).reshape((6, 5, 4)) da = DataArray(da_vals, dims=['x', 'y', 'z']) dm_vals = np.arange(6 * 4).reshape((6, 4)) dm = DataArray(dm_vals, dims=['x', 'z']) # I want something like this da.dot(dm, 'z') # -> dimensions of the output array: ['x', 'y'] ``` It's intermediate between `np.matmul` and `np.tensordot`. Is this feature sufficiently universal? EDIT: I just noticed dask does not have `einsum` yet (dask/dask#732). 
We may need to wait, or decide to support only numpy arrays.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1951/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 304042598,MDU6SXNzdWUzMDQwNDI1OTg=,1979,Tests are failing due to zarr 2.2.0,6815844,closed,0,,,2,2018-03-10T05:02:39Z,2018-03-12T05:37:02Z,2018-03-12T05:37:02Z,MEMBER,,,,"#### Problem description Tests are failing due to the release of zarr 2.2.0 Travis log: https://travis-ci.org/pydata/xarray/jobs/351566529 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1979/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 302001772,MDU6SXNzdWUzMDIwMDE3NzI=,1956,numpy 1.11 support for apply_ufunc,6815844,closed,0,,,1,2018-03-03T14:23:40Z,2018-03-07T16:41:54Z,2018-03-07T16:41:54Z,MEMBER,,,,"I noticed the failure on rtd at http://xarray.pydata.org/en/stable/computation.html#missing-values is because it still uses numpy=1.11, which does not support the `signature` argument for `np.vectorize`. This can be easily fixed (just bumping up numpy's version on rtd), but as our minimum requirement is numpy==1.11, we may need to take care of this in `xr.apply_ufunc`.
xref #1956 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1957/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 301613959,MDExOlB1bGxSZXF1ZXN0MTcyMzk0OTEz,1950,Fix doc for missing values.,6815844,closed,0,,,4,2018-03-02T00:47:23Z,2018-03-03T06:58:33Z,2018-03-02T20:17:29Z,MEMBER,,0,pydata/xarray/pulls/1950, - [x] Closes #1944 ,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1950/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 288567090,MDU6SXNzdWUyODg1NjcwOTA=,1831,Slow performance of rolling.reduce,6815844,closed,0,,,4,2018-01-15T11:44:47Z,2018-03-01T03:39:19Z,2018-03-01T03:39:19Z,MEMBER,,,,"#### Code Sample, a copy-pastable example if possible ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray(np.random.randn(1000, 100), dims=['x', 'y'], ...: coords={'x': np.arange(1000)}) ...: In [2]: %%timeit ...: da.rolling(x=10).reduce(np.sum) ...: 2.04 s ± 8.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` #### Problem description In `DataArray.rolling`, we index with the `.isel` method for every window, constructing a huge number of `xr.DataArray` instances. This is very inefficient. Of course, we can use bottleneck methods if available, but they provide only a limited set of functions. (This also limits possible extensions of rolling, such as ND-rolling (#819), window type (#1142), strides (#819).) I am wondering if we could skip any sanity checks in our `DataArray.isel -> Variable.isel` path in indexing. 
Or can we directly construct a single large `DataArray` instead of a lot of small `DataArray`s?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1831/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 300484822,MDExOlB1bGxSZXF1ZXN0MTcxNTU3Mjc5,1943,Fix rtd link on readme,6815844,closed,0,,,1,2018-02-27T03:52:56Z,2018-02-27T04:31:59Z,2018-02-27T04:27:24Z,MEMBER,,0,pydata/xarray/pulls/1943,Typo in url.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1943/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 299606951,MDU6SXNzdWUyOTk2MDY5NTE=,1937,`isnull` loads dask array,6815844,closed,0,,,0,2018-02-23T05:54:58Z,2018-02-25T20:52:16Z,2018-02-25T20:52:16Z,MEMBER,,,,"From gitter cc. @davidh-ssec ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray(np.arange(100), dims='x').chunk({'x': 10}) ...: da.isnull() ...: Out[1]: array([False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], dtype=bool) Dimensions without coordinates: x ``` #### Problem description `DataArray.isnull()` automatically computes dask arrays. 
#### Expected Output ```python Out[2]: dask.array Dimensions without coordinates: x ``` #### Cause https://github.com/pydata/xarray/blob/697cc74b9af5fbfedadd54fd07019ce7684553ec/xarray/core/ops.py#L322-L324 Here, `getattr(pd, name)` should be `getattr(duck_array_ops, name)`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1937/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 298054181,MDExOlB1bGxSZXF1ZXN0MTY5ODEyMTA1,1922,Support indexing with 0d-np.ndarray,6815844,closed,0,,,0,2018-02-18T02:46:27Z,2018-02-18T07:26:33Z,2018-02-18T07:26:30Z,MEMBER,,0,pydata/xarray/pulls/1922," - [x] Closes #1921 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) Now Variable accepts 0d-np.ndarray indexer.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1922/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 298012981,MDU6SXNzdWUyOTgwMTI5ODE=,1921,BUG: Indexing by 0-dimensional array,6815844,closed,0,,,0,2018-02-17T15:36:31Z,2018-02-18T07:26:30Z,2018-02-18T07:26:30Z,MEMBER,,,,"```python In [1]: import xarray as xr ...: import numpy as np ...: ...: a = np.arange(10) ...: a[np.array(0)] ...: Out[1]: 0 In [2]: da = xr.DataArray(a, dims='x') ...: da[np.array(0)] ...: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 1 da = xr.DataArray(a, dims='x') ----> 2 da[np.array(0)] /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataarray.pyc in 
__getitem__(self, key) 478 else: 479 # xarray-style array indexing --> 480 return self.isel(**self._item_key_to_dict(key)) 481 482 def __setitem__(self, key, value): /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataarray.pyc in isel(self, drop, **indexers) 759 DataArray.sel 760 """""" --> 761 ds = self._to_temp_dataset().isel(drop=drop, **indexers) 762 return self._from_temp_dataset(ds) 763 /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataset.py in isel(self, drop, **indexers) 1390 for name, var in iteritems(self._variables): 1391 var_indexers = {k: v for k, v in indexers_list if k in var.dims} -> 1392 new_var = var.isel(**var_indexers) 1393 if not (drop and name in var_indexers): 1394 variables[name] = new_var /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in isel(self, **indexers) 851 if dim in indexers: 852 key[i] = indexers[dim] --> 853 return self[tuple(key)] 854 855 def squeeze(self, dim=None): /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in __getitem__(self, key) 619 array `x.values` directly. 620 """""" --> 621 dims, indexer, new_order = self._broadcast_indexes(key) 622 data = as_indexable(self._data)[indexer] 623 if new_order: /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in _broadcast_indexes(self, key) 477 # key can be mapped as an OuterIndexer. 
478 if all(not isinstance(k, Variable) for k in key): --> 479 return self._broadcast_indexes_outer(key) 480 481 # If all key is 1-dimensional and there are no duplicate labels, /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in _broadcast_indexes_outer(self, key) 542 new_key.append(k) 543 --> 544 return dims, OuterIndexer(tuple(new_key)), None 545 546 def _nonzero(self): /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/indexing.py in __init__(self, key) 368 raise TypeError('invalid indexer array for {}, must have ' 369 'exactly 1 dimension: ' --> 370 .format(type(self).__name__, k)) 371 k = np.asarray(k, dtype=np.int64) 372 else: TypeError: invalid indexer array for OuterIndexer, must have exactly 1 dimension: ``` Indexing by a 0d-array should be identical to indexing by a scalar.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1921/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 294052591,MDExOlB1bGxSZXF1ZXN0MTY2OTI1MzU5,1883,Support nan-ops for object-typed arrays,6815844,closed,0,,,0,2018-02-02T23:16:39Z,2018-02-15T22:03:06Z,2018-02-15T22:03:01Z,MEMBER,,0,pydata/xarray/pulls/1883," - [x] Closes #1866 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I am working to add aggregation ops for object-typed arrays, which may make #1837 cleaner. I added some tests, but they may not be sufficient. Are there any other cases that should be considered? e.g. `[True, 3.0, np.nan]` etc... 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1883/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 292633789,MDU6SXNzdWUyOTI2MzM3ODk=,1866,aggregation ops for object-dtype are missing,6815844,closed,0,,,0,2018-01-30T02:40:27Z,2018-02-15T22:03:01Z,2018-02-15T22:03:01Z,MEMBER,,,,"This issue arises in [#1837 comment](https://github.com/pydata/xarray/pull/1837#discussion_r163999738), where we need to make a summation of object-dtype array, such as ```python xr.DataArray(np.array([True, True, False, np.nan], dtype=object), dims='x').sum('x', skipna=True) ``` Currently, it raises NotImplementedError. pandas support this by having their own nan-aggregation methods.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1866/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue