issues
131 rows where user = 6815844 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
274797981 | MDU6SXNzdWUyNzQ3OTc5ODE= | 1725 | Switch our lazy array classes to use Dask instead? | fujiisoup 6815844 | open | 0 | 9 | 2017-11-17T09:12:34Z | 2023-09-15T15:51:41Z | MEMBER | Ported from #1724, comment by @shoyer
The subtleties of checking |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
818583834 | MDExOlB1bGxSZXF1ZXN0NTgxODIxNTI0 | 4974 | implemented pad with new-indexes | fujiisoup 6815844 | closed | 0 | 8 | 2021-03-01T07:50:08Z | 2023-09-14T02:47:24Z | 2023-09-14T02:47:24Z | MEMBER | 0 | pydata/xarray/pulls/4974 |
Now we use a tuple of indexes for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
527553050 | MDExOlB1bGxSZXF1ZXN0MzQ0ODA1NzQ3 | 3566 | Make 0d-DataArray compatible for indexing. | fujiisoup 6815844 | closed | 0 | 6 | 2019-11-23T12:43:32Z | 2023-08-31T02:06:21Z | 2023-08-31T02:06:21Z | MEMBER | 0 | pydata/xarray/pulls/3566 |
Now 0d-DataArray can be used for indexing. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
527237590 | MDU6SXNzdWU1MjcyMzc1OTA= | 3562 | Minimize `.item()` call | fujiisoup 6815844 | open | 0 | 1 | 2019-11-22T14:44:43Z | 2023-06-08T04:48:50Z | MEMBER | MCVE Code Sample: I want to minimize the number of calls
In both cases, I need to call '.item()'. It is not a big issue, but I think it would be nice if xarray became more self-contained. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
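A minimal sketch of the kind of `.item()` round-trip the issue above describes; the dictionary lookup is a hypothetical illustration, not code from the issue.
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3), dims="x", coords={"x": ["a", "b", "c"]})

# Selecting a single element returns a 0-d DataArray rather than a plain
# Python object, so handing it to ordinary Python code needs an explicit
# .item() call.
label = da["x"][0]                  # 0-d DataArray wrapping the string "a"
lookup = {"a": 1, "b": 2, "c": 3}
value = lookup[label.item()]        # without .item(), the unhashable DataArray fails here
```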
675482176 | MDU6SXNzdWU2NzU0ODIxNzY= | 4325 | Optimize ndrolling nanreduce | fujiisoup 6815844 | open | 0 | 5 | 2020-08-08T07:46:53Z | 2023-04-13T15:56:52Z | MEMBER | In #4219 we added ndrolling.
However, nanreduce, such as We can implement in-house nanreduce methods for the strided array.
For example, our |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
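A minimal sketch of the in-house nanreduce idea mentioned above, written with public xarray API only (`rolling(...).construct`); the actual proposal is to obtain the same result without materializing the strided view.
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(4, 5), dims=("x", "y"))
da = da.where(da > -0.5)  # introduce some NaNs

# nansum over an nd-rolling window == plain sum after replacing NaN with 0;
# nanmean == nansum divided by the count of valid samples in each window.
windows = da.rolling(x=2, y=3).construct(x="wx", y="wy")
nansum = windows.fillna(0.0).sum(dim=["wx", "wy"])
count = windows.notnull().sum(dim=["wx", "wy"])
nanmean = nansum / count
```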
262642978 | MDU6SXNzdWUyNjI2NDI5Nzg= | 1603 | Explicit indexes in xarray's data-model (Future of MultiIndex) | fujiisoup 6815844 | closed | 0 | 1.0 741199 | 68 | 2017-10-04T01:51:47Z | 2022-09-28T09:24:20Z | 2022-09-28T09:24:20Z | MEMBER | I think we can continue the discussion we have in #1426 about In comment , @shoyer recommended to remove I agree with this, as long as my codes work with this improvement. I think if we could have a list of possible Current limitations of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
531087939 | MDExOlB1bGxSZXF1ZXN0MzQ3NTkyNzE1 | 3587 | boundary options for rolling.construct | fujiisoup 6815844 | open | 0 | 4 | 2019-12-02T12:11:44Z | 2022-06-09T14:50:17Z | MEMBER | 0 | pydata/xarray/pulls/3587 |
Added some boundary options for rolling.construct.
Currently, the option names are inherited from |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3587/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
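Since the PR above is still open, here is a manual equivalent of the boundary options it proposes, built from `DataArray.pad` (which already uses np.pad-style mode names) plus a centered rolling construct; the trimming step is just one way to line the result back up.
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

# Pad first with an np.pad-style mode, then build the windows, so the edge
# windows are filled from the boundary instead of containing NaN.
padded = da.pad(x=(1, 1), mode="edge")                 # 'edge', 'reflect', 'wrap', ...
windows = padded.rolling(x=3, center=True).construct("window")
windows = windows.isel(x=slice(1, -1))                 # drop the padded positions again
```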
655382009 | MDU6SXNzdWU2NTUzODIwMDk= | 4218 | what is the best way to reset an unintentional direct push to the master | fujiisoup 6815844 | closed | 0 | 16 | 2020-07-12T11:30:45Z | 2022-04-17T20:34:32Z | 2022-04-17T20:34:32Z | MEMBER | I am sorry but I unintentionally pushed my working scripts to xarray.master. (I thought it was not allowed and I was not careful.) What is the best way to reset this? I'm thinking of doing it locally and force-pushing again, but I'm afraid that I'll do another wrong thing... I apologize for my mistake. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
280875330 | MDU6SXNzdWUyODA4NzUzMzA= | 1772 | nonzero method for xr.DataArray | fujiisoup 6815844 | open | 0 | 5 | 2017-12-11T02:25:11Z | 2022-04-01T10:42:20Z | MEMBER |
Problem description: Apparently, the dimensions and the coordinates conflict with each other.
I think we can have our own Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1772/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
898657012 | MDU6SXNzdWU4OTg2NTcwMTI= | 5361 | Inconsistent behavior in grouby depending on the dimension order | fujiisoup 6815844 | open | 0 | 1 | 2021-05-21T23:11:37Z | 2022-03-29T11:45:32Z | MEMBER |
However, The bug has been discussed in #2944 and solved, but I found this is still there. Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: 09d8a4a785fa6521314924fd785740f2d13fb8ee python: 3.7.7 (default, Mar 23 2020, 22:36:06) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 5.4.0-72-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.16.1.dev30+g1d3dee08.d20200808 pandas: 1.1.3 numpy: 1.18.1 scipy: 1.5.2 netCDF4: 1.4.2 pydap: None h5netcdf: 0.8.0 h5py: 2.10.0 Nio: None zarr: None cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.6.0 distributed: 2.7.0 matplotlib: 3.2.2 cartopy: None seaborn: 0.10.1 numbagg: None pint: None setuptools: 46.1.1.post20200323 pip: 20.0.2 conda: None pytest: 5.2.1 IPython: 7.13.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
228295383 | MDU6SXNzdWUyMjgyOTUzODM= | 1408 | .sel does not keep selected coordinate value in case with MultiIndex | fujiisoup 6815844 | closed | 0 | 8 | 2017-05-12T13:40:34Z | 2022-03-17T17:11:41Z | 2022-03-17T17:11:41Z | MEMBER |
```python In[4] ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'}) Out[4]: <xarray.Dataset> Dimensions: () Coordinates: y <U1 'a' x int64 1 Data variables: foo int64 1 ``` But in MultiIndex case, does not.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1408/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
359240638 | MDU6SXNzdWUzNTkyNDA2Mzg= | 2410 | Updated text for indexing page | fujiisoup 6815844 | open | 0 | 11 | 2018-09-11T22:01:39Z | 2021-11-15T21:17:14Z | MEMBER | We have a bunch of terms to describe the xarray structure, such as dimension, coordinate, dimension coordinate, etc. Although it has been discussed in #1295 and we have tried to use consistent terminology in our docs, it still seems not easy for users to understand our functionality. In #2399, @horta wrote a list of definitions (https://drive.google.com/file/d/1uJ_U6nedkNe916SMViuVKlkGwPX-mGK7/view?usp=sharing). I think it would be nice to have something like this in our docs. Any thoughts? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
441088452 | MDU6SXNzdWU0NDEwODg0NTI= | 2944 | `groupby` does not correctly handle non-dimensional coordinate | fujiisoup 6815844 | closed | 0 | 3 | 2019-05-07T07:47:17Z | 2021-05-21T23:12:21Z | 2021-05-21T23:12:21Z | MEMBER | Code Sample, a copy-pastable example if possible```python
Problem description
Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
254927382 | MDU6SXNzdWUyNTQ5MjczODI= | 1553 | Multidimensional reindex | fujiisoup 6815844 | open | 0 | 2 | 2017-09-04T03:29:39Z | 2020-12-19T16:00:00Z | MEMBER | From a discussion in #1473 comment It would be convenient if we have multi-dimensional
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1553/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
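For comparison with the request above, xarray's existing `interp` already accepts multi-dimensional DataArray indexers; a multi-dimensional `reindex` would presumably do the same with exact label matching instead of interpolation. A small sketch (assumes scipy is installed for `interp`):
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(9.0).reshape(3, 3), dims=("x", "y"),
                  coords={"x": [0.0, 1.0, 2.0], "y": [0.0, 1.0, 2.0]})

# New target locations that share their own dimensions (i, j).
new_x = xr.DataArray([[0.5, 1.5], [2.0, 0.0]], dims=("i", "j"))
new_y = xr.DataArray([[1.0, 0.5], [2.0, 1.5]], dims=("i", "j"))

interpolated = da.interp(x=new_x, y=new_y)   # works today
# A multi-dimensional reindex would look the same but match labels exactly,
# filling unmatched positions with NaN / fill_value instead of interpolating.
```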
216621142 | MDU6SXNzdWUyMTY2MjExNDI= | 1323 | Image related methods | fujiisoup 6815844 | closed | 0 | 9 | 2017-03-24T01:39:52Z | 2020-10-08T16:00:18Z | 2020-06-21T19:25:18Z | MEMBER | Currently I'm using xarray to handle multiple images (typically, a sequence of images), and I feel it would be convenient if xarray supported image-related functions. There may be many possibilities, but the particular methods I want to have in xarray are
1. xr.open_image(File)
Images (and possibly also video?) are naturally high-dimensional, and I guess they would fit xarray's concept. Is there sufficiently broad interest in this? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
675604714 | MDExOlB1bGxSZXF1ZXN0NDY1MDg1Njg1 | 4329 | ndrolling repr fix | fujiisoup 6815844 | closed | 0 | 6 | 2020-08-08T23:34:37Z | 2020-08-09T13:15:50Z | 2020-08-09T11:57:38Z | MEMBER | 0 | pydata/xarray/pulls/4329 |
There was a bug in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
655389649 | MDExOlB1bGxSZXF1ZXN0NDQ3ODkyNjE3 | 4219 | nd-rolling | fujiisoup 6815844 | closed | 0 | 16 | 2020-07-12T12:19:19Z | 2020-08-08T07:23:51Z | 2020-08-08T04:16:27Z | MEMBER | 0 | pydata/xarray/pulls/4219 |
I noticed that the implementation of nd-rolling is straightforward. The core part is implemented, but I am wondering what the best API is while keeping it backward-compatible. Obviously, it should basically look like
A problem is other parameters, So, maybe we allow dictionary for them?
The same thing happens for Does anyone have another idea? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
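A small sketch of the per-dimension options discussed above; in current xarray `center` does accept a dictionary, while the exact treatment of the other parameters was the open question in this PR.
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(6, 8), dims=("x", "y"))

# nd-rolling: one window length per dimension.
mean = da.rolling(x=3, y=2).mean()

# Per-dimension options passed as a dictionary.
centered = da.rolling(x=3, y=2, center={"x": True, "y": False}).mean()
```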
338662554 | MDU6SXNzdWUzMzg2NjI1NTQ= | 2269 | A special function for unpickling old xarray object? | fujiisoup 6815844 | closed | 0 | 6 | 2018-07-05T17:27:28Z | 2020-07-11T02:55:38Z | 2020-07-11T02:55:38Z | MEMBER | I noticed that some users are experiencing trouble restoring xarray objects that were created with xarray < 0.8. Is there any possibility of adding a function to support unpickling old objects, such as
xref (private repo) gafusion/OMFIT-source#2652 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
239918314 | MDExOlB1bGxSZXF1ZXN0MTI4NDcxOTk4 | 1469 | Argmin indexes | fujiisoup 6815844 | closed | 0 | 6 | 2017-07-01T01:23:31Z | 2020-06-29T19:36:25Z | 2020-06-29T19:36:25Z | MEMBER | 0 | pydata/xarray/pulls/1469 |
With this PR, ValueError raises if
Example: ```python In [1]: import xarray as xr ...: da = xr.DataArray([[1, 2], [-1, 40], [5, 6]], ...: [('x', ['c', 'b', 'a']), ('y', [1, 0])]) ...: ...: da.argmin_indexes() ...: Out[1]: OrderedDict([('x', <xarray.DataArray 'x' ()> array(1)), ('y', <xarray.DataArray 'y' ()> array(0))]) In [2]: da.argmin_indexes(dims='y') Out[2]: OrderedDict([('y', <xarray.DataArray 'y' (x: 3)> array([0, 0, 0]) Coordinates: * x (x) <U1 'c' 'b' 'a')]) ``` (Because the returned object is an Although in #1388
This is mainly because
1. For 1, I have prepared modification of For 2, we should either
+ change API of ```python In [2]: da.argmin_indexes(dims='y') Out[2]: OrderedDict([('y', array([0, 0, 0]), 'x', array(['c' 'b' 'a']))
I originally worked with the second option for the modification of Another alternative is to
+ change API of Any comments are welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
619374891 | MDExOlB1bGxSZXF1ZXN0NDE4OTEyODc3 | 4069 | Improve interp performance | fujiisoup 6815844 | closed | 0 | 2 | 2020-05-16T04:23:47Z | 2020-05-25T20:02:41Z | 2020-05-25T20:02:37Z | MEMBER | 0 | pydata/xarray/pulls/4069 |
Now n-dimensional interp works sequentially if possible. It may speed up some cases. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
619347681 | MDU6SXNzdWU2MTkzNDc2ODE= | 4068 | utility function to save complex values as a netCDF file | fujiisoup 6815844 | closed | 0 | 3 | 2020-05-16T01:19:16Z | 2020-05-25T08:36:59Z | 2020-05-25T08:36:58Z | MEMBER | Currently, we disallow saving complex values to a netCDF file. Maybe netCDF itself does not support complex values, but there may be some workarounds. It would be very handy for me. The most naive workaround may be to split each complex value into a real and an imaginary part, add some flags, and restore them when loading from the file. Maybe we could add a special suffix to the variable name? ```python
I think there may be a better way. Any thoughts are welcome :) p.s.
I just found that |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
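A rough sketch of the naive real/imaginary split described above; the `__real`/`__imag` suffix convention is purely illustrative, not an agreed format.
```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"z": ("x", np.array([1 + 2j, 3 - 1j]))})

# Encode: split every complex variable into two real-valued variables.
encoded = xr.Dataset()
for name, var in ds.data_vars.items():
    encoded[name + "__real"] = var.real
    encoded[name + "__imag"] = var.imag
encoded.to_netcdf("complex_roundtrip.nc")

# Decode: recombine the pairs after loading.
loaded = xr.open_dataset("complex_roundtrip.nc")
decoded = xr.Dataset({
    name[:-len("__real")]: loaded[name] + 1j * loaded[name[:-len("__real")] + "__imag"]
    for name in loaded.data_vars if str(name).endswith("__real")
})
```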
613044689 | MDExOlB1bGxSZXF1ZXN0NDEzODcyODQy | 4036 | support darkmode | fujiisoup 6815844 | closed | 0 | 5 | 2020-05-06T04:39:07Z | 2020-05-21T21:06:15Z | 2020-05-07T20:36:32Z | MEMBER | 0 | pydata/xarray/pulls/4036 |
Now it looks like
I'm pretty sure that this workaround is not the best (maybe the second worst), as it only supports the dark mode of vscode but not other environments. I couldn't find a good way to make a workaround for the general dark-mode. Any advice is welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4036/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
611643130 | MDU6SXNzdWU2MTE2NDMxMzA= | 4024 | small contrast of html view in VScode darkmode | fujiisoup 6815844 | closed | 0 | 6 | 2020-05-04T06:53:32Z | 2020-05-07T20:36:32Z | 2020-05-07T20:36:32Z | MEMBER | If using xarray inside VScode with darkmode, the new html repr has a small contrast of the text color and background. Maybe the text color comes from the default setting, but the background color is not. In light mode, it looks nice. VersionsOutput of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.7.5 (default, Oct 25 2019, 15:51:11) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.15.0-1080-oem machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.15.1 pandas: 0.25.3 numpy: 1.17.4 scipy: 1.3.2 netCDF4: 1.4.2 pydap: None h5netcdf: 0.8.0 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.1 dask: 2.9.0 distributed: 2.9.0 matplotlib: 3.1.1 cartopy: None seaborn: 0.9.0 numbagg: None setuptools: 42.0.2.post20191203 pip: 19.3.1 conda: None pytest: 5.3.2 IPython: 7.10.2 sphinx: 2.3.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4024/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
596163034 | MDExOlB1bGxSZXF1ZXN0NDAwNTExNjkz | 3953 | Fix wrong order of coordinate converted from pd.series with MultiIndex | fujiisoup 6815844 | closed | 0 | 2 | 2020-04-07T21:28:04Z | 2020-04-08T05:49:46Z | 2020-04-08T02:19:11Z | MEMBER | 0 | pydata/xarray/pulls/3953 |
It looks
Added a workaround for this... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3953/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 } |
xarray 13221727 | pull | |||||
207962322 | MDU6SXNzdWUyMDc5NjIzMjI= | 1271 | Attrs are lost in mathematical computation | fujiisoup 6815844 | closed | 0 | 7 | 2017-02-15T23:27:51Z | 2020-04-05T19:00:14Z | 2017-02-18T11:03:42Z | MEMBER | Related to #138 Why is |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
546784890 | MDExOlB1bGxSZXF1ZXN0MzYwMzk1OTY4 | 3670 | sel with categorical index | fujiisoup 6815844 | closed | 0 | 7 | 2020-01-08T10:51:06Z | 2020-01-25T22:38:28Z | 2020-01-25T22:38:21Z | MEMBER | 0 | pydata/xarray/pulls/3670 |
It is a bit surprising that no members have used xarray with CategoricalIndex... If there is anything missing additionally, please feel free to point it out. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
523853001 | MDExOlB1bGxSZXF1ZXN0MzQxNzYxNTg1 | 3542 | sparse option to reindex and unstack | fujiisoup 6815844 | closed | 0 | 2 | 2019-11-16T14:41:00Z | 2019-11-19T22:40:34Z | 2019-11-19T16:23:34Z | MEMBER | 0 | pydata/xarray/pulls/3542 |
Added There is still a lot of space to complete the sparse support as discussed in #3245. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3542/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
523831612 | MDExOlB1bGxSZXF1ZXN0MzQxNzQ2NDA4 | 3541 | Added fill_value for unstack | fujiisoup 6815844 | closed | 0 | 3 | 2019-11-16T11:10:56Z | 2019-11-16T14:42:31Z | 2019-11-16T14:36:44Z | MEMBER | 0 | pydata/xarray/pulls/3541 |
Added an option |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3541/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
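The two PRs above correspond to keyword arguments that exist on `unstack` in released xarray; a short usage sketch (the `sparse=True` line additionally needs the `sparse` package installed).
```python
import xarray as xr

da = xr.DataArray([0, 1, 2, 3], dims="z",
                  coords={"x": ("z", ["a", "a", "b", "c"]),
                          "y": ("z", [0, 1, 0, 1])}).set_index(z=["x", "y"])

# Combinations missing from the MultiIndex appear after unstacking;
# fill_value controls what is stored there instead of NaN.
dense = da.unstack("z", fill_value=-1)

# sparse=True keeps the (mostly empty) result as a sparse array.
# sparse_result = da.unstack("z", sparse=True)
```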
522319360 | MDExOlB1bGxSZXF1ZXN0MzQwNTQxNzMz | 3520 | Fix set_index when an existing dimension becomes a level | fujiisoup 6815844 | closed | 0 | 2 | 2019-11-13T16:06:50Z | 2019-11-14T11:56:25Z | 2019-11-14T11:56:18Z | MEMBER | 0 | pydata/xarray/pulls/3520 |
There was a bug in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
521317260 | MDU6SXNzdWU1MjEzMTcyNjA= | 3512 | selection from MultiIndex does not work properly | fujiisoup 6815844 | closed | 0 | 0 | 2019-11-12T04:12:12Z | 2019-11-14T11:56:18Z | 2019-11-14T11:56:18Z | MEMBER | MCVE Code Sample```python da = xr.DataArray([0, 1], dims=['x'], coords={'x': [0, 1], 'y': 'a'}) db = xr.DataArray([2, 3], dims=['x'], coords={'x': [0, 1], 'y': 'b'}) data = xr.concat([da, db], dim='x').set_index(xy=['x', 'y']) data.sel(y='a')
Expected Output```python
Problem DescriptionShould select the array Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
345090013 | MDU6SXNzdWUzNDUwOTAwMTM= | 2318 | Failing test by dask==0.18.2 | fujiisoup 6815844 | closed | 0 | 2 | 2018-07-27T04:52:08Z | 2019-11-10T04:37:15Z | 2019-11-10T04:37:15Z | MEMBER | Tests are failing, which is caused by new release of dask==0.18.2. xref: dask/dask#3822 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
280673215 | MDU6SXNzdWUyODA2NzMyMTU= | 1771 | Needs performance check / improvements in value assignment of DataArray | fujiisoup 6815844 | open | 0 | 1 | 2017-12-09T03:42:41Z | 2019-10-28T14:53:24Z | MEMBER | In #1746, we added a validation in We may need to optimize the logic here. Is it reasonable to constantly monitor the performance of basic operations, such as cc @jhamman @shoyer |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
349855157 | MDU6SXNzdWUzNDk4NTUxNTc= | 2362 | Wrong behavior of DataArray.resample | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-13T00:02:47Z | 2019-10-22T19:42:08Z | 2019-10-22T19:42:08Z | MEMBER | From #2356, I noticed resample and groupby works nice for Dataset but not for DataArray Code Sample, a copy-pastable example if possible```python In [14]: import numpy as np ...: import xarray as xr ...: import pandas as pd ...: ...: time = pd.date_range('2000-01-01', freq='6H', periods=365 * 4) ...: ds = xr.Dataset({'foo': (('time', 'x'), np.random.randn(365 * 4, 5)), 'time': time, ...: 'x': np.arange(5)}) In [15]: ds
Out[15]:
<xarray.Dataset>
Dimensions: (time: 1460, x: 5)
Coordinates:
* time (time) datetime64[ns] 2000-01-01 ... 2000-12-30T18:00:00
* x (x) int64 0 1 2 3 4
Data variables:
foo (time, x) float64 -0.6916 -1.247 0.5376 ... -0.2197 -0.8479 -0.6719
Problem description: resample should work identically for DataArray and Dataset. Expected Output:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
281423161 | MDExOlB1bGxSZXF1ZXN0MTU3ODU2NTEx | 1776 | [WIP] Fix pydap array wrapper | fujiisoup 6815844 | closed | 0 | 0.10.3 3008859 | 6 | 2017-12-12T15:22:07Z | 2019-09-25T15:44:19Z | 2018-01-09T01:48:13Z | MEMBER | 0 | pydata/xarray/pulls/1776 |
I am trying to fix #1775, but tests are still failing. Any help would be appreciated. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
440900618 | MDExOlB1bGxSZXF1ZXN0Mjc2MzQ2MTQ3 | 2942 | Fix rolling operation with dask and bottleneck | fujiisoup 6815844 | closed | 0 | 7 | 2019-05-06T21:23:41Z | 2019-06-30T00:34:57Z | 2019-06-30T00:34:57Z | MEMBER | 0 | pydata/xarray/pulls/2942 |
Fix for #2940 It looks that there was a bug in the previous logic, but I am not sure why it was working... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
432019600 | MDU6SXNzdWU0MzIwMTk2MDA= | 2887 | Safely open / close netCDF files without resource locking | fujiisoup 6815844 | closed | 0 | 9 | 2019-04-11T13:19:45Z | 2019-05-16T15:28:30Z | 2019-05-16T15:28:30Z | MEMBER | Code Sample, a copy-pastable example if possible(essentially the same to #1629) Opening netCDF file via ds_read = xr.open_dataset('test.nc')
ds.to_netcdf('test.nc') # -> PermissionError
Problem descriptionAnother program cannot write the same netCDF file that xarray has opened, unless -- EDIT --
It is understandable when we do not want to load the entire file into the memory.
However, sometimes I want to read the file that will be updated soon by another program.
Also, I think that many users who are not accustomed to netCDF may expect this behavior (as I think it would be nice to have an option such as Expected OutputNo error Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2887/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
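A workaround for the situation described above that works with plain `open_dataset`: load the data eagerly and close the handle, so the file can be rewritten afterwards (assumes the `test.nc` file from the example exists).
```python
import xarray as xr

# Load everything into memory and release the file handle immediately.
with xr.open_dataset("test.nc") as f:
    ds = f.load()

# The file is no longer held open, so another program (or this one)
# can now overwrite it.
ds.to_netcdf("test.nc")
```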
398468139 | MDExOlB1bGxSZXF1ZXN0MjQ0MTYyMTgx | 2668 | fix datetime_to_numeric and Variable._to_numeric | fujiisoup 6815844 | closed | 0 | 14 | 2019-01-11T22:02:07Z | 2019-02-11T11:58:22Z | 2019-02-11T09:47:09Z | MEMBER | 0 | pydata/xarray/pulls/2668 |
Started to fixing #2667 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
396157243 | MDExOlB1bGxSZXF1ZXN0MjQyNDM1MjAz | 2653 | Implement integrate | fujiisoup 6815844 | closed | 0 | 2 | 2019-01-05T11:22:10Z | 2019-01-31T17:31:31Z | 2019-01-31T17:30:31Z | MEMBER | 0 | pydata/xarray/pulls/2653 |
I would like to add |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
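Usage of the feature this PR adds, as it exists in current xarray (trapezoidal rule along a coordinate):
```python
import numpy as np
import xarray as xr

x = np.linspace(0.0, np.pi, 101)
da = xr.DataArray(np.sin(x), dims="x", coords={"x": x})

# Trapezoidal-rule integration along the 'x' coordinate; the result is ~2.0.
area = da.integrate("x")
```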
231308952 | MDExOlB1bGxSZXF1ZXN0MTIyNDE4MjA3 | 1426 | scalar_level in MultiIndex | fujiisoup 6815844 | closed | 0 | 10 | 2017-05-25T11:03:05Z | 2019-01-14T21:20:28Z | 2019-01-14T21:20:27Z | MEMBER | 0 | pydata/xarray/pulls/1426 |
[Edit for more clarity] I restarted a new branch to fix #1408 (I closed the older one #1412). Because the changes I made is relatively large, here I summarize this PR. SumamryIn this PR, I newly added two kinds of levels in MultiIndex, Changes in behaviors.
Examples of the output are shown below. Any suggestions for these behaviors are welcome. ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'}) ...: ds2 = xr.Dataset({'foo': (('x',), [4, 5, 6])}, {'x': [1, 2, 3], 'y': 'b'}) ...: # example data ...: ds = xr.concat([ds1, ds2], dim='y').stack(yx=['y', 'x']) ...: ds Out[1]: <xarray.Dataset> Dimensions: (yx: 6) Coordinates: * yx (yx) MultiIndex - y (yx) object 'a' 'a' 'a' 'b' 'b' 'b' # <--- this is index-level - x (yx) int64 1 2 3 1 2 3 # <--- this is also index-level Data variables: foo (yx) int64 1 2 3 4 5 6 In [2]: # 1. indexing a scalar converts In [3]: # 2. indexing a single element from MultiIndex makes a In [6]: # 3. Enables to selecting along a ``` Changes in the public APIsSome changes were necessary to the public APIs, though I tried to minimize them.
Implementation summary: The main changes in the implementation are the addition of our own wrapper of What we can do now: The main merit of this proposal is that it enables us to handle
What we cannot do now: With the current implementation, we can do
Similarly, we can neither do What are to be decided
TODOs
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1426/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
391477755 | MDExOlB1bGxSZXF1ZXN0MjM4OTcyNzU5 | 2612 | Added Coarsen | fujiisoup 6815844 | closed | 0 | 16 | 2018-12-16T15:28:31Z | 2019-01-06T09:13:56Z | 2019-01-06T09:13:46Z | MEMBER | 0 | pydata/xarray/pulls/2612 |
Started to implement Currently, it is not working for a datetime coordinate, since I am not familiar with datetime things. Any advice will be appreciated. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
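Usage of the coarsen feature this PR introduces, as it looks in current xarray; the numbers are arbitrary.
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(13.0), dims="time")

# Block aggregation: group every 3 samples and reduce each block.
means = da.coarsen(time=3, boundary="trim").mean()   # drop the incomplete last block
sums = da.coarsen(time=3, boundary="pad").sum()      # or pad it with NaN
```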
392362056 | MDU6SXNzdWUzOTIzNjIwNTY= | 2619 | Selection of MultiIndex makes following `unstack` wrong | fujiisoup 6815844 | closed | 0 | 2 | 2018-12-18T22:26:31Z | 2018-12-24T15:37:27Z | 2018-12-24T15:37:27Z | MEMBER | Code Sample, a copy-pastable example if possible```python import numpy as np import xarray as xr ds = xr.DataArray(np.arange(40).reshape(8, 5), dims=['x', 'y'], Out[1]: <xarray.DataArray (x: 8, y: 5)> array([[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan]]) Coordinates: * x (x) int64 0 1 2 3 4 5 6 7 * y (y) int64 0 1 2 3 4 ``` Problem descriptionAfter unstack, there are still values that are not selected by the previous Expected Output
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
392535505 | MDExOlB1bGxSZXF1ZXN0MjM5Nzg0ODE1 | 2621 | Fix multiindex selection | fujiisoup 6815844 | closed | 0 | 7 | 2018-12-19T10:30:15Z | 2018-12-24T15:37:27Z | 2018-12-24T15:37:27Z | MEMBER | 0 | pydata/xarray/pulls/2621 |
Fix using |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2621/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
368045263 | MDExOlB1bGxSZXF1ZXN0MjIxMzExNzcw | 2477 | Inhouse LooseVersion | fujiisoup 6815844 | closed | 0 | 2 | 2018-10-09T05:23:56Z | 2018-10-10T13:47:31Z | 2018-10-10T13:47:23Z | MEMBER | 0 | pydata/xarray/pulls/2477 |
A fix for #2468. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2477/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
366653476 | MDExOlB1bGxSZXF1ZXN0MjIwMjcyODMz | 2462 | pep8speaks | fujiisoup 6815844 | closed | 0 | 14 | 2018-10-04T07:17:34Z | 2018-10-07T22:40:15Z | 2018-10-07T22:40:08Z | MEMBER | 0 | pydata/xarray/pulls/2462 |
I installed pep8speaks as suggested in #2428.
It looks like they do not need a yml file, but it may be safer to add this (just renamed from |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
364565122 | MDExOlB1bGxSZXF1ZXN0MjE4NzIxNDUy | 2447 | restore ddof support in std | fujiisoup 6815844 | closed | 0 | 3 | 2018-09-27T16:51:44Z | 2018-10-03T12:44:55Z | 2018-09-28T13:44:29Z | MEMBER | 0 | pydata/xarray/pulls/2447 |
It looks like I wrongly removed |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
364545910 | MDExOlB1bGxSZXF1ZXN0MjE4NzA2NzQ1 | 2446 | fix:2445 | fujiisoup 6815844 | closed | 0 | 0 | 2018-09-27T16:00:17Z | 2018-09-28T18:24:42Z | 2018-09-28T18:24:36Z | MEMBER | 0 | pydata/xarray/pulls/2446 |
It is a regression after #2360. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2446/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
364008818 | MDU6SXNzdWUzNjQwMDg4MTg= | 2440 | ddof does not working with 0.10.9 | fujiisoup 6815844 | closed | 0 | 0 | 2018-09-26T12:42:18Z | 2018-09-28T13:44:29Z | 2018-09-28T13:44:29Z | MEMBER | Copied from issue#2236 comments, by @st-bender Hi, just to let you know that .std() does not accept the ddof keyword anymore (it worked in 0.10.8) Should I open a new bugreport? Edit: It fails with: ```python ~/Work/miniconda3/envs/stats/lib/python3.6/site-packages/xarray/core/duck_array_ops.py in f(values, axis, skipna, kwargs) 234 235 try: --> 236 return func(values, axis=axis, kwargs) 237 except AttributeError: 238 if isinstance(values, dask_array_type): TypeError: nanstd() got an unexpected keyword argument 'ddof' ``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
349857086 | MDU6SXNzdWUzNDk4NTcwODY= | 2363 | Reduction APIs for groupby, groupby_bins, resample, rolling | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-13T00:30:10Z | 2018-09-28T06:54:30Z | 2018-09-28T06:54:30Z | MEMBER | From #2356 APIs for ```python import numpy as np import xarray as xr import pandas as pd time = pd.date_range('2000-01-01', freq='6H', periods=365 * 4) ds = xr.Dataset({'foo': (('time', 'x'), np.random.randn(365 * 4, 5)), 'time': time, 'x': [0, 1, 2, 1, 0]}) ds.rolling(time=2).mean() # result dims : ('time', 'x') ds.resample(time='M').mean() # result dims : ('time', 'x') ds['foo'].resample(time='M').mean() # result dims : ('time', ) maybe a bug #2362 ds.groupby('time.month').mean() # result dims : ('month', ) ds.groupby_bins('time', 3).mean() # result dims : ('time_bins', ) ```
I think The possible options would be
1. Change APIs of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
350247452 | MDExOlB1bGxSZXF1ZXN0MjA4MTQ0ODQx | 2366 | Future warning for default reduction dimension of groupby | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-14T01:16:34Z | 2018-09-28T06:54:30Z | 2018-09-28T06:54:30Z | MEMBER | 0 | pydata/xarray/pulls/2366 |
Started to fix #2363.
Now warns a FutureWarning in groupby if the default reduction dimension is not specified.
As a side effect, I added |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
333248242 | MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3 | 2236 | Refactor nanops | fujiisoup 6815844 | closed | 0 | 19 | 2018-06-18T12:27:31Z | 2018-09-26T12:42:55Z | 2018-08-16T06:59:33Z | MEMBER | 0 | pydata/xarray/pulls/2236 |
In #2230, the addition of I tried to refactor them by moving nan-aggregation methods to I think I still need to take care of more edge cases, but I appreciate any comment for the current implementation. Note:
In my implementation, bottleneck is not used when |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
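A standalone sketch of the dispatch rule mentioned in the note above (bottleneck only when NaNs actually need skipping); this is an illustration, not xarray's actual nanops module.
```python
import numpy as np

try:
    import bottleneck as bn
except ImportError:
    bn = None

def nanmean(values, axis=None, skipna=True):
    """Dispatch: plain mean when skipna=False, bottleneck's nan-aware
    kernel for floats when available, numpy's nanmean otherwise."""
    if not skipna:
        return values.mean(axis=axis)
    if bn is not None and values.dtype.kind == "f":
        return bn.nanmean(values, axis=axis)
    return np.nanmean(values, axis=axis)

print(nanmean(np.array([1.0, np.nan, 3.0])))  # 2.0
```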
356698348 | MDExOlB1bGxSZXF1ZXN0MjEyODg5NzMy | 2398 | implement Gradient | fujiisoup 6815844 | closed | 0 | 19 | 2018-09-04T08:11:52Z | 2018-09-21T20:02:43Z | 2018-09-21T20:02:43Z | MEMBER | 0 | pydata/xarray/pulls/2398 |
Added |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
351502921 | MDExOlB1bGxSZXF1ZXN0MjA5MDc4NDQ4 | 2372 | [MAINT] Avoid using duck typing | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-17T08:26:31Z | 2018-08-20T01:13:26Z | 2018-08-20T01:13:16Z | MEMBER | 0 | pydata/xarray/pulls/2372 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2372/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
351591072 | MDExOlB1bGxSZXF1ZXN0MjA5MTQ1NDcy | 2373 | More support of non-string dimension names | fujiisoup 6815844 | closed | 0 | 2 | 2018-08-17T13:18:18Z | 2018-08-20T01:13:02Z | 2018-08-20T01:12:37Z | MEMBER | 0 | pydata/xarray/pulls/2373 |
Following to #2174 In some methods, consistency of the dictionary arguments and keyword arguments are checked twice in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
348536270 | MDExOlB1bGxSZXF1ZXN0MjA2ODY0NzU4 | 2353 | Raises a ValueError for a confliction between dimension names and level names | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-08T00:52:29Z | 2018-08-13T22:16:36Z | 2018-08-13T22:16:31Z | MEMBER | 0 | pydata/xarray/pulls/2353 |
Now it raises an error when assigning a new dimension whose name conflicts with an existing level name. Therefore, it is not allowed: ```python b = xr.Dataset(coords={'dim0': ['a', 'b'], 'dim1': [0, 1]}) b = b.stack(dim_stacked=['dim0', 'dim1']) This should raise an error even though its length is consistent with
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2353/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
348539667 | MDExOlB1bGxSZXF1ZXN0MjA2ODY3MjMw | 2354 | Mark some tests related to cdat-lite as xfail | fujiisoup 6815844 | closed | 0 | 2 | 2018-08-08T01:13:25Z | 2018-08-10T16:09:30Z | 2018-08-10T16:09:30Z | MEMBER | 0 | pydata/xarray/pulls/2354 | I just marked some to_cdms2 tests as xfail. See #2332 for the details. It is a temporary workaround and we may need to keep #2332 open until it is solved. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
348516727 | MDU6SXNzdWUzNDg1MTY3Mjc= | 2352 | Failing test for python=3.6 dask-dev | fujiisoup 6815844 | closed | 0 | 3 | 2018-08-07T23:02:02Z | 2018-08-08T01:45:45Z | 2018-08-08T01:45:45Z | MEMBER | Recently, dask renamed BTW, there is another faling test in python=2.7 dev, claiming that
Is anyone working on this? If not, I think we can temporarily skip these tests for python 2.7. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2352/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
348108577 | MDExOlB1bGxSZXF1ZXN0MjA2NTM3NDc0 | 2349 | dask.ghost -> dask.overlap | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-06T22:54:46Z | 2018-08-08T01:14:04Z | 2018-08-08T01:14:02Z | MEMBER | 0 | pydata/xarray/pulls/2349 | Dask renamed |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
347672994 | MDExOlB1bGxSZXF1ZXN0MjA2MjI0Mjcz | 2342 | apply_ufunc now raises a ValueError when the size of input_core_dims is inconsistent with number of argument | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-05T06:20:03Z | 2018-08-06T22:38:57Z | 2018-08-06T22:38:53Z | MEMBER | 0 | pydata/xarray/pulls/2342 |
Now raises a ValueError when the size of input_core_dims is inconsistent with the number of arguments. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
347662610 | MDU6SXNzdWUzNDc2NjI2MTA= | 2341 | apply_ufunc silently neglects arguments if `len(input_core_dims) < args` | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-05T02:16:00Z | 2018-08-06T22:38:53Z | 2018-08-06T22:38:53Z | MEMBER | From SO In the following script, the second argument is silently neglected,
The correct script might be I think we can raise a more friendly error if the size of EDIT:
I think we can raise a more friendly error if the size of EDIT:
Or we can automatically insert an empty tuple or |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
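The fix proposed above is to error out; for reference, the correct call shape is one `input_core_dims` entry per positional argument. A minimal sketch with a made-up weighted-sum function:
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3.0), dims="x")
weights = xr.DataArray([0.2, 0.3, 0.5], dims="x")

# One list per argument: both `da` and `weights` consume the 'x' core dim,
# so the function receives two numpy arrays with 'x' moved to the last axis.
result = xr.apply_ufunc(
    lambda a, w: (a * w).sum(axis=-1),
    da, weights,
    input_core_dims=[["x"], ["x"]],
)
```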
347677525 | MDExOlB1bGxSZXF1ZXN0MjA2MjI2ODU0 | 2343 | local flake8 | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-05T07:47:38Z | 2018-08-05T23:47:00Z | 2018-08-05T23:47:00Z | MEMBER | 0 | pydata/xarray/pulls/2343 | Trivial changes to pass local flake8 tests. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
345434195 | MDExOlB1bGxSZXF1ZXN0MjA0NTg1MDU5 | 2326 | fix doc build error after #2312 | fujiisoup 6815844 | closed | 0 | 0 | 2018-07-28T09:15:20Z | 2018-07-28T10:05:53Z | 2018-07-28T10:05:50Z | MEMBER | 0 | pydata/xarray/pulls/2326 | I merged #2312 without making sure the doc build test was passing, but there was a typo. This PR fixes it. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
333480301 | MDU6SXNzdWUzMzM0ODAzMDE= | 2238 | Failing test with dask_distributed | fujiisoup 6815844 | closed | 0 | jhamman 2443309 | 5 | 2018-06-19T00:34:45Z | 2018-07-14T16:19:53Z | 2018-07-14T16:19:53Z | MEMBER | Some tests related to dask/distributed are failing in travis.
They are raising a Could anyone help look into this? See the Travis log for the current master: https://travis-ci.org/pydata/xarray/builds/392530577 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
289556132 | MDExOlB1bGxSZXF1ZXN0MTYzNjU3NDI0 | 1837 | Rolling window with `as_strided` | fujiisoup 6815844 | closed | 0 | 14 | 2018-01-18T09:18:19Z | 2018-06-22T22:27:11Z | 2018-03-01T03:39:19Z | MEMBER | 0 | pydata/xarray/pulls/1837 |
I started working on refactoring rolling.
As suggested in a comment in #1831, I implemented I got a more than 1,000-times speed-up! yay!
My current concerns are
+ Can we expose the new
Any thoughts are welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1837/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
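The numpy trick behind the speed-up described above, in isolation: a strided, copy-free view of the rolling windows. This is the generic recipe, not the xarray internals.
```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rolling_window(a, window):
    """Read-only view of shape (..., n - window + 1, window) over the last axis."""
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return as_strided(a, shape=shape, strides=strides, writeable=False)

x = np.arange(10.0)
windows = rolling_window(x, 3)       # shape (8, 3), no data copied
rolling_mean = windows.mean(axis=-1)
```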
333510121 | MDU6SXNzdWUzMzM1MTAxMjE= | 2239 | Error in docs/plottings | fujiisoup 6815844 | closed | 0 | 1 | 2018-06-19T03:50:51Z | 2018-06-20T16:26:37Z | 2018-06-20T16:26:37Z | MEMBER | There is an error on rtd. http://xarray.pydata.org/en/stable/plotting.html#id4 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
330859619 | MDExOlB1bGxSZXF1ZXN0MTkzNzYyMjMx | 2222 | implement interp_like | fujiisoup 6815844 | closed | 0 | 4 | 2018-06-09T06:46:48Z | 2018-06-20T01:39:40Z | 2018-06-20T01:39:24Z | MEMBER | 0 | pydata/xarray/pulls/2222 |
This adds |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
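Usage of the method this PR adds, as it exists in current xarray (needs scipy for the interpolation backend):
```python
import numpy as np
import xarray as xr

coarse = xr.DataArray(np.arange(5.0), dims="x", coords={"x": np.arange(5.0)})
fine = xr.DataArray(np.zeros(9), dims="x", coords={"x": np.linspace(0.0, 4.0, 9)})

# Interpolate `coarse` onto the coordinates of `fine`:
# like reindex_like, but interpolating instead of matching labels exactly.
result = coarse.interp_like(fine)
```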
330469406 | MDU6SXNzdWUzMzA0Njk0MDY= | 2218 | interp_like | fujiisoup 6815844 | closed | 0 | 0 | 2018-06-07T23:24:48Z | 2018-06-20T01:39:24Z | 2018-06-20T01:39:24Z | MEMBER | Just as a reminder of the remaining extension of #2104 . We might add |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
320275317 | MDExOlB1bGxSZXF1ZXN0MTg1OTgzOTc3 | 2104 | implement interp() | fujiisoup 6815844 | closed | 0 | 51 | 2018-05-04T13:28:38Z | 2018-06-11T13:01:21Z | 2018-06-08T00:33:52Z | MEMBER | 0 | pydata/xarray/pulls/2104 |
I started working to add I think I need to take care of more edge cases, but before finishing up this PR, I want to discuss what the best API is. I would like to this method working similar to ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]}) ...: In [2]: # simple linear interpolation ...: da.interpolate_at(x=[0.5, 1.5]) ...: Out[2]: <xarray.DataArray (x: 2)> array([0.05, 0.15]) Coordinates: * x (x) float64 0.5 1.5 In [3]: # with cubic spline interpolation ...: da.interpolate_at(x=[0.5, 1.5], method='cubic') ...: Out[3]: <xarray.DataArray (x: 2)> array([0.0375, 0.1625]) Coordinates: * x (x) float64 0.5 1.5 In [4]: # interpolation at one single position ...: da.interpolate_at(x=0.5) ...: Out[4]: <xarray.DataArray ()> array(0.05) Coordinates: x float64 0.5 In [5]: # interpolation with broadcasting ...: da.interpolate_at(x=xr.DataArray([[0.5, 1.0], [1.5, 2.0]], dims=['y', 'z'])) ...: Out[5]: <xarray.DataArray (y: 2, z: 2)> array([[0.05, 0.1 ], [0.15, 0.2 ]]) Coordinates: x (y, z) float64 0.5 1.0 1.5 2.0 Dimensions without coordinates: y, z In [6]: da = xr.DataArray([[0, 0.1, 0.2], [1.0, 1.1, 1.2]], ...: dims=['x', 'y'], ...: coords={'x': [0, 1], 'y': [0, 10, 20]}) ...: In [7]: # multidimensional interpolation ...: da.interpolate_at(x=[0.5, 1.5], y=[5, 15]) ...: Out[7]: <xarray.DataArray (x: 2, y: 2)> array([[0.55, 0.65], [ nan, nan]]) Coordinates: * x (x) float64 0.5 1.5 * y (y) int64 5 15 In [8]: # multidimensional interpolation with broadcasting ...: da.interpolate_at(x=xr.DataArray([0.5, 1.5], dims='z'), ...: y=xr.DataArray([5, 15], dims='z')) ...: Out[8]: <xarray.DataArray (z: 2)> array([0.55, nan]) Coordinates: x (z) float64 0.5 1.5 y (z) int64 5 15 Dimensions without coordinates: z ``` Design question
I appreciate any comments. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2104/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
330487989 | MDExOlB1bGxSZXF1ZXN0MTkzNDg2NzYz | 2220 | Reduce memory usage in doc.interpolation.rst | fujiisoup 6815844 | closed | 0 | 0 | 2018-06-08T01:23:13Z | 2018-06-08T01:45:11Z | 2018-06-08T01:31:19Z | MEMBER | 0 | pydata/xarray/pulls/2220 | I noticed an example I added to doc in #2104 consumes more than 1 GB memory, and it results in the failing in readthedocs build. This PR changes this to a much lighter example. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
300268334 | MDExOlB1bGxSZXF1ZXN0MTcxMzk2NjUw | 1942 | Fix precision drop when indexing a datetime64 arrays. | fujiisoup 6815844 | closed | 0 | 2 | 2018-02-26T14:53:57Z | 2018-06-08T01:21:07Z | 2018-02-27T01:13:45Z | MEMBER | 0 | pydata/xarray/pulls/1942 |
This precision drop was caused when converting We need to call |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
295838143 | MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1 | 1899 | Vectorized lazy indexing | fujiisoup 6815844 | closed | 0 | 37 | 2018-02-09T11:22:02Z | 2018-06-08T01:21:06Z | 2018-03-06T22:00:57Z | MEMBER | 0 | pydata/xarray/pulls/1899 |
I tried to support lazy vectorised indexing inspired by #1897. More tests would be necessary but I want to decide whether it is worth to continue. My current implementation is + For outer/basic indexers, we combine successive indexers (as we are doing now). + For vectorised indexers, we just store them as is and index sequentially when the evaluation. The implementation was simpler than I thought, but it has a clear limitation. It requires to load array before the vectorised indexing (I mean, the evaluation time). If we make a vectorised indexing for a large array, the performance significantly drops and it is not noticeable until the evaluation time. I appreciate any suggestions. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1899/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
328006764 | MDExOlB1bGxSZXF1ZXN0MTkxNjUzMjk3 | 2205 | Support dot with older dask | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-31T06:13:48Z | 2018-06-01T01:01:37Z | 2018-06-01T01:01:34Z | MEMBER | 0 | pydata/xarray/pulls/2205 |
Related with #2203, I think it is better if The cost is a slight complication of the code. Any comments are welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
326352018 | MDU6SXNzdWUzMjYzNTIwMTg= | 2184 | Alighment is not working in Dataset.__setitem__ and Dataset.update | fujiisoup 6815844 | closed | 0 | 1 | 2018-05-25T01:38:25Z | 2018-05-26T09:32:50Z | 2018-05-26T09:32:50Z | MEMBER | Code Sample, a copy-pastable example if possiblefrom #2180 , comment
In the above, with anything but an outer join you're destroying d2 - which doesn't even exist in the rhs dataset! A sane, desirable outcome should be Problem descriptionAlignment should work Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2184/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
326420749 | MDExOlB1bGxSZXF1ZXN0MTkwNTA5OTk5 | 2185 | weighted rolling mean -> weighted rolling sum | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-25T08:03:59Z | 2018-05-25T10:38:52Z | 2018-05-25T10:38:48Z | MEMBER | 0 | pydata/xarray/pulls/2185 | An example of weighted rolling mean in doc is actually weighted rolling sum. It is a little bit misleading SO, so I propose to change
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
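A small sketch of the distinction this PR is about, assuming unnormalized weights: the construct-and-multiply pattern gives a weighted rolling sum, and a weighted rolling mean additionally divides by the total weight.
```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0), dims="x")
weights = xr.DataArray([1.0, 2.0, 1.0], dims="window")

windows = da.rolling(x=3).construct("window")

weighted_sum = (windows * weights).sum("window")        # what the old doc example computes
weighted_mean = weighted_sum / weights.sum("window")    # an actual weighted rolling mean
```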
322572723 | MDExOlB1bGxSZXF1ZXN0MTg3NjU3MTg4 | 2124 | Raise an Error if a coordinate with wrong size is assigned to a dataarray | fujiisoup 6815844 | closed | 0 | 1 | 2018-05-13T07:50:15Z | 2018-05-16T02:10:48Z | 2018-05-15T16:39:22Z | MEMBER | 0 | pydata/xarray/pulls/2124 |
Now uses |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
321796423 | MDU6SXNzdWUzMjE3OTY0MjM= | 2112 | Sanity check when assigning a coordinate to DataArray | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-10T03:22:18Z | 2018-05-15T16:39:22Z | 2018-05-15T16:39:22Z | MEMBER | Code Sample, a copy-pastable example if possibleI think we can raise an Error if the newly assigned coordinate to a DataArray has an invalid shape.
Problem descriptionIt is more user-friendly if we make some sanity checks when a new coordinate is assigned to a xr.DataArray. Dataset raises an appropriate error,
Expected OutputValueError |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
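A minimal illustration of the sanity check requested above; after the linked PR, the second assignment is expected to fail instead of silently producing an inconsistent DataArray.
```python
import xarray as xr

da = xr.DataArray([0, 1, 2], dims="x")

da["x"] = [10, 20, 30]   # fine: matches the size of dimension 'x'
da["x"] = [10, 20]       # expected to raise an error: wrong size for 'x'
```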
322572858 | MDExOlB1bGxSZXF1ZXN0MTg3NjU3MjY0 | 2125 | Reduce pad size in rolling | fujiisoup 6815844 | closed | 0 | 2 | 2018-05-13T07:52:50Z | 2018-05-14T22:43:24Z | 2018-05-13T22:37:48Z | MEMBER | 0 | pydata/xarray/pulls/2125 |
I noticed @jhamman , can you kindly review this? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
319420201 | MDExOlB1bGxSZXF1ZXN0MTg1MzQzMTgw | 2100 | Fix a bug introduced in #2087 | fujiisoup 6815844 | closed | 0 | 1 | 2018-05-02T06:07:01Z | 2018-05-14T00:01:15Z | 2018-05-02T21:59:34Z | MEMBER | 0 | pydata/xarray/pulls/2100 |
A quick fix for #2099 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
322475569 | MDExOlB1bGxSZXF1ZXN0MTg3NjAwMzQy | 2122 | Fixes centerized rolling with bottleneck | fujiisoup 6815844 | closed | 0 | 2 | 2018-05-12T02:28:21Z | 2018-05-13T00:27:56Z | 2018-05-12T06:15:55Z | MEMBER | 0 | pydata/xarray/pulls/2122 |
Two bugs were found and fixed.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
322314146 | MDExOlB1bGxSZXF1ZXN0MTg3NDc3Mzgz | 2119 | Support keep_attrs for apply_ufunc for xr.Variable | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-11T14:18:51Z | 2018-05-11T22:54:48Z | 2018-05-11T22:54:44Z | MEMBER | 0 | pydata/xarray/pulls/2119 |
Fixes #2114. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
321928898 | MDU6SXNzdWUzMjE5Mjg4OTg= | 2114 | keep_attrs=True does not work `apply_ufunc` with xr.Variable | fujiisoup 6815844 | closed | 0 | 2 | 2018-05-10T13:21:07Z | 2018-05-11T22:54:44Z | 2018-05-11T22:54:44Z | MEMBER | Code Sample, a copy-pastable example if possible
```python In [2]: import numpy as np In [3]: import xarray as xr In [4]: da = xr.DataArray([0, 1, 2], dims='x', attrs={'foo': 'var'}) In [5]: func = lambda x: x*2 In [6]: xr.apply_ufunc(func, da, keep_attrs=True, input_core_dims=[['x']], outpu ...: t_core_dims=[['z']]) Out[6]: <xarray.DataArray (z: 3)> # attrs are tracked for xr.DataArray array([0, 2, 4]) Dimensions without coordinates: z Attributes: foo: var In [7]: xr.apply_ufunc(func, da.variable, keep_attrs=True, input_core_dims=[['x' ...: ]], output_core_dims=[['z']]) Out[7]: <xarray.Variable (z: 3)> # attrs are dropped array([0, 2, 4]) ``` Problem description
Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
319419699 | MDU6SXNzdWUzMTk0MTk2OTk= | 2099 | Dataset.update wrongly handles the coordinate | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-02T06:04:02Z | 2018-05-02T21:59:34Z | 2018-05-02T21:59:34Z | MEMBER | Code Sample, a copy-pastable example if possibleI noticed a bug introduced by #2087 (my PR) ```python import xarray as xr ds = xr.Dataset({'var': ('x', [1, 2, 3])}, coords={'x': [0, 1, 2], 'z1': ('x', [1, 2, 3]), 'z2': ('x', [1, 2, 3])}) ds['var'] = ds['var'] * 2 ``` It claims a ValueError. Problem descriptionHere should be
|
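A hedged sketch of the expected (non-raising) behavior; the printed values are my inference from the report, not output copied from it:

```python
# Expected (inferred) behavior: the assignment simply doubles the data and
# leaves the coordinates 'x', 'z1' and 'z2' untouched.
import xarray as xr

ds = xr.Dataset({'var': ('x', [1, 2, 3])},
                coords={'x': [0, 1, 2],
                        'z1': ('x', [1, 2, 3]),
                        'z2': ('x', [1, 2, 3])})
ds['var'] = ds['var'] * 2
print(ds['var'].values)  # expected: [2 4 6]
print(list(ds.coords))   # expected: ['x', 'z1', 'z2']
```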
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
318237397 | MDExOlB1bGxSZXF1ZXN0MTg0NDk1MDI4 | 2087 | Drop conflicted coordinate when assignment. | fujiisoup 6815844 | closed | 0 | 1 | 2018-04-27T00:12:43Z | 2018-05-02T05:58:41Z | 2018-05-02T02:31:02Z | MEMBER | 0 | pydata/xarray/pulls/2087 |
After this, when assigning a dataarray to a dataset, non-dimensional and conflicted coordinates of the dataarray are dropped.

example

```
In [2]: ds = xr.Dataset({'da': ('x', [0, 1, 2])},
   ...:                 coords={'y': (('x',), [0.1, 0.2, 0.3])})
   ...: ds
Out[2]:
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
    y        (x) float64 0.1 0.2 0.3
Dimensions without coordinates: x
Data variables:
    da       (x) int64 0 1 2

In [3]: other = ds['da']
   ...: other['y'] = 'x', [0, 1, 2]  # conflicted non-dimensional coordinate
   ...: ds['da'] = other
   ...: ds
Out[3]:
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
    y        (x) float64 0.1 0.2 0.3  # 'y' is not overwritten
Dimensions without coordinates: x
Data variables:
    da       (x) int64 0 1 2
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
297794911 | MDExOlB1bGxSZXF1ZXN0MTY5NjMxNTU3 | 1919 | Remove flake8 from travis | fujiisoup 6815844 | closed | 0 | 10 | 2018-02-16T14:03:46Z | 2018-05-01T07:24:04Z | 2018-05-01T07:24:00Z | MEMBER | 0 | pydata/xarray/pulls/1919 |
The removal of flake8 from travis would give a clearer separation between style issues and test failures. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
316660970 | MDU6SXNzdWUzMTY2NjA5NzA= | 2075 | apply_ufunc can generate an invalid object. | fujiisoup 6815844 | closed | 0 | 2 | 2018-04-23T04:52:25Z | 2018-04-23T05:08:02Z | 2018-04-23T05:08:02Z | MEMBER | Code Sample, a copy-pastable example if possible
In the above example,

Problem description

None of our functions should generate invalid xarray objects.

Expected Output

or raise an Error.

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
314653502 | MDU6SXNzdWUzMTQ2NTM1MDI= | 2062 | __contains__ does not work with DataArray | fujiisoup 6815844 | closed | 0 | 2 | 2018-04-16T13:34:30Z | 2018-04-16T15:51:30Z | 2018-04-16T15:51:29Z | MEMBER | Code Sample, a copy-pastable example if possible```python
Problem description
Expected Output```python
|
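A hedged illustration of the intended behavior (my own example, not the code sample from the report): membership tests on a DataArray should check the values, as they do for a plain numpy array.

```python
# My own example; the expectation is that `in` checks membership
# against the values of the array.
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3), dims='x')

print(1 in np.arange(3))  # True for a plain numpy array
print(1 in da.values)     # workaround at the time: test against the values
```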
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
305751269 | MDExOlB1bGxSZXF1ZXN0MTc1NDAzMzE4 | 1994 | Make constructing slices lazily. | fujiisoup 6815844 | closed | 0 | 1 | 2018-03-15T23:15:26Z | 2018-03-18T08:56:31Z | 2018-03-18T08:56:27Z | MEMBER | 0 | pydata/xarray/pulls/1994 |
Quick fix of #1993. With this fix, the script shown in #1993 runs

Bottleneck: 0.08317923545837402 s
Pandas: 1.3338768482208252 s
Xarray: 1.1349339485168457 s
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1994/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
300486064 | MDU6SXNzdWUzMDA0ODYwNjQ= | 1944 | building doc is failing for the release 0.10.1 | fujiisoup 6815844 | closed | 0 | 9 | 2018-02-27T04:01:28Z | 2018-03-12T20:36:58Z | 2018-03-12T20:35:31Z | MEMBER | I found the following page fails http://xarray.pydata.org/en/stable/examples/weather-data.html |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
302718231 | MDExOlB1bGxSZXF1ZXN0MTczMTcwNjc1 | 1968 | einsum for xarray | fujiisoup 6815844 | closed | 0 | 5 | 2018-03-06T14:18:22Z | 2018-03-12T06:42:12Z | 2018-03-12T06:42:08Z | MEMBER | 0 | pydata/xarray/pulls/1968 |
Currently, lazy-einsum for dask is not yet working. @shoyer
I think |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1968/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
301657312 | MDU6SXNzdWUzMDE2NTczMTI= | 1951 | einsum for xarray | fujiisoup 6815844 | closed | 0 | 1 | 2018-03-02T05:25:23Z | 2018-03-12T06:42:08Z | 2018-03-12T06:42:08Z | MEMBER | Code Sample, a copy-pastable example if possible

I sometimes want to make a more flexible dot product of two data arrays, where we sum up along a part of the common dimensions.

```python
# Your code here
da_vals = np.arange(6 * 5 * 4).reshape((6, 5, 4))
da = DataArray(da_vals, dims=['x', 'y', 'z'])
dm_vals = np.arange(6 * 4).reshape((6, 4))
dm = DataArray(dm_vals, dims=['x', 'z'])

# I want something like this
da.dot(dm, 'z')  # -> dimensions of the output array: ['x', 'y']
```

It's an intermediate path of

Is this feature sufficiently universal?

EDIT:
I just noticed dask does not have |
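For reference, the same contraction written with plain numpy einsum (a sketch reusing the arrays above; only the shared 'z' dimension is summed):

```python
# The same contraction with plain numpy: sum over 'z' only, broadcast over 'x'.
import numpy as np

da_vals = np.arange(6 * 5 * 4).reshape((6, 5, 4))  # dims ('x', 'y', 'z')
dm_vals = np.arange(6 * 4).reshape((6, 4))         # dims ('x', 'z')

out = np.einsum('xyz,xz->xy', da_vals, dm_vals)
print(out.shape)  # (6, 5)
```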
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1951/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
304042598 | MDU6SXNzdWUzMDQwNDI1OTg= | 1979 | Tests are failing caused by zarr 2.2.0 | fujiisoup 6815844 | closed | 0 | 2 | 2018-03-10T05:02:39Z | 2018-03-12T05:37:02Z | 2018-03-12T05:37:02Z | MEMBER | Problem descriptionTests are failing due to the release of zarr 2.2.0 Travis's log https://travis-ci.org/pydata/xarray/jobs/351566529 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
302001772 | MDU6SXNzdWUzMDIwMDE3NzI= | 1956 | numpy 1.11 support for apply_ufunc | fujiisoup 6815844 | closed | 0 | 1 | 2018-03-03T14:23:40Z | 2018-03-07T16:41:54Z | 2018-03-07T16:41:54Z | MEMBER | I noticed the failing in rtd
http://xarray.pydata.org/en/stable/computation.html#missing-values
is because it still uses numpy=1.11, which does not support
This can be easily fixed (just bumping up numpy's version on rtd),
but as our minimum requirement is numpy==1.11, we may need to take care of this in |
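As a general pattern (the specific function is not named above), a version-gated fallback keeps numpy 1.11 working; np.isin / np.in1d below are only an illustrative pair, not necessarily the function in question:

```python
# Generic version-gated fallback; np.isin / np.in1d are only an example pair.
import numpy as np

if tuple(int(v) for v in np.__version__.split('.')[:2]) >= (1, 13):
    isin = np.isin
else:
    def isin(element, test_elements):
        # Reshape-based fallback for numpy releases without np.isin.
        element = np.asarray(element)
        return np.in1d(element.ravel(), test_elements).reshape(element.shape)

print(isin([1, 2, 3], [2]))  # [False  True False]
```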
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
302003819 | MDExOlB1bGxSZXF1ZXN0MTcyNjcwNTI4 | 1957 | Numpy 1.13 for rtd | fujiisoup 6815844 | closed | 0 | 4 | 2018-03-03T14:51:21Z | 2018-03-03T22:22:54Z | 2018-03-03T22:22:49Z | MEMBER | 0 | pydata/xarray/pulls/1957 | { "url": "https://api.github.com/repos/pydata/xarray/issues/1957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
301613959 | MDExOlB1bGxSZXF1ZXN0MTcyMzk0OTEz | 1950 | Fix doc for missing values. | fujiisoup 6815844 | closed | 0 | 4 | 2018-03-02T00:47:23Z | 2018-03-03T06:58:33Z | 2018-03-02T20:17:29Z | MEMBER | 0 | pydata/xarray/pulls/1950 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
288567090 | MDU6SXNzdWUyODg1NjcwOTA= | 1831 | Slow performance of rolling.reduce | fujiisoup 6815844 | closed | 0 | 4 | 2018-01-15T11:44:47Z | 2018-03-01T03:39:19Z | 2018-03-01T03:39:19Z | MEMBER | Code Sample, a copy-pastable example if possible

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray(np.random.randn(1000, 100), dims=['x', 'y'],
   ...:                   coords={'x': np.arange(1000)})
   ...:

In [2]: %%timeit
   ...: da.rolling(x=10).reduce(np.sum)
   ...:
2.04 s ± 8.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

Problem description

In

Of course, we can use bottleneck methods if available, but this provides only a limited set of functions. (This also limits possible extensions of rolling, such as ND-rolling (#819), window type (#1142), strides (#819).)

I am wondering if we could skip any sanity checks in our |
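One way to avoid building a small DataArray per window is a strided (sliding-window) view followed by a single vectorized reduction; a sketch of that idea using numpy >= 1.20's sliding_window_view, not the approach the issue itself settles on:

```python
# Sketch: one strided view + a single vectorized reduction, instead of
# constructing a small object for every window (requires numpy >= 1.20).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

arr = np.random.randn(1000, 100)
windows = sliding_window_view(arr, window_shape=10, axis=0)  # (991, 100, 10)
rolled = windows.sum(axis=-1)
print(rolled.shape)  # (991, 100)
```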
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1831/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
300484822 | MDExOlB1bGxSZXF1ZXN0MTcxNTU3Mjc5 | 1943 | Fix rtd link on readme | fujiisoup 6815844 | closed | 0 | 1 | 2018-02-27T03:52:56Z | 2018-02-27T04:31:59Z | 2018-02-27T04:27:24Z | MEMBER | 0 | pydata/xarray/pulls/1943 | Typo in url. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
299606951 | MDU6SXNzdWUyOTk2MDY5NTE= | 1937 | `isnull` loads dask array | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-23T05:54:58Z | 2018-02-25T20:52:16Z | 2018-02-25T20:52:16Z | MEMBER | From gitter cc. @davidh-ssec
Problem description
Expected Output
Cause

Here, |
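A hedged sketch (requires dask installed; not the fix that was merged) of how an elementwise null-check can stay lazy by routing pandas.isnull through apply_ufunc with dask='parallelized':

```python
# Illustration only: pandas.isnull is applied blockwise, so the mask
# stays lazy until explicitly computed.
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(np.array([1.0, np.nan, 3.0]), dims='x').chunk({'x': 2})

lazy_mask = xr.apply_ufunc(pd.isnull, da,
                           dask='parallelized', output_dtypes=[bool])
print(type(lazy_mask.data))        # a dask array, nothing computed yet
print(lazy_mask.compute().values)  # [False  True False]
```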
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
298054181 | MDExOlB1bGxSZXF1ZXN0MTY5ODEyMTA1 | 1922 | Support indexing with 0d-np.ndarray | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-18T02:46:27Z | 2018-02-18T07:26:33Z | 2018-02-18T07:26:30Z | MEMBER | 0 | pydata/xarray/pulls/1922 |
Now Variable accepts 0d-np.ndarray indexer. |
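A tiny usage sketch of the behavior this enables (illustrative, not a test from the PR): a 0d integer array should index like the scalar it wraps.

```python
# Illustrative: indexing with a 0-dimensional integer array should behave
# like indexing with the equivalent Python scalar.
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10), dims='x')
assert da[np.array(0)].values == da[0].values
print(da[np.array(0)].values)  # 0
```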
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
298012981 | MDU6SXNzdWUyOTgwMTI5ODE= | 1921 | BUG: Indexing by 0-dimensional array | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-17T15:36:31Z | 2018-02-18T07:26:30Z | 2018-02-18T07:26:30Z | MEMBER |

```python
In [1]: import xarray as xr
   ...: import numpy as np
   ...:
   ...: a = np.arange(10)
   ...: a[np.array(0)]
   ...:
Out[1]: 0

In [2]: da = xr.DataArray(a, dims='x')
   ...: da[np.array(0)]
   ...:
TypeError                                 Traceback (most recent call last)
<ipython-input-2-d30fdfc612ec> in <module>()
      1 da = xr.DataArray(a, dims='x')
----> 2 da[np.array(0)]

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataarray.pyc in __getitem__(self, key)
    478         else:
    479             # xarray-style array indexing
--> 480             return self.isel(**self._item_key_to_dict(key))
    481
    482     def __setitem__(self, key, value):

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataarray.pyc in isel(self, drop, **indexers)
    759         DataArray.sel
    760         """
--> 761         ds = self._to_temp_dataset().isel(drop=drop, **indexers)
    762         return self._from_temp_dataset(ds)
    763

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataset.py in isel(self, drop, **indexers)
   1390         for name, var in iteritems(self._variables):
   1391             var_indexers = {k: v for k, v in indexers_list if k in var.dims}
-> 1392             new_var = var.isel(**var_indexers)
   1393             if not (drop and name in var_indexers):
   1394                 variables[name] = new_var

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in isel(self, **indexers)
    851             if dim in indexers:
    852                 key[i] = indexers[dim]
--> 853         return self[tuple(key)]
    854
    855     def squeeze(self, dim=None):

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in __getitem__(self, key)
    619         array

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in _broadcast_indexes(self, key)
    477         # key can be mapped as an OuterIndexer.
    478         if all(not isinstance(k, Variable) for k in key):
--> 479             return self._broadcast_indexes_outer(key)
    480
    481         # If all key is 1-dimensional and there are no duplicate labels,

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in _broadcast_indexes_outer(self, key)
    542                 new_key.append(k)
    543
--> 544         return dims, OuterIndexer(tuple(new_key)), None
    545
    546     def _nonzero(self):

/home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/indexing.py in __init__(self, key)
    368                     raise TypeError('invalid indexer array for {}, must have '
    369                                     'exactly 1 dimension: '
--> 370                                     .format(type(self).__name__, k))
    371                 k = np.asarray(k, dtype=np.int64)
    372             else:

TypeError: invalid indexer array for OuterIndexer, must have exactly 1 dimension:
```

Indexing by a 0d-array should be identical to the indexing by a scalar. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
294052591 | MDExOlB1bGxSZXF1ZXN0MTY2OTI1MzU5 | 1883 | Support nan-ops for object-typed arrays | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-02T23:16:39Z | 2018-02-15T22:03:06Z | 2018-02-15T22:03:01Z | MEMBER | 0 | pydata/xarray/pulls/1883 |
I am working to add aggregation ops for object-typed arrays, which may make #1837 cleaner.
I added some tests, but they may not be sufficient.
Are there any other cases that should be considered?
e.g. |
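A minimal sketch (assuming a mask-and-reduce approach; not the PR's actual implementation) of a nan-aware sum over an object-dtype array:

```python
# Hypothetical mask-and-reduce sketch: drop missing entries with
# pandas.isnull, then reduce what is left.
import numpy as np
import pandas as pd

arr = np.array([1, np.nan, 2, None, 3], dtype=object)
valid = ~pd.isnull(arr)
print(arr[valid].sum())  # 6
```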
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1883/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
292633789 | MDU6SXNzdWUyOTI2MzM3ODk= | 1866 | aggregation ops for object-dtype are missing | fujiisoup 6815844 | closed | 0 | 0 | 2018-01-30T02:40:27Z | 2018-02-15T22:03:01Z | 2018-02-15T22:03:01Z | MEMBER | This issue arises in #1837 comment, where we need to make a summation of object-dtype array, such as
pandas supports this by having its own nan-aggregation methods. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);