issues
78 rows where type = "pull" and user = 6815844 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
818583834 | MDExOlB1bGxSZXF1ZXN0NTgxODIxNTI0 | 4974 | implemented pad with new-indexes | fujiisoup 6815844 | closed | 0 | 8 | 2021-03-01T07:50:08Z | 2023-09-14T02:47:24Z | 2023-09-14T02:47:24Z | MEMBER | 0 | pydata/xarray/pulls/4974 |
Now we use a tuple of indexes for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
527553050 | MDExOlB1bGxSZXF1ZXN0MzQ0ODA1NzQ3 | 3566 | Make 0d-DataArray compatible for indexing. | fujiisoup 6815844 | closed | 0 | 6 | 2019-11-23T12:43:32Z | 2023-08-31T02:06:21Z | 2023-08-31T02:06:21Z | MEMBER | 0 | pydata/xarray/pulls/3566 |
Now a 0d-DataArray can be used for indexing. (A minimal sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
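For context on the record above: a minimal sketch, not taken from the PR itself, of indexing with a 0d DataArray, assuming an xarray version that includes this change; the array and values are made up.

```python
import xarray as xr

da = xr.DataArray([10, 20, 30], dims="x")
idx = da.argmax()      # argmax returns a 0d DataArray holding the position 2

# positional indexing with a 0d DataArray, which this PR enables
print(da[idx].values)  # -> 30
```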
531087939 | MDExOlB1bGxSZXF1ZXN0MzQ3NTkyNzE1 | 3587 | boundary options for rolling.construct | fujiisoup 6815844 | open | 0 | 4 | 2019-12-02T12:11:44Z | 2022-06-09T14:50:17Z | MEMBER | 0 | pydata/xarray/pulls/3587 |
Added some boundary options for rolling.construct.
Currently, the option names are inherited from |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3587/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
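As background for the record above: the PR extends rolling.construct, which already exists in xarray. A minimal sketch of the existing method (the boundary options proposed in the PR are not shown, since that PR is still open); the data are made up.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

# construct() exposes the moving windows as an extra dimension;
# positions without enough neighbours are padded with fill_value (NaN by default)
windows = da.rolling(x=3).construct("window", fill_value=np.nan)
print(windows.shape)  # (5, 3)
```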
675604714 | MDExOlB1bGxSZXF1ZXN0NDY1MDg1Njg1 | 4329 | ndrolling repr fix | fujiisoup 6815844 | closed | 0 | 6 | 2020-08-08T23:34:37Z | 2020-08-09T13:15:50Z | 2020-08-09T11:57:38Z | MEMBER | 0 | pydata/xarray/pulls/4329 |
There was a bug in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
655389649 | MDExOlB1bGxSZXF1ZXN0NDQ3ODkyNjE3 | 4219 | nd-rolling | fujiisoup 6815844 | closed | 0 | 16 | 2020-07-12T12:19:19Z | 2020-08-08T07:23:51Z | 2020-08-08T04:16:27Z | MEMBER | 0 | pydata/xarray/pulls/4219 |
I noticed that the implementation of nd-rolling is straightforward. The core part is implemented, but I am wondering what the best API is while keeping it backward-compatible. Obviously, it should basically look like
A problem is the other parameters. So, maybe we allow a dictionary for them?
The same thing happens for
Does anyone have another idea? (A sketch of the dictionary-style call follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
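A minimal sketch of the dictionary-style nd-rolling call discussed above, as it exists in current xarray (assuming a version where this PR is merged); the array is made up.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(20.0).reshape(4, 5), dims=("x", "y"))

# roll over two dimensions at once; per-dimension options can be passed as a dict
r = da.rolling(x=3, y=2, center={"x": True, "y": False}, min_periods=1)
print(r.mean())
```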
239918314 | MDExOlB1bGxSZXF1ZXN0MTI4NDcxOTk4 | 1469 | Argmin indexes | fujiisoup 6815844 | closed | 0 | 6 | 2017-07-01T01:23:31Z | 2020-06-29T19:36:25Z | 2020-06-29T19:36:25Z | MEMBER | 0 | pydata/xarray/pulls/1469 |
With this PR, a ValueError is raised if
Example:
```python
In [1]: import xarray as xr
   ...: da = xr.DataArray([[1, 2], [-1, 40], [5, 6]],
   ...:                   [('x', ['c', 'b', 'a']), ('y', [1, 0])])
   ...:
   ...: da.argmin_indexes()
Out[1]:
OrderedDict([('x', <xarray.DataArray 'x' ()> array(1)),
             ('y', <xarray.DataArray 'y' ()> array(0))])

In [2]: da.argmin_indexes(dims='y')
Out[2]:
OrderedDict([('y', <xarray.DataArray 'y' (x: 3)>
              array([0, 0, 0])
              Coordinates:
                * x (x) <U1 'c' 'b' 'a')])
```
(Because the returned object is an Although in #1388
This is mainly because |
1. For 1, I have prepared modification of For 2, we should either
+ change API of
```python
In [2]: da.argmin_indexes(dims='y')
Out[2]: OrderedDict([('y', array([0, 0, 0]), 'x', array(['c' 'b' 'a']))
```
I originally worked with the second option for the modification of Another alternative is to
+ change API of Any comments are welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
619374891 | MDExOlB1bGxSZXF1ZXN0NDE4OTEyODc3 | 4069 | Improve interp performance | fujiisoup 6815844 | closed | 0 | 2 | 2020-05-16T04:23:47Z | 2020-05-25T20:02:41Z | 2020-05-25T20:02:37Z | MEMBER | 0 | pydata/xarray/pulls/4069 |
Now n-dimensional interp works sequentially if possible. It may speed up some cases. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
613044689 | MDExOlB1bGxSZXF1ZXN0NDEzODcyODQy | 4036 | support darkmode | fujiisoup 6815844 | closed | 0 | 5 | 2020-05-06T04:39:07Z | 2020-05-21T21:06:15Z | 2020-05-07T20:36:32Z | MEMBER | 0 | pydata/xarray/pulls/4036 |
Now it looks like
I'm pretty sure that this workaround is not the best (maybe the second worst), as it only supports the dark mode of VS Code but not other environments. I couldn't find a good way to handle dark mode in general. Any advice is welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4036/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
596163034 | MDExOlB1bGxSZXF1ZXN0NDAwNTExNjkz | 3953 | Fix wrong order of coordinate converted from pd.series with MultiIndex | fujiisoup 6815844 | closed | 0 | 2 | 2020-04-07T21:28:04Z | 2020-04-08T05:49:46Z | 2020-04-08T02:19:11Z | MEMBER | 0 | pydata/xarray/pulls/3953 |
It looks
Added a workaround for this... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3953/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 } |
xarray 13221727 | pull | |||||
546784890 | MDExOlB1bGxSZXF1ZXN0MzYwMzk1OTY4 | 3670 | sel with categorical index | fujiisoup 6815844 | closed | 0 | 7 | 2020-01-08T10:51:06Z | 2020-01-25T22:38:28Z | 2020-01-25T22:38:21Z | MEMBER | 0 | pydata/xarray/pulls/3670 |
It is a bit surprising that no members have used xarray with a CategoricalIndex... If there is anything else missing, please feel free to point it out. (A minimal usage sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
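A minimal sketch of the behaviour the record above adds: label-based selection on a pandas CategoricalIndex, assuming an xarray version that includes this PR; the data are made up.

```python
import pandas as pd
import xarray as xr

da = xr.DataArray(
    [1, 2, 3],
    dims="x",
    coords={"x": pd.CategoricalIndex(["a", "b", "c"])},
)
print(da.sel(x="b").values)  # label-based selection on a CategoricalIndex -> 2
```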
523853001 | MDExOlB1bGxSZXF1ZXN0MzQxNzYxNTg1 | 3542 | sparse option to reindex and unstack | fujiisoup 6815844 | closed | 0 | 2 | 2019-11-16T14:41:00Z | 2019-11-19T22:40:34Z | 2019-11-19T16:23:34Z | MEMBER | 0 | pydata/xarray/pulls/3542 |
Added
There is still a lot of room to complete the sparse support, as discussed in #3245. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3542/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
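A minimal sketch of the sparse=True option on unstack described in the record above, assuming an xarray version with this PR and the optional sparse package installed; the data are made up.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"v": ("z", np.arange(4.0))},
    coords={"x": ("z", ["a", "a", "b", "c"]), "y": ("z", [0, 1, 0, 1])},
)
ds = ds.set_index(z=["x", "y"])

# missing (x, y) combinations become fill values in a sparse-backed array
sparse_ds = ds.unstack("z", sparse=True)
```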
523831612 | MDExOlB1bGxSZXF1ZXN0MzQxNzQ2NDA4 | 3541 | Added fill_value for unstack | fujiisoup 6815844 | closed | 0 | 3 | 2019-11-16T11:10:56Z | 2019-11-16T14:42:31Z | 2019-11-16T14:36:44Z | MEMBER | 0 | pydata/xarray/pulls/3541 |
Added an option |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3541/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
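A minimal sketch of the fill_value option for unstack added in the record above, assuming an xarray version that includes it; the data are made up.

```python
import xarray as xr

ds = xr.Dataset(
    {"v": ("z", [1, 2, 3])},
    coords={"x": ("z", ["a", "a", "b"]), "y": ("z", [0, 1, 1])},
)
ds = ds.set_index(z=["x", "y"])

# (x, y) combinations absent from the data are filled with 0 instead of NaN
print(ds.unstack("z", fill_value=0))
```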
522319360 | MDExOlB1bGxSZXF1ZXN0MzQwNTQxNzMz | 3520 | Fix set_index when an existing dimension becomes a level | fujiisoup 6815844 | closed | 0 | 2 | 2019-11-13T16:06:50Z | 2019-11-14T11:56:25Z | 2019-11-14T11:56:18Z | MEMBER | 0 | pydata/xarray/pulls/3520 |
There was a bug in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
281423161 | MDExOlB1bGxSZXF1ZXN0MTU3ODU2NTEx | 1776 | [WIP] Fix pydap array wrapper | fujiisoup 6815844 | closed | 0 | 0.10.3 3008859 | 6 | 2017-12-12T15:22:07Z | 2019-09-25T15:44:19Z | 2018-01-09T01:48:13Z | MEMBER | 0 | pydata/xarray/pulls/1776 |
I am trying to fix #1775, but tests are still failing. Any help would be appreciated. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
440900618 | MDExOlB1bGxSZXF1ZXN0Mjc2MzQ2MTQ3 | 2942 | Fix rolling operation with dask and bottleneck | fujiisoup 6815844 | closed | 0 | 7 | 2019-05-06T21:23:41Z | 2019-06-30T00:34:57Z | 2019-06-30T00:34:57Z | MEMBER | 0 | pydata/xarray/pulls/2942 |
Fix for #2940. It looks like there was a bug in the previous logic, but I am not sure why it was working... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
398468139 | MDExOlB1bGxSZXF1ZXN0MjQ0MTYyMTgx | 2668 | fix datetime_to_numeric and Variable._to_numeric | fujiisoup 6815844 | closed | 0 | 14 | 2019-01-11T22:02:07Z | 2019-02-11T11:58:22Z | 2019-02-11T09:47:09Z | MEMBER | 0 | pydata/xarray/pulls/2668 |
Started fixing #2667. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
396157243 | MDExOlB1bGxSZXF1ZXN0MjQyNDM1MjAz | 2653 | Implement integrate | fujiisoup 6815844 | closed | 0 | 2 | 2019-01-05T11:22:10Z | 2019-01-31T17:31:31Z | 2019-01-31T17:30:31Z | MEMBER | 0 | pydata/xarray/pulls/2653 |
I would like to add |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
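A minimal sketch of the integrate method the record above proposes, as it exists in current xarray; the data are made up.

```python
import xarray as xr

da = xr.DataArray(
    [0.0, 1.0, 4.0, 9.0], dims="x", coords={"x": [0.0, 1.0, 2.0, 3.0]}
)

# trapezoidal-rule integration along the 'x' coordinate
print(da.integrate("x"))
```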
231308952 | MDExOlB1bGxSZXF1ZXN0MTIyNDE4MjA3 | 1426 | scalar_level in MultiIndex | fujiisoup 6815844 | closed | 0 | 10 | 2017-05-25T11:03:05Z | 2019-01-14T21:20:28Z | 2019-01-14T21:20:27Z | MEMBER | 0 | pydata/xarray/pulls/1426 |
[Edit for more clarity] I restarted a new branch to fix #1408 (I closed the older one, #1412). Because the changes I made are relatively large, here I summarize this PR.
Summary: In this PR, I newly added two kinds of levels in MultiIndex,
Changes in behaviors.
Examples of the output are shown below. Any suggestions for these behaviors are welcome.
```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'})
   ...: ds2 = xr.Dataset({'foo': (('x',), [4, 5, 6])}, {'x': [1, 2, 3], 'y': 'b'})
   ...: # example data
   ...: ds = xr.concat([ds1, ds2], dim='y').stack(yx=['y', 'x'])
   ...: ds
Out[1]:
<xarray.Dataset>
Dimensions:  (yx: 6)
Coordinates:
  * yx       (yx) MultiIndex
  - y        (yx) object 'a' 'a' 'a' 'b' 'b' 'b'  # <--- this is index-level
  - x        (yx) int64 1 2 3 1 2 3               # <--- this is also index-level
Data variables:
    foo      (yx) int64 1 2 3 4 5 6

In [2]: # 1. indexing a scalar converts
In [3]: # 2. indexing a single element from MultiIndex makes a
In [6]: # 3. Enables selecting along a
```
Changes in the public APIs: Some changes were necessary to the public APIs, though I tried to minimize them.
Implementation summary: The main change in the implementation is the addition of our own wrapper of
What we can do now: The main merit of this proposal is that it enables us to handle
What we cannot do now: With the current implementation, we can do
Similarly, we can neither do
What are to be decided
TODOs
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1426/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
391477755 | MDExOlB1bGxSZXF1ZXN0MjM4OTcyNzU5 | 2612 | Added Coarsen | fujiisoup 6815844 | closed | 0 | 16 | 2018-12-16T15:28:31Z | 2019-01-06T09:13:56Z | 2019-01-06T09:13:46Z | MEMBER | 0 | pydata/xarray/pulls/2612 |
Started to implement
Currently, it is not working for a datetime coordinate, since I am not familiar with datetime handling. Any advice will be appreciated. (A sketch of basic coarsen usage follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
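A minimal sketch of coarsen as introduced by the record above, using the current xarray API with made-up data.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0), dims="t", coords={"t": np.arange(6)})

print(da.coarsen(t=2).mean())                  # block-average every 2 samples
print(da.coarsen(t=4, boundary="trim").sum())  # trim samples that do not fit a full block
```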
392535505 | MDExOlB1bGxSZXF1ZXN0MjM5Nzg0ODE1 | 2621 | Fix multiindex selection | fujiisoup 6815844 | closed | 0 | 7 | 2018-12-19T10:30:15Z | 2018-12-24T15:37:27Z | 2018-12-24T15:37:27Z | MEMBER | 0 | pydata/xarray/pulls/2621 |
Fix using |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2621/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
368045263 | MDExOlB1bGxSZXF1ZXN0MjIxMzExNzcw | 2477 | Inhouse LooseVersion | fujiisoup 6815844 | closed | 0 | 2 | 2018-10-09T05:23:56Z | 2018-10-10T13:47:31Z | 2018-10-10T13:47:23Z | MEMBER | 0 | pydata/xarray/pulls/2477 |
A fix for #2468. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2477/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
366653476 | MDExOlB1bGxSZXF1ZXN0MjIwMjcyODMz | 2462 | pep8speaks | fujiisoup 6815844 | closed | 0 | 14 | 2018-10-04T07:17:34Z | 2018-10-07T22:40:15Z | 2018-10-07T22:40:08Z | MEMBER | 0 | pydata/xarray/pulls/2462 |
I installed pep8speaks as suggested in #2428.
It looks like they do not need a yml file, but it may be safer to add this (just renamed from |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
364565122 | MDExOlB1bGxSZXF1ZXN0MjE4NzIxNDUy | 2447 | restore ddof support in std | fujiisoup 6815844 | closed | 0 | 3 | 2018-09-27T16:51:44Z | 2018-10-03T12:44:55Z | 2018-09-28T13:44:29Z | MEMBER | 0 | pydata/xarray/pulls/2447 |
It looks like I wrongly removed |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
364545910 | MDExOlB1bGxSZXF1ZXN0MjE4NzA2NzQ1 | 2446 | fix:2445 | fujiisoup 6815844 | closed | 0 | 0 | 2018-09-27T16:00:17Z | 2018-09-28T18:24:42Z | 2018-09-28T18:24:36Z | MEMBER | 0 | pydata/xarray/pulls/2446 |
It is a regression after #2360. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2446/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
350247452 | MDExOlB1bGxSZXF1ZXN0MjA4MTQ0ODQx | 2366 | Future warning for default reduction dimension of groupby | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-14T01:16:34Z | 2018-09-28T06:54:30Z | 2018-09-28T06:54:30Z | MEMBER | 0 | pydata/xarray/pulls/2366 |
Started to fix #2363.
Now a FutureWarning is raised in groupby if the default reduction dimension is not specified.
As a side effect, I added |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
333248242 | MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3 | 2236 | Refactor nanops | fujiisoup 6815844 | closed | 0 | 19 | 2018-06-18T12:27:31Z | 2018-09-26T12:42:55Z | 2018-08-16T06:59:33Z | MEMBER | 0 | pydata/xarray/pulls/2236 |
In #2230, the addition of
I tried to refactor them by moving the nan-aggregation methods to
I think I still need to take care of more edge cases, but I appreciate any comments on the current implementation. Note:
In my implementation, bottleneck is not used when |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
356698348 | MDExOlB1bGxSZXF1ZXN0MjEyODg5NzMy | 2398 | implement Gradient | fujiisoup 6815844 | closed | 0 | 19 | 2018-09-04T08:11:52Z | 2018-09-21T20:02:43Z | 2018-09-21T20:02:43Z | MEMBER | 0 | pydata/xarray/pulls/2398 |
Added |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
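For context on the record above: in current xarray the gradient feature is exposed as differentiate (assuming that is indeed the method this PR introduced). A minimal sketch with made-up data.

```python
import xarray as xr

da = xr.DataArray(
    [0.0, 1.0, 4.0, 9.0], dims="x", coords={"x": [0.0, 1.0, 2.0, 3.0]}
)

# central-difference gradient with respect to the 'x' coordinate
print(da.differentiate("x"))
```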
351502921 | MDExOlB1bGxSZXF1ZXN0MjA5MDc4NDQ4 | 2372 | [MAINT] Avoid using duck typing | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-17T08:26:31Z | 2018-08-20T01:13:26Z | 2018-08-20T01:13:16Z | MEMBER | 0 | pydata/xarray/pulls/2372 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2372/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
351591072 | MDExOlB1bGxSZXF1ZXN0MjA5MTQ1NDcy | 2373 | More support of non-string dimension names | fujiisoup 6815844 | closed | 0 | 2 | 2018-08-17T13:18:18Z | 2018-08-20T01:13:02Z | 2018-08-20T01:12:37Z | MEMBER | 0 | pydata/xarray/pulls/2373 |
Following #2174. In some methods, the consistency of the dictionary arguments and keyword arguments is checked twice in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
348536270 | MDExOlB1bGxSZXF1ZXN0MjA2ODY0NzU4 | 2353 | Raises a ValueError for a confliction between dimension names and level names | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-08T00:52:29Z | 2018-08-13T22:16:36Z | 2018-08-13T22:16:31Z | MEMBER | 0 | pydata/xarray/pulls/2353 |
Now it raises an error when assigning a new dimension whose name conflicts with an existing level name. Therefore, the following is not allowed:
```python
b = xr.Dataset(coords={'dim0': ['a', 'b'], 'dim1': [0, 1]})
b = b.stack(dim_stacked=['dim0', 'dim1'])
```
This should raise an error even though its length is consistent with
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2353/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
348539667 | MDExOlB1bGxSZXF1ZXN0MjA2ODY3MjMw | 2354 | Mark some tests related to cdat-lite as xfail | fujiisoup 6815844 | closed | 0 | 2 | 2018-08-08T01:13:25Z | 2018-08-10T16:09:30Z | 2018-08-10T16:09:30Z | MEMBER | 0 | pydata/xarray/pulls/2354 | I just marked some to_cdms2 tests as xfail. See #2332 for the details. It is a temporary workaround and we may need to keep #2332 open until it is solved. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
348108577 | MDExOlB1bGxSZXF1ZXN0MjA2NTM3NDc0 | 2349 | dask.ghost -> dask.overlap | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-06T22:54:46Z | 2018-08-08T01:14:04Z | 2018-08-08T01:14:02Z | MEMBER | 0 | pydata/xarray/pulls/2349 | Dask renamed |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
347672994 | MDExOlB1bGxSZXF1ZXN0MjA2MjI0Mjcz | 2342 | apply_ufunc now raises a ValueError when the size of input_core_dims is inconsistent with number of argument | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-05T06:20:03Z | 2018-08-06T22:38:57Z | 2018-08-06T22:38:53Z | MEMBER | 0 | pydata/xarray/pulls/2342 |
Now raises a ValueError when the size of input_core_dims is inconsistent with the number of arguments. (See the sketch after this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
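A minimal sketch of the check described above: input_core_dims must have one entry per positional argument, otherwise a ValueError is raised. The data and the applied function are made up.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(3, 4), dims=("x", "y"))

# one entry in input_core_dims per positional argument -> fine
xr.apply_ufunc(lambda a: a.sum(axis=-1), da, input_core_dims=[["y"]])

# two entries for a single argument -> ValueError after this PR
try:
    xr.apply_ufunc(lambda a: a.sum(axis=-1), da, input_core_dims=[["y"], ["y"]])
except ValueError as err:
    print(err)
```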
347677525 | MDExOlB1bGxSZXF1ZXN0MjA2MjI2ODU0 | 2343 | local flake8 | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-05T07:47:38Z | 2018-08-05T23:47:00Z | 2018-08-05T23:47:00Z | MEMBER | 0 | pydata/xarray/pulls/2343 | Trivial changes to pass local flake8 tests. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
345434195 | MDExOlB1bGxSZXF1ZXN0MjA0NTg1MDU5 | 2326 | fix doc build error after #2312 | fujiisoup 6815844 | closed | 0 | 0 | 2018-07-28T09:15:20Z | 2018-07-28T10:05:53Z | 2018-07-28T10:05:50Z | MEMBER | 0 | pydata/xarray/pulls/2326 | I merged #2312 without making sure the doc build passed, but there was a typo. This PR fixes it. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
289556132 | MDExOlB1bGxSZXF1ZXN0MTYzNjU3NDI0 | 1837 | Rolling window with `as_strided` | fujiisoup 6815844 | closed | 0 | 14 | 2018-01-18T09:18:19Z | 2018-06-22T22:27:11Z | 2018-03-01T03:39:19Z | MEMBER | 0 | pydata/xarray/pulls/1837 |
I started working on refactoring rolling.
As suggested in a comment on #1831, I implemented
I got more than a 1,000-times speed-up! Yay!
My current concerns are
+ Can we expose the new
Any thoughts are welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1837/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
330859619 | MDExOlB1bGxSZXF1ZXN0MTkzNzYyMjMx | 2222 | implement interp_like | fujiisoup 6815844 | closed | 0 | 4 | 2018-06-09T06:46:48Z | 2018-06-20T01:39:40Z | 2018-06-20T01:39:24Z | MEMBER | 0 | pydata/xarray/pulls/2222 |
This adds |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
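A minimal sketch of interp_like as added by the record above (it requires scipy); the data are made up.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(4.0), dims="x", coords={"x": [0.0, 1.0, 2.0, 3.0]})
other = xr.DataArray(np.zeros(3), dims="x", coords={"x": [0.5, 1.5, 2.5]})

# interpolate da onto the coordinates of `other`
print(da.interp_like(other))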
320275317 | MDExOlB1bGxSZXF1ZXN0MTg1OTgzOTc3 | 2104 | implement interp() | fujiisoup 6815844 | closed | 0 | 51 | 2018-05-04T13:28:38Z | 2018-06-11T13:01:21Z | 2018-06-08T00:33:52Z | MEMBER | 0 | pydata/xarray/pulls/2104 |
I started working to add
I think I need to take care of more edge cases, but before finishing up this PR, I want to discuss what the best API is. I would like this method to work similarly to
```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]})

In [2]: # simple linear interpolation
   ...: da.interpolate_at(x=[0.5, 1.5])
Out[2]:
<xarray.DataArray (x: 2)>
array([0.05, 0.15])
Coordinates:
  * x        (x) float64 0.5 1.5

In [3]: # with cubic spline interpolation
   ...: da.interpolate_at(x=[0.5, 1.5], method='cubic')
Out[3]:
<xarray.DataArray (x: 2)>
array([0.0375, 0.1625])
Coordinates:
  * x        (x) float64 0.5 1.5

In [4]: # interpolation at one single position
   ...: da.interpolate_at(x=0.5)
Out[4]:
<xarray.DataArray ()>
array(0.05)
Coordinates:
    x        float64 0.5

In [5]: # interpolation with broadcasting
   ...: da.interpolate_at(x=xr.DataArray([[0.5, 1.0], [1.5, 2.0]], dims=['y', 'z']))
Out[5]:
<xarray.DataArray (y: 2, z: 2)>
array([[0.05, 0.1 ],
       [0.15, 0.2 ]])
Coordinates:
    x        (y, z) float64 0.5 1.0 1.5 2.0
Dimensions without coordinates: y, z

In [6]: da = xr.DataArray([[0, 0.1, 0.2], [1.0, 1.1, 1.2]],
   ...:                   dims=['x', 'y'],
   ...:                   coords={'x': [0, 1], 'y': [0, 10, 20]})

In [7]: # multidimensional interpolation
   ...: da.interpolate_at(x=[0.5, 1.5], y=[5, 15])
Out[7]:
<xarray.DataArray (x: 2, y: 2)>
array([[0.55, 0.65],
       [ nan,  nan]])
Coordinates:
  * x        (x) float64 0.5 1.5
  * y        (y) int64 5 15

In [8]: # multidimensional interpolation with broadcasting
   ...: da.interpolate_at(x=xr.DataArray([0.5, 1.5], dims='z'),
   ...:                   y=xr.DataArray([5, 15], dims='z'))
Out[8]:
<xarray.DataArray (z: 2)>
array([0.55,  nan])
Coordinates:
    x        (z) float64 0.5 1.5
    y        (z) int64 5 15
Dimensions without coordinates: z
```
Design question
I appreciate any comments. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2104/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
330487989 | MDExOlB1bGxSZXF1ZXN0MTkzNDg2NzYz | 2220 | Reduce memory usage in doc.interpolation.rst | fujiisoup 6815844 | closed | 0 | 0 | 2018-06-08T01:23:13Z | 2018-06-08T01:45:11Z | 2018-06-08T01:31:19Z | MEMBER | 0 | pydata/xarray/pulls/2220 | I noticed that an example I added to the docs in #2104 consumes more than 1 GB of memory, which makes the readthedocs build fail. This PR changes it to a much lighter example. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
300268334 | MDExOlB1bGxSZXF1ZXN0MTcxMzk2NjUw | 1942 | Fix precision drop when indexing a datetime64 arrays. | fujiisoup 6815844 | closed | 0 | 2 | 2018-02-26T14:53:57Z | 2018-06-08T01:21:07Z | 2018-02-27T01:13:45Z | MEMBER | 0 | pydata/xarray/pulls/1942 |
This precision drop was caused when converting We need to call |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
295838143 | MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1 | 1899 | Vectorized lazy indexing | fujiisoup 6815844 | closed | 0 | 37 | 2018-02-09T11:22:02Z | 2018-06-08T01:21:06Z | 2018-03-06T22:00:57Z | MEMBER | 0 | pydata/xarray/pulls/1899 |
I tried to support lazy vectorised indexing, inspired by #1897. More tests would be necessary, but I want to decide whether it is worth continuing. My current implementation is:
+ For outer/basic indexers, we combine successive indexers (as we are doing now).
+ For vectorised indexers, we just store them as-is and index sequentially at evaluation time.
The implementation was simpler than I thought, but it has a clear limitation. It requires loading the array before the vectorised indexing (I mean, at evaluation time). If we do a vectorised indexing of a large array, the performance drops significantly, and this is not noticeable until evaluation time. I appreciate any suggestions. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1899/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
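For context on the record above: a minimal sketch of the kind of vectorised (pointwise) indexing being made lazy, expressed through the public API; the laziness itself is internal to the backends, and the data are made up.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12).reshape(3, 4), dims=("x", "y"))

# vectorised (pointwise) indexing: both indexers share the new 'points' dimension
pts_x = xr.DataArray([0, 2], dims="points")
pts_y = xr.DataArray([1, 3], dims="points")
print(da.isel(x=pts_x, y=pts_y).values)  # values at (0, 1) and (2, 3) -> [ 1 11]
```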
328006764 | MDExOlB1bGxSZXF1ZXN0MTkxNjUzMjk3 | 2205 | Support dot with older dask | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-31T06:13:48Z | 2018-06-01T01:01:37Z | 2018-06-01T01:01:34Z | MEMBER | 0 | pydata/xarray/pulls/2205 |
Related to #2203, I think it is better if
The cost is a slight complication of the code. Any comments are welcome. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
326420749 | MDExOlB1bGxSZXF1ZXN0MTkwNTA5OTk5 | 2185 | weighted rolling mean -> weighted rolling sum | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-25T08:03:59Z | 2018-05-25T10:38:52Z | 2018-05-25T10:38:48Z | MEMBER | 0 | pydata/xarray/pulls/2185 | The example of a weighted rolling mean in the docs is actually a weighted rolling sum. This is a little bit misleading, so I propose to change
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
322572723 | MDExOlB1bGxSZXF1ZXN0MTg3NjU3MTg4 | 2124 | Raise an Error if a coordinate with wrong size is assigned to a dataarray | fujiisoup 6815844 | closed | 0 | 1 | 2018-05-13T07:50:15Z | 2018-05-16T02:10:48Z | 2018-05-15T16:39:22Z | MEMBER | 0 | pydata/xarray/pulls/2124 |
Now uses |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
322572858 | MDExOlB1bGxSZXF1ZXN0MTg3NjU3MjY0 | 2125 | Reduce pad size in rolling | fujiisoup 6815844 | closed | 0 | 2 | 2018-05-13T07:52:50Z | 2018-05-14T22:43:24Z | 2018-05-13T22:37:48Z | MEMBER | 0 | pydata/xarray/pulls/2125 |
I noticed
@jhamman, can you kindly review this? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
319420201 | MDExOlB1bGxSZXF1ZXN0MTg1MzQzMTgw | 2100 | Fix a bug introduced in #2087 | fujiisoup 6815844 | closed | 0 | 1 | 2018-05-02T06:07:01Z | 2018-05-14T00:01:15Z | 2018-05-02T21:59:34Z | MEMBER | 0 | pydata/xarray/pulls/2100 |
A quick fix for #2099 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
322475569 | MDExOlB1bGxSZXF1ZXN0MTg3NjAwMzQy | 2122 | Fixes centerized rolling with bottleneck | fujiisoup 6815844 | closed | 0 | 2 | 2018-05-12T02:28:21Z | 2018-05-13T00:27:56Z | 2018-05-12T06:15:55Z | MEMBER | 0 | pydata/xarray/pulls/2122 |
Two bugs were found and fixed.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
322314146 | MDExOlB1bGxSZXF1ZXN0MTg3NDc3Mzgz | 2119 | Support keep_attrs for apply_ufunc for xr.Variable | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-11T14:18:51Z | 2018-05-11T22:54:48Z | 2018-05-11T22:54:44Z | MEMBER | 0 | pydata/xarray/pulls/2119 |
Fixes #2114. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
318237397 | MDExOlB1bGxSZXF1ZXN0MTg0NDk1MDI4 | 2087 | Drop conflicted coordinate when assignment. | fujiisoup 6815844 | closed | 0 | 1 | 2018-04-27T00:12:43Z | 2018-05-02T05:58:41Z | 2018-05-02T02:31:02Z | MEMBER | 0 | pydata/xarray/pulls/2087 |
After this, when assigning a dataarray to a dataset, non-dimensional and conflicting coordinates of the dataarray are dropped. Example:
```
In [2]: ds = xr.Dataset({'da': ('x', [0, 1, 2])},
   ...:                 coords={'y': (('x',), [0.1, 0.2, 0.3])})
   ...: ds
Out[2]:
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
    y        (x) float64 0.1 0.2 0.3
Dimensions without coordinates: x
Data variables:
    da       (x) int64 0 1 2

In [3]: other = ds['da']
   ...: other['y'] = 'x', [0, 1, 2]  # conflicting non-dimensional coordinate
   ...: ds['da'] = other
   ...: ds
Out[3]:
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
    y        (x) float64 0.1 0.2 0.3  # 'y' is not overwritten
Dimensions without coordinates: x
Data variables:
    da       (x) int64 0 1 2
``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
297794911 | MDExOlB1bGxSZXF1ZXN0MTY5NjMxNTU3 | 1919 | Remove flake8 from travis | fujiisoup 6815844 | closed | 0 | 10 | 2018-02-16T14:03:46Z | 2018-05-01T07:24:04Z | 2018-05-01T07:24:00Z | MEMBER | 0 | pydata/xarray/pulls/1919 |
Removing flake8 from Travis would give a clearer separation between style issues and test failures. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
305751269 | MDExOlB1bGxSZXF1ZXN0MTc1NDAzMzE4 | 1994 | Make constructing slices lazily. | fujiisoup 6815844 | closed | 0 | 1 | 2018-03-15T23:15:26Z | 2018-03-18T08:56:31Z | 2018-03-18T08:56:27Z | MEMBER | 0 | pydata/xarray/pulls/1994 |
Quick fix of #1993. With this fix, the script shown in #1993 runs:
Bottleneck: 0.08317923545837402 s
Pandas: 1.3338768482208252 s
Xarray: 1.1349339485168457 s |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1994/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
302718231 | MDExOlB1bGxSZXF1ZXN0MTczMTcwNjc1 | 1968 | einsum for xarray | fujiisoup 6815844 | closed | 0 | 5 | 2018-03-06T14:18:22Z | 2018-03-12T06:42:12Z | 2018-03-12T06:42:08Z | MEMBER | 0 | pydata/xarray/pulls/1968 |
Currently, lazy-einsum for dask is not yet working. @shoyer
I think |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1968/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
302003819 | MDExOlB1bGxSZXF1ZXN0MTcyNjcwNTI4 | 1957 | Numpy 1.13 for rtd | fujiisoup 6815844 | closed | 0 | 4 | 2018-03-03T14:51:21Z | 2018-03-03T22:22:54Z | 2018-03-03T22:22:49Z | MEMBER | 0 | pydata/xarray/pulls/1957 | { "url": "https://api.github.com/repos/pydata/xarray/issues/1957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
301613959 | MDExOlB1bGxSZXF1ZXN0MTcyMzk0OTEz | 1950 | Fix doc for missing values. | fujiisoup 6815844 | closed | 0 | 4 | 2018-03-02T00:47:23Z | 2018-03-03T06:58:33Z | 2018-03-02T20:17:29Z | MEMBER | 0 | pydata/xarray/pulls/1950 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
300484822 | MDExOlB1bGxSZXF1ZXN0MTcxNTU3Mjc5 | 1943 | Fix rtd link on readme | fujiisoup 6815844 | closed | 0 | 1 | 2018-02-27T03:52:56Z | 2018-02-27T04:31:59Z | 2018-02-27T04:27:24Z | MEMBER | 0 | pydata/xarray/pulls/1943 | Typo in url. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
298054181 | MDExOlB1bGxSZXF1ZXN0MTY5ODEyMTA1 | 1922 | Support indexing with 0d-np.ndarray | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-18T02:46:27Z | 2018-02-18T07:26:33Z | 2018-02-18T07:26:30Z | MEMBER | 0 | pydata/xarray/pulls/1922 |
Now Variable accepts a 0d np.ndarray indexer. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
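A minimal sketch of the behaviour described in the record above, with made-up data.

```python
import numpy as np
import xarray as xr

v = xr.Variable(("x",), np.array([10, 20, 30]))

# indexing a Variable with a 0-dimensional np.ndarray now works
print(v[np.array(1)])  # -> 20
```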
294052591 | MDExOlB1bGxSZXF1ZXN0MTY2OTI1MzU5 | 1883 | Support nan-ops for object-typed arrays | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-02T23:16:39Z | 2018-02-15T22:03:06Z | 2018-02-15T22:03:01Z | MEMBER | 0 | pydata/xarray/pulls/1883 |
I am working to add aggregation ops for object-typed arrays, which may make #1837 cleaner.
I added some tests, but they may not be sufficient.
Are there any other cases that should be considered?
e.g. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1883/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
291544932 | MDExOlB1bGxSZXF1ZXN0MTY1MDk5Mzk2 | 1858 | Adding a link to asv benchmark. | fujiisoup 6815844 | closed | 0 | 2 | 2018-01-25T11:56:56Z | 2018-01-25T21:55:24Z | 2018-01-25T17:46:12Z | MEMBER | 0 | pydata/xarray/pulls/1858 | As discussed in #1851, I added a link in doc/installing.rst and a badge on README. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
290666013 | MDExOlB1bGxSZXF1ZXN0MTY0NDUyOTg4 | 1851 | Indexing benchmarking | fujiisoup 6815844 | closed | 0 | 3 | 2018-01-23T00:27:29Z | 2018-01-24T08:10:19Z | 2018-01-24T08:10:19Z | MEMBER | 0 | pydata/xarray/pulls/1851 |
Just added some benchmarks for basic, outer, and vectorized indexing and assignments. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
289877082 | MDExOlB1bGxSZXF1ZXN0MTYzODk2MjYw | 1841 | Add dtype support for reduce methods. | fujiisoup 6815844 | closed | 0 | 0 | 2018-01-19T06:40:41Z | 2018-01-20T18:29:02Z | 2018-01-20T18:29:02Z | MEMBER | 0 | pydata/xarray/pulls/1841 |
Fixes #1838. The new rule for reduce is:
+ If dtype is not None and different from the array's dtype, use numpy's aggregation function instead of bottleneck's.
+ If out is not None, raise an error.
as suggested in the comments. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
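A minimal sketch of the rule described above: passing dtype to a reduce method routes the aggregation through numpy rather than bottleneck. The data are made up.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1, 2, 3], dtype="int16"), dims="x")

# dtype differs from the array's dtype, so numpy's sum is used instead of bottleneck's
print(da.sum(dtype="int64"))
```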
277589143 | MDExOlB1bGxSZXF1ZXN0MTU1MjIxMjQ3 | 1746 | Fix in vectorized item assignment | fujiisoup 6815844 | closed | 0 | 4 | 2017-11-29T00:37:41Z | 2017-12-09T03:29:35Z | 2017-12-09T03:29:35Z | MEMBER | 0 | pydata/xarray/pulls/1746 |
Found bugs in
I will add more tests later. Test-case suggestions would be appreciated. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
274763120 | MDExOlB1bGxSZXF1ZXN0MTUzMjIzMjMy | 1724 | Fix unexpected loading after ``print`` | fujiisoup 6815844 | closed | 0 | 1 | 2017-11-17T06:20:28Z | 2017-11-17T16:44:40Z | 2017-11-17T16:44:40Z | MEMBER | 0 | pydata/xarray/pulls/1724 |
Only a single missing underscore caused this issue :) Added tests. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
269996138 | MDExOlB1bGxSZXF1ZXN0MTQ5ODE0MTU2 | 1676 | Support orthogonal indexing in MemoryCachedArray (Fix for #1429) | fujiisoup 6815844 | closed | 0 | 7 | 2017-10-31T15:10:59Z | 2017-11-09T13:47:38Z | 2017-11-06T17:21:56Z | MEMBER | 0 | pydata/xarray/pulls/1676 |
This bug originates from the complicated structure around the array wrappers and their indexing, i.e. different array wrappers support different indexing types and, moreover, some can store another array wrapper inside. I made some cleanups.
+ Now every array wrapper is a subclass of
I think I added enough tests for it, but I am not yet fully accustomed to xarray's backend. There might be many combinations of their hierarchical relations. I would appreciate any comments. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1676/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
272164108 | MDExOlB1bGxSZXF1ZXN0MTUxMzU2ODQz | 1700 | Add dropna test. | fujiisoup 6815844 | closed | 0 | 3 | 2017-11-08T11:25:18Z | 2017-11-09T07:56:19Z | 2017-11-09T07:56:13Z | MEMBER | 0 | pydata/xarray/pulls/1700 |
This PR simply adds a particular test pointed out in #1694 . |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1700/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
271180559 | MDExOlB1bGxSZXF1ZXN0MTUwNjcwMTI4 | 1693 | Bugfix in broadcast_indexes | fujiisoup 6815844 | closed | 0 | 8 | 2017-11-04T09:58:43Z | 2017-11-07T20:41:53Z | 2017-11-07T20:41:44Z | MEMBER | 0 | pydata/xarray/pulls/1693 |
Fixes #1688.
It is caused that |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
265344609 | MDExOlB1bGxSZXF1ZXN0MTQ2NDk4Mzg4 | 1632 | Support autocompletion dictionary access in ipython. | fujiisoup 6815844 | closed | 0 | 6 | 2017-10-13T16:19:35Z | 2017-11-04T16:05:02Z | 2017-10-22T17:49:21Z | MEMBER | 0 | pydata/xarray/pulls/1632 |
Support #1628. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1632/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
253277979 | MDExOlB1bGxSZXF1ZXN0MTM3OTA3NDIx | 1530 | Deprecate old pandas support | fujiisoup 6815844 | closed | 0 | 0.10 2415632 | 1 | 2017-08-28T09:40:02Z | 2017-11-04T09:51:51Z | 2017-08-31T17:25:10Z | MEMBER | 0 | pydata/xarray/pulls/1530 |
Explicitly deprecated support for old pandas (< 0.18) and old numpy (< 1.11).
Some backported functions in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1530/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
256261536 | MDExOlB1bGxSZXF1ZXN0MTQwMDQzMjAx | 1564 | Uint support in reduce methods with skipna | fujiisoup 6815844 | closed | 0 | 3 | 2017-09-08T13:54:54Z | 2017-11-04T09:51:49Z | 2017-09-08T16:12:23Z | MEMBER | 0 | pydata/xarray/pulls/1564 |
Fixes #1562 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
260619568 | MDExOlB1bGxSZXF1ZXN0MTQzMTMxNjcz | 1594 | Remove unused version check for pandas. | fujiisoup 6815844 | closed | 0 | 2 | 2017-09-26T13:16:42Z | 2017-11-04T09:51:45Z | 2017-09-27T02:10:58Z | MEMBER | 0 | pydata/xarray/pulls/1594 |
Currently some tests fail due to the dask bug in #1591. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
271180056 | MDExOlB1bGxSZXF1ZXN0MTUwNjY5ODY1 | 1692 | Bugfix in broadcast indexes | fujiisoup 6815844 | closed | 0 | 1 | 2017-11-04T09:49:11Z | 2017-11-04T09:51:37Z | 2017-11-04T09:49:22Z | MEMBER | 0 | pydata/xarray/pulls/1692 |
Fixes #1688.
It is caused that |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
270152596 | MDExOlB1bGxSZXF1ZXN0MTQ5OTMzMzI1 | 1677 | Removed `.T` from __dir__ explicitly | fujiisoup 6815844 | closed | 0 | 1 | 2017-10-31T23:43:42Z | 2017-11-04T09:51:21Z | 2017-11-01T00:48:42Z | MEMBER | 0 | pydata/xarray/pulls/1677 |
Removed |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
267019149 | MDExOlB1bGxSZXF1ZXN0MTQ3Njg4MzE5 | 1639 | indexing with broadcasting | fujiisoup 6815844 | closed | 0 | 2 | 2017-10-19T23:22:14Z | 2017-11-04T08:29:55Z | 2017-10-19T23:52:50Z | MEMBER | 0 | pydata/xarray/pulls/1639 |
This is a duplicate of #1473, originally opened by @shoyer. Thanks, @shoyer, for giving me the GitHub credit. I enjoyed this PR, and I really appreciate your help finishing it up. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1639/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
208713614 | MDExOlB1bGxSZXF1ZXN0MTA2ODk5MDM1 | 1277 | Restored dim order in DataArray.rolling().reduce() | fujiisoup 6815844 | closed | 0 | 5 | 2017-02-19T12:14:55Z | 2017-07-09T23:53:15Z | 2017-02-27T17:11:02Z | MEMBER | 0 | pydata/xarray/pulls/1277 |
Added one line to fix #1125. I hope this is enough. If further care is necessary, please let me know. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
211323643 | MDExOlB1bGxSZXF1ZXN0MTA4NzEwNjg5 | 1289 | Added a support for Dataset.rolling. | fujiisoup 6815844 | closed | 0 | 9 | 2017-03-02T08:40:03Z | 2017-07-09T23:53:13Z | 2017-03-31T03:10:45Z | MEMBER | 0 | pydata/xarray/pulls/1289 |
There seem to be two approaches to realizing Dataset.rolling:
1. Apply rolling to each DataArray and then combine them.
2. Apply it to the Dataset directly, with some DataArrays that do not depend on
I chose the latter approach to reuse the existing |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1289/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
226778103 | MDExOlB1bGxSZXF1ZXN0MTE5MzAzNzk3 | 1400 | Patch isel points | fujiisoup 6815844 | closed | 0 | 1 | 2017-05-06T14:59:51Z | 2017-07-09T23:53:06Z | 2017-05-09T02:31:52Z | MEMBER | 0 | pydata/xarray/pulls/1400 |
A small fix for the bug reported in #1337, where unselected coords were wrongly assigned as |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
220520879 | MDExOlB1bGxSZXF1ZXN0MTE1MDE0NTkw | 1364 | Fix a typo | fujiisoup 6815844 | closed | 0 | 4 | 2017-04-10T02:14:56Z | 2017-07-09T23:53:03Z | 2017-04-10T02:24:00Z | MEMBER | 0 | pydata/xarray/pulls/1364 |
Fixes typos in reshaping.rst. Is there a good way to check docs before merge? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
229370997 | MDExOlB1bGxSZXF1ZXN0MTIxMDcxMTA3 | 1412 | Multiindex scalar coords, fixes #1408 | fujiisoup 6815844 | closed | 0 | 9 | 2017-05-17T14:25:50Z | 2017-05-25T11:04:55Z | 2017-05-25T11:04:55Z | MEMBER | 0 | pydata/xarray/pulls/1412 |
To fix #1408,
This modification works, but I am actually not fully satisfied yet.
There are
The major changes I made are
1.
Change 1 keeps level-coordinates even after
I guess a much smarter solution should exist. I would be happy if anyone gives me a comment. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
218745734 | MDExOlB1bGxSZXF1ZXN0MTEzODA3NDE4 | 1347 | Support for DataArray.expand_dims() | fujiisoup 6815844 | closed | 0 | 9 | 2017-04-02T06:36:37Z | 2017-04-10T02:05:38Z | 2017-04-10T01:01:54Z | MEMBER | 0 | pydata/xarray/pulls/1347 |
I added a DataArray method
My concern is that I do not yet fully understand the lazy data manipulation in xarray. Does Variable.expand_dims handle it? (A minimal usage sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
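A minimal usage sketch of DataArray.expand_dims as introduced in the record above; the data are made up.

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x")

# insert a new leading dimension of length 1
expanded = da.expand_dims("time")
print(expanded.dims, expanded.shape)  # ('time', 'x') (1, 3)
```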
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);