
issue_comments


539 rows where user = 6815844 sorted by updated_at descending


issue >30

  • WIP: indexing with broadcasting 33
  • implement interp() 21
  • Vectorized lazy indexing 12
  • Refactor nanops 10
  • Implement interp for interpolating between chunks of data (dask) 10
  • Rolling window with `as_strided` 9
  • nd-rolling 9
  • How should Dataset.update() handle conflicting coordinates? 7
  • Support for DataArray.expand_dims() 6
  • Explicit indexes in xarray's data-model (Future of MultiIndex) 6
  • IPython auto-completion triggers data loading 6
  • implement Gradient 6
  • what is the best way to reset an unintentional direct push to the master 6
  • Fix lag in Jupyter caused by CSS in `_repr_html_` 6
  • Added a support for Dataset.rolling. 5
  • Bugfix in broadcast_indexes 5
  • Regression: dropna() on lazy variable 5
  • fix datetime_to_numeric and Variable._to_numeric 5
  • Multiindex scalar coords, fixes #1408 4
  • scalar_level in MultiIndex 4
  • Support autocompletion dictionary access in ipython. 4
  • Indexing Variable objects with a mask 4
  • building doc is failing for the release 0.10.1 4
  • Load a small subset of data from a big dataset takes forever 4
  • rolling.mean vs rolling.construct.mean 4
  • Multi-dimensional binning/resampling/coarsening 4
  • Fix multiindex selection 4
  • sel with categorical index 4
  • small contrast of html view in VScode darkmode 4
  • Convolution operation 4
  • …

user 1

  • fujiisoup · 539

author_association 1

  • MEMBER 539
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1049447285 https://github.com/pydata/xarray/pull/4974#issuecomment-1049447285 https://api.github.com/repos/pydata/xarray/issues/4974 IC_kwDOAMm_X84-jUt1 fujiisoup 6815844 2022-02-24T03:02:43Z 2022-02-24T03:02:43Z MEMBER

Hi. Sorry for my late reply. Well, I've just left this PR untouched.

More specifically, the approach here (i.e., passing indexes via the pad_width argument) may be tricky in the context of flexible indexes, where multiple indexes/coordinates are allowed for one dimension.

I think we can just discard this PR if it does not fit with the index refactoring. This PR is not big anyway, and rewriting this functionality may be faster.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implemented pad with new-indexes 818583834
482162700 https://github.com/pydata/xarray/issues/2889#issuecomment-482162700 https://api.github.com/repos/pydata/xarray/issues/2889 MDEyOklzc3VlQ29tbWVudDQ4MjE2MjcwMA== fujiisoup 6815844 2019-04-11T15:28:58Z 2022-01-05T21:59:48Z MEMBER

Thanks @mathause. I also think the current behavior is not perfect, but it is the best available.

> I would expect both to return np.nan

I expect np.nansum(ds) to be equivalent to np.sum(non-nan values) and thus 0, while np.mean should be NaN, as @dcherian pointed out.

To me, a future average function would also return np.nan for all-nan slices.
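The asymmetry described here can be checked directly in numpy; a minimal sketch, using a plain all-nan array in place of a dataset:

```python
import warnings
import numpy as np

v = np.array([np.nan, np.nan, np.nan])  # an all-nan slice

# nansum drops the NaNs and returns the empty sum, which is 0.0
assert np.nansum(v) == 0.0

# nanmean has no values left to average, so the result is NaN
with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)  # "Mean of empty slice"
    assert np.isnan(np.nanmean(v))
```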

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nansum vs nanmean for all-nan vectors 432074821
872042733 https://github.com/pydata/xarray/pull/5201#issuecomment-872042733 https://api.github.com/repos/pydata/xarray/issues/5201 MDEyOklzc3VlQ29tbWVudDg3MjA0MjczMw== fujiisoup 6815844 2021-07-01T08:31:46Z 2021-07-01T08:31:46Z MEMBER

> Maybe a Jupyter issue and not related to libraries in use?

I see. Indeed, I didn't see any significant difference among branches.

> I may be able later today when I am back to my main computer

I tried, but I think it may be better to wait for your update.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix lag in Jupyter caused by CSS in `_repr_html_` 863506023
872033015 https://github.com/pydata/xarray/pull/5201#issuecomment-872033015 https://api.github.com/repos/pydata/xarray/issues/5201 MDEyOklzc3VlQ29tbWVudDg3MjAzMzAxNQ== fujiisoup 6815844 2021-07-01T08:18:55Z 2021-07-01T08:18:55Z MEMBER

Maybe we can measure the first-loading time? I observe that the first-loading time is very long... (movie)

> The only way I was able to see it was to use the Web Dev tools that come as part of Firefox or Chrome.

Can you tell me more about this? I'll try to reproduce and measure the performance.

https://user-images.githubusercontent.com/6815844/124090964-4e601e80-da90-11eb-8333-7c2a25a8f33d.mp4

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix lag in Jupyter caused by CSS in `_repr_html_` 863506023
872007738 https://github.com/pydata/xarray/pull/5201#issuecomment-872007738 https://api.github.com/repos/pydata/xarray/issues/5201 MDEyOklzc3VlQ29tbWVudDg3MjAwNzczOA== fujiisoup 6815844 2021-07-01T07:45:01Z 2021-07-01T07:45:01Z MEMBER

> did you try if there are differences when running an individual cell, not just when loading the page the first time?

I tried to measure the performance by running all the cells as shown in the image, but I could not find any significant difference.

However, I'm not very confident that this actually measures the CSS performance.

@SimonHeybrock, do you have any suggestions on how to measure the performance?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix lag in Jupyter caused by CSS in `_repr_html_` 863506023
871748248 https://github.com/pydata/xarray/pull/5201#issuecomment-871748248 https://api.github.com/repos/pydata/xarray/issues/5201 MDEyOklzc3VlQ29tbWVudDg3MTc0ODI0OA== fujiisoup 6815844 2021-06-30T21:45:52Z 2021-06-30T21:45:52Z MEMBER

I am trying to measure the performance of master, this PR, and mine (which fixes this PR to be compatible with dark mode), but I couldn't see any big difference in my environment.

What I did in this experiment was to make a notebook with hundreds of empty cells, with xarray under each of these branches, and refresh the browser to render the HTML. The number of cells is the same in all experiments; only the xarray branches (and the produced HTML) differ.

Maybe we need more cells? Any advice would be appreciated.

https://user-images.githubusercontent.com/6815844/124035536-9ef75d80-da37-11eb-9c78-a9c76d16da1a.mp4

Movie: top left: this branch; top right: mine; bottom left: master

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix lag in Jupyter caused by CSS in `_repr_html_` 863506023
846304424 https://github.com/pydata/xarray/issues/2944#issuecomment-846304424 https://api.github.com/repos/pydata/xarray/issues/2944 MDEyOklzc3VlQ29tbWVudDg0NjMwNDQyNA== fujiisoup 6815844 2021-05-21T23:12:21Z 2021-05-21T23:12:21Z MEMBER

Closed as the discussions can be continued in #5361 .

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `groupby` does not correctly handle non-dimensional coordinate 441088452
828794224 https://github.com/pydata/xarray/pull/5201#issuecomment-828794224 https://api.github.com/repos/pydata/xarray/issues/5201 MDEyOklzc3VlQ29tbWVudDgyODc5NDIyNA== fujiisoup 6815844 2021-04-28T21:33:06Z 2021-04-28T21:33:06Z MEMBER

This

https://github.com/fujiisoup/xarray/blob/6225f158626e75977a0a944fbc09c50769884e35/xarray/static/css/style.css#L5-L29

seems to work with dark mode, but I'm not sure whether it solves the original problem.

It looks to me like defining custom properties in html[theme=dark] may cause the same problem.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix lag in Jupyter caused by CSS in `_repr_html_` 863506023
828778799 https://github.com/pydata/xarray/pull/5201#issuecomment-828778799 https://api.github.com/repos/pydata/xarray/issues/5201 MDEyOklzc3VlQ29tbWVudDgyODc3ODc5OQ== fujiisoup 6815844 2021-04-28T21:03:50Z 2021-04-28T21:03:50Z MEMBER

Confirmed that this also breaks dark mode in Google Colab.

> @fujiisoup added the vscode dark mode support, maybe he has ideas.

I did it in #4036, but that was actually a workaround and should be improved by an expert. I'll take a look, though with little hope of fixing it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix lag in Jupyter caused by CSS in `_repr_html_` 863506023
819140566 https://github.com/pydata/xarray/pull/5153#issuecomment-819140566 https://api.github.com/repos/pydata/xarray/issues/5153 MDEyOklzc3VlQ29tbWVudDgxOTE0MDU2Ng== fujiisoup 6815844 2021-04-14T00:41:20Z 2021-04-14T00:41:20Z MEMBER

> There's a cumtrapz in https://github.com/fujiisoup/xr-scipy/blob/master/xrscipy/integrate.py. Does that help?

Most of the xr-scipy functionality is now already implemented in xarray, and I couldn't find the time to maintain that package.

I think basic functionality is better integrated into xarray itself, and cumulative_trapezoid would be a good candidate, as integrate is already there.

The implementation looks good to me. I didn't find any edge cases where duck_array_ops.cumulative_trapezoid behaves differently from scipy.integrate.cumtrapz.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cumulative_integrate() method 857378504
790021474 https://github.com/pydata/xarray/pull/4974#issuecomment-790021474 https://api.github.com/repos/pydata/xarray/issues/4974 MDEyOklzc3VlQ29tbWVudDc5MDAyMTQ3NA== fujiisoup 6815844 2021-03-03T20:08:18Z 2021-03-03T20:08:18Z MEMBER

Thank you @mathause for your suggestion. It looks like all the tests are passing now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implemented pad with new-indexes 818583834
789511208 https://github.com/pydata/xarray/pull/4974#issuecomment-789511208 https://api.github.com/repos/pydata/xarray/issues/4974 MDEyOklzc3VlQ29tbWVudDc4OTUxMTIwOA== fujiisoup 6815844 2021-03-03T07:44:04Z 2021-03-03T07:44:04Z MEMBER

Not sure why the doctest is failing. The same tests in test_dataset.py do not fail...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implemented pad with new-indexes 818583834
747665480 https://github.com/pydata/xarray/pull/3587#issuecomment-747665480 https://api.github.com/repos/pydata/xarray/issues/3587 MDEyOklzc3VlQ29tbWVudDc0NzY2NTQ4MA== fujiisoup 6815844 2020-12-17T19:55:36Z 2020-12-17T19:55:36Z MEMBER

I was planning to wait until the pad method was implemented, but forgot until now. I am not sure this is easily mergeable, as rolling.py has been updated quite a bit in the meantime...

The first motivation was to implement the rolling operation for periodic coordinates, but that is not yet implemented.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  boundary options for rolling.construct 531087939
717572036 https://github.com/pydata/xarray/issues/4325#issuecomment-717572036 https://api.github.com/repos/pydata/xarray/issues/4325 MDEyOklzc3VlQ29tbWVudDcxNzU3MjAzNg== fujiisoup 6815844 2020-10-27T22:14:41Z 2020-10-27T22:14:41Z MEMBER

@mathause Oh, I missed this issue. Yes, this is implemented only for count.

> the thing is that rolling itself is already quite complicated

Agreed. We need to clean this up.

One possible option would be to drop support for bottleneck. It does not work for nd-rolling, and if we implement the nd-nanreduce, the speed should be comparable with bottleneck.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Optimize ndrolling nanreduce 675482176
699169927 https://github.com/pydata/xarray/issues/4463#issuecomment-699169927 https://api.github.com/repos/pydata/xarray/issues/4463 MDEyOklzc3VlQ29tbWVudDY5OTE2OTkyNw== fujiisoup 6815844 2020-09-25T21:42:40Z 2020-09-25T21:42:40Z MEMBER

Hi @aulemahal

I think you want to interpolate along t as well as x and y. If so, you can do

```python
In [8]: da.interp(t=dx['t'], y=dy, x=dx, method='linear')
Out[8]:
<xarray.DataArray (t: 2, u: 2)>
array([[2., 3.],
       [2., 3.]])
Coordinates:
  * t        (t) int64 10 12
    y        (u) float64 1.5 2.5
    x        (t, u) float64 1.5 1.5 1.5 1.5
  * u        (u) int64 45 55
```

If not, this fails, as dx['t'] and da['t'] do not match each other. The error message could be improved; a contribution is welcome ;)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Interpolation with multiple mutlidimensional arrays sharing dims fails 709272776
678072493 https://github.com/pydata/xarray/issues/4120#issuecomment-678072493 https://api.github.com/repos/pydata/xarray/issues/4120 MDEyOklzc3VlQ29tbWVudDY3ODA3MjQ5Mw== fujiisoup 6815844 2020-08-21T06:42:45Z 2020-08-21T06:42:45Z MEMBER

My last post was wrong.

I think this part overwrites the attrs,

https://github.com/pydata/xarray/blob/43a2a4bdf3a492d89aae9f2c5b0867932ff51cef/xarray/core/variable.py#L2028 https://github.com/pydata/xarray/blob/43a2a4bdf3a492d89aae9f2c5b0867932ff51cef/xarray/core/variable.py#L2073-L2076

The first line should be replaced by `variable = self.copy(deep=False)`.
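The failure mode and the fix can be sketched with a toy class; `Var` below is a hypothetical stand-in for xarray's Variable, illustrating why a shallow copy keeps attrs mutations from leaking back to the original object:

```python
# Hypothetical stand-in for Variable: even a shallow copy gets its own
# attrs dict, so only the data buffer is shared between copies.
class Var:
    def __init__(self, data, attrs):
        self.data = data
        self.attrs = attrs

    def copy(self, deep=False):
        # copy the attrs container itself; metadata edits stay local
        return Var(self.data, dict(self.attrs))


original = Var([1, 2, 3], {"units": "m"})

# Buggy pattern: operating on a plain reference mutates the caller's attrs.
alias = original
alias.attrs["units"] = "km"
assert original.attrs["units"] == "km"  # the change leaked

# Fixed pattern: operate on a shallow copy instead.
original.attrs["units"] = "m"
variable = original.copy(deep=False)
variable.attrs["units"] = "km"
assert original.attrs["units"] == "m"   # original is untouched
```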

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  coarsen deletes attrs on original object 630062936
677945186 https://github.com/pydata/xarray/issues/4120#issuecomment-677945186 https://api.github.com/repos/pydata/xarray/issues/4120 MDEyOklzc3VlQ29tbWVudDY3Nzk0NTE4Ng== fujiisoup 6815844 2020-08-20T22:51:21Z 2020-08-21T06:37:33Z MEMBER

~~These lines are suspicious. Maybe we should copy the attrs here, not get a reference to them.~~

https://github.com/pydata/xarray/blob/43a2a4bdf3a492d89aae9f2c5b0867932ff51cef/xarray/core/rolling.py#L498

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  coarsen deletes attrs on original object 630062936
674321185 https://github.com/pydata/xarray/pull/4155#issuecomment-674321185 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY3NDMyMTE4NQ== fujiisoup 6815844 2020-08-15T00:30:21Z 2020-08-15T00:30:21Z MEMBER

@cyhsu Yes, because it is not yet released. (I'm not sure when the next release will be; maybe a few months from now.) If you do pip install git+https://github.com/pydata/xarray, the current master will be installed on your system and interpolation over chunks can be used. But note that this means you will be installing (a kind of) beta version.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
674305570 https://github.com/pydata/xarray/pull/4155#issuecomment-674305570 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY3NDMwNTU3MA== fujiisoup 6815844 2020-08-14T23:07:03Z 2020-08-14T23:07:03Z MEMBER

@cyhsu Yes, in the current master.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
672348216 https://github.com/pydata/xarray/pull/4155#issuecomment-672348216 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY3MjM0ODIxNg== fujiisoup 6815844 2020-08-11T23:16:07Z 2020-08-11T23:16:07Z MEMBER

Thanks @pums974 :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
671036572 https://github.com/pydata/xarray/pull/4329#issuecomment-671036572 https://api.github.com/repos/pydata/xarray/issues/4329 MDEyOklzc3VlQ29tbWVudDY3MTAzNjU3Mg== fujiisoup 6815844 2020-08-09T10:49:35Z 2020-08-09T10:49:35Z MEMBER

Thanks, @keewis , for the clarification.

It was a bug in the documentation page, not in rolling.construct.

It should raise an error in this case, because for 2d rolling we need two dimension names:

```python
rolling_da = r.construct(x="x_win", y="y_win", stride=2)
```

I corrected the documentation and error message.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ndrolling repr fix 675604714
670993724 https://github.com/pydata/xarray/pull/4329#issuecomment-670993724 https://api.github.com/repos/pydata/xarray/issues/4329 MDEyOklzc3VlQ29tbWVudDY3MDk5MzcyNA== fujiisoup 6815844 2020-08-09T01:43:22Z 2020-08-09T01:43:22Z MEMBER

Thanks @keewis for checking. I'm not sure what causes the error in `rolling_da.mean("window_dim", skipna=False)`;

`self._mapping_to_list` should handle this problem. How can I get the details of this error? I only saw the time-out error...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ndrolling repr fix 675604714
670983702 https://github.com/pydata/xarray/issues/4328#issuecomment-670983702 https://api.github.com/repos/pydata/xarray/issues/4328 MDEyOklzc3VlQ29tbWVudDY3MDk4MzcwMg== fujiisoup 6815844 2020-08-08T23:13:18Z 2020-08-08T23:13:18Z MEMBER

Ah, this

```python
attrs = [
    "{k}->{v}".format(k=k, v=getattr(self, k))
    for k in list(self.dim) + self.window + self.center + [self.min_periods]
]
```

should be `"{k}->{v}".format(k=k, v=getattr(self.dims, k))`, not `"{k}->{v}".format(k=k, v=getattr(self, k))`. I'll send a fix.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  failing docs CI 675602229
670865538 https://github.com/pydata/xarray/issues/4196#issuecomment-670865538 https://api.github.com/repos/pydata/xarray/issues/4196 MDEyOklzc3VlQ29tbWVudDY3MDg2NTUzOA== fujiisoup 6815844 2020-08-08T10:43:06Z 2020-08-08T10:43:06Z MEMBER

Or maybe we can convolve over the shared dimensions:

```python
da = xr.DataArray(np.random.randn(15, 30), dims=['x', 'y'])
kernel = xr.DataArray(np.random.randn(3, 3), dims=['x', 'y'])
da.convolve(kernel, mode='same')
```

Other dimensions may be broadcast.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Convolution operation 650547452
670842737 https://github.com/pydata/xarray/issues/4196#issuecomment-670842737 https://api.github.com/repos/pydata/xarray/issues/4196 MDEyOklzc3VlQ29tbWVudDY3MDg0MjczNw== fujiisoup 6815844 2020-08-08T08:09:58Z 2020-08-08T08:09:58Z MEMBER

Maybe we can keep this issue open.

Would

```python
da.convolve(kernel, x='kx', y='ky', mode='same')
```

be a possible API?

The contribution will be very much appreciated ;)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Convolution operation 650547452
670842411 https://github.com/pydata/xarray/issues/4196#issuecomment-670842411 https://api.github.com/repos/pydata/xarray/issues/4196 MDEyOklzc3VlQ29tbWVudDY3MDg0MjQxMQ== fujiisoup 6815844 2020-08-08T08:07:01Z 2020-08-08T08:07:01Z MEMBER

Maybe we can have a simpler API for the convolution operation, though.

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray(np.random.randn(15, 30), dims=['x', 'y'])
   ...: kernel = xr.DataArray(np.random.randn(3, 3), dims=['kx', 'ky'])
   ...:
   ...: da.rolling(x=3, y=3).construct(x='kx', y='ky').dot(kernel)
Out[1]:
<xarray.DataArray (x: 15, y: 30)>
array([[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
       [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
       ...,
       [nan, nan, -2.30319699e-01, 3.98542408e-01, 7.65734275e+00, -3.78602564e-01, -3.79670552e+00, -4.63870114e+00, 3.34264622e-02, -3.12097772e+00, -5.76697267e+00, 1.19804861e+00, -8.94696248e-01, 2.29308845e+00, -6.39524525e-01, 4.63574750e+00, 9.72065650e-01, -2.79080617e-01, -4.08284408e-01, 4.09498738e+00, 2.21513156e+00, 2.46188185e-01, -1.30140822e+00, -4.70525588e+00, -4.60012056e+00, 2.33333189e-01, -2.86204413e-01, -5.63190762e-01, 9.31915537e-01, 7.84188609e-01],
       [nan, nan, 1.04286238e+00, -1.51693719e+00, 2.49199283e+00, 1.74931359e-01, -4.26361392e+00, -1.85066273e-01, -2.45780660e+00, -3.20920459e+00, -4.13765502e+00, -3.64119127e+00, 1.13819179e-01, -2.10588083e-01, -2.58307399e-02, -6.73602885e-01, 1.51186293e+00, 2.22395020e+00, 3.59169613e+00, 4.44203028e+00, 3.15528384e-01, -2.30913656e+00, 3.07864240e+00, -9.21743416e-01, -2.87995499e+00, -1.92025700e+00, -3.95047208e-01, 4.60378793e+00, 1.11828099e+00, 4.29419626e-01]])
Dimensions without coordinates: x, y
```
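For reference, the rolling(...).construct(...).dot(kernel) pattern above reduces, at the interior points, to a windowed dot product. A plain-numpy sketch of that core (a 'valid' correlation, without the NaN padding at the edges; variable names are illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)
da = rng.standard_normal((15, 30))
kernel = rng.standard_normal((3, 3))

# build the (kx, ky) windows and contract them against the kernel
windows = sliding_window_view(da, kernel.shape)   # shape (13, 28, 3, 3)
out = np.einsum("ijkl,kl->ij", windows, kernel)   # 'valid' correlation

# cross-check one entry against the direct definition
assert np.isclose(out[0, 0], (da[:3, :3] * kernel).sum())
```

Note this is a correlation rather than a flipped-kernel convolution, matching what construct(...).dot(kernel) computes.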

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Convolution operation 650547452
670821295 https://github.com/pydata/xarray/pull/4219#issuecomment-670821295 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY3MDgyMTI5NQ== fujiisoup 6815844 2020-08-08T04:18:08Z 2020-08-08T04:18:08Z MEMBER

@max-sixty thanks for the review. merged

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
670705764 https://github.com/pydata/xarray/pull/4219#issuecomment-670705764 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY3MDcwNTc2NA== fujiisoup 6815844 2020-08-07T20:45:01Z 2020-08-07T20:45:01Z MEMBER

Thanks @max-sixty .

You are completely correct. As the tests passed, I was fooling myself.

The reason was that the dataset I was using for the test does not have time and x simultaneously, so I was not testing 2d-rolling but just 1d-rolling.

Fixed. Now it correctly fails for mean and std, but passes for max and sum.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
667412134 https://github.com/pydata/xarray/pull/4155#issuecomment-667412134 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY2NzQxMjEzNA== fujiisoup 6815844 2020-07-31T22:28:07Z 2020-07-31T22:28:07Z MEMBER

This PR looks good to me. Maybe we can wait a few days in case anyone has comments on it. If there are none, I'll merge it then.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
667411555 https://github.com/pydata/xarray/pull/4219#issuecomment-667411555 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY2NzQxMTU1NQ== fujiisoup 6815844 2020-07-31T22:25:25Z 2020-07-31T22:25:25Z MEMBER

Thanks @max-sixty for the review ;) I'll work on the update in the next few days.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
666841275 https://github.com/pydata/xarray/pull/4219#issuecomment-666841275 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY2Njg0MTI3NQ== fujiisoup 6815844 2020-07-31T00:42:23Z 2020-07-31T00:42:23Z MEMBER

Could anyone kindly review this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
666720655 https://github.com/pydata/xarray/pull/4155#issuecomment-666720655 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY2NjcyMDY1NQ== fujiisoup 6815844 2020-07-30T21:38:55Z 2020-07-30T21:38:55Z MEMBER

OK. If you have additional time, it would be even nicer if you could add more comments to the tests, e.g., what is being tested there ;)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
663788117 https://github.com/pydata/xarray/pull/4155#issuecomment-663788117 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY2Mzc4ODExNw== fujiisoup 6815844 2020-07-25T01:08:52Z 2020-07-25T01:08:52Z MEMBER

Thanks @pums974 for this update, and sorry for my late response. It looks good, but I'll take a deeper look next week.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
659908563 https://github.com/pydata/xarray/pull/4233#issuecomment-659908563 https://api.github.com/repos/pydata/xarray/issues/4233 MDEyOklzc3VlQ29tbWVudDY1OTkwODU2Mw== fujiisoup 6815844 2020-07-17T07:02:56Z 2020-07-17T07:02:56Z MEMBER

Thanks, @jenssss for sending a PR. This looks good to me. Could you add a line for this contribution to our whatsnew?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Linear interp with NaNs in nd indexer 658938729
658403527 https://github.com/pydata/xarray/pull/4219#issuecomment-658403527 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY1ODQwMzUyNw== fujiisoup 6815844 2020-07-14T20:44:12Z 2020-07-14T20:44:12Z MEMBER

I got a type-checking error, only in CI and not locally, from code that I didn't change.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
657902895 https://github.com/pydata/xarray/pull/4219#issuecomment-657902895 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY1NzkwMjg5NQ== fujiisoup 6815844 2020-07-14T00:49:38Z 2020-07-14T00:49:38Z MEMBER

A possible improvement would be nan-reduction methods for nd-rolling. Currently, we just use numpy's nan-reductions, which are memory consuming for strided arrays.

This can be solved by replacing nan with an appropriate value (the reduction's identity) and applying the non-nan reduction method. For example,

```python
da.rolling(x=2, y=3).construct(x='xw', y='yw').sum(['xw', 'yw'])
```

should give the same result as

```python
da.rolling(x=2, y=3).construct(x='xw', y='yw', fill_value=0).sum(['xw', 'yw'], skipna=False)
```

and the latter is much more memory efficient.

I'd like to leave this improvement to a future PR.
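The equivalence behind this trick can be demonstrated in plain numpy; a sketch using sliding_window_view for the construct step (window shape and names are illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 6))
a[1, 2] = np.nan  # inject a missing value

# 'valid' 2x3 rolling windows over the array
win = sliding_window_view(a, (2, 3))

# memory-hungry path: NaN-aware reduction over the strided windows
nan_path = np.nansum(win, axis=(-2, -1))

# cheaper path: fill NaN with the reduction's identity (0 for sum),
# then apply the plain, non-NaN-aware reduction
filled = np.nan_to_num(a, nan=0.0)
fill_path = sliding_window_view(filled, (2, 3)).sum(axis=(-2, -1))

assert np.allclose(nan_path, fill_path)
```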

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
657897529 https://github.com/pydata/xarray/pull/4219#issuecomment-657897529 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY1Nzg5NzUyOQ== fujiisoup 6815844 2020-07-14T00:27:51Z 2020-07-14T00:27:51Z MEMBER

I think it is now ready for review, though I'm sure the tests miss a lot of edge cases. Maybe we can fix them as they are pointed out.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
657273886 https://github.com/pydata/xarray/issues/4218#issuecomment-657273886 https://api.github.com/repos/pydata/xarray/issues/4218 MDEyOklzc3VlQ29tbWVudDY1NzI3Mzg4Ng== fujiisoup 6815844 2020-07-12T20:55:53Z 2020-07-12T20:55:53Z MEMBER

> I think the preferred option for dealing with accidentally pushed changes is to push a "revert" commit generated from git revert

OK, understood.

> but as long as we keep the master branch protected, it's always possible to move forward by reverting changes -- there is no way to lose work.

Then probably the most dangerous part was when I unprotected the master branch. I was afraid of messing up the commit history, but that is much better than losing the entire commit history...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  what is the best way to reset an unintentional direct push to the master 655382009
657270068 https://github.com/pydata/xarray/pull/4219#issuecomment-657270068 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY1NzI3MDA2OA== fujiisoup 6815844 2020-07-12T20:18:28Z 2020-07-12T20:18:28Z MEMBER

Another API concern: we currently use min_periods, which implicitly assumes the one-dimensional case.

With n dimensions, I think a min_counts argument like bottleneck's is more appropriate, which sets a lower bound on the number of non-missing entries in the n-dimensional window.

Even if we keep min_periods, we may disallow an nd-argument for it and restrict it to a scalar.
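A sketch of min_count-style semantics (as in bottleneck's moving-window functions) in plain numpy: a window's result is kept only when it contains at least min_count non-missing entries. The window length and threshold below are illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.array([1.0, np.nan, np.nan, 4.0, 5.0])
win = sliding_window_view(a, 3)          # rolling windows of length 3

total = np.nansum(win, axis=-1)          # NaN-aware window sums
valid = (~np.isnan(win)).sum(axis=-1)    # non-missing entries per window

min_count = 2                            # lower bound on valid entries
result = np.where(valid >= min_count, total, np.nan)
# windows: [1,nan,nan] -> 1 valid, [nan,nan,4] -> 1 valid, [nan,4,5] -> 2 valid
```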

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
657269189 https://github.com/pydata/xarray/pull/4219#issuecomment-657269189 https://api.github.com/repos/pydata/xarray/issues/4219 MDEyOklzc3VlQ29tbWVudDY1NzI2OTE4OQ== fujiisoup 6815844 2020-07-12T20:09:34Z 2020-07-12T20:09:34Z MEMBER

Hi @max-sixty

> One alternative is to allow fluent args, like: ...but does that then seem like the second rolling is operating on the result of the first?

I couldn't think of it until just now. But yes, it sounds to me like a repeated rolling operation.

> I'm being slow, but where is the nd-rolling algo? I had thought bottleneck didn't support more than one dimension?

No. With nd-rolling, we need to use numpy reductions. Its skipna=True operation is currently slow, but it can be improved by replacing nan before the stride trick.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nd-rolling 655389649
657215217 https://github.com/pydata/xarray/issues/4218#issuecomment-657215217 https://api.github.com/repos/pydata/xarray/issues/4218 MDEyOklzc3VlQ29tbWVudDY1NzIxNTIxNw== fujiisoup 6815844 2020-07-12T12:26:41Z 2020-07-12T12:26:41Z MEMBER

OK, thanks.

> So I think either a pre-push hook or git config branch.master.pushRemote no_push (but then you also can't push to your own master anymore) are the best way forward

Agreed. I'll use your pre-push hook. Thanks @keewis.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  what is the best way to reset an unintentional direct push to the master 655382009
657212192 https://github.com/pydata/xarray/issues/4218#issuecomment-657212192 https://api.github.com/repos/pydata/xarray/issues/4218 MDEyOklzc3VlQ29tbWVudDY1NzIxMjE5Mg== fujiisoup 6815844 2020-07-12T11:59:11Z 2020-07-12T11:59:11Z MEMBER

BTW, is it possible to disallow direct pushes to master on GitHub? Maybe we should only merge PRs and not push directly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  what is the best way to reset an unintentional direct push to the master 655382009
657211959 https://github.com/pydata/xarray/issues/4218#issuecomment-657211959 https://api.github.com/repos/pydata/xarray/issues/4218 MDEyOklzc3VlQ29tbWVudDY1NzIxMTk1OQ== fujiisoup 6815844 2020-07-12T11:56:34Z 2020-07-12T11:56:34Z MEMBER

OK, done. I'll use your script. Thanks.

And sorry again for my mistake.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  what is the best way to reset an unintentional direct push to the master 655382009
657211712 https://github.com/pydata/xarray/issues/4218#issuecomment-657211712 https://api.github.com/repos/pydata/xarray/issues/4218 MDEyOklzc3VlQ29tbWVudDY1NzIxMTcxMg== fujiisoup 6815844 2020-07-12T11:54:40Z 2020-07-12T11:54:40Z MEMBER

Maybe I can unprotect master, but I'm hesitant to take that action...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  what is the best way to reset an unintentional direct push to the master 655382009
657211365 https://github.com/pydata/xarray/issues/4218#issuecomment-657211365 https://api.github.com/repos/pydata/xarray/issues/4218 MDEyOklzc3VlQ29tbWVudDY1NzIxMTM2NQ== fujiisoup 6815844 2020-07-12T11:51:31Z 2020-07-12T11:51:31Z MEMBER

Thanks, but it looks like master is protected and I cannot force-push:

```
Total 0 (delta 0), reused 0 (delta 0)
remote: error: GH006: Protected branch update failed for refs/heads/master.
remote: error: Cannot force-push to this protected branch
To https://github.com/pydata/xarray.git
 ! [remote rejected] master -> master (protected branch hook declined)
error: failed to push some refs to 'https://github.com/pydata/xarray.git'
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  what is the best way to reset an unintentional direct push to the master 655382009
653754721 https://github.com/pydata/xarray/issues/4196#issuecomment-653754721 https://api.github.com/repos/pydata/xarray/issues/4196 MDEyOklzc3VlQ29tbWVudDY1Mzc1NDcyMQ== fujiisoup 6815844 2020-07-04T11:34:19Z 2020-07-04T11:34:19Z MEMBER

One thing I would like to implement someday is a multi-dimensional rolling operation. One-dimensional convolution can be done with rolling -> construct -> dot, as shown in the docs (see the last paragraph of http://xarray.pydata.org/en/stable/computation.html#rolling-window-operations).

This can be extended to multiple dimensions, but it may not be straightforward.
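The rolling -> construct -> dot pattern can be sketched in plain NumPy for the 1-d case (illustrative, not the xarray API):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(5.0)                   # [0., 1., 2., 3., 4.]
kernel = np.array([0.25, 0.5, 0.25])

# "construct" the rolling windows, then "dot" with the kernel
windows = sliding_window_view(a, kernel.size)  # shape (3, 3)
smoothed = windows @ kernel

# for a symmetric kernel this matches valid-mode convolution
assert np.allclose(smoothed, np.convolve(a, kernel, mode="valid"))
```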

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Convolution operation 650547452
653752196 https://github.com/pydata/xarray/issues/4197#issuecomment-653752196 https://api.github.com/repos/pydata/xarray/issues/4197 MDEyOklzc3VlQ29tbWVudDY1Mzc1MjE5Ng== fujiisoup 6815844 2020-07-04T11:05:49Z 2020-07-04T11:05:49Z MEMBER

@cwerner

```python
In [40]: idx = (da.count('y').cumsum() != 0) * (da.count('y')[::-1].cumsum()[::-1] != 0)

In [42]: da.isel(x=idx)
Out[42]:
<xarray.DataArray (x: 3, y: 4)>
array([[nan,  0.,  2., nan],
       [nan, nan, nan, nan],
       [nan,  2.,  0., nan]])
Dimensions without coordinates: x, y
```

Maybe this works, but I have no cleaner solution.
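The same trick in plain NumPy, for illustration: only the bounding all-NaN rows are dropped, while interior all-NaN rows are kept:

```python
import numpy as np

a = np.array([[np.nan, np.nan, np.nan, np.nan],
              [np.nan, 0.0,    2.0,    np.nan],
              [np.nan, np.nan, np.nan, np.nan],
              [np.nan, 2.0,    0.0,    np.nan],
              [np.nan, np.nan, np.nan, np.nan]])

has_data = ~np.isnan(a).all(axis=1)  # [False, True, False, True, False]
# keep rows between the first and last row that contain any data
keep = (np.cumsum(has_data) != 0) & (np.cumsum(has_data[::-1])[::-1] != 0)
trimmed = a[keep]  # 3 rows remain; the interior all-NaN row survives
```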

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Provide a "shrink" command to remove bounding nan/ whitespace of DataArray 650549352
653729887 https://github.com/pydata/xarray/issues/4197#issuecomment-653729887 https://api.github.com/repos/pydata/xarray/issues/4197 MDEyOklzc3VlQ29tbWVudDY1MzcyOTg4Nw== fujiisoup 6815844 2020-07-04T06:47:04Z 2020-07-04T06:47:04Z MEMBER

@keewis I think it is close to `da.dropna(how='all')`:

```python
In [12]: da.dropna('x', how='all').dropna('y', how='all')
Out[12]:
<xarray.DataArray (x: 2, y: 2)>
array([[0., 2.],
       [2., 0.]])
Dimensions without coordinates: x, y
```

I think supporting multiple dimensions for dropna is well within our scope. Currently, dropna only works with a single dimension, and `da.dropna(how='all')` does not work.
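For reference, a plain-NumPy equivalent of the two dropna calls above (illustrative, not the xarray API):

```python
import numpy as np

a = np.array([[np.nan, np.nan, np.nan, np.nan],
              [np.nan, 0.0,    2.0,    np.nan],
              [np.nan, 2.0,    0.0,    np.nan]])

# drop rows, then columns, that are entirely NaN
rows = ~np.isnan(a).all(axis=1)
cols = ~np.isnan(a).all(axis=0)
out = a[rows][:, cols]
```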

@cwerner Is this close to your example? If you want to drop only the NaNs located at the edges rather than all of them, the above example does not work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Provide a "shrink" command to remove bounding nan/ whitespace of DataArray 650549352
651589183 https://github.com/pydata/xarray/pull/4155#issuecomment-651589183 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY1MTU4OTE4Mw== fujiisoup 6815844 2020-06-30T07:01:31Z 2020-06-30T07:01:31Z MEMBER

Hum, ok, but I don't see how it would work if all points are between chunks (see my second example)

Maybe we can support only sequential interpolation at the moment. In this case,

```python
res = data.interp(x=np.linspace(0, 1), y=0.5)
```

can be interpreted as

```python
res = data.interp(x=np.linspace(0, 1)).interp(y=0.5)
```

which might not be too difficult.
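A minimal NumPy sketch of this sequential interpretation, assuming separable linear interpolation on a small grid (all names are illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0])
y = np.array([0.0, 1.0])
f = np.array([[0.0, 10.0],
              [1.0, 11.0]])  # f[i, j] = x[i] + 10 * y[j]

xdest = np.linspace(0, 1, 5)

# step 1: interpolate along x for every y column
step1 = np.stack([np.interp(xdest, x, f[:, j]) for j in range(f.shape[1])], axis=1)
# step 2: interpolate the result along y at y=0.5
result = np.array([np.interp(0.5, y, row) for row in step1])
```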

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
651454795 https://github.com/pydata/xarray/issues/4186#issuecomment-651454795 https://api.github.com/repos/pydata/xarray/issues/4186 MDEyOklzc3VlQ29tbWVudDY1MTQ1NDc5NQ== fujiisoup 6815844 2020-06-30T01:06:34Z 2020-06-30T01:06:34Z MEMBER

I agree that it's better not to sort.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_xarray() result is incorrect when one of multi-index levels is not sorted 646716560
651438776 https://github.com/pydata/xarray/issues/4186#issuecomment-651438776 https://api.github.com/repos/pydata/xarray/issues/4186 MDEyOklzc3VlQ29tbWVudDY1MTQzODc3Ng== fujiisoup 6815844 2020-06-30T00:21:43Z 2020-06-30T00:21:43Z MEMBER

I think #3953 fixes the case where the multiindex has unused levels. I had no better idea than #3953, but if it works without #3953, that would be better ;)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_xarray() result is incorrect when one of multi-index levels is not sorted 646716560
650428037 https://github.com/pydata/xarray/pull/4155#issuecomment-650428037 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY1MDQyODAzNw== fujiisoup 6815844 2020-06-26T22:17:22Z 2020-06-26T22:17:22Z MEMBER

As for implementing this in dask, you may be right, it probably belong there, But I am even less use to their code base, and have no clue where to put it.

OK. Even so, I would suggest restructuring the code base; maybe we can add an interp1d equivalent into core.dask_array_ops.interp1d that works with dask arrays (non-xarray objects). It would be easier to test. The API should be as close to scipy.interpolate.interp1d as possible.

In missing.py, we can call this function.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
649836609 https://github.com/pydata/xarray/pull/4155#issuecomment-649836609 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY0OTgzNjYwOQ== fujiisoup 6815844 2020-06-25T21:53:36Z 2020-06-25T21:53:36Z MEMBER

Also in my local environment, it gives AttributeError: 'memoryview' object has no attribute 'dtype'

The full stack trace is ``` _________ test_interpolate_1d[1-y-cubic] ____________

method = 'cubic', dim = 'y', case = 1

@pytest.mark.parametrize("method", ["linear", "cubic"])
@pytest.mark.parametrize("dim", ["x", "y"])
@pytest.mark.parametrize("case", [0, 1])
def test_interpolate_1d(method, dim, case):
    if not has_scipy:
        pytest.skip("scipy is not installed.")

    if not has_dask and case in [1]:
        pytest.skip("dask is not installed in the environment.")

    da = get_example_data(case)
    xdest = np.linspace(0.0, 0.9, 80)

    actual = da.interp(method=method, **{dim: xdest})

    # scipy interpolation for the reference
    def func(obj, new_x):
        return scipy.interpolate.interp1d(
            da[dim],
            obj.data,
            axis=obj.get_axis_num(dim),
            bounds_error=False,
            fill_value=np.nan,
            kind=method,
        )(new_x)

    if dim == "x":
        coords = {"x": xdest, "y": da["y"], "x2": ("x", func(da["x2"], xdest))}
    else:  # y
        coords = {"x": da["x"], "y": xdest, "x2": da["x2"]}

    expected = xr.DataArray(func(da, xdest), dims=["x", "y"], coords=coords)
  assert_allclose(actual, expected)

xarray/tests/test_interp.py:86:


xarray/testing.py:132: in compat_variable
    return a.dims == b.dims and (a._data is b._data or equiv(a.data, b.data))
xarray/testing.py:31: in _data_allclose_or_equiv
    return duck_array_ops.allclose_or_equiv(arr1, arr2, rtol=rtol, atol=atol)
xarray/core/duck_array_ops.py:221: in allclose_or_equiv
    arr1 = np.array(arr1)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/array/core.py:1314: in __array__
    x = self.compute()
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/base.py:165: in compute
    (result,) = compute(self, traverse=False, **kwargs)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/base.py:436: in compute
    results = schedule(dsk, keys, **kwargs)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/local.py:527: in get_sync
    return get_async(apply_sync, 1, dsk, keys, **kwargs)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/local.py:494: in get_async
    fire_task()
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/local.py:466: in fire_task
    callback=queue.put,
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/local.py:516: in apply_sync
    res = func(*args, **kwds)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/local.py:227: in execute_task
    result = pack_exception(e, dumps)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/local.py:222: in execute_task
    result = _execute_task(task, data)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/core.py:119: in _execute_task
    return func(*args2)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/optimization.py:982: in __call__
    return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/core.py:149: in get
    result = _execute_task(task, cache)
../../../anaconda3/envs/xarray/lib/python3.7/site-packages/dask/core.py:119: in _execute_task
    return func(*args2)
xarray/core/missing.py:830: in _dask_aware_interpnd
    return _interpnd(var, old_x, new_x, func, kwargs)
xarray/core/missing.py:793: in _interpnd
    x, new_x = _floatize_x(x, new_x)
xarray/core/missing.py:577: in _floatize_x
    if _contains_datetime_like_objects(x[i]):
xarray/core/common.py:1595: in _contains_datetime_like_objects
    return is_np_datetime_like(var.dtype) or contains_cftime_datetimes(var)
xarray/core/common.py:1588: in contains_cftime_datetimes
    return _contains_cftime_datetimes(var.data)


array = <memory at 0x7f771d6daef0>

def _contains_cftime_datetimes(array) -> bool:
    """Check if an array contains cftime.datetime objects
    """
    try:
        from cftime import datetime as cftime_datetime
    except ImportError:
        return False
    else:
      if array.dtype == np.dtype("O") and array.size > 0:

E AttributeError: 'memoryview' object has no attribute 'dtype'

xarray/core/common.py:1574: AttributeError ```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
649827797 https://github.com/pydata/xarray/pull/4155#issuecomment-649827797 https://api.github.com/repos/pydata/xarray/issues/4155 MDEyOklzc3VlQ29tbWVudDY0OTgyNzc5Nw== fujiisoup 6815844 2020-06-25T21:30:17Z 2020-06-25T21:30:17Z MEMBER

Hi @pums974

Thanks for sending the PR. I'm working on reviewing it, but it may take some more time.

A few comments: does it work with an unsorted destination? e.g., `da.interp(y=[0, -1, 2])`
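For comparison, np.interp evaluates each destination point independently, so an unsorted destination works (out-of-range points clamp to the edge values by default):

```python
import numpy as np

xp = np.array([0.0, 1.0, 2.0])
fp = np.array([0.0, 10.0, 20.0])

# destination points need not be sorted; -1 is out of range and clamps to fp[0]
out = np.interp([0, -1, 2], xp, fp)
```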

I feel that the basic algorithm, such as an np.interp equivalent, should be implemented upstream. I'm sure the Dask community would welcome this addition. Would you be interested in working on it?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement interp for interpolating between chunks of data (dask) 638909879
645139667 https://github.com/pydata/xarray/issues/1077#issuecomment-645139667 https://api.github.com/repos/pydata/xarray/issues/1077 MDEyOklzc3VlQ29tbWVudDY0NTEzOTY2Nw== fujiisoup 6815844 2020-06-17T04:21:40Z 2020-06-17T04:21:40Z MEMBER

@dcherian Now I understand. Your working examples were really helpful for understanding the idea. Thank you for the clarification.

I think the use of this convention is the best idea to save MultiIndex in netCDF. Maybe we can start implementing this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  MultiIndex serialization to NetCDF 187069161
644447471 https://github.com/pydata/xarray/issues/1077#issuecomment-644447471 https://api.github.com/repos/pydata/xarray/issues/1077 MDEyOklzc3VlQ29tbWVudDY0NDQ0NzQ3MQ== fujiisoup 6815844 2020-06-15T23:45:27Z 2020-06-15T23:45:27Z MEMBER

@dcherian I think the problem is how to serialize the MultiIndex object rather than the array itself. In your encoding, how can we tell whether the MultiIndex is [('a', 1), ('b', 1), ('a', 2), ('b', 2)] or [('a', 1), ('a', 2), ('b', 1), ('b', 2)]? Maybe we need to store objects similar to landpoint for the level variables, such as latpoint and lonpoint.

I think just using reset_index is simpler and easier to restore.
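A small pandas sketch of the reset_index round-trip (illustrative): the flat table stores one value per row for each level, so the original order is recoverable:

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples(
    [("a", 1), ("b", 1), ("a", 2), ("b", 2)], names=["letter", "number"]
)
s = pd.Series([10, 20, 30, 40], index=mi)

# flat, serializable representation: one column per level plus the values
flat = s.reset_index(name="value")
# restoring the MultiIndex preserves the original row order
restored = flat.set_index(["letter", "number"])["value"]
```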

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  MultiIndex serialization to NetCDF 187069161
644417331 https://github.com/pydata/xarray/issues/4156#issuecomment-644417331 https://api.github.com/repos/pydata/xarray/issues/4156 MDEyOklzc3VlQ29tbWVudDY0NDQxNzMzMQ== fujiisoup 6815844 2020-06-15T22:13:50Z 2020-06-15T22:13:50Z MEMBER

Do we already have a similar encoding (and decoding) scheme to write (and read) data? (Does CFTime use a similar scheme?) I think we don't have a scheme to save a MultiIndex yet; we need to convert manually with reset_index.

#1077

Maybe we can decide this encoding-decoding API before #1603.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  writing sparse to netCDF 638947370
644368878 https://github.com/pydata/xarray/issues/4156#issuecomment-644368878 https://api.github.com/repos/pydata/xarray/issues/4156 MDEyOklzc3VlQ29tbWVudDY0NDM2ODg3OA== fujiisoup 6815844 2020-06-15T20:27:37Z 2020-06-15T20:27:37Z MEMBER

@dcherian Though I have no experience with this gather compression, it looks like python-netCDF4 does not have this function implemented.

One thing we can do is sparse -> multiindex -> reset_index -> netCDF, or maybe we can even add a function that skips constructing a MultiIndex and just makes flattened index arrays from a sparse array.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  writing sparse to netCDF 638947370
636619598 https://github.com/pydata/xarray/issues/4113#issuecomment-636619598 https://api.github.com/repos/pydata/xarray/issues/4113 MDEyOklzc3VlQ29tbWVudDYzNjYxOTU5OA== fujiisoup 6815844 2020-06-01T05:24:35Z 2020-06-01T05:24:35Z MEMBER

Reading with chunks load the memory more than reading without chunks, but not loading an amount of memory equals to the size of the array (300MB for a 800MB array in the example below). And by the way, also loading up the memory a bit more when stacking.

I think it depends on the chunk size. If I use chunks=dict(x=128, y=128), the memory usage is

```
RAM: 118.14 MB
da: 800.0 MB
RAM: 119.14 MB
RAM: 125.59 MB
RAM: 943.79 MB
```

When stacking a chunked array, only chunks alongside the first stacking dimension are conserved, and chunks along the second stacking dimension seem to be merged.

I am not sure where 512 comes from in your example (maybe dask does something). If I work with chunks=dict(x=128, y=128), the chunk size after stacking is (100, 16384), which is reasonable (z=100, px=128*128).

A workaround could have been to save the data already stacked, but "MultiIndex cannot yet be serialized to netCDF".

You can apply reset_index before saving to netCDF, but it requires another computation to recreate the MultiIndex after loading.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.stack load data into memory 627735640
636418772 https://github.com/pydata/xarray/issues/4113#issuecomment-636418772 https://api.github.com/repos/pydata/xarray/issues/4113 MDEyOklzc3VlQ29tbWVudDYzNjQxODc3Mg== fujiisoup 6815844 2020-05-31T04:21:29Z 2020-05-31T04:21:29Z MEMBER

Thank you for raising an issue. I confirmed this problem is reproduced.

Since our lazy array does not support reshaping, the data is loaded automatically. This automatic loading happens in many other operations.

For example, multiplying your array by a scalar,

```python
mda = da * 2
```

also loads the data into memory. Maybe we should improve the documentation.

FYI, using dask arrays may solve this problem. To open the file with dask, you could add the chunks keyword:

```python
da = xr.open_dataarray("da.nc", chunks={'x': 16, 'y': 16})
```

Then, the reshape will be a lazy operation too.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.stack load data into memory 627735640
633453286 https://github.com/pydata/xarray/issues/4068#issuecomment-633453286 https://api.github.com/repos/pydata/xarray/issues/4068 MDEyOklzc3VlQ29tbWVudDYzMzQ1MzI4Ng== fujiisoup 6815844 2020-05-25T08:36:58Z 2020-05-25T08:36:58Z MEMBER

Thanks @DWesl. Maybe it is better to continue the discussion in #3297. I'll close this issue. Thanks for pointing it out.

@dcherian

Personally, I think the h5netcdf workaround is good enough until there is a CF standard for writing complex numbers.

Agreed. Thanks for your thoughts.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  utility function to save complex values as a netCDF file 619347681
633320749 https://github.com/pydata/xarray/pull/4069#issuecomment-633320749 https://api.github.com/repos/pydata/xarray/issues/4069 MDEyOklzc3VlQ29tbWVudDYzMzMyMDc0OQ== fujiisoup 6815844 2020-05-25T00:09:39Z 2020-05-25T00:09:39Z MEMBER

I'll merge this tomorrow.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Improve interp performance 619374891
629859433 https://github.com/pydata/xarray/pull/4069#issuecomment-629859433 https://api.github.com/repos/pydata/xarray/issues/4069 MDEyOklzc3VlQ29tbWVudDYyOTg1OTQzMw== fujiisoup 6815844 2020-05-17T20:56:34Z 2020-05-17T20:56:34Z MEMBER

Maybe I'll merge this in a few days.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Improve interp performance 619374891
624847519 https://github.com/pydata/xarray/pull/4036#issuecomment-624847519 https://api.github.com/repos/pydata/xarray/issues/4036 MDEyOklzc3VlQ29tbWVudDYyNDg0NzUxOQ== fujiisoup 6815844 2020-05-06T19:35:44Z 2020-05-06T19:35:44Z MEMBER

Added a style for Colab dark mode according to googlecolab/colabtools/issues/1214, and it now also works in Colab's dark theme :)

If no further comments, I'll merge this in a day.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  support darkmode 613044689
624438211 https://github.com/pydata/xarray/issues/4024#issuecomment-624438211 https://api.github.com/repos/pydata/xarray/issues/4024 MDEyOklzc3VlQ29tbWVudDYyNDQzODIxMQ== fujiisoup 6815844 2020-05-06T04:42:08Z 2020-05-06T04:42:08Z MEMBER

Thanks, @shoyer and @DocOtak for the suggestions.

It looks like there may be some standard ways to detect dark vs light mode in CSS? https://medium.com/js-dojo/how-to-enable-dark-mode-on-your-website-with-pure-css-32640335474

It doesn't seem to work in VS Code...

VS Code will tell you if it is in "dark" "light" or "high contrast" modes https://code.visualstudio.com/api/extension-guides/webview#theming-webview-content

In #4036 I used a `body.vscode-dark { }` CSS block, but a more general solution would be better if available...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  small contrast of html view in VScode darkmode 611643130
624359804 https://github.com/pydata/xarray/issues/4024#issuecomment-624359804 https://api.github.com/repos/pydata/xarray/issues/4024 MDEyOklzc3VlQ29tbWVudDYyNDM1OTgwNA== fujiisoup 6815844 2020-05-05T23:31:26Z 2020-05-05T23:31:26Z MEMBER

It looks like pandas takes a very different approach (and codebase), and I don't think it is easy to adapt their approach...

I am not familiar with the CSS stuff in Jupyter, but the simplest approach may be to disable the text and background coloring and use only the default colors. Our HTML repr would then be less pretty but maybe more robust.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  small contrast of html view in VScode darkmode 611643130
624350348 https://github.com/pydata/xarray/issues/4024#issuecomment-624350348 https://api.github.com/repos/pydata/xarray/issues/4024 MDEyOklzc3VlQ29tbWVudDYyNDM1MDM0OA== fujiisoup 6815844 2020-05-05T23:00:30Z 2020-05-05T23:00:30Z MEMBER

pandas has a good style. We may be able to adopt it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  small contrast of html view in VScode darkmode 611643130
624338446 https://github.com/pydata/xarray/issues/4024#issuecomment-624338446 https://api.github.com/repos/pydata/xarray/issues/4024 MDEyOklzc3VlQ29tbWVudDYyNDMzODQ0Ng== fujiisoup 6815844 2020-05-05T22:24:04Z 2020-05-05T22:24:04Z MEMBER

This is how it looks in light mode

Here is the css definition https://github.com/pydata/xarray/blob/59b470f5d1464366dc55b082618ea87da8fbc9af/xarray/static/css/style.css#L5-L14

It looks like --jp-content-font-color0 and --jp-layout-color0 come from the theme, but the others come from our default values. I have no idea yet how we can manage this...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  small contrast of html view in VScode darkmode 611643130
610707359 https://github.com/pydata/xarray/issues/3954#issuecomment-610707359 https://api.github.com/repos/pydata/xarray/issues/3954 MDEyOklzc3VlQ29tbWVudDYxMDcwNzM1OQ== fujiisoup 6815844 2020-04-08T01:53:23Z 2020-04-08T01:53:23Z MEMBER

Ah, OK. Makes sense. Thanks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Concatenate 3D array with 2D array 596249070
610705859 https://github.com/pydata/xarray/issues/3954#issuecomment-610705859 https://api.github.com/repos/pydata/xarray/issues/3954 MDEyOklzc3VlQ29tbWVudDYxMDcwNTg1OQ== fujiisoup 6815844 2020-04-08T01:48:23Z 2020-04-08T01:48:23Z MEMBER

Hi, @zxdawn

Thank you for raising the issue. I think you need an actual value for z, as your b.expand_dims('z') does not assign a value for z; it only records that z is a dimension name.

You can add a value (we call it a coordinate) for z like this:

```python
b['z'] = 3  # add a scalar coordinate named 'z'
```

Then, your script will work:

```python
b = b.expand_dims('z')  # expand 2d to 3d
comb = xr.concat([a, b], dim='z')
```

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Concatenate 3D array with 2D array 596249070
610615886 https://github.com/pydata/xarray/issues/3951#issuecomment-610615886 https://api.github.com/repos/pydata/xarray/issues/3951 MDEyOklzc3VlQ29tbWVudDYxMDYxNTg4Ng== fujiisoup 6815844 2020-04-07T20:56:07Z 2020-04-07T20:56:07Z MEMBER

Thanks, @delgadom, for reporting this issue. Reproduced.

I'll take a look.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  series.to_xarray() fails when MultiIndex not sorted in xarray 0.15.1 596115014
609558708 https://github.com/pydata/xarray/issues/3939#issuecomment-609558708 https://api.github.com/repos/pydata/xarray/issues/3939 MDEyOklzc3VlQ29tbWVudDYwOTU1ODcwOA== fujiisoup 6815844 2020-04-06T04:29:19Z 2020-04-06T04:29:19Z MEMBER

Agreed with @max-sixty. I also like sel and isel as they are clearly distinguishable. It is not clear to me whether parentheses correspond to sel or isel.

For me, the largest drawback of sel and isel is that autocompleters cannot suggest the dimension names (that is another issue, though)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Why don't we allow indexing with keyword args via __call__? 594688816
609483638 https://github.com/pydata/xarray/issues/3932#issuecomment-609483638 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTQ4MzYzOA== fujiisoup 6815844 2020-04-05T21:09:58Z 2020-04-05T21:09:58Z MEMBER

An inspection of the dask dashboard indicates that the computation is not distributed among workers though. How could I make sure this happens?

Ah, I have no idea... Are you able to distribute the function some_exp without wrapping it in xarray?

Within my limited knowledge, it may be better to prepare another function that distributes some_exp over the workers and pass that function to apply_ufunc, but I am not 100% sure. Probably there is a better way...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
609103467 https://github.com/pydata/xarray/pull/1469#issuecomment-609103467 https://api.github.com/repos/pydata/xarray/issues/1469 MDEyOklzc3VlQ29tbWVudDYwOTEwMzQ2Nw== fujiisoup 6815844 2020-04-04T23:24:20Z 2020-04-04T23:24:20Z MEMBER

Hi @johnomotani. I probably won't have time to finish this up, and it is already quite old. It would be nice if someone could update this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Argmin indexes 239918314
609094164 https://github.com/pydata/xarray/issues/3932#issuecomment-609094164 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTA5NDE2NA== fujiisoup 6815844 2020-04-04T21:54:41Z 2020-04-04T21:54:56Z MEMBER

Is

```python
xr.apply_ufunc(some_exp, ds.x, ds.y, dask='parallelized', output_dtypes=[float],
               output_sizes={'stats': Nstats}, output_core_dims=[['stats']],
               vectorize=True)
```

what you want? This gives

```python
<xarray.DataArray (x: 10, y: 20, stats: 5)>
array([[[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        ...
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]]])
Coordinates:
  * x        (x) int64 0 1 2 3 4 5 6 7 8 9
  * y        (y) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Dimensions without coordinates: stats
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
601411557 https://github.com/pydata/xarray/issues/3868#issuecomment-601411557 https://api.github.com/repos/pydata/xarray/issues/3868 MDEyOklzc3VlQ29tbWVudDYwMTQxMTU1Nw== fujiisoup 6815844 2020-03-19T20:53:30Z 2020-03-19T20:53:30Z MEMBER

How about passing an index instead of just a simple integer to the pad method?

```python
In [4]: da = xr.DataArray([0.5, 1.5, 2.5], dims=['x'], coords={'x': [0, 1, 2]})

In [5]: da
Out[5]:
<xarray.DataArray (x: 3)>
array([0.5, 1.5, 2.5])
Coordinates:
  * x        (x) int64 0 1 2

In [8]: da.pad(x=([-1, -2], 0))
Out[8]:
<xarray.DataArray (x: 5)>
array([nan, nan, 0.5, 1.5, 2.5])
Coordinates:
  * x        (x) int64 -1 -2 0 1 2
```

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  What should pad do about IndexVariables? 584461380
598887666 https://github.com/pydata/xarray/pull/3860#issuecomment-598887666 https://api.github.com/repos/pydata/xarray/issues/3860 MDEyOklzc3VlQ29tbWVudDU5ODg4NzY2Ng== fujiisoup 6815844 2020-03-13T19:55:01Z 2020-03-13T19:55:01Z MEMBER

Thank you, @mancellin, for sending the fix, and thank you @max-sixty for the review. It all looks great to me.

Merging. Have a good weekend:)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix multi-index with categorical values. 580646897
598047797 https://github.com/pydata/xarray/issues/3674#issuecomment-598047797 https://api.github.com/repos/pydata/xarray/issues/3674 MDEyOklzc3VlQ29tbWVudDU5ODA0Nzc5Nw== fujiisoup 6815844 2020-03-12T07:37:22Z 2020-03-12T07:37:22Z MEMBER

@mancellin Sorry for not responding. Yes, there may be some possible workarounds, but I have less spare time these days... Would you be interested in sending a PR?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multi-index with categorical values 547091670
578449448 https://github.com/pydata/xarray/pull/3670#issuecomment-578449448 https://api.github.com/repos/pydata/xarray/issues/3670 MDEyOklzc3VlQ29tbWVudDU3ODQ0OTQ0OA== fujiisoup 6815844 2020-01-25T22:38:10Z 2020-01-25T22:38:10Z MEMBER

Thanks, @dcherian and @keewis , for keeping this updated. Merging.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sel with categorical index 546784890
576246864 https://github.com/pydata/xarray/pull/3699#issuecomment-576246864 https://api.github.com/repos/pydata/xarray/issues/3699 MDEyOklzc3VlQ29tbWVudDU3NjI0Njg2NA== fujiisoup 6815844 2020-01-20T12:09:31Z 2020-01-20T12:09:31Z MEMBER

Thanks, @mathause :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/align in dot 550964139
576060074 https://github.com/pydata/xarray/pull/3699#issuecomment-576060074 https://api.github.com/repos/pydata/xarray/issues/3699 MDEyOklzc3VlQ29tbWVudDU3NjA2MDA3NA== fujiisoup 6815844 2020-01-19T23:33:17Z 2020-01-19T23:33:17Z MEMBER

I'll merge this after the conflict in whats-new.rst is solved.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/align in dot 550964139
574426008 https://github.com/pydata/xarray/pull/3670#issuecomment-574426008 https://api.github.com/repos/pydata/xarray/issues/3670 MDEyOklzc3VlQ29tbWVudDU3NDQyNjAwOA== fujiisoup 6815844 2020-01-14T23:40:08Z 2020-01-14T23:40:08Z MEMBER

I'll merge this tomorrow if there are no more comments.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sel with categorical index 546784890
574425136 https://github.com/pydata/xarray/issues/3694#issuecomment-574425136 https://api.github.com/repos/pydata/xarray/issues/3694 MDEyOklzc3VlQ29tbWVudDU3NDQyNTEzNg== fujiisoup 6815844 2020-01-14T23:37:11Z 2020-01-14T23:37:11Z MEMBER

I have no strong opinion, but if most of the arithmetic in xarray uses `join='inner'`, then it would be nicer to do so here too.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.dot requires equal indexes (join="exact") 549679475
573275521 https://github.com/pydata/xarray/pull/3670#issuecomment-573275521 https://api.github.com/repos/pydata/xarray/issues/3670 MDEyOklzc3VlQ29tbWVudDU3MzI3NTUyMQ== fujiisoup 6815844 2020-01-11T03:20:27Z 2020-01-11T03:20:27Z MEMBER

I think this PR is ready for review.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sel with categorical index 546784890
573270896 https://github.com/pydata/xarray/issues/3671#issuecomment-573270896 https://api.github.com/repos/pydata/xarray/issues/3671 MDEyOklzc3VlQ29tbWVudDU3MzI3MDg5Ng== fujiisoup 6815844 2020-01-11T02:24:19Z 2020-01-11T02:24:19Z MEMBER

> But I mistakenly thought that there was a performance penalty to doing this.

Yes, `construct(stride=2)` does exactly the same thing before returning the array. https://github.com/pydata/xarray/blob/ff75081304eb2e2784dcb229cc48a532da557896/xarray/core/rolling.py#L242
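A small sketch of that equivalence: `construct(stride=2)` materializes the full windowed array and then subsamples it, so it matches an explicit `isel` on the windowed result. The names `with_stride` and `by_isel` are just for illustration.

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(np.arange(4), dims=("x",))

# construct(stride=2) builds the windowed array and then subsamples it,
# so it equals an explicit isel() with the same step on the full result
with_stride = arr.rolling(x=2).construct("roll_x", stride=2)
by_isel = arr.rolling(x=2).construct("roll_x").isel(x=slice(None, None, 2))
```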

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  rolling.construct alignment 546791416
572718436 https://github.com/pydata/xarray/issues/3671#issuecomment-572718436 https://api.github.com/repos/pydata/xarray/issues/3671 MDEyOklzc3VlQ29tbWVudDU3MjcxODQzNg== fujiisoup 6815844 2020-01-09T19:32:29Z 2020-01-09T19:32:29Z MEMBER

Hi @mark-boer, thanks for raising an issue. I am not sure if I got the point exactly, but is the following similar to what you want?

```python
In [81]: arr = xr.DataArray(np.arange(4), dims=("x",))
    ...: arr.rolling(x=2).construct("roll_x").isel(x=slice(1, None, 2))
Out[81]:
<xarray.DataArray (x: 2, roll_x: 2)>
array([[0., 1.],
       [2., 3.]])
Dimensions without coordinates: x, roll_x
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  rolling.construct alignment 546791416
572520005 https://github.com/pydata/xarray/issues/3674#issuecomment-572520005 https://api.github.com/repos/pydata/xarray/issues/3674 MDEyOklzc3VlQ29tbWVudDU3MjUyMDAwNQ== fujiisoup 6815844 2020-01-09T11:27:39Z 2020-01-09T11:27:39Z MEMBER

xref: #3670

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multi-index with categorical values 547091670
572506873 https://github.com/pydata/xarray/issues/3675#issuecomment-572506873 https://api.github.com/repos/pydata/xarray/issues/3675 MDEyOklzc3VlQ29tbWVudDU3MjUwNjg3Mw== fujiisoup 6815844 2020-01-09T10:51:40Z 2020-01-09T10:51:40Z MEMBER

Hi @sfinkens. Thank you for raising an issue.

I think what you actually want would be

```python
In [16]: ds = xr.Dataset({'data': ('x', [1, 2]),
    ...:                  'x': ('x', [1, 2])
    ...:                  }, coords={'x_bnds': (('x', 'bnds'), [[0.5, 1.5], [1.5, 2.5]])})
    ...: ds['x'].attrs['bounds'] = 'x_bnds'
    ...: ds = ds.expand_dims({'time': [0]})

In [17]: ds
Out[17]:
<xarray.Dataset>
Dimensions:  (bnds: 2, time: 1, x: 2)
Coordinates:
  * time     (time) int64 0
  * x        (x) int64 1 2
    x_bnds   (x, bnds) float64 0.5 1.5 1.5 2.5
Dimensions without coordinates: bnds
Data variables:
    data     (time, x) int64 1 2
```

where `x_bnds` would be a coordinate rather than a data variable.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset.expand_dims expands dimensions on coordinate bounds 547373923
572266378 https://github.com/pydata/xarray/issues/3674#issuecomment-572266378 https://api.github.com/repos/pydata/xarray/issues/3674 MDEyOklzc3VlQ29tbWVudDU3MjI2NjM3OA== fujiisoup 6815844 2020-01-08T21:32:04Z 2020-01-08T21:32:04Z MEMBER

Thanks for reporting again. OK. It looks there are several places to be fixed.

Please add comments here if you find another not-working case.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multi-index with categorical values 547091670
572256690 https://github.com/pydata/xarray/issues/3669#issuecomment-572256690 https://api.github.com/repos/pydata/xarray/issues/3669 MDEyOklzc3VlQ29tbWVudDU3MjI1NjY5MA== fujiisoup 6815844 2020-01-08T21:06:06Z 2020-01-08T21:06:06Z MEMBER

Let's close this after #3670 is merged.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fail to sel() when index comes from categorical pandas Series 546727720
572239991 https://github.com/pydata/xarray/pull/3670#issuecomment-572239991 https://api.github.com/repos/pydata/xarray/issues/3670 MDEyOklzc3VlQ29tbWVudDU3MjIzOTk5MQ== fujiisoup 6815844 2020-01-08T20:20:50Z 2020-01-08T20:20:50Z MEMBER

I don't think the check failure is related.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sel with categorical index 546784890
571995984 https://github.com/pydata/xarray/issues/3669#issuecomment-571995984 https://api.github.com/repos/pydata/xarray/issues/3669 MDEyOklzc3VlQ29tbWVudDU3MTk5NTk4NA== fujiisoup 6815844 2020-01-08T10:51:46Z 2020-01-08T10:51:46Z MEMBER

Thanks, @mancellin

I sent a quick fix. Please feel free to comment there.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fail to sel() when index comes from categorical pandas Series 546727720
571108889 https://github.com/pydata/xarray/pull/3663#issuecomment-571108889 https://api.github.com/repos/pydata/xarray/issues/3663 MDEyOklzc3VlQ29tbWVudDU3MTEwODg4OQ== fujiisoup 6815844 2020-01-06T11:41:51Z 2020-01-06T11:41:51Z MEMBER

Thanks, @yohai !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Typo in Universal Functions section 545624732
570557238 https://github.com/pydata/xarray/pull/3658#issuecomment-570557238 https://api.github.com/repos/pydata/xarray/issues/3658 MDEyOklzc3VlQ29tbWVudDU3MDU1NzIzOA== fujiisoup 6815844 2020-01-03T12:17:17Z 2020-01-03T12:17:17Z MEMBER

thanks, @hazbottles Merged

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add multiindex level name checking to .rename() 544371732
565757852 https://github.com/pydata/xarray/issues/3245#issuecomment-565757852 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDU2NTc1Nzg1Mg== fujiisoup 6815844 2019-12-14T22:14:03Z 2019-12-14T22:14:03Z MEMBER

What is the best way to save a sparse array to disk?

One naive way would be to use `stack` -> `reset_index`, but this flattens the coordinates, and if there is another variable that depends on these coordinates, it will also be flattened and may consume a lot of space.
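A small sketch of that workaround (shown on a dense array, without the sparse backend itself), illustrating how `stack` -> `reset_index` flattens the coordinates by duplicating their values along the stacked dimension:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"),
                  coords={"x": [0, 1], "y": [10, 20, 30]}, name="data")

# stack() flattens to a MultiIndex; reset_index() turns the index levels
# into plain 1-d coordinates whose values repeat along "z"
flat = da.stack(z=("x", "y")).reset_index("z")
```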

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
564303463 https://github.com/pydata/xarray/pull/3607#issuecomment-564303463 https://api.github.com/repos/pydata/xarray/issues/3607 MDEyOklzc3VlQ29tbWVudDU2NDMwMzQ2Mw== fujiisoup 6815844 2019-12-10T23:16:51Z 2019-12-10T23:16:51Z MEMBER

@niowniow Thank you for your contribution!

I think the stride option is a good idea. One question is how to implement the nan-reduction methods efficiently.

Currently we use `bottleneck`, if it is installed, to speed up nan-ops, but bottleneck does not support a stride option. Another problem is the inefficiency of numpy's nan-ops for strided arrays; numpy copies the strided array into a full array and replaces np.nan by zero before the reduction.

One way we could do this is to
1. skip using `bottleneck` if the stride is other than 1, and
2. implement our own nan-ops for rolling. For example, for `nansum`, we can replace np.nan by 0 before creating the strided array and apply the usual sum to it.

We did a similar thing in `rolling.count`.
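A minimal sketch of the zero-fill-then-sum idea (not the actual implementation in `rolling.py`): filling NaN with 0 before building the windows lets a plain sum over the constructed window dimension behave like a rolling nansum.

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, np.nan, 2.0, 3.0], dims=["x"])

# replace NaN by 0 before building the windows; a plain sum over the
# constructed window dimension then acts as a rolling nansum
windowed = da.fillna(0.0).rolling(x=2).construct("w")
rolling_nansum = windowed.sum("w")
```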

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Strided rolling 535686852
562821225 https://github.com/pydata/xarray/pull/3596#issuecomment-562821225 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU2MjgyMTIyNQ== fujiisoup 6815844 2019-12-07T06:47:32Z 2019-12-07T06:47:32Z MEMBER

Hi, @mark-boer. In #3587, I tried using dask's pad method but noticed a few bugs in older (but newer than 1.2) dask versions. It would be very welcome if you added this method to `dask_array_compat`. I will then wait to merge #3587 until this PR is completed.

Thanks for your contribution :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
555537348 https://github.com/pydata/xarray/issues/3546#issuecomment-555537348 https://api.github.com/repos/pydata/xarray/issues/3546 MDEyOklzc3VlQ29tbWVudDU1NTUzNzM0OA== fujiisoup 6815844 2019-11-19T14:40:01Z 2019-11-19T14:40:01Z MEMBER

> This behaviour, however, seems to be slightly different from the `.loc` API of pandas.DataFrame which can take boolean arrays for selection. Is there a reason for the discrepancy?

Hi, @roxyboy

This is just because multidimensional boolean indexing is not yet implemented in xarray (#1887). One-dimensional indexing does work with `.loc`,

```python
In [2]: da = xr.DataArray([0, 1, 2], dims=['x'])

In [3]: da.loc[da < 1]
Out[3]:
<xarray.DataArray (x: 1)>
array([0])
Dimensions without coordinates: x
```

FYI, in xarray the `.sel` and `.isel` methods are probably more convenient than `.loc`, as we don't need to remember the dimension order. For the above (my) example, I would write `da.isel(x=da < 1)` instead of `da.loc[da < 1]`.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  loc API gives KeyError: "not all values found in index" 524940277
554795681 https://github.com/pydata/xarray/issues/3245#issuecomment-554795681 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDU1NDc5NTY4MQ== fujiisoup 6815844 2019-11-17T22:38:51Z 2019-11-17T22:38:51Z MEMBER

Have we arrived at a consensus here on the API to change between the sparse and numpy backends? xref #3542

To make it a sparse array, would `to_sparse()` be better, or `as_sparse()`?
+ `to_sparse()` is probably consistent with the `todense()` method
+ `as_sparse()` sounds similar to sparse's functions, e.g. `sparse.as_coo`

To change the backend back from a sparse array, would `to_dense()` be better? FYI, sparse uses `todense()`.

I personally like `as_sparse` or `as_numpy` (or `as_dense`?), which sound similar to `astype`, which returns an xarray object rather than the dtype itself.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
554643027 https://github.com/pydata/xarray/pull/3541#issuecomment-554643027 https://api.github.com/repos/pydata/xarray/issues/3541 MDEyOklzc3VlQ29tbWVudDU1NDY0MzAyNw== fujiisoup 6815844 2019-11-16T14:37:01Z 2019-11-16T14:37:01Z MEMBER

Thanks, @max-sixty, for the review :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added fill_value for unstack 523831612

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);