issue_comments
7 rows where author_association = "NONE" and issue = 638909879 (Implement interp for interpolating between chunks of data (dask)), sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
687776559 | https://github.com/pydata/xarray/pull/4155#issuecomment-687776559 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY4Nzc3NjU1OQ== | lazyoracle 11018951 | 2020-09-06T12:27:15Z | 2020-09-06T12:27:15Z | NONE | @max-sixty Is there a timeline on when we can expect this feature in a stable release? Is it scheduled for the next minor release and to be made available on | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
674579300 | https://github.com/pydata/xarray/pull/4155#issuecomment-674579300 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY3NDU3OTMwMA== | cyhsu 5323645 | 2020-08-16T21:18:48Z | 2020-08-16T21:48:06Z | NONE | Gotcha! Yes, it is. If I have many points in lat, lon, depth, and time, I had better chunk my input arrays at this stage to speed up performance. The reason I asked this question is that I thought chunking the input array before interpolation would be faster than not chunking it, but in my test case it is not. Please see the attached: the results show the parallel case is far slower than the normal case. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
674578856 | https://github.com/pydata/xarray/pull/4155#issuecomment-674578856 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY3NDU3ODg1Ng== | cyhsu 5323645 | 2020-08-16T21:14:46Z | 2020-08-16T21:14:46Z | NONE | @pums974 Then what about doing the interpolation with the input array chunked along the interpolated dimension? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
674577513 | https://github.com/pydata/xarray/pull/4155#issuecomment-674577513 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY3NDU3NzUxMw== | cyhsu 5323645 | 2020-08-16T21:02:50Z | 2020-08-16T21:02:50Z | NONE | @fujiisoup Thanks for the response. Since I have not yet updated my xarray package to this beta version, I hope you can answer an additional question for me. For the interpolation, which way is faster: a. chunk the dataset and then interpolate, or b. chunk the interpolation coordinates and then interpolate? b. `x = xr.DataArray(data=da.from_array(np.linspace(0, 1), chunks=2), dims='x'); res = data.interp(x=x)` | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
674319860 | https://github.com/pydata/xarray/pull/4155#issuecomment-674319860 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY3NDMxOTg2MA== | cyhsu 5323645 | 2020-08-15T00:22:07Z | 2020-08-15T00:22:07Z | NONE | @fujiisoup Thanks for letting me know, but I am still unable to do it even though I have updated my xarray via `conda update xarray`. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
674288483 | https://github.com/pydata/xarray/pull/4155#issuecomment-674288483 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY3NDI4ODQ4Mw== | cyhsu 5323645 | 2020-08-14T21:57:02Z | 2020-08-14T21:57:02Z | NONE | Hi, just curious about this. I have followed the discussion since this issue was opened. Is this chunked interpolation solved already? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
644178631 | https://github.com/pydata/xarray/pull/4155#issuecomment-644178631 | https://api.github.com/repos/pydata/xarray/issues/4155 | MDEyOklzc3VlQ29tbWVudDY0NDE3ODYzMQ== | pep8speaks 24736507 | 2020-06-15T14:43:57Z | 2020-07-31T18:56:12Z | NONE | Hello @pums974! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found: There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers: Comment last updated at 2020-07-31 18:56:12 UTC | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Implement interp for interpolating between chunks of data (dask) 638909879 |
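
The exchange above contrasts two ways to combine dask chunking with `DataArray.interp`. Below is a minimal, illustrative sketch of both strategies, assuming an xarray version that includes this PR (0.16.1 or later) plus dask and scipy installed; the names `data`, `new_x`, and `x_chunked` are made up for illustration, and only snippet b comes from the thread itself:

```python
import numpy as np
import dask.array as da
import xarray as xr

# Synthetic 1-D source data; interp uses scipy as its backend.
data = xr.DataArray(
    np.sin(np.linspace(0, 10, 1000)),
    dims="x",
    coords={"x": np.linspace(0, 1, 1000)},
)
new_x = np.linspace(0, 1, 50)

# (a) Chunk the source array, then interpolate onto in-memory targets.
#     Interpolating across chunk boundaries is what this PR enables.
res_a = data.chunk({"x": 250}).interp(x=new_x)

# (b) Keep the source in memory, but pass a dask-backed target coordinate
#     (this is the pattern from snippet b in the thread).
x_chunked = xr.DataArray(da.from_array(new_x, chunks=10), dims="x")
res_b = data.interp(x=x_chunked)

# .compute() forces evaluation where a result is dask-backed
# (and is a harmless no-op otherwise).
print(res_a.compute().shape, res_b.compute().shape)  # (50,) (50,)
```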
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
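
The page's row filter can be reproduced directly against this schema. Here is a hedged sketch using Python's built-in sqlite3 module; the database filename `github.db` is a hypothetical stand-in, since the page does not name the file:

```python
import sqlite3

# Hypothetical filename: the page does not say where the SQLite file lives.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

# Same filter as the page: non-member comments on issue 638909879,
# newest update first.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND issue = ?
    ORDER BY updated_at DESC
    """,
    (638909879,),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])

conn.close()
```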