issue_comments
3 rows where issue = 1379372915 and user = 691772 sorted by updated_at descending

id: 1268031159
html_url: https://github.com/pydata/xarray/issues/7059#issuecomment-1268031159
issue_url: https://api.github.com/repos/pydata/xarray/issues/7059
node_id: IC_kwDOAMm_X85LlJ63
user: lumbric (691772)
created_at: 2022-10-05T07:02:23Z
updated_at: 2022-10-05T07:02:48Z
author_association: CONTRIBUTOR
body:
What do you mean by that?

Uhm yes, you are right, this should be removed, not sure how this happened. Removing.

Oh wow, thanks! Haven't seen flox before.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: pandas.errors.InvalidIndexError raised when running computation in parallel using dask (1379372915)

id: 1254873700
html_url: https://github.com/pydata/xarray/issues/7059#issuecomment-1254873700
issue_url: https://api.github.com/repos/pydata/xarray/issues/7059
node_id: IC_kwDOAMm_X85Ky9pk
user: lumbric (691772)
created_at: 2022-09-22T11:09:16Z
updated_at: 2022-09-22T11:09:16Z
author_association: CONTRIBUTOR
body:
I have managed to reduce the reproducing example (see "Minimal Complete Verifiable Example 2" above) and then also found a proper solution to fix this issue. I am still not sure whether this is a bug or intended behavior, so I won't close the issue for now. Basically the issue occurs when a chunked NetCDF file is loaded from disk, passed to

```diff
--- run-broken.py	2022-09-22 13:00:41.095555961 +0200
+++ run.py	2022-09-22 13:01:14.452696511 +0200
@@ -30,17 +30,17 @@
 def resample_annually(data):
     return data.sortby("time").resample(time="1A", label="left", loffset="1D").mean(dim="time")
```

This seems to fix this issue and seems to be the proper solution anyway. I still don't see why I am not allowed to use
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: pandas.errors.InvalidIndexError raised when running computation in parallel using dask (1379372915)
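
The diff in the comment above is cut off in this export, so only the resample_annually helper is visible. Below is a hedged sketch of the pattern that helper shows (sort by time, then aggregate to annual means); the toy DataArray and its sizes are invented for illustration, and only the sortby/resample call itself comes from the comment.

```python
# Hedged sketch of the resampling pattern shown in the truncated diff above.
# The input data is invented; only sortby("time") + resample(time="1A", ...)
# is taken from the comment. Note that loffset, copied from the original diff,
# is deprecated in newer xarray releases.
import numpy as np
import pandas as pd
import xarray as xr


def resample_annually(data):
    # Sort by time first, then take annual means labelled at the left edge,
    # shifted by one day, exactly as in the diff fragment.
    return (
        data.sortby("time")
        .resample(time="1A", label="left", loffset="1D")
        .mean(dim="time")
    )


if __name__ == "__main__":
    time = pd.date_range("2000-01-01", "2002-12-31", freq="D")
    data = xr.DataArray(
        np.random.rand(time.size), coords={"time": time}, dims="time", name="wind"
    )
    print(resample_annually(data))
```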

id: 1252561840
html_url: https://github.com/pydata/xarray/issues/7059#issuecomment-1252561840
issue_url: https://api.github.com/repos/pydata/xarray/issues/7059
node_id: IC_kwDOAMm_X85KqJOw
user: lumbric (691772)
created_at: 2022-09-20T15:54:48Z
updated_at: 2022-09-20T15:54:48Z
author_association: CONTRIBUTOR
body:
@benbovy thanks for the hint! I tried passing an explicit lock to
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: pandas.errors.InvalidIndexError raised when running computation in parallel using dask (1379372915)
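
The comment above is truncated mid-sentence, so the exact call it describes is not recoverable from this export. As a hedged sketch only, this is one common way to pass an explicit lock when opening a NetCDF file for dask-backed computation; it assumes the netcdf4 backend's lock keyword (accepted by xarray releases from this period) and uses an invented file name.

```python
# Hedged sketch: the exact call in the truncated comment is unknown. This shows
# passing an explicit lock to the netcdf4 backend, assuming that backend accepts
# a `lock` keyword (true for xarray releases around 2022). The file name and
# chunk size are invented for illustration.
import xarray as xr
from dask.utils import SerializableLock

netcdf_lock = SerializableLock()

ds = xr.open_dataset(
    "input.nc",            # hypothetical file name
    engine="netcdf4",
    chunks={"time": 365},  # open lazily as dask arrays
    lock=netcdf_lock,      # serialize reads from the underlying NetCDF file
)
```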

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
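
A minimal sketch of reproducing this page's query ("3 rows where issue = 1379372915 and user = 691772 sorted by updated_at descending") against a local SQLite copy of the table defined above; the database file name is an assumption.

```python
# Minimal sketch: run the page's query against a local SQLite copy of the
# issue_comments table. The database file name ("github.db") is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local export of this database
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 1379372915 AND user = 691772
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)
conn.close()
```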