issue_comments
5 rows where author_association = "MEMBER", issue = 379177627, and user = 1217238, sorted by updated_at descending
5 comments on HDF Errors since xarray 0.11 (issue 379177627), newest first:

437717439 · shoyer (1217238) · MEMBER · created/updated 2018-11-11T23:52:34Z
https://github.com/pydata/xarray/issues/2551#issuecomment-437717439
Previously the indexing operation would sometimes return a NumPy array. Now it's always lazy, so accessing the dataset when the file is closed fails. Perhaps we should raise our own error though instead of just passing things through to netCDF4.

437707332 · shoyer (1217238) · MEMBER · created/updated 2018-11-11T21:37:42Z
https://github.com/pydata/xarray/issues/2551#issuecomment-437707332
My best guess is that …

437618316 · shoyer (1217238) · MEMBER · created/updated 2018-11-10T20:18:31Z
https://github.com/pydata/xarray/issues/2551#issuecomment-437618316
@fmaussion Yep, that looks like a bug on your end. Were you using …

437483895 · shoyer (1217238) · MEMBER · created/updated 2018-11-09T20:22:03Z
https://github.com/pydata/xarray/issues/2551#issuecomment-437483895
I am slightly regretting not doing a release candidate here, but hopefully this should be straightforward to fix.

437471566 · shoyer (1217238) · MEMBER · created/updated 2018-11-09T19:37:15Z
https://github.com/pydata/xarray/issues/2551#issuecomment-437471566
Oh my -- sorry about that! Thinking about this a little more, I guess this should not be too surprising since I don't think we have any dask integration tests that cover appending to existing files. Maybe a good place to start would be adding one of those, e.g., adapted from this existing test: https://github.com/pydata/xarray/blob/575e97aef405c9b473508f5bc0e66332df4930f3/xarray/tests/test_distributed.py#L66
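Comment 437717439 suggests raising an xarray-level error when lazily indexed data is touched after its file is closed, rather than letting a low-level netCDF4/HDF5 error bubble up. The following is a minimal, hypothetical sketch of that pattern in plain Python; it is not xarray's actual implementation, and the names (`LazyFileArray`, `ClosedFileError`, `FakeDataset`) are invented for illustration.

```python
class ClosedFileError(RuntimeError):
    """Raised when lazily indexed data is accessed after its file is closed."""


class LazyFileArray:
    """Lazy wrapper around file-backed values; checks file state on access."""

    def __init__(self, values, is_open):
        self._values = values    # stand-in for on-disk variable data
        self._is_open = is_open  # callable reporting whether the file is open

    def __getitem__(self, key):
        # Instead of forwarding to a closed backend and failing with an
        # opaque HDF error, raise a descriptive error up front.
        if not self._is_open():
            raise ClosedFileError(
                "cannot access lazily indexed data: the underlying file is closed"
            )
        return self._values[key]


class FakeDataset:
    """Minimal stand-in for a file-backed dataset with a lazy variable."""

    def __init__(self, values):
        self._open = True
        self.var = LazyFileArray(values, lambda: self._open)

    def close(self):
        self._open = False


ds = FakeDataset([1, 2, 3])
print(ds.var[0])  # works while the "file" is open
ds.close()
try:
    ds.var[0]
except ClosedFileError as err:
    print("error:", err)
```

The design choice mirrored here is that the check happens eagerly in `__getitem__`, so users get one clear error at the point of access instead of a backend-specific traceback.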
CREATE TABLE [issue_comments] (
  [html_url] TEXT,
  [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT,
  [updated_at] TEXT,
  [author_association] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
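The query this page describes (MEMBER comments by user 1217238 on issue 379177627, newest updated_at first) can be reproduced against the schema above with the standard-library sqlite3 module. This is an illustrative sketch using one row of the data shown on this page; the real data lives in the Datasette instance the page was exported from, and the referenced [users]/[issues] tables are stubbed with only their primary keys.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Recreate the schema: stub parent tables, then issue_comments as above.
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
  [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
  [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
  [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
  [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One row taken from this page.
conn.execute(
    "INSERT INTO issue_comments (id, user, issue, author_association, updated_at) "
    "VALUES (?, ?, ?, ?, ?)",
    (437717439, 1217238, 379177627, "MEMBER", "2018-11-11T23:52:34Z"),
)

# The page's query: filter by association, issue, and user; newest first.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = ? AND user = ? "
    "ORDER BY updated_at DESC",
    (379177627, 1217238),
).fetchall()
print(rows)
```

Note that `idx_issue_comments_user` and `idx_issue_comments_issue` exist precisely to make this kind of per-user, per-issue filter cheap.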