issue_comments
7 rows where author_association = "MEMBER" and issue = 218260909 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
298433889 | https://github.com/pydata/xarray/issues/1340#issuecomment-298433889 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5ODQzMzg4OQ== | shoyer 1217238 | 2017-05-01T21:11:15Z | 2017-05-01T21:11:15Z | MEMBER | @karenamckinnon In this case, it was in the file paths, i.e., | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
297572440 | https://github.com/pydata/xarray/issues/1340#issuecomment-297572440 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5NzU3MjQ0MA== | shoyer 1217238 | 2017-04-26T23:48:29Z | 2017-04-26T23:48:29Z | MEMBER | @karenamckinnon From your traceback, it looks like you're using pandas 0.14, but xarray requires at least pandas 0.15. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
297566576 | https://github.com/pydata/xarray/issues/1340#issuecomment-297566576 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5NzU2NjU3Ng== | shoyer 1217238 | 2017-04-26T23:08:55Z | 2017-04-26T23:08:55Z | MEMBER | @karenamckinnon could you please share a traceback for the error? | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
290484162 | https://github.com/pydata/xarray/issues/1340#issuecomment-290484162 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5MDQ4NDE2Mg== | rabernat 1197350 | 2017-03-30T17:33:02Z | 2017-03-30T17:33:02Z | MEMBER | This sounds like the kind of thing I could manage. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
290480036 | https://github.com/pydata/xarray/issues/1340#issuecomment-290480036 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5MDQ4MDAzNg== | shoyer 1217238 | 2017-03-30T17:18:22Z | 2017-03-30T17:18:22Z | MEMBER | Indeed, it's not. We should add some way to pipe these arguments through | { "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
290478206 | https://github.com/pydata/xarray/issues/1340#issuecomment-290478206 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5MDQ3ODIwNg== | rabernat 1197350 | 2017-03-30T17:12:01Z | 2017-03-30T17:12:01Z | MEMBER | http://xarray.pydata.org/en/latest/generated/xarray.open_mfdataset.html#xarray-open-mfdataset | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
290477014 | https://github.com/pydata/xarray/issues/1340#issuecomment-290477014 | https://api.github.com/repos/pydata/xarray/issues/1340 | MDEyOklzc3VlQ29tbWVudDI5MDQ3NzAxNA== | shoyer 1217238 | 2017-03-30T17:07:50Z | 2017-03-30T17:07:50Z | MEMBER | My strong suspicion is that the bottleneck here is xarray checking all the coordinates for equality in concat, when deciding whether to add a "time" dimension or not. Try passing This was a convenient check for small/in-memory datasets but possibly it's not a good one going forward. It's generally slow to load all the coordinate data for comparisons, but it's even worse with the current implementation, which computes pair-wise comparisons of arrays with dask instead of doing them in parallel all at once. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | round-trip performance with save_mfdataset / open_mfdataset 218260909
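The last comment above points at coordinate-equality checks in concat as the likely bottleneck (the specific option it suggested was lost in extraction). A minimal sketch of skipping those comparisons, assuming the `coords="minimal"` / `compat="override"` options of recent xarray versions and small in-memory datasets standing in for the files that open_mfdataset would read:

```python
import numpy as np
import xarray as xr

# Two single-timestep datasets on the same lat grid, standing in for the
# per-file datasets that save_mfdataset / open_mfdataset would round-trip.
datasets = [
    xr.Dataset(
        {"tas": (("time", "lat"), np.arange(4.0).reshape(1, 4) + t)},
        coords={"time": [t], "lat": np.arange(4)},
    )
    for t in range(2)
]

# Default concat compares coordinates like "lat" across every dataset to
# decide what varies along the new dimension. coords="minimal" together with
# compat="override" skips those comparisons and takes non-varying
# coordinates from the first dataset.
combined = xr.concat(datasets, dim="time", coords="minimal", compat="override")
print(dict(combined.sizes))
```

The same keywords can be forwarded through open_mfdataset in current xarray, which is the plumbing the thread is asking for.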
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
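The schema can be exercised with Python's built-in sqlite3 module to reproduce the query behind this page (author_association = "MEMBER", issue = 218260909, newest first). This is a sketch: the REFERENCES clauses are dropped because the users and issues tables are not shown here, and only one abbreviated row is inserted.

```python
import json
import sqlite3

# In-memory copy of the issue_comments schema, without the foreign keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
    [author_association] TEXT, [body] TEXT, [reactions] TEXT,
    [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One abbreviated row from the table above; note that reactions are stored
# as JSON text in a TEXT column.
conn.execute(
    "INSERT INTO issue_comments "
    "(id, user, created_at, updated_at, author_association, body, reactions, issue) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    (298433889, 1217238, "2017-05-01T21:11:15Z", "2017-05-01T21:11:15Z",
     "MEMBER", "@karenamckinnon In this case, it was in the file paths",
     json.dumps({"total_count": 0, "+1": 0}), 218260909),
)

# The query this page renders: MEMBER comments on issue 218260909,
# sorted by updated_at descending (idx_issue_comments_issue serves the filter).
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = 218260909 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)
```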