issue_comments
6 rows where author_association = "CONTRIBUTOR" and issue = 307318224 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
738189796 | https://github.com/pydata/xarray/issues/2004#issuecomment-738189796 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDczODE4OTc5Ng== | WeatherGod 291576 | 2020-12-03T18:15:35Z | 2020-12-03T18:15:35Z | CONTRIBUTOR | I think so, at least in terms of my original problem. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Slicing DataArray can take longer than not slicing 307318224 |
375056363 | https://github.com/pydata/xarray/issues/2004#issuecomment-375056363 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTA1NjM2Mw== | WeatherGod 291576 | 2018-03-21T18:50:58Z | 2018-03-21T18:50:58Z | CONTRIBUTOR | Ah, nevermind, I see that our examples only had one greater-than-one stride | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Slicing DataArray can take longer than not slicing 307318224 |
375056077 | https://github.com/pydata/xarray/issues/2004#issuecomment-375056077 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTA1NjA3Nw== | WeatherGod 291576 | 2018-03-21T18:50:01Z | 2018-03-21T18:50:01Z | CONTRIBUTOR | Dunno. I can't seem to get that engine working on my system. Reading through that thread, I wonder if the optimization they added only applies if there is only one stride greater than one? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Slicing DataArray can take longer than not slicing 307318224 |
375036951 | https://github.com/pydata/xarray/issues/2004#issuecomment-375036951 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTAzNjk1MQ== | WeatherGod 291576 | 2018-03-21T17:51:54Z | 2018-03-21T17:51:54Z | CONTRIBUTOR | This might be relevant: https://github.com/Unidata/netcdf4-python/issues/680 Still reading through the thread. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Slicing DataArray can take longer than not slicing 307318224 |
375034973 | https://github.com/pydata/xarray/issues/2004#issuecomment-375034973 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTAzNDk3Mw== | WeatherGod 291576 | 2018-03-21T17:46:09Z | 2018-03-21T17:46:09Z | CONTRIBUTOR | my bet is probably netCDF4-python. Don't want to write up the C code though to confirm it. Sigh... this isn't going to be a fun one to track down. Shall I open a bug report over there? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Slicing DataArray can take longer than not slicing 307318224 |
375014480 | https://github.com/pydata/xarray/issues/2004#issuecomment-375014480 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTAxNDQ4MA== | WeatherGod 291576 | 2018-03-21T16:50:59Z | 2018-03-21T16:56:13Z | CONTRIBUTOR | Yeah, good example. Eliminates a lot of possible variables such as problems with netcdf4 compression and such. Probably should see if it happens in v0.10.0 to see if the changes to the indexing system caused this. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Slicing DataArray can take longer than not slicing 307318224 |
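The comments above discuss strided reads from a netCDF file being slower than reading everything and slicing in memory. As an illustrative sketch only (using a NumPy memmap on a plain `.npy` file, not netCDF4 or xarray, so the array name, shape, and file are all made up for this example), the two access patterns being compared look like this:

```python
import os
import tempfile
import time

import numpy as np

# Hypothetical on-disk array standing in for the netCDF variable in the issue.
path = os.path.join(tempfile.mkdtemp(), "demo.npy")
np.save(path, np.zeros((200, 200, 50), dtype="f8"))

mm = np.load(path, mmap_mode="r")

# Pattern 1: strided read straight from storage (two strides greater than one).
t0 = time.perf_counter()
strided = np.array(mm[::2, ::2, :])
t_strided = time.perf_counter() - t0

# Pattern 2: contiguous read of the whole array, then slice in memory.
t0 = time.perf_counter()
full = np.array(mm)[::2, ::2, :]
t_full = time.perf_counter() - t0

print(f"strided read: {t_strided:.4f}s, full read + slice: {t_full:.4f}s")
```

Both patterns produce an array of shape `(100, 100, 50)`; which one is faster depends on the storage layer, which is exactly what the thread is probing.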
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
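A minimal sketch of the query this page runs against that schema, using Python's built-in `sqlite3` (the `REFERENCES` clauses are dropped here because the `users` and `issues` tables are not part of this page, and the single inserted row is a stand-in built from the first table row above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (738189796, 291576, '2020-12-03T18:15:35Z', 'CONTRIBUTOR', 307318224)"
)

# The filter and sort shown at the top of this page.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'CONTRIBUTOR' AND issue = 307318224"
    " ORDER BY updated_at DESC"
).fetchall()
print(rows)
```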