issue_comments
5 rows where issue = 244016361 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 703191101 | https://github.com/pydata/xarray/issues/1483#issuecomment-703191101 | https://api.github.com/repos/pydata/xarray/issues/1483 | MDEyOklzc3VlQ29tbWVudDcwMzE5MTEwMQ== | stale[bot] 26384082 | 2020-10-04T02:32:44Z | 2020-10-04T02:32:44Z | NONE | In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here or remove the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Loss of coordinate information from groupby.apply() on a stacked object 244016361 |
| 316773753 | https://github.com/pydata/xarray/issues/1483#issuecomment-316773753 | https://api.github.com/repos/pydata/xarray/issues/1483 | MDEyOklzc3VlQ29tbWVudDMxNjc3Mzc1Mw== | shoyer 1217238 | 2017-07-20T17:25:27Z | 2017-07-20T17:25:27Z | MEMBER | This wasn't intentional. If we can fix it in a straightforward fashion, we definitely should. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Loss of coordinate information from groupby.apply() on a stacked object 244016361 |
| 316381473 | https://github.com/pydata/xarray/issues/1483#issuecomment-316381473 | https://api.github.com/repos/pydata/xarray/issues/1483 | MDEyOklzc3VlQ29tbWVudDMxNjM4MTQ3Mw== | byersiiasa 17701232 | 2017-07-19T13:12:03Z | 2017-07-19T13:12:03Z | NONE | @darothen yes you are right - this is definitely not a good way to apply mean - I was just using mean as a (poor) example, trying not to over-complicate or distract from the issue. But, as you suggest, this is what I do when needing to apply customised functions like from scipy... which can end up being slow. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Loss of coordinate information from groupby.apply() on a stacked object 244016361 |
| 316377854 | https://github.com/pydata/xarray/issues/1483#issuecomment-316377854 | https://api.github.com/repos/pydata/xarray/issues/1483 | MDEyOklzc3VlQ29tbWVudDMxNjM3Nzg1NA== | darothen 4992424 | 2017-07-19T12:59:04Z | 2017-07-19T12:59:04Z | NONE | Instead of computing the mean over your non-stacked dimension by [code], why not just instead call [code], so that you just collapse the time dimension and preserve the attributes on your data? Then you can [code] | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Loss of coordinate information from groupby.apply() on a stacked object 244016361 |
| 316363418 | https://github.com/pydata/xarray/issues/1483#issuecomment-316363418 | https://api.github.com/repos/pydata/xarray/issues/1483 | MDEyOklzc3VlQ29tbWVudDMxNjM2MzQxOA== | byersiiasa 17701232 | 2017-07-19T12:00:42Z | 2017-07-19T12:00:42Z | NONE | Maybe not an issue for others or I am missing something... Or perhaps this is intended behaviour? Thanks for clarification! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Loss of coordinate information from groupby.apply() on a stacked object 244016361 |
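The pattern under discussion can be sketched as follows. This is an illustrative reconstruction, not the reporter's original code: the array, dimension names, and values are made up here, and `groupby(...).apply(...)` was later renamed to `groupby(...).map(...)` in xarray.

```python
import numpy as np
import xarray as xr

# A small array with two spatial dims plus time (illustrative data).
da = xr.DataArray(
    np.arange(24.0).reshape(2, 3, 4),
    dims=("lat", "lon", "time"),
    coords={"lat": [10.0, 20.0], "lon": [1.0, 2.0, 3.0]},
)

# The pattern from the issue: stack the spatial dims into one
# MultiIndex dimension, then apply a function group by group.
stacked = da.stack(allpoints=("lat", "lon"))
per_point = stacked.groupby("allpoints").map(lambda x: x.mean())

# darothen's suggestion: skip stacking entirely and collapse only
# the time dimension, which keeps the lat/lon coordinates intact.
direct = da.mean(dim="time")
print(direct.dims)          # ('lat', 'lon')
print(list(direct.coords))  # ['lat', 'lon']
```

Stacking plus `groupby` is mainly worth it for custom functions (e.g. from scipy) that have to run point by point; for a plain reduction, `mean(dim=...)` is both simpler and preserves coordinates.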
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
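The schema above can be exercised with Python's standard-library sqlite3 module. A minimal sketch, using one row from the table above; the stubbed [users] and [issues] tables are assumptions for self-containment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE issues (id INTEGER PRIMARY KEY);
CREATE TABLE issue_comments (
    html_url TEXT, issue_url TEXT,
    id INTEGER PRIMARY KEY, node_id TEXT,
    user INTEGER REFERENCES users(id),
    created_at TEXT, updated_at TEXT,
    author_association TEXT, body TEXT, reactions TEXT,
    performed_via_github_app TEXT,
    issue INTEGER REFERENCES issues(id)
);
CREATE INDEX idx_issue_comments_issue ON issue_comments(issue);
CREATE INDEX idx_issue_comments_user ON issue_comments(user);
""")

conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    (703191101, 26384082, "2020-10-04T02:32:44Z", 244016361),
)

# The indexes make the page's own query cheap:
# "rows where issue = 244016361 sorted by updated_at descending".
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (244016361,),
).fetchall()
print(rows)  # [(703191101,)]
```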