issue_comments
3 rows where author_association = "MEMBER" and issue = 706507153 ("Did copy(deep=True) break with 0.16.1?"), sorted by updated_at descending
id: 696997402
html_url: https://github.com/pydata/xarray/issues/4449#issuecomment-696997402
issue_url: https://api.github.com/repos/pydata/xarray/issues/4449
node_id: MDEyOklzc3VlQ29tbWVudDY5Njk5NzQwMg==
user: shoyer (1217238)
created_at: 2020-09-22T21:41:01Z
updated_at: 2020-09-22T21:41:01Z
author_association: MEMBER
body:
    The workaround is to call

    The direct source of the issue here is: https://github.com/pydata/xarray/pull/4379

    We probably should have been more careful with that change, because it is technically a regression. Previously we always loaded data into NumPy arrays when doing a deep copy, but after that change the underlying data structures are deep-copied rather than being loaded into NumPy. That is probably more consistent with what users would expect from a "deep copy" in general, but it is definitely a change from the previous behavior.
reactions: { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: Did copy(deep=True) break with 0.16.1? (706507153)
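To illustrate the change shoyer describes, here is a minimal sketch, assuming xarray >= 0.16.1 with dask installed (the variable name "t" is made up for the example): a deep copy of a lazily loaded Dataset now copies the dask array rather than loading it, while loading the data first recovers a NumPy-backed copy.

    import numpy as np
    import xarray as xr

    # A dask-backed Dataset stands in for any lazily loaded data.
    ds = xr.Dataset({"t": ("x", np.arange(4))}).chunk({"x": 2})

    # After https://github.com/pydata/xarray/pull/4379, a deep copy
    # deep-copies the underlying dask array instead of loading it:
    copied = ds.copy(deep=True)
    print(type(copied["t"].data))  # dask array, not numpy.ndarray

    # Loading the data first restores the pre-0.16.1 result of a
    # NumPy-backed deep copy:
    eager = ds.compute().copy(deep=True)
    print(type(eager["t"].data))  # numpy.ndarray
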
id: 696991642
html_url: https://github.com/pydata/xarray/issues/4449#issuecomment-696991642
issue_url: https://api.github.com/repos/pydata/xarray/issues/4449
node_id: MDEyOklzc3VlQ29tbWVudDY5Njk5MTY0Mg==
user: dcherian (2448579)
created_at: 2020-09-22T21:27:13Z
updated_at: 2020-09-22T21:27:13Z
author_association: MEMBER
body:
    Would
reactions: { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: Did copy(deep=True) break with 0.16.1? (706507153)
id: 696980120
html_url: https://github.com/pydata/xarray/issues/4449#issuecomment-696980120
issue_url: https://api.github.com/repos/pydata/xarray/issues/4449
node_id: MDEyOklzc3VlQ29tbWVudDY5Njk4MDEyMA==
user: max-sixty (5635139)
created_at: 2020-09-22T21:04:04Z
updated_at: 2020-09-22T21:04:04Z
author_association: MEMBER
body:
    Thanks for the issue @blaylockbk. I think this is caused by https://github.com/pydata/xarray/pull/4426, which defers to copy-on-write rather than making an extra copy. Do you need this behavior per se?
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: Did copy(deep=True) break with 0.16.1? (706507153)
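A rough way to see whether a copy shares memory with the original, which is the copy-on-write distinction max-sixty refers to, is np.shares_memory. This is only an illustrative sketch of that distinction, not the change made in #4426 itself:

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"t": ("x", np.arange(4))})

    # A shallow copy shares the underlying buffer with the original...
    shallow = ds.copy(deep=False)
    print(np.shares_memory(ds["t"].values, shallow["t"].values))  # True

    # ...while a deep copy does not:
    deep = ds.copy(deep=True)
    print(np.shares_memory(ds["t"].values, deep["t"].values))  # False
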
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
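The query this page runs can be reproduced against such a table. A sketch assuming the data lives in a local SQLite file named github.db (the filename is made up):

    import sqlite3

    # Hypothetical database file containing the table defined above.
    conn = sqlite3.connect("github.db")
    rows = conn.execute(
        """
        SELECT id, user, updated_at, body
        FROM issue_comments
        WHERE author_association = 'MEMBER' AND issue = 706507153
        ORDER BY updated_at DESC
        """
    ).fetchall()
    for row in rows:
        print(row)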