issue_comments
5 rows where author_association = "MEMBER", issue = 184722754, and user = 1217238, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 277549915 | https://github.com/pydata/xarray/issues/1058#issuecomment-277549915 | https://api.github.com/repos/pydata/xarray/issues/1058 | MDEyOklzc3VlQ29tbWVudDI3NzU0OTkxNQ== | shoyer 1217238 | 2017-02-05T21:13:41Z | 2017-02-05T21:13:41Z | MEMBER | Alternatively, it could make sense to change pickle upstream in NumPy to special case arrays with a stride of 0 along some dimension differently. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | shallow copies become deep copies when pickling 184722754 |
| 277549355 | https://github.com/pydata/xarray/issues/1058#issuecomment-277549355 | https://api.github.com/repos/pydata/xarray/issues/1058 | MDEyOklzc3VlQ29tbWVudDI3NzU0OTM1NQ== | shoyer 1217238 | 2017-02-05T21:06:19Z | 2017-02-05T21:06:19Z | MEMBER | @crusaderky Yes, I think it could be reasonable to unify array types when you call If your scalar array is the result of an expensive dask calculation, this also might be a good use case for dask's new | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | shallow copies become deep copies when pickling 184722754 |
| 273001734 | https://github.com/pydata/xarray/issues/1058#issuecomment-273001734 | https://api.github.com/repos/pydata/xarray/issues/1058 | MDEyOklzc3VlQ29tbWVudDI3MzAwMTczNA== | shoyer 1217238 | 2017-01-17T01:53:18Z | 2017-01-17T01:53:18Z | MEMBER | I think this is fixed about as well as we can hope given how pickle works for NumPy by https://github.com/pydata/xarray/pull/1128. So I'm closing this now, but feel free to open another issue for any follow-up concerns. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | shallow copies become deep copies when pickling 184722754 |
| 256144009 | https://github.com/pydata/xarray/issues/1058#issuecomment-256144009 | https://api.github.com/repos/pydata/xarray/issues/1058 | MDEyOklzc3VlQ29tbWVudDI1NjE0NDAwOQ== | shoyer 1217238 | 2016-10-25T19:05:01Z | 2016-10-25T19:05:01Z | MEMBER | I answered the StackOverflow question: https://stackoverflow.com/questions/13746601/preserving-numpy-view-when-pickling/40247761#40247761 This was a tricky puzzle to figure out! | {"total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | shallow copies become deep copies when pickling 184722754 |
| 255622303 | https://github.com/pydata/xarray/issues/1058#issuecomment-255622303 | https://api.github.com/repos/pydata/xarray/issues/1058 | MDEyOklzc3VlQ29tbWVudDI1NTYyMjMwMw== | shoyer 1217238 | 2016-10-23T23:27:09Z | 2016-10-23T23:27:09Z | MEMBER | The plan is stop making default indexes with I'm not confident that your work around will work properly. At the very least, you should check If it would really help, I'm open to making | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | shallow copies become deep copies when pickling 184722754 |
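The comments above all concern the same NumPy pickling behavior. A minimal sketch (plain NumPy and pickle, no xarray; the variable names are my own) reproducing both the stride-0 case and the view-becomes-copy case the issue title describes:

```python
import pickle
import numpy as np

# A stride-0 broadcast view: a 1000x3 "array" backed by only 3 floats.
base = np.zeros(3)
view = np.broadcast_to(base, (1000, 3))
print(view.strides)                    # (0, 8): no data is repeated in memory

# Pickling materializes the full buffer; the restored array has real strides.
restored = pickle.loads(pickle.dumps(view))
print(restored.strides)                # (24, 8): a full-size contiguous copy
print(len(pickle.dumps(base)) < len(pickle.dumps(view)))  # True: ~24 B vs ~24 kB of data

# The same applies to ordinary views: the shallow-copy relationship is lost.
a = np.arange(10)
b = a[:5]                              # b shares a's memory
a2, b2 = pickle.loads(pickle.dumps((a, b)))
print(b.base is a)                     # True
print(b2.base is a2)                   # False: b2 came back as an independent deep copy
```

The workaround in shoyer's StackOverflow answer linked above is, roughly, to pickle the base array together with enough metadata (offset, shape, strides) to rebuild each view on unpickling, instead of pickling the views directly.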
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
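For reference, the row selection described at the top of this page can be reproduced against this schema with a short query. A sketch using Python's sqlite3 module, assuming the table lives in a hypothetical github.db file:

```python
import sqlite3

# Hypothetical database file; the schema above defines issue_comments.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 184722754
      AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```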