issue_comments

5 rows where author_association = "MEMBER", issue = 184722754, and user = 1217238, sorted by updated_at descending

Comment 277549915 · shoyer (user 1217238) · MEMBER
created 2017-02-05T21:13:41Z · updated 2017-02-05T21:13:41Z
html_url: https://github.com/pydata/xarray/issues/1058#issuecomment-277549915
issue_url: https://api.github.com/repos/pydata/xarray/issues/1058
node_id: MDEyOklzc3VlQ29tbWVudDI3NzU0OTkxNQ==

Alternatively, it could make sense to change pickle upstream in NumPy to special case arrays with a stride of 0 along some dimension differently.

Reactions: none · Issue: shallow copies become deep copies when pickling (184722754)

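The comment above suggests special-casing arrays with a zero stride in NumPy's pickle support. A minimal sketch of why that matters, assuming NumPy 1.10+ for np.broadcast_to: pickling a broadcast view today serializes every repeated element rather than the small underlying buffer.

import pickle
import numpy as np

row = np.arange(3.0)                    # 24 bytes of real data
big = np.broadcast_to(row, (1000, 3))   # read-only view with stride 0 along axis 0
print(big.strides)                      # (0, 8): axis 0 reuses the same 3 floats
print(big.nbytes)                       # 24000: the logical size
print(len(pickle.dumps(big)) > 20_000)  # True: pickle writes every repeated element
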
Comment 277549355 · shoyer (user 1217238) · MEMBER
created 2017-02-05T21:06:19Z · updated 2017-02-05T21:06:19Z
html_url: https://github.com/pydata/xarray/issues/1058#issuecomment-277549355
issue_url: https://api.github.com/repos/pydata/xarray/issues/1058
node_id: MDEyOklzc3VlQ29tbWVudDI3NzU0OTM1NQ==

@crusaderky Yes, I think it could be reasonable to unify array types when you call broadcast() or align(), either as optional behavior or by changing the default.

If your scalar array is the result of an expensive dask calculation, this also might be a good use case for dask's new .persist() method (https://github.com/dask/dask/issues/1908), which we could add to xarray as an alternative to .compute().

Reactions: none · Issue: shallow copies become deep copies when pickling (184722754)

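For the .persist() suggestion above, a minimal sketch using dask directly (at the time of this comment .persist() was newly added to dask; the array shape and chunking below are arbitrary illustration values):

import dask.array as da

x = da.random.random((4_000, 4_000), chunks=(1_000, 1_000))
scalar = x.mean()                # an expensive, lazily evaluated 0-d dask array

cached = scalar.persist()        # run the graph once; keep the result as a dask array
print(float(cached.compute()))   # cheap now: the concrete value is already in memory
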
Comment 273001734 · shoyer (user 1217238) · MEMBER
created 2017-01-17T01:53:18Z · updated 2017-01-17T01:53:18Z
html_url: https://github.com/pydata/xarray/issues/1058#issuecomment-273001734
issue_url: https://api.github.com/repos/pydata/xarray/issues/1058
node_id: MDEyOklzc3VlQ29tbWVudDI3MzAwMTczNA==

I think this is fixed by https://github.com/pydata/xarray/pull/1128, about as well as we can hope given how pickle works for NumPy.

So I'm closing this now, but feel free to open another issue for any follow-up concerns.

Reactions: none · Issue: shallow copies become deep copies when pickling (184722754)

Comment 256144009 · shoyer (user 1217238) · MEMBER
created 2016-10-25T19:05:01Z · updated 2016-10-25T19:05:01Z
html_url: https://github.com/pydata/xarray/issues/1058#issuecomment-256144009
issue_url: https://api.github.com/repos/pydata/xarray/issues/1058
node_id: MDEyOklzc3VlQ29tbWVudDI1NjE0NDAwOQ==

I answered the StackOverflow question: https://stackoverflow.com/questions/13746601/preserving-numpy-view-when-pickling/40247761#40247761

This was a tricky puzzle to figure out!

Reactions: +1 × 2 (total 2) · Issue: shallow copies become deep copies when pickling (184722754)

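The linked StackOverflow question concerns the behaviour this issue tracks: NumPy pickles each array's data independently, so view relationships are lost on a round trip. A small illustration of that behaviour (not a reproduction of the answer itself):

import pickle
import numpy as np

base = np.arange(10)
view = base[:3]                       # a view sharing memory with base

b2, v2 = pickle.loads(pickle.dumps((base, view)))

print(np.shares_memory(base, view))   # True before the round trip
print(np.shares_memory(b2, v2))       # False: each array is pickled with its own data copy
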
Comment 255622303 · shoyer (user 1217238) · MEMBER
created 2016-10-23T23:27:09Z · updated 2016-10-23T23:27:09Z
html_url: https://github.com/pydata/xarray/issues/1058#issuecomment-255622303
issue_url: https://api.github.com/repos/pydata/xarray/issues/1058
node_id: MDEyOklzc3VlQ29tbWVudDI1NTYyMjMwMw==

The plan is to stop making default indexes with np.arange. See https://github.com/pydata/xarray/pull/1017, which is my top priority for the next major release.

I'm not confident that your workaround will work properly. At the very least, you should check strides as well. Otherwise get_base(array[::-1]) would return array.

If it would really help, I'm open to making Variable(dims, array) reuse the same numpy array instead of creating a view (see as_compatible_data).

Reactions: none · Issue: shallow copies become deep copies when pickling (184722754)

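To illustrate the "check strides as well" point above: a helper that only follows .base (get_base below is a hypothetical sketch, not an xarray function) maps array[::-1] back to array even though the two are laid out differently.

import numpy as np

def get_base(a):
    # Hypothetical helper (not an xarray function): follow .base to the owning array.
    while a.base is not None:
        a = a.base
    return a

arr = np.arange(5)
rev = arr[::-1]                      # a view of arr with a negative stride

print(get_base(rev) is arr)          # True, yet rev is not interchangeable with arr:
print(rev.strides == arr.strides)    # False, e.g. (-8,) vs (8,) for int64
print(np.array_equal(rev, arr))      # False, so following .base alone is not enough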

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
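
For reference, the row filter described at the top of this page maps onto the schema above as a straightforward query; a minimal sqlite3 sketch, assuming a local copy of the database saved as github.db (the filename is illustrative):

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this Datasette database

rows = conn.execute(
    """
    SELECT id, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 184722754
      AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, updated_at, body in rows:
    print(comment_id, updated_at, body[:60])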