
issue_comments


2 rows where author_association = "MEMBER", issue = 340192831 and user = 1197350 sorted by updated_at descending


Comment 404510872 (node MDEyOklzc3VlQ29tbWVudDQwNDUxMDg3Mg==)
rabernat (user 1197350) · MEMBER · created 2018-07-12T13:24:51Z · updated 2018-07-12T13:24:51Z
https://github.com/pydata/xarray/issues/2278#issuecomment-404510872
issue_url: https://api.github.com/repos/pydata/xarray/issues/2278

Yes, this is the same underlying issue.

On Thu, Jul 12, 2018 at 2:59 PM Aurélien Ponte notifications@github.com wrote:

Note that there is also a fix here that is simply del ds['v'].encoding['chunks'] prior to data storage.

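The workaround quoted above can be sketched in plain Python; a dict stands in for a variable's .encoding here (with a real xarray Dataset the one-liner is simply del ds['v'].encoding['chunks'] before calling to_zarr), and the names are illustrative.

```python
# Sketch of the quoted workaround: drop the stale 'chunks' entry from a
# variable's encoding before writing. A plain dict stands in for
# xarray's Variable.encoding.

def strip_chunks(encoding):
    """Return a copy of the encoding dict without the 'chunks' key."""
    return {k: v for k, v in encoding.items() if k != "chunks"}

enc = {"chunks": (1000, 1000), "dtype": "float64"}
print(strip_chunks(enc))  # → {'dtype': 'float64'}
```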

issue: can't store zarr after open_zarr and isel (340192831)
Comment 404429223 (node MDEyOklzc3VlQ29tbWVudDQwNDQyOTIyMw==)
rabernat (user 1197350) · MEMBER · created 2018-07-12T08:15:43Z · updated 2018-07-12T08:16:02Z
https://github.com/pydata/xarray/issues/2278#issuecomment-404429223
issue_url: https://api.github.com/repos/pydata/xarray/issues/2278

Any idea about how serious this is and/or where it's coming from?

The source of the bug is that the chunks encoding metadata (which describes the chunk size of the underlying zarr store) is automatically populated when you load the zarr store (ds = xr.open_zarr('data.zarr')), and this encoding is then preserved as you transform (sub-select) the dataset. Some possible solutions would be to

  1. Not put chunks into encoding at all.
  2. Figure out a way to strip chunks when performing selection operations or other operations that change shape.

Idea 1 is easier, but it would mean discarding relevant encoding metadata, which would break round-tripping of the unmodified zarr dataset.
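Idea 2 above can be sketched in plain Python; the encoding dict, shapes, and function name are illustrative stand-ins, not xarray internals.

```python
# Sketch of idea 2: when an operation changes a variable's shape, drop the
# inherited 'chunks' encoding, since it describes the layout of the
# original on-disk store rather than the new, sub-selected array.

def encoding_after_op(encoding, old_shape, new_shape):
    enc = dict(encoding)
    if new_shape != old_shape:
        enc.pop("chunks", None)  # stale relative to the new shape
    return enc

enc = {"chunks": (1000,), "dtype": "float32"}
print(encoding_after_op(enc, (5000,), (100,)))   # chunks dropped
print(encoding_after_op(enc, (5000,), (5000,)))  # chunks kept
```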

issue: can't store zarr after open_zarr and isel (340192831)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 47.986ms · About: xarray-datasette