issue_comments

1 row where author_association = "CONTRIBUTOR", issue = 1033142897 and user = 4666753 sorted by updated_at descending

id: 949875448
html_url: https://github.com/pydata/xarray/issues/5883#issuecomment-949875448
issue_url: https://api.github.com/repos/pydata/xarray/issues/5883
node_id: IC_kwDOAMm_X844nfL4
user: jaicher (4666753)
created_at: 2021-10-22T18:37:06Z
updated_at: 2021-10-22T18:37:06Z
author_association: CONTRIBUTOR
body:

I see the issue now. Closing the issue, but in case anyone else runs into the same thing:

  • I initially hit this problem with the coordinate "a" (the index over that dimension). In retrospect, since it is a dimension index, it is not chunked, so parallel writes to it are not safe (they do not go to independent chunks).
  • At some point while constructing my MCVE, I swapped the chunking of "x": it is now chunked over "b" rather than over "a", so parallel writes along "a" are not safe either (again, they do not go to independent chunks).
  • If I use chunks=(1, None) for the dummy values of "x" (one chunk per step along "a", no chunking along "b"), the example no longer produces errors.

In summary, the main takeaways are:

  • Check that your chunking aligns with your parallel writes using regions.
  • Dimension coordinates cannot be chunked in zarr (to my knowledge, at least), so they do not support parallel writes. Write these either before or after the parallel writes (a minimal sketch of this pattern follows below).
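
For context, the following is a minimal sketch of the safe pattern described above, not code from the original comment. It assumes xarray, dask, and zarr are installed; the store path "example.zarr", the dimensions "a" and "b", and the variable "x" are placeholder names chosen to mirror the discussion.

import dask.array as da
import numpy as np
import xarray as xr

# Data variable chunked with one chunk per step along "a" and no chunking
# along "b" (the chunks=(1, None) layout mentioned above), so each region
# write below touches independent zarr chunks.
ds = xr.Dataset(
    {"x": (("a", "b"), da.zeros((4, 10), chunks=(1, -1)))},
    coords={"a": np.arange(4), "b": np.arange(10)},
)

# Write the store metadata and the eager (unchunked) dimension coordinates up
# front; compute=False skips writing the dask-backed data variable itself.
ds.to_zarr("example.zarr", compute=False)

# Each of these region writes could run in a separate process or worker,
# because the regions map onto disjoint chunks of "x". The dimension
# coordinates are dropped so only the chunked data variable is written.
for i in range(4):
    ds.isel(a=slice(i, i + 1)).drop_vars(["a", "b"]).to_zarr(
        "example.zarr", region={"a": slice(i, i + 1)}
    )

The initial compute=False call handles the unchunked dimension coordinates serially, so the concurrent region writes only ever touch the chunked data variable.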

Sorry for opening this issue that wasn't a real issue!

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Failing parallel writes to_zarr with regions parameter? (1033142897)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);