
issue_comments


2 rows where author_association = "NONE" and issue = 1249638836 sorted by updated_at descending


Comment 1142597646 (node_id IC_kwDOAMm_X85EGqgO) · https://github.com/pydata/xarray/issues/6640#issuecomment-1142597646
user: JamiePringle (12818667) · created_at: 2022-05-31T20:13:26Z · updated_at: 2022-05-31T20:13:26Z · author_association: NONE

I have had a few other odd indexing issues with large arrays. It almost feels as if, somewhere, the sizes are being forced into a fixed-size integer or something.
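For illustration, the "fixed-size integer" suspicion can be sketched numerically: the dimension length reported to fail in this thread, multiplied by even a modest second dimension, exceeds the signed 32-bit range, so any internal size or offset stored in an int32 would silently wrap. Only numberOfDrifters comes from the report; the factor of 100 is hypothetical, chosen to make the arithmetic concrete:

```python
numberOfDrifters = 120_067_029  # the size reported to fail in this issue

def as_int32(n):
    """Interpret an arbitrary integer as a wrapped signed 32-bit value."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

# A hypothetical second dimension of 100 already pushes the total element
# count past the int32 maximum (2**31 - 1 = 2147483647) ...
total_elements = numberOfDrifters * 100
print(total_elements > 2**31 - 1)

# ... so a size stored in a 32-bit integer would wrap to a negative value.
print(as_int32(total_elements))
```

This is only a sketch of the overflow hypothesis, not a diagnosis of where (or whether) such a truncation happens inside xarray or zarr.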

On Tue, May 31, 2022 at 3:34 PM Deepak Cherian @.***> wrote:


Thanks Jamie.

Yes it now fails here with maxNumObs=1 and xarray main.

It looks like self._chunks is wrong but I don't know why.

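The "sensitive to exact dimension size and chunk size" behaviour in the issue title is consistent with how a chunked store splits a dimension: every chunk is full-sized except a possible remainder chunk, whose presence and size depend on the exact dimension length. A minimal sketch of that split (the chunk size of 2**20 is hypothetical, not taken from the report):

```python
def chunk_layout(size, chunksize):
    # Dask-style splitting of one dimension: n_full full chunks followed
    # by one possibly-smaller remainder chunk. The remainder depends on
    # the exact dimension length, which is why behaviour can flip when
    # the size changes slightly.
    n_full, rem = divmod(size, chunksize)
    return (chunksize,) * n_full + ((rem,) if rem else ())

layout = chunk_layout(120_067_029, 2**20)
print(len(layout), layout[-1])  # number of chunks, size of the last one
```

Changing either number slightly changes whether a remainder chunk exists at all, which is one plausible reason a bug in chunk bookkeeping (such as a wrong self._chunks) would only surface for particular combinations.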

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: to_zarr fails for large dimensions; sensitive to exact dimension size and chunk size (1249638836)
Comment 1142516331 (node_id IC_kwDOAMm_X85EGWpr) · https://github.com/pydata/xarray/issues/6640#issuecomment-1142516331
user: JamiePringle (12818667) · created_at: 2022-05-31T18:36:47Z · updated_at: 2022-05-31T18:36:47Z · author_association: NONE

My apologies @dcherian, in commenting the code, I switched "FAILS" and "WORKS" -- the size that fails is numberOfDrifters=120067029

I have edited the example code above, and it should fail when run. I have made a test environment with the versions you suggested, and with maxNumObs=1, and it still fails with the same error.

Jamie

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: to_zarr fails for large dimensions; sensitive to exact dimension size and chunk size (1249638836)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
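As a sketch, the filtered view at the top of this page ("2 rows where author_association = 'NONE' and issue = 1249638836 sorted by updated_at descending") corresponds to a straightforward query against this schema. The in-memory rebuild below inserts only the columns the filter needs; the foreign-key references to [users] and [issues] are dropped since those tables are not shown here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY, [node_id] TEXT,
   [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
""")

# The two comments on this page (other columns elided for brevity).
conn.executemany(
    "INSERT INTO issue_comments (id, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?)",
    [
        (1142597646, "2022-05-31T20:13:26Z", "NONE", 1249638836),
        (1142516331, "2022-05-31T18:36:47Z", "NONE", 1249638836),
    ],
)

# The query behind the "2 rows where ..." view above.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'NONE' AND issue = 1249638836"
    " ORDER BY updated_at DESC"
).fetchall()
print([r[0] for r in rows])  # the more recently updated comment sorts first
```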
Powered by Datasette · Queries took 14.226ms · About: xarray-datasette