
issue_comments

6 rows where author_association = "NONE" and user = 12818667 sorted by updated_at descending

issue (4)

  • to_zarr fails for large dimensions; sensitive to exact dimension size and chunk size · 2
  • dataset.sel inconsistent results when argument is a list or a slice · 2
  • Writing netcdf after running xarray.dataset.reindex to fill gaps in a time series fails due to memory allocation error · 1
  • Saving a DataArray of datetime objects as zarr is not a lazy operation despite compute=False · 1

user (1)

  • JamiePringle · 6

author_association (1)

  • NONE · 6
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
1302325806 https://github.com/pydata/xarray/issues/7132#issuecomment-1302325806 https://api.github.com/repos/pydata/xarray/issues/7132 IC_kwDOAMm_X85Nn-ou JamiePringle 12818667 2022-11-03T15:58:13Z 2022-11-03T15:58:13Z NONE

@dcherian Thanks; I agree that this seems to be the same as #7028. Just as a note, I have had 3 people reach out to me (1 from UNH, 2 from across the globe) thanking me for the workaround in my message, so this does seem to be a commonly encountered issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Saving a DataArray of datetime objects as zarr is not a lazy operation despite compute=False 1397532790
1245425482 https://github.com/pydata/xarray/issues/7018#issuecomment-1245425482 https://api.github.com/repos/pydata/xarray/issues/7018 IC_kwDOAMm_X85KO69K JamiePringle 12818667 2022-09-13T13:36:40Z 2022-09-13T13:36:40Z NONE

I think #7028 might help you -- I was running into a similar problem. In short, try keeping your time variables as float64 instead of as datetime (or convert them before you try to save).
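As a sketch of the workaround this comment suggests (not code from the thread; the array and variable names are invented), the datetime-to-float64 conversion can be illustrated with plain NumPy: re-express a datetime64 axis as float64 seconds since an epoch before writing.

```python
import numpy as np

# Hypothetical time axis stored as datetime64, as xarray would hold it.
times = np.arange("2000-01-01", "2000-01-11", dtype="datetime64[D]")

# The workaround: re-express the times as float64 seconds since an
# epoch, so the variable written to disk is a plain float array.
epoch = np.datetime64("2000-01-01")
seconds = (times - epoch) / np.timedelta64(1, "s")

print(seconds.dtype)  # float64
```

In xarray terms the same conversion would be applied to the time coordinate (e.g. via `assign_coords`) before calling `to_netcdf` or `to_zarr`.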

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Writing netcdf after running xarray.dataset.reindex to fill gaps in a time series fails due to memory allocation error 1368696980
1235556683 https://github.com/pydata/xarray/issues/6976#issuecomment-1235556683 https://api.github.com/repos/pydata/xarray/issues/6976 IC_kwDOAMm_X85JpRlL JamiePringle 12818667 2022-09-02T14:13:27Z 2022-09-02T14:13:27Z NONE

I am happy to close this; it would be lovely if the documentation was more explicit about this issue. I was certainly surprised even after a close reading of the docs.

Jamie

On Fri, Sep 2, 2022 at 10:07 AM Mathias Hauser @.***> wrote:

CAUTION: This email originated from outside of the University System. Do not click links or open attachments unless you recognize the sender and know the content is safe.

Jup, that's always the tradeoff - #1613 https://github.com/pydata/xarray/issues/1613 discusses a similar case.

— Reply to this email directly, view it on GitHub: https://github.com/pydata/xarray/issues/6976#issuecomment-1235549943

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataset.sel inconsistent results when argument is a list or a slice. 1358960570
1234465676 https://github.com/pydata/xarray/issues/6976#issuecomment-1234465676 https://api.github.com/repos/pydata/xarray/issues/6976 IC_kwDOAMm_X85JlHOM JamiePringle 12818667 2022-09-01T15:47:43Z 2022-09-01T15:47:43Z NONE

So is this expected behavior? I can work around it by explicitly creating the indices with arange() or the like. I do wonder if this is what is causing to_zarr() to fail even with compute=False? But I can work around that.
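The workaround described here relies on an explicit integer index built with arange() selecting the same positions a slice would. A minimal sketch of that equivalence (the array and variable names are invented for illustration):

```python
import numpy as np

data = np.arange(100)

# Instead of passing slice(10, 20), build the positions explicitly:
start, stop = 10, 20
indices = np.arange(start, stop)

# Both selections return the same elements; the explicit index array
# sidesteps whatever the slice code path is doing differently.
same = bool((data[indices] == data[start:stop]).all())
print(same)  # True
```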

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataset.sel inconsistent results when argument is a list or a slice. 1358960570
1142597646 https://github.com/pydata/xarray/issues/6640#issuecomment-1142597646 https://api.github.com/repos/pydata/xarray/issues/6640 IC_kwDOAMm_X85EGqgO JamiePringle 12818667 2022-05-31T20:13:26Z 2022-05-31T20:13:26Z NONE

I have had a few other odd indexing issues with large arrays. It almost feels as if somewhere the sizes are forced to be a fixed-size integer or something.
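As a back-of-the-envelope check of that fixed-size-integer hypothesis (the dimension size is the one reported in this thread; the factor of 20 for a second dimension is an invented illustration), the failing size easily pushes a total element count past the signed 32-bit range:

```python
# Largest value a signed 32-bit integer can hold.
INT32_MAX = 2**31 - 1  # 2147483647

numberOfDrifters = 120_067_029  # the size reported to fail

# Even a modest second dimension makes the total element count
# overflow 32-bit arithmetic, consistent with the hypothesis that
# a size is being stored in a fixed-width integer somewhere.
print(numberOfDrifters * 20 > INT32_MAX)  # True
```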

On Tue, May 31, 2022 at 3:34 PM Deepak Cherian @.***> wrote:

CAUTION: This email originated from outside of the University System. Do not click links or open attachments unless you recognize the sender and know the content is safe.

Thanks Jamie.

Yes it now fails here with maxNumObs=1 and xarray main.

It looks like self._chunks is wrong but I don't know why.

— Reply to this email directly, view it on GitHub: https://github.com/pydata/xarray/issues/6640#issuecomment-1142565607

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_zarr fails for large dimensions; sensitive to exact dimension size and chunk size 1249638836
1142516331 https://github.com/pydata/xarray/issues/6640#issuecomment-1142516331 https://api.github.com/repos/pydata/xarray/issues/6640 IC_kwDOAMm_X85EGWpr JamiePringle 12818667 2022-05-31T18:36:47Z 2022-05-31T18:36:47Z NONE

My apologies @dcherian, in commenting the code, I switched "FAILS" and "WORKS" -- the size that fails is numberOfDrifters=120067029

I have edited the example code above, and it should fail when run. I have made a test environment with the versions you suggested, and with maxNumObs=1, and it still fails with the same error.

Jamie

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_zarr fails for large dimensions; sensitive to exact dimension size and chunk size 1249638836

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
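The query behind this page can be run directly against the schema above. A minimal sketch using Python's sqlite3 on an in-memory copy (the foreign-key clauses are dropped so the table stands alone, and the inserted rows are invented placeholders, not real comment data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
""")
conn.executemany(
    "INSERT INTO issue_comments (id, user, author_association, updated_at)"
    " VALUES (?, ?, ?, ?)",
    [(1, 12818667, "NONE", "2022-09-13T13:36:40Z"),
     (2, 12818667, "NONE", "2022-11-03T15:58:13Z")],
)

# The filter and sort this page applies:
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = ? AND user = ?"
    " ORDER BY updated_at DESC",
    ("NONE", 12818667),
).fetchall()
print(rows)  # [(2,), (1,)]
```

ISO-8601 timestamps stored as TEXT sort correctly under plain string comparison, which is why the ORDER BY works without any date parsing.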
Powered by Datasette · About: xarray-datasette