
issue_comments


3 rows where user = 13770365 sorted by updated_at descending


id: 592051771
html_url: https://github.com/pydata/xarray/issues/3785#issuecomment-592051771
issue_url: https://api.github.com/repos/pydata/xarray/issues/3785
node_id: MDEyOklzc3VlQ29tbWVudDU5MjA1MTc3MQ==
user: ngreenwald (13770365)
created_at: 2020-02-27T16:30:36Z
updated_at: 2020-02-27T16:30:36Z
author_association: NONE
body:

Hey all, just wanted to follow up to see if anyone had suggestions for how to approach this.

We're running into this issue when we load an xarray, make some changes to it, and then save the modified version. Especially for interactive sessions, this produces bugs that are challenging to track down, as it appears that the previous step has failed, not that the actual loading is the problem.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: open_dataarray(cache=False) still uses cached version of dataarray (568705055)
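
A minimal sketch of the load/modify/save round-trip this comment describes, with a made-up file name and values; under the reported bug, the final open_dataarray call can return the stale pre-modification data even with cache=False:

import numpy as np
import xarray as xr

# Write an initial file (illustrative name and values).
xr.DataArray(np.zeros(3), name="data").to_netcdf("example.nc")

# Load, modify, then save the modified version back to the same path.
da = xr.open_dataarray("example.nc", cache=False)
da.load()   # pull values into memory
da.close()  # release the file handle before overwriting
(da + 1).to_netcdf("example.nc")

# The report is that this can still print the old values despite cache=False.
print(xr.open_dataarray("example.nc", cache=False).values)
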
id: 547577441
html_url: https://github.com/pydata/xarray/issues/3458#issuecomment-547577441
issue_url: https://api.github.com/repos/pydata/xarray/issues/3458
node_id: MDEyOklzc3VlQ29tbWVudDU0NzU3NzQ0MQ==
user: ngreenwald (13770365)
created_at: 2019-10-29T18:54:42Z
updated_at: 2019-10-29T18:54:42Z
author_association: NONE
body:

Great, thanks so much!

On Tue, Oct 29, 2019 at 11:20 AM Deepak Cherian notifications@github.com wrote:

We should add this to the documentation since it comes up often enough.

— You are receiving this because you authored the thread. Reply to this email directly or view it on GitHub: https://github.com/pydata/xarray/issues/3458#issuecomment-547562690

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Keep index dimension when selecting only a single coord (514077742)
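
An illustrative sketch of the behavior the issue title refers to (coordinate names and values are made up): selecting a scalar label drops the indexed dimension, while selecting with a one-element list keeps it.

import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    coords={"x": ["a", "b"], "y": [0, 1, 2]},
    dims=("x", "y"),
)

print(da.sel(x="a").dims)    # ('y',)      -- scalar selection drops 'x'
print(da.sel(x=["a"]).dims)  # ('x', 'y')  -- list selection keeps 'x'
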
id: 544183597
html_url: https://github.com/pydata/xarray/issues/3416#issuecomment-544183597
issue_url: https://api.github.com/repos/pydata/xarray/issues/3416
node_id: MDEyOklzc3VlQ29tbWVudDU0NDE4MzU5Nw==
user: ngreenwald (13770365)
created_at: 2019-10-19T18:19:08Z
updated_at: 2019-10-19T18:19:33Z
author_association: NONE
body:

Yes, you're right @max-sixty, seems like the actual IO is happening within scipy. Do you have any suggestions for how I might troubleshoot this further? Could I make a similar example that uses only the scipy netcdf functionality somehow to drill down into where the memory error is coming from?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cannot write array larger than 4GB with SciPy netCDF backend (509285415)
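
One way to drill down as the comment suggests, sketched with a made-up file name and size: exercise SciPy's netCDF writer directly, bypassing xarray, and scale the array toward the failing size.

import numpy as np
from scipy.io import netcdf_file

n = 1_000_000  # scale this up toward the >4 GB case to locate the failure
f = netcdf_file("scipy_only.nc", "w", version=2)  # version=2: 64-bit offset format
f.createDimension("x", n)
v = f.createVariable("data", "f8", ("x",))
v[:] = np.zeros(n, dtype="f8")
f.close()
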

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
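
For reference, a sketch of reproducing the row selection shown at the top of the page against a local SQLite copy of this table; the database path github.db is an assumption.

import sqlite3

conn = sqlite3.connect("github.db")  # made-up path to a local copy
rows = conn.execute(
    "SELECT id, updated_at, issue FROM issue_comments "
    "WHERE user = ? ORDER BY updated_at DESC",
    (13770365,),
).fetchall()
for row in rows:
    print(row)
conn.close()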