issue_comments

2 rows where issue = 597657663 and user = 2560426 sorted by updated_at descending

id: 778841149
html_url: https://github.com/pydata/xarray/issues/3961#issuecomment-778841149
issue_url: https://api.github.com/repos/pydata/xarray/issues/3961
node_id: MDEyOklzc3VlQ29tbWVudDc3ODg0MTE0OQ==
user: heerad (2560426)
created_at: 2021-02-14T21:01:21Z
updated_at: 2021-02-14T21:01:21Z
author_association: NONE
body:

> Or alternatively you can try to set sleep between openings.

To clarify, do you mean adding a sleep of, e.g., 1 second prior to your preprocess function (and, if you're not doing any preprocessing, setting preprocess to a function that just sleeps and then returns ds)? Or are you instead sleeping before the entire open_mfdataset call?

Is this solution only addressing the issue of opening the same ds multiple times within a python process, or would it also address multiple processes opening the same ds?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hangs while saving netcdf file opened using xr.open_mfdataset with lock=None (597657663)
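
The sleep-in-preprocess workaround this comment asks about might look roughly like the sketch below. It only illustrates the question being posed, not a confirmed fix; the one-second delay and the "data_*.nc" path are arbitrary placeholders.

import time
import xarray as xr

def preprocess_with_delay(ds):
    # Hypothetical reading of "set sleep between openings": pause briefly
    # as each file is opened, then return the dataset unchanged.
    time.sleep(1)
    return ds

# lock=None is the setting named in the issue title; "data_*.nc" is a placeholder.
ds = xr.open_mfdataset("data_*.nc", preprocess=preprocess_with_delay, lock=None)
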
id: 778838527
html_url: https://github.com/pydata/xarray/issues/3961#issuecomment-778838527
issue_url: https://api.github.com/repos/pydata/xarray/issues/3961
node_id: MDEyOklzc3VlQ29tbWVudDc3ODgzODUyNw==
user: heerad (2560426)
created_at: 2021-02-14T20:40:38Z
updated_at: 2021-02-14T20:40:38Z
author_association: NONE
body:

Also seeing this as of version 0.16.1.

In some cases I need lock=False, otherwise I'll run into hung processes a certain percentage of the time. Calling ds.load() prior to to_netcdf() does not solve the problem.

In other cases I need lock=None, otherwise I'll consistently get RuntimeError: NetCDF: Not a valid ID.

Is the current recommended solution to set lock=False and retry until success? Or is it to keep lock=None and use zarr instead? @dcherian

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hangs while saving netcdf file opened using xr.open_mfdataset with lock=None (597657663)
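
The "lock=False and retry until success" pattern this comment asks about, combined with the ds.load() step it mentions, could be sketched as follows. This is only an illustration of the question, not a recommendation from the issue; the file names and retry count are placeholders.

import xarray as xr

ds = xr.open_mfdataset("data_*.nc", lock=False)  # lock=False to avoid the hang
ds.load()  # load into memory before writing; the comment notes this alone does not help

# Retry the write, since RuntimeError: NetCDF: Not a valid ID can still occur.
for attempt in range(5):
    try:
        ds.to_netcdf("combined.nc")
        break
    except RuntimeError:
        if attempt == 4:
            raise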

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
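
For reference, the filter described at the top of this page ("2 rows where issue = 597657663 and user = 2560426 sorted by updated_at descending") corresponds to a query like the one below, shown here through Python's sqlite3 module against a local copy of the database; the filename "github.db" is a placeholder.

import sqlite3

# Placeholder filename for a local copy of this Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 597657663 AND [user] = 2560426
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user_id, created, updated, assoc, body in rows:
    print(comment_id, user_id, updated)
conn.close()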