issue_comments


6 rows where author_association = "NONE", issue = 621177286 and user = 1872600, sorted by updated_at descending

id: 642841283 · user: rsignell-usgs (1872600) · author_association: NONE · created_at: 2020-06-11T17:58:30Z · updated_at: 2020-06-11T18:00:28Z
html_url: https://github.com/pydata/xarray/issues/4082#issuecomment-642841283 · issue_url: https://api.github.com/repos/pydata/xarray/issues/4082
node_id: MDEyOklzc3VlQ29tbWVudDY0Mjg0MTI4Mw==

@jswhit, do you know if https://github.com/Unidata/netcdf4-python is doing the caching?

Just to catch you up quickly: we have a workflow that opens a bunch of OPeNDAP datasets. The default file_cache_maxsize=128 works on Linux, but on Windows it fails once this exceeds 25 files:

```
xr.set_options(file_cache_maxsize=25)  # works
xr.set_options(file_cache_maxsize=26)  # fails
```

reactions: none (total_count: 0)
issue: "write to read-only" Error in xarray.open_mfdataset() with opendap datasets (621177286)
id: 641236117 · user: rsignell-usgs (1872600) · author_association: NONE · created_at: 2020-06-09T11:42:38Z · updated_at: 2020-06-09T11:42:38Z
html_url: https://github.com/pydata/xarray/issues/4082#issuecomment-641236117 · issue_url: https://api.github.com/repos/pydata/xarray/issues/4082
node_id: MDEyOklzc3VlQ29tbWVudDY0MTIzNjExNw==

@DennisHeimbigner, do you not agree that this issue on Windows is related to the number of files cached from OPeNDAP requests? Clearly there are some differences with cache files on Windows: https://www.unidata.ucar.edu/support/help/MailArchives/netcdf/msg11190.html

reactions: none (total_count: 0)
issue: "write to read-only" Error in xarray.open_mfdataset() with opendap datasets (621177286)
id: 640808125 · user: rsignell-usgs (1872600) · author_association: NONE · created_at: 2020-06-08T18:51:37Z · updated_at: 2020-06-08T18:51:37Z
html_url: https://github.com/pydata/xarray/issues/4082#issuecomment-640808125 · issue_url: https://api.github.com/repos/pydata/xarray/issues/4082
node_id: MDEyOklzc3VlQ29tbWVudDY0MDgwODEyNQ==

@DennisHeimbigner, I don't understand how it can be a DAP or code issue, since:

  • it runs on Linux without errors with the default file_cache_maxsize=128
  • it runs on Windows without errors with file_cache_maxsize=25

Right? Or am I missing something?

reactions: none (total_count: 0)
issue: "write to read-only" Error in xarray.open_mfdataset() with opendap datasets (621177286)
id: 640590247 · user: rsignell-usgs (1872600) · author_association: NONE · created_at: 2020-06-08T13:05:28Z · updated_at: 2020-06-08T13:05:28Z
html_url: https://github.com/pydata/xarray/issues/4082#issuecomment-640590247 · issue_url: https://api.github.com/repos/pydata/xarray/issues/4082
node_id: MDEyOklzc3VlQ29tbWVudDY0MDU5MDI0Nw==

Or perhaps Unidata's @WardF, who leads NetCDF development.

reactions: none (total_count: 0)
issue: "write to read-only" Error in xarray.open_mfdataset() with opendap datasets (621177286)
id: 639450932 · user: rsignell-usgs (1872600) · author_association: NONE · created_at: 2020-06-05T12:26:14Z · updated_at: 2020-06-05T12:26:14Z
html_url: https://github.com/pydata/xarray/issues/4082#issuecomment-639450932 · issue_url: https://api.github.com/repos/pydata/xarray/issues/4082
node_id: MDEyOklzc3VlQ29tbWVudDYzOTQ1MDkzMg==

@shoyer, unfortunately these OPeNDAP datasets each contain only one time record (one daily value). And it works fine on Linux with file_cache_maxsize=128, so it must be some Windows cache thing, right?

So, since I just picked file_cache_maxsize=10 arbitrarily, I thought it would be useful to see what the maximum value was. Using the good old bisection method, I determined that (for this case, anyway) the maximum size that works is 25.

In other words:

```
xr.set_options(file_cache_maxsize=25)  # works
xr.set_options(file_cache_maxsize=26)  # fails
```

I would bet money that Unidata's @DennisHeimbigner knows what's going on here!

reactions: none (total_count: 0)
issue: "write to read-only" Error in xarray.open_mfdataset() with opendap datasets (621177286)
id: 639111588 · user: rsignell-usgs (1872600) · author_association: NONE · created_at: 2020-06-04T20:55:49Z · updated_at: 2020-06-04T20:55:49Z
html_url: https://github.com/pydata/xarray/issues/4082#issuecomment-639111588 · issue_url: https://api.github.com/repos/pydata/xarray/issues/4082
node_id: MDEyOklzc3VlQ29tbWVudDYzOTExMTU4OA==

@EliT1626, I confirmed that this problem exists on Windows but not on Linux.

The error, `IOError: [Errno -37] NetCDF: Write to read only: 'https://www.ncei.noaa.gov/thredds/dodsC/OisstBase/NetCDF/V2.1/AVHRR/201703/oisst-avhrr-v02r01.20170304.nc'`, suggested some kind of cache problem, and as you noted, it always fails after a certain number of dates. So I tried increasing the number of cached files from the default 128 to 256 with `xr.set_options(file_cache_maxsize=256)`, but that had no effect.

Just to see if it would fail earlier, I then tried decreasing the number of cached files with `xr.set_options(file_cache_maxsize=10)`, and to my surprise it ran all the way through: https://nbviewer.jupyter.org/gist/rsignell-usgs/c52fadd8626734bdd32a432279bc6779

I'm hoping someone who worked on the caching (@shoyer?) might have some idea of what is going on, but at least you can execute your workflow on Windows now!

reactions: +1: 1 (total_count: 1)
issue: "write to read-only" Error in xarray.open_mfdataset() with opendap datasets (621177286)

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
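
As a usage sketch, the filter shown at the top of this page can be reproduced against this schema from Python's sqlite3 module (the database filename github.db is an assumption):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename for this Datasette database
rows = conn.execute(
    """
    SELECT id, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE'
      AND issue = 621177286
      AND [user] = 1872600
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, updated_at, body in rows:
    print(comment_id, updated_at, body[:60])
```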