
issue_comments


12 rows where user = 3487237, sorted by updated_at descending


issue 5

  • open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 6
  • Problem opening unstructured grid ocean forecasts with 4D vertical coordinates 2
  • Be able to override calendar in `open_dataset`/`open_mfdataset`/etc OR include another calendar name 2
  • open_dataset leading to NetCDF: file not found 1
  • In a specific case, `decode_cf` adds encoding dtype that breaks `to_netcdf` 1

user 1

  • kthyng · 12

author_association 1

  • NONE 12
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sorted descending), author_association, body, reactions, performed_via_github_app, issue
1494957700 https://github.com/pydata/xarray/issues/7079#issuecomment-1494957700 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85ZGz6E kthyng 3487237 2023-04-03T20:45:42Z 2023-04-03T20:45:42Z NONE

I'm not really sure what to think any more. We have had a real, consistent issue that seemed to fit the description of this one, and it went away with one of the fixes above (using single threading). But using local files at the moment seems to remove the error even with the current version of xarray and either parallel option.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1494950846 https://github.com/pydata/xarray/issues/7079#issuecomment-1494950846 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85ZGyO- kthyng 3487237 2023-04-03T20:39:02Z 2023-04-03T20:39:02Z NONE

OK, I downloaded the two files, and indeed there is no error with either parallel=True or parallel=False.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
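A minimal sketch of the check described in the comment above, assuming the two files have been downloaded locally (the filenames are guesses based on the URLs that appear later in this thread) and that dask is installed for the parallel=True case:

```python
import xarray as xr

# Hypothetical local copies of the two WCOFS files mentioned in the thread.
files = [
    "nos.wcofs.2ds.n001.20230331.t03z.nc",
    "nos.wcofs.2ds.n002.20230331.t03z.nc",
]

# The comment reports that with local files neither setting raises an error.
for parallel in (True, False):
    ds = xr.open_mfdataset(files, parallel=parallel)
    print(f"parallel={parallel}: opened {len(ds.data_vars)} data variables")
    ds.close()
```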
1494542372 https://github.com/pydata/xarray/issues/7079#issuecomment-1494542372 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85ZFOgk kthyng 3487237 2023-04-03T15:31:54Z 2023-04-03T15:31:54Z NONE

@jhamman Yes, using the PR version of xarray, I hit the error with parallel=True but not with parallel=False.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1492670307 https://github.com/pydata/xarray/issues/7079#issuecomment-1492670307 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85Y-Fdj kthyng 3487237 2023-03-31T22:14:06Z 2023-03-31T22:14:06Z NONE

I was able to reproduce the error with the current version of xarray and then have it work with the new version. Here is what I did:

Make a new environment:

```
conda create -n test_xarray xarray netcdf4 dask
```

Check the version:

```
(test_xarray) kthyng@adams ~ % conda list xarray
# packages in environment at /Users/kthyng/miniconda3/envs/test_xarray:
#
# Name      Version    Build          Channel
xarray      2023.3.0   pyhd8ed1ab_0   conda-forge
```

In Python:

```python
import xarray as xr

urls = [
    "https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/WCOFS/MODELS/2023/03/31/nos.wcofs.2ds.n001.20230331.t03z.nc",
    "https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/WCOFS/MODELS/2023/03/31/nos.wcofs.2ds.n002.20230331.t03z.nc",
]
xr.open_mfdataset(urls)
```

This returns the following the first time xr.open_mfdataset(urls) is run, but the second time it runs fine:

```
OSError: [Errno -70] NetCDF: DAP server error: 'https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/WCOFS/MODELS/2023/03/31/nos.wcofs.2ds.n002.20230331.t03z.nc'
```

Next I used the PR version of xarray, reran the code above, and it was able to read the files in OK on the first try.

Note: after a week or so those files won't work and will have to be updated with something more current, but the pattern to use is clear from the file names.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
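To illustrate the first-call/second-call behaviour reported above, a sketch that retries after the OSError; this only reproduces the observation, is not a fix, and the dated URLs will have gone stale as noted in the comment:

```python
import xarray as xr

urls = [
    "https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/WCOFS/MODELS/2023/03/31/nos.wcofs.2ds.n001.20230331.t03z.nc",
    "https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/WCOFS/MODELS/2023/03/31/nos.wcofs.2ds.n002.20230331.t03z.nc",
]

try:
    # First call: the comment reports an OSError ("NetCDF: DAP server error")
    # here with the released xarray version.
    ds = xr.open_mfdataset(urls)
except OSError as err:
    print(f"first attempt failed: {err}")
    # Second call: reported to succeed.
    ds = xr.open_mfdataset(urls)

print(ds)
```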
1490420229 https://github.com/pydata/xarray/issues/7079#issuecomment-1490420229 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85Y1gIF kthyng 3487237 2023-03-30T14:36:39Z 2023-03-30T14:36:39Z NONE

@jhamman Sorry for my delay — I started this the other day and got waylaid. I'll try to get back to it today or tomorrow.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1276656637 https://github.com/pydata/xarray/issues/7079#issuecomment-1276656637 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85MGDv9 kthyng 3487237 2022-10-12T19:45:08Z 2022-10-12T19:45:08Z NONE

@ocefpaf and all: thank you! What a mysterious error this has been. Using the workaround

```python
import dask

dask.config.set(scheduler="single-threaded")
```

did indeed avoid the issue for me.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
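For context, a sketch of how that workaround fits around the open_mfdataset call discussed in this thread; the file names are placeholders, so substitute the OPeNDAP URLs or local files in question:

```python
import dask
import xarray as xr

# Workaround from the comment above: force dask to run single-threaded.
dask.config.set(scheduler="single-threaded")

# Placeholder paths standing in for the files being opened.
urls = ["nos.wcofs.2ds.n001.20230331.t03z.nc", "nos.wcofs.2ds.n002.20230331.t03z.nc"]

# With the single-threaded scheduler set, the parallel=True call that was
# failing is reported to work.
ds = xr.open_mfdataset(urls, parallel=True)
```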
1171375793 https://github.com/pydata/xarray/issues/2233#issuecomment-1171375793 https://api.github.com/repos/pydata/xarray/issues/2233 IC_kwDOAMm_X85F0cax kthyng 3487237 2022-06-30T15:40:34Z 2022-06-30T15:40:34Z NONE

@benbovy Ah, I see you mean under "Relax all constraints related to “dimension (index) coordinates” in Xarray". Ok, thank you for clarifying that for me! (I wasn't sure what the second item meant in the list of lists.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problem opening unstructured grid ocean forecasts with 4D vertical coordinates 332471780
1171342518 https://github.com/pydata/xarray/issues/2233#issuecomment-1171342518 https://api.github.com/repos/pydata/xarray/issues/2233 IC_kwDOAMm_X85F0US2 kthyng 3487237 2022-06-30T15:11:00Z 2022-06-30T15:11:00Z NONE

I've looked through the GitHub issues associated with the explicit indices work, but I can't quite tell if I can use them to load FVCOM model output. In any case, I just updated and tried without doing anything new, and it didn't work:

```python
import xarray as xr

# This file will not be available in a few days, but one for the present day will be.
url = 'https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/SFBOFS/MODELS/2022/06/30/nos.sfbofs.fields.f014.20220630.t09z.nc'
ds = xr.open_dataset(url, drop_variables='Itime2')
```

Same error message as before:

```
MissingDimensionsError: 'siglay' has more than 1-dimension and the same name as one of its dimensions ('siglay', 'node'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problem opening unstructured grid ocean forecasts with 4D vertical coordinates 332471780
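As a hedged aside (not from the comment itself), one way to get past this particular MissingDimensionsError in older xarray versions is to also drop the coordinate variables whose names collide with their own dimensions; the extra names below are guesses for this FVCOM file, and dropping them of course discards the vertical coordinates the issue is actually about:

```python
import xarray as xr

url = (
    "https://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/SFBOFS/MODELS/"
    "2022/06/30/nos.sfbofs.fields.f014.20220630.t09z.nc"
)

# Guessed workaround: drop 'siglay' (named in the error) and, likely, 'siglev'
# along with 'Itime2'; this sidesteps the MissingDimensionsError but loses
# those vertical coordinate variables.
ds = xr.open_dataset(url, drop_variables=["Itime2", "siglay", "siglev"])
```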
1095088391 https://github.com/pydata/xarray/issues/6453#issuecomment-1095088391 https://api.github.com/repos/pydata/xarray/issues/6453 IC_kwDOAMm_X85BRbkH kthyng 3487237 2022-04-11T13:58:48Z 2022-04-11T13:58:48Z NONE

@spencerkclark Interesting! Good, I am glad it is fixed with the dev version. It would be great if this write-up can be used for a test.

Mostly, I wanted to also get this documented, since it took forever to track down the issue; maybe it will save someone else some time when they are googling.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  In a specific case, `decode_cf` adds encoding dtype that breaks `to_netcdf` 1196270877
1077728973 https://github.com/pydata/xarray/issues/6259#issuecomment-1077728973 https://api.github.com/repos/pydata/xarray/issues/6259 IC_kwDOAMm_X85APNbN kthyng 3487237 2022-03-24T15:03:50Z 2022-03-24T15:03:50Z NONE

What?! Whoa, I did not know about the preprocess option, and it looks really powerful! I have been getting the derived datasets to work, but I think this would do the job in a simpler and easier-to-understand way. I will give it a try.

intake-xarray should now work with open_mfdataset; I added this as an option, though it's probably not in a release yet.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Be able to override calendar in `open_dataset`/`open_mfdataset`/etc OR include another calendar name 1128759050
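A minimal sketch of how the preprocess option might serve the calendar-override use case of this issue, assuming the time variable is named "time"; the calendar name and file paths are placeholders, and decoding is deferred so the attribute can be patched before xr.decode_cf runs:

```python
import xarray as xr

def fix_calendar(ds):
    # Placeholder: overwrite the unrecognized calendar name on the
    # still-encoded time variable so that decoding succeeds.
    ds["time"].attrs["calendar"] = "proleptic_gregorian"
    return ds

# decode_times=False keeps the raw numeric times; preprocess patches each
# file's calendar attribute, and decode_cf applies it afterwards.
ds = xr.open_mfdataset(
    ["file1.nc", "file2.nc"],  # placeholder paths
    decode_times=False,
    preprocess=fix_calendar,
)
ds = xr.decode_cf(ds)
```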
1035604231 https://github.com/pydata/xarray/issues/6259#issuecomment-1035604231 https://api.github.com/repos/pydata/xarray/issues/6259 IC_kwDOAMm_X849uhEH kthyng 3487237 2022-02-10T22:39:16Z 2022-02-10T22:39:16Z NONE

Thanks @d70-t for the idea! I haven't tried out the derived datasets capabilities in intake, but I'll give them a try. Sounds like they could be pretty powerful.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Be able to override calendar in `open_dataset`/`open_mfdataset`/etc OR include another calendar name 1128759050
323464863 https://github.com/pydata/xarray/issues/1510#issuecomment-323464863 https://api.github.com/repos/pydata/xarray/issues/1510 MDEyOklzc3VlQ29tbWVudDMyMzQ2NDg2Mw== kthyng 3487237 2017-08-18T21:27:34Z 2017-08-18T21:27:34Z NONE

Huh! Weird! I had tried accessing a particular value of u with netCDF as a test, and it had worked fine, so I hadn't worried about it after that. I just now tried ocean_time, and it works for particular indices (like d['ocean_time'][0]) but, as you said, doesn't work if I put in d['ocean_time'][:].

Thanks for everyone's help; I'll work on the THREDDS end of things.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_dataset leading to NetCDF: file not found 251332357
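For reference, a sketch of the kind of check described above using the netCDF4 library directly; the URL is a placeholder since the original THREDDS file has long since expired:

```python
import netCDF4

# Placeholder OPeNDAP URL standing in for the expired THREDDS endpoint.
url = "https://example.com/thredds/dodsC/model/output.nc"

d = netCDF4.Dataset(url)

# Reading a single index worked in the report...
print(d["ocean_time"][0])

# ...while reading the full variable is what triggered the server-side error.
print(d["ocean_time"][:])

d.close()
```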


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
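A sketch of running the query behind this page directly with Python's sqlite3 module, assuming a local copy of the database; the filename github.db is a guess:

```python
import sqlite3

# Hypothetical local copy of the SQLite database behind this page.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    SELECT id, issue_url, created_at, updated_at
    FROM issue_comments
    WHERE [user] = 3487237
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, issue_url, created_at, updated_at in rows:
    print(comment_id, updated_at, issue_url)

conn.close()
```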