issue_comments: 878824626

html_url: https://github.com/pydata/xarray/issues/5597#issuecomment-878824626
issue_url: https://api.github.com/repos/pydata/xarray/issues/5597
id: 878824626
node_id: MDEyOklzc3VlQ29tbWVudDg3ODgyNDYyNg==
user: 1373406
created_at: 2021-07-13T06:46:55Z
updated_at: 2021-07-13T06:46:55Z
author_association: NONE
performed_via_github_app:

body:

That example is actually from a different file than the original. I unpacked the original file externally with `ncpdq -U BIG_FILE_packed.nc BIG_FILE_unpacked.nc` before opening it with xarray, so the decoding step is skipped and no spurious 2 values are generated. The data is correct with that method, so it is a possible workaround, but unpacking externally makes each file 4x larger.
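
For context, a minimal sketch of that workaround, assuming the file names above; the external step is the NCO command already quoted, and the xarray call is the standard open_dataset:

```python
# Shell step, run once per file with NCO (this is what makes each file 4x larger):
#   ncpdq -U BIG_FILE_packed.nc BIG_FILE_unpacked.nc

import xarray as xr

# The unpacked file stores floats directly, so there is no scale_factor/add_offset
# decoding left for xarray to apply on open.
ds = xr.open_dataset("BIG_FILE_unpacked.nc")
```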

In all the examples the data covers the same time and location, so the values should be identical apart from whatever precision is lost by packing to int16 and unpacking. The output arrays come from selecting a single day (24 hours) at a single location from the dataset returned by open_dataset in the IPython interpreter.
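
As a rough illustration of that selection; the variable name, coordinate names, and the specific point and date here are hypothetical, not taken from the original report:

```python
import xarray as xr

ds = xr.open_dataset("BIG_FILE_packed.nc")

# Pick one grid point, then one calendar day (24 hourly steps) at that point.
point = ds["t2m"].sel(latitude=52.0, longitude=4.0, method="nearest")
day = point.sel(time="2020-01-01")
print(day.values)
```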

So there are actually three files I've tested with, all of which should contain the same data (assuming the issue isn't in how the files are built, which could be the case): BIG_FILE_packed.nc, BIG_FILE_unpacked.nc, and SMALL_FILE_packed.nc. The only one that displays the issue is the first.
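
A hedged sketch of how the three files could be compared at the same day and point; the variable name, coordinates, and tolerance are assumptions for illustration:

```python
import numpy as np
import xarray as xr

def day_at_point(path):
    """Return one day of hourly values at a single grid point."""
    ds = xr.open_dataset(path)
    point = ds["t2m"].sel(latitude=52.0, longitude=4.0, method="nearest")
    return point.sel(time="2020-01-01").values

big_packed = day_at_point("BIG_FILE_packed.nc")
big_unpacked = day_at_point("BIG_FILE_unpacked.nc")
small_packed = day_at_point("SMALL_FILE_packed.nc")

# The externally unpacked file and the small packed file should agree to within
# int16 packing precision; per the report, only BIG_FILE_packed.nc comes back wrong.
print(np.allclose(big_unpacked, small_packed, atol=1e-2))
print(np.allclose(big_packed, small_packed, atol=1e-2))
```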

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 942738904