
issue_comments


1 row where user = 791145 sorted by updated_at descending


id: 462592638
html_url: https://github.com/pydata/xarray/issues/2304#issuecomment-462592638
issue_url: https://api.github.com/repos/pydata/xarray/issues/2304
node_id: MDEyOklzc3VlQ29tbWVudDQ2MjU5MjYzOA==
user: magau (791145)
created_at: 2019-02-12T02:48:00Z
updated_at: 2019-02-12T02:48:00Z
author_association: NONE
body:

Hi everyone, I've started using xarray recently, so I apologize if I'm saying something wrong. I've also run into the issue reported here, and I tried to find some answers.

Unpacking netCDF files with respect to the NUG attributes (scale_factor and add_offset) is covered by the CF Conventions, which are explicit about which data type should apply to the unpacked data. In cf-conventions-1.7/packed-data you can read: "If the scale_factor and add_offset attributes are of the same data type as the associated variable, the unpacked data is assumed to be of the same data type as the packed data. However, if the scale_factor and add_offset attributes are of a different data type from the variable (containing the packed data) then the unpacked data should match the type of these attributes."

In my opinion this should be the default behavior of the xarray.decode_cf function, which doesn't rule out also letting users force the unpacked data's dtype. However, neither the CFScaleOffsetCoder nor the CFMaskCoder de/encoder classes seem to follow these CF directives: the first doesn't look at the scale_factor or add_offset dtypes, and the second also changes the unpacked data's dtype (maybe because NaN values are being used to replace the fill values).

Sorry for such an extensive comment without any proposed solution. Regards! :+1:

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray (343659822)
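The CF rule quoted in the comment can be sketched as a small dtype-selection helper. This is a hypothetical illustration of the convention's wording, not xarray's actual CFScaleOffsetCoder logic:

```python
import numpy as np

def unpack_dtype(packed_dtype, scale_factor, add_offset=None):
    """Pick the unpacked dtype per CF-Conventions 1.7 (packed data).

    If scale_factor/add_offset share the packed variable's dtype, the
    unpacked data keeps that dtype; otherwise it should match the
    attributes' dtype. Hypothetical helper for illustration only.
    """
    attrs = [a for a in (scale_factor, add_offset) if a is not None]
    attr_dtypes = {np.asarray(a).dtype for a in attrs}
    if attr_dtypes == {np.dtype(packed_dtype)}:
        # Attributes match the packed variable: keep the packed dtype.
        return np.dtype(packed_dtype)
    # Attributes differ: unpacked data takes the attributes' common dtype.
    return np.result_type(*attrs)

# int16 packed data with a float64 scale_factor should unpack as float64,
# which is the behavior the comment argues decode_cf should default to.
print(unpack_dtype(np.int16, np.float64(0.01)))  # float64
```

Under this reading, the float32 result reported in the issue would only be correct if scale_factor itself were stored as float32.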


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
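To experiment with this schema locally, it can be loaded into an in-memory SQLite database and queried the same way the page above filters (comments by user 791145, newest first). The sample values come from the row shown; the foreign-key references to the users/issues tables are dropped so the snippet is self-contained:

```python
import sqlite3

# Recreate a self-contained version of the issue_comments schema above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Insert the row shown on this page.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association) "
    "VALUES (?, ?, ?, ?)",
    (462592638, 791145, "2019-02-12T02:48:00Z", "NONE"),
)

# The page's filter: rows where user = 791145, sorted by updated_at desc.
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE user = ? ORDER BY updated_at DESC",
    (791145,),
).fetchall()
print(rows)  # [(462592638,)]
```

The `idx_issue_comments_user` index is what lets this filter avoid a full table scan.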
Powered by Datasette · Queries took 12.452ms · About: xarray-datasette