issue_comments: 375581841

html_url: https://github.com/pydata/xarray/issues/2005#issuecomment-375581841
issue_url: https://api.github.com/repos/pydata/xarray/issues/2005
id: 375581841
node_id: MDEyOklzc3VlQ29tbWVudDM3NTU4MTg0MQ==
user: 13906519
created_at: 2018-03-23T08:43:43Z
updated_at: 2018-03-23T08:43:43Z
author_association: NONE
body:

Maybe it's a misconception on my part about how compression with add_offset/scale_factor works?

I tried using the i2 dtype (ctype='i2') with only scale_factor (no add_offset), and that looks OK. However, when I switch to the i4/i8 types I get strange data in the netCDF files (I write with NETCDF4_CLASSIC, if that matters)... Is it not possible to use a higher-precision integer type for the add_offset/scale_factor encoding to get better precision for the scaled values?
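For concreteness, roughly the sort of thing I'm doing (a sketch with a made-up variable name and numbers, not my actual script):

    import numpy as np
    import xarray as xr

    # Hypothetical variable standing in for my real data.
    ds = xr.Dataset({"tas": ("x", np.linspace(250.0, 320.0, 100))})

    # CF-style packing: stored = round((value - add_offset) / scale_factor).
    # With dtype="i2" this round-trips fine for me; "i4"/"i8" is where I see
    # strange values (note: the NETCDF4_CLASSIC data model has no 64-bit
    # integer type at all, so i8 may be a non-starter there).
    encoding = {
        "tas": {
            "dtype": "i2",
            "scale_factor": 0.01,
            "add_offset": 285.0,
            "_FillValue": np.int16(-32767),
        }
    }
    ds.to_netcdf("out.nc", format="NETCDF4_CLASSIC", encoding=encoding)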

About the code samples: sorry, I just copied them verbatim from my script. The first block is the logic that computes the scale and offset values; the second applies the encoding, using the decorator-based extension to neatly pipe encoding settings to a data array... A sketch of the first block follows below.
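As a stand-in for that first block, the computation is essentially the standard min/max recipe (a sketch under that assumption, not the verbatim code; compute_scale_and_offset is just my naming here):

    import numpy as np

    def compute_scale_and_offset(vmin, vmax, n_bits):
        # Map [vmin, vmax] onto the signed n-bit integer range,
        # leaving one value free for _FillValue.
        scale_factor = (vmax - vmin) / (2 ** n_bits - 2)
        add_offset = (vmax + vmin) / 2.0
        return scale_factor, add_offset

    scale, offset = compute_scale_and_offset(250.0, 320.0, 16)
    packed = np.round((300.0 - offset) / scale).astype("int16")  # pack
    restored = packed * scale + offset                           # unpack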

Putting together a proper minimal reproducing example right now is a bit problematic, as I'm traveling...

reactions: total_count 0 (no reactions)
issue: 307444427