issue_comments: 375581841
This data as json
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/2005#issuecomment-375581841 | https://api.github.com/repos/pydata/xarray/issues/2005 | 375581841 | MDEyOklzc3VlQ29tbWVudDM3NTU4MTg0MQ== | 13906519 | 2018-03-23T08:43:43Z | 2018-03-23T08:43:43Z | NONE | Maybe it's a misconception of mine about how compression with add_offset/scale_factor works? I tried using the i2 dtype ( About the code samples: sorry, I just copied them verbatim from my script. The first block is the logic to compute the scale and offset values; the second applies the encoding using the decorator-based extension to neatly pipe encoding settings to a data array... Doing a minimal example at the moment is a bit problematic as I'm traveling... | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | 307444427 |
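The commenter's actual helper is not shown in the thread, but the scale/offset logic they describe usually looks like the following. This is a minimal sketch, assuming the common convention of packing floats into a signed 16-bit range with one value reserved for `_FillValue`; the function name `compute_scale_and_offset` and the sample data are illustrative, not taken from the commenter's script.

```python
import numpy as np
import xarray as xr

def compute_scale_and_offset(vmin, vmax, nbits=16):
    # Map [vmin, vmax] onto the signed nbits integer range,
    # reserving one value (the minimum) for _FillValue.
    scale_factor = (vmax - vmin) / (2 ** nbits - 2)
    add_offset = (vmax + vmin) / 2
    return scale_factor, add_offset

# Hypothetical data array standing in for the commenter's variable.
da = xr.DataArray(np.linspace(0.0, 100.0, 50), dims="x", name="var")

scale, offset = compute_scale_and_offset(float(da.min()), float(da.max()))
da.encoding.update({
    "dtype": "int16",          # the i2 dtype mentioned in the comment
    "scale_factor": scale,
    "add_offset": offset,
    "_FillValue": np.int16(-32768),
})

# Round-trip check: packed = round((x - add_offset) / scale_factor),
# unpacked = packed * scale_factor + add_offset.
packed = np.round((da.values - offset) / scale).astype("int16")
unpacked = packed * scale + offset
```

On write (e.g. `da.to_dataset().to_netcdf(...)`), xarray applies this encoding so the file stores int16 values while readers that honor CF conventions recover floats to within `scale_factor / 2`.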