issue_comments


3 rows where issue = 307444427 sorted by updated_at descending




589558432 · stale[bot] · NONE · created 2020-02-21T08:50:16Z · updated 2020-02-21T08:50:16Z
https://github.com/pydata/xarray/issues/2005#issuecomment-589558432

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.

Reactions: none · Issue: What is the recommended way to do proper compression/ scaling of vars? (307444427)
375581841 · cwerner · NONE · created 2018-03-23T08:43:43Z · updated 2018-03-23T08:43:43Z
https://github.com/pydata/xarray/issues/2005#issuecomment-375581841

Maybe it's a misconception of mine how compression with add_offset and scale_factor works?

I tried using the i2 dtype (ctype='i2') with only scale_factor (no add_offset) and this looks OK. However, when I switch to the i4/i8 type I get strange data in the netCDFs (I write with NETCDF4_CLASSIC, if that matters)... Is it not possible to use a higher-precision integer type for add_offset/scale_factor encoding to get better precision of the scaled values?

About the code samples: sorry, I just copied them verbatim from my script. The first block is the logic to compute the scale and offset values; the second is the encoding application, using the decorator-based extension to neatly pipe encoding settings to a data array...

Doing a minimal example at the moment is a bit problematic as I'm traveling...

Reactions: none · Issue: What is the recommended way to do proper compression/ scaling of vars? (307444427)
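The scale_factor/add_offset packing convention discussed in this comment can be sketched in plain Python, with no xarray or netCDF needed. This is a hedged illustration, not xarray's implementation: `compute_scale_offset`, `pack`, and `unpack` are made-up helper names, and the centered-offset formula is one common choice.

```python
# Illustrative sketch of linear packing with scale_factor/add_offset.
# None of these helpers are xarray API.

def compute_scale_offset(vmin, vmax, nbits):
    """Map the value range [vmin, vmax] onto a signed nbits-bit integer range."""
    nvals = 2 ** nbits - 2              # reserve one code for a fill value
    scale_factor = (vmax - vmin) / nvals
    add_offset = (vmax + vmin) / 2      # center the range around code 0
    return scale_factor, add_offset

def pack(values, scale_factor, add_offset):
    # Quantize each value to the nearest integer code.
    return [round((v - add_offset) / scale_factor) for v in values]

def unpack(codes, scale_factor, add_offset):
    # Invert the packing; the result differs from the original by at
    # most half a quantization step (scale_factor / 2).
    return [c * scale_factor + add_offset for c in codes]

data = [0.0, 0.123456789, 9.87654321, 10.0]

for nbits in (16, 32):                  # i2 vs i4, per the comment above
    sf, off = compute_scale_offset(min(data), max(data), nbits)
    restored = unpack(pack(data, sf, off), sf, off)
    err = max(abs(a - b) for a, b in zip(data, restored))
    assert err <= sf / 2 + 1e-12        # quantization error bound holds
```

Note that a wider integer type only shrinks `scale_factor` (and hence the quantization error); it does not by itself explain corrupted data, which is the puzzle raised in the comment.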
375546333 · shoyer · MEMBER · created 2018-03-23T05:05:49Z · updated 2018-03-23T05:05:49Z
https://github.com/pydata/xarray/issues/2005#issuecomment-375546333

You gave lots of code examples, but it's not yet clear to me how it all fits together.

Can you put it into a single code example that I can run to reproduce your problem? A minimal, complete, and verifiable example would be best: https://stackoverflow.com/help/mcve

Reactions: none · Issue: What is the recommended way to do proper compression/ scaling of vars? (307444427)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette