issue_comments


1 row where author_association = "NONE", issue = 307444427 and user = 13906519 sorted by updated_at descending


id: 375581841
html_url: https://github.com/pydata/xarray/issues/2005#issuecomment-375581841
issue_url: https://api.github.com/repos/pydata/xarray/issues/2005
node_id: MDEyOklzc3VlQ29tbWVudDM3NTU4MTg0MQ==
user: cwerner (13906519)
created_at: 2018-03-23T08:43:43Z
updated_at: 2018-03-23T08:43:43Z
author_association: NONE

Maybe it's a misconception on my part of how compression with add_offset and scale_factor works?

I tried using the i2 dtype (ctype='i2') and only scale_factor (no add_offset), and that looks OK. However, when I switch to the i4/i8 types I get strange data in the netCDF files (I write with NETCDF4_CLASSIC, if that matters)... Is it not possible to use a higher-precision integer type for add_offset/scale_factor encoding to get better precision of the scaled values?

About the code samples: sorry, I just copied them verbatim from my script. The first block is the logic that computes the scale and offset values; the second is the encoding application, using a decorator-based extension to neatly pipe encoding settings to a data array...

Producing a minimal example at the moment is a bit problematic as I'm traveling...
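The packing scheme the comment is asking about can be sketched in plain NumPy. This is a minimal illustration of the common netCDF convention (reserve the most negative integer for _FillValue, map the data range onto the rest); the helper name `compute_scale_and_offset` and the sample value range are illustrative, not taken from the original script:

```python
import numpy as np

def compute_scale_and_offset(vmin, vmax, n_bits=16):
    """Map [vmin, vmax] onto signed n_bits integers, reserving the most
    negative value for _FillValue (a common netCDF packing convention)."""
    scale_factor = (vmax - vmin) / (2 ** n_bits - 2)
    add_offset = (vmax + vmin) / 2.0
    return scale_factor, add_offset

# Pack and unpack a float field the way netCDF readers do:
# packed = round((x - add_offset) / scale_factor); x' = packed * scale + offset
data = np.linspace(250.0, 320.0, 1000)  # e.g. temperatures in K (illustrative)
scale, offset = compute_scale_and_offset(data.min(), data.max(), n_bits=16)
packed = np.round((data - offset) / scale).astype("i2")
unpacked = packed.astype("f8") * scale + offset

# The round-trip error is bounded by half the quantization step.
assert np.max(np.abs(unpacked - data)) <= scale / 2 + 1e-9

# The corresponding xarray encoding dict would look like this (not executed
# here, since writing NETCDF4_CLASSIC needs the netCDF4 library installed):
encoding = {
    "dtype": "i2",
    "scale_factor": scale,
    "add_offset": offset,
    "_FillValue": np.iinfo("i2").min,  # -32768, kept out of the packed range
}
# da.encoding.update(encoding); da.to_netcdf("out.nc", format="NETCDF4_CLASSIC")
```

Note that the achievable precision is set by `scale_factor` (the quantization step), which is why a wider integer type would in principle give finer steps for the same data range.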

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: What is the recommended way to do proper compression/ scaling of vars? (307444427)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
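The schema above can be exercised directly with Python's standard-library sqlite3 module. The sketch below drops the REFERENCES clauses (the users and issues tables are not shown on this page), truncates the comment body, and reproduces the filter from the "1 row where ..." line at the top:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Insert the single row shown on this page (body truncated for brevity).
conn.execute(
    "INSERT INTO issue_comments (id, user, issue, author_association, "
    "created_at, updated_at, body) VALUES (?, ?, ?, ?, ?, ?, ?)",
    (375581841, 13906519, 307444427, "NONE",
     "2018-03-23T08:43:43Z", "2018-03-23T08:43:43Z",
     "Maybe it's a misconception of mine..."),
)

# The query behind "1 row where author_association = 'NONE', issue = ...":
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'NONE' AND issue = ? AND [user] = ? "
    "ORDER BY updated_at DESC",
    (307444427, 13906519),
).fetchall()
print(rows)  # [(375581841,)]
```

The `[user]` column is bracket-quoted in the SELECT to match the schema's quoting style, and the two indexes let SQLite serve the `issue` and `user` filters without a full table scan.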
Powered by Datasette · Queries took 17.992ms · About: xarray-datasette