
issue_comments


2 rows where issue = 35762823 and user = 514053 sorted by updated_at descending
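
For reference, the filtered view above corresponds roughly to the following query against the issue_comments schema shown at the bottom of this page (a sketch; the exact SQL Datasette generates may differ):

select *
from [issue_comments]
where [issue] = 35762823
  and [user] = 514053
order by [updated_at] desc;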

Facets for these rows:

  • user: akleeman (2)
  • issue: BUG: fix encoding issues (array indexing now resets encoding) (2)
  • author_association: CONTRIBUTOR (2)
id: 46275601
html_url: https://github.com/pydata/xarray/pull/163#issuecomment-46275601
issue_url: https://api.github.com/repos/pydata/xarray/issues/163
node_id: MDEyOklzc3VlQ29tbWVudDQ2Mjc1NjAx
user: akleeman (514053)
created_at: 2014-06-17T07:28:45Z
updated_at: 2014-06-17T07:28:45Z
author_association: CONTRIBUTOR
body:

One possibility could be to have the encoding filtering only happen once if the variable was loaded from NetCDF4. I.e., if a variable with chunksizes encoding were loaded from file, it would be removed after the first attempt to index; afterwards all encodings persist. I've been experimenting with something along those lines but don't have it working perfectly yet.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: BUG: fix encoding issues (array indexing now resets encoding) (35762823)
id: 46146675
html_url: https://github.com/pydata/xarray/pull/163#issuecomment-46146675
issue_url: https://api.github.com/repos/pydata/xarray/issues/163
node_id: MDEyOklzc3VlQ29tbWVudDQ2MTQ2Njc1
user: akleeman (514053)
created_at: 2014-06-16T07:02:26Z
updated_at: 2014-06-16T07:02:26Z
author_association: CONTRIBUTOR
body:

Is there a reason why we don't just have it remove problematic encodings? Some encodings are certainly nice to persist (fill value, scale, offset, time units, etc.).

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: BUG: fix encoding issues (array indexing now resets encoding) (35762823)

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
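
The two indexes above cover the columns used by this page's filter and facets. As an illustration (not the exact query Datasette runs), a facet-style count of comments per user over this issue could be written as:

select [user], count(*) as comment_count
from [issue_comments]
where [issue] = 35762823
group by [user];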