issue_comments

4 rows where issue = 35762823 sorted by updated_at descending

id: 46275601
html_url: https://github.com/pydata/xarray/pull/163#issuecomment-46275601
issue_url: https://api.github.com/repos/pydata/xarray/issues/163
node_id: MDEyOklzc3VlQ29tbWVudDQ2Mjc1NjAx
user: akleeman (514053)
created_at: 2014-06-17T07:28:45Z
updated_at: 2014-06-17T07:28:45Z
author_association: CONTRIBUTOR
body:

One possibility could be to have the encoding filtering only happen once if the variable was loaded from NetCDF4. I.e., if a variable with a chunksizes encoding were loaded from a file, that encoding would be removed after the first attempt to index; afterwards, all encodings would persist. I've been experimenting with something along those lines but don't have it working perfectly yet.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: BUG: fix encoding issues (array indexing now resets encoding) (35762823)
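
A minimal Python sketch of the approach described in that comment, using a toy stand-in class rather than xarray's actual Variable; the encoding keys treated as file-specific and the loaded_from_netcdf4 flag are illustrative assumptions:

import numpy as np

# Hypothetical set of encodings tied to the on-disk NetCDF4 layout
# (illustrative only, not xarray's actual list).
NETCDF4_ONLY_ENCODINGS = {"chunksizes", "zlib", "complevel", "shuffle"}

class ToyVariable:
    """Toy stand-in for an array variable that carries an encoding dict."""

    def __init__(self, data, encoding=None, loaded_from_netcdf4=False):
        self.data = np.asarray(data)
        self.encoding = dict(encoding or {})
        self.loaded_from_netcdf4 = loaded_from_netcdf4

    def __getitem__(self, key):
        encoding = self.encoding
        if self.loaded_from_netcdf4:
            # First indexing after load: drop encodings that only make sense
            # for the full on-disk array (e.g. chunk sizes).
            encoding = {k: v for k, v in encoding.items()
                        if k not in NETCDF4_ONLY_ENCODINGS}
        # After that first indexing, all remaining encodings persist.
        return ToyVariable(self.data[key], encoding, loaded_from_netcdf4=False)

var = ToyVariable(np.arange(10), {"chunksizes": (10,), "scale_factor": 0.1},
                  loaded_from_netcdf4=True)
print(var[:3].encoding)       # {'scale_factor': 0.1}
print(var[:3][:2].encoding)   # still {'scale_factor': 0.1}
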
id: 46204269
html_url: https://github.com/pydata/xarray/pull/163#issuecomment-46204269
issue_url: https://api.github.com/repos/pydata/xarray/issues/163
node_id: MDEyOklzc3VlQ29tbWVudDQ2MjA0MjY5
user: shoyer (1217238)
created_at: 2014-06-16T16:54:48Z
updated_at: 2014-06-16T16:54:48Z
author_association: MEMBER
body:

The other concern would be performance -- any logic related to persisting encodings would be invoked every time an array is indexed, an operation that is quite likely to take place in an inner loop. Hence, if possible, I would like to only do this sort of logic when actually persisting a dataset to disk.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: BUG: fix encoding issues (array indexing now resets encoding) (35762823)
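
A rough sketch of the alternative described in that comment, assuming hypothetical helper names: leave encodings untouched during indexing and filter them only on the comparatively rare write path:

# Hypothetical write-path filtering; the function names and the set of keys
# a backend accepts are assumptions for illustration.
def sanitize_encoding(encoding, accepted_keys):
    """Drop encoding entries the target file format cannot honor."""
    return {k: v for k, v in encoding.items() if k in accepted_keys}

def write_variable(data, encoding, store,
                   accepted_keys=frozenset({"dtype", "_FillValue", "scale_factor",
                                            "add_offset", "units", "calendar"})):
    # Indexing stays cheap because no encoding logic runs per __getitem__;
    # the cost is paid once here, when the dataset is persisted to disk.
    store.append((data, sanitize_encoding(encoding, accepted_keys)))

store = []
write_variable([1, 2, 3], {"chunksizes": (3,), "units": "m"}, store)
print(store)   # [([1, 2, 3], {'units': 'm'})]
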
id: 46147928
html_url: https://github.com/pydata/xarray/pull/163#issuecomment-46147928
issue_url: https://api.github.com/repos/pydata/xarray/issues/163
node_id: MDEyOklzc3VlQ29tbWVudDQ2MTQ3OTI4
user: shoyer (1217238)
created_at: 2014-06-16T07:22:38Z
updated_at: 2014-06-16T07:22:38Z
author_association: MEMBER
body:

Not particularly -- except that it would have been slightly more complicated to implement.

I'm open to it, if you would like to suggest a whitelist or blacklist.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: BUG: fix encoding issues (array indexing now resets encoding) (35762823)

id: 46146675
html_url: https://github.com/pydata/xarray/pull/163#issuecomment-46146675
issue_url: https://api.github.com/repos/pydata/xarray/issues/163
node_id: MDEyOklzc3VlQ29tbWVudDQ2MTQ2Njc1
user: akleeman (514053)
created_at: 2014-06-16T07:02:26Z
updated_at: 2014-06-16T07:02:26Z
author_association: CONTRIBUTOR
body:

Is there a reason why we don't just have it remove problematic encodings? Some encodings are certainly nice to persist (fill value, scale, offset, time units, etc.).

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: BUG: fix encoding issues (array indexing now resets encoding) (35762823)
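
A small sketch of that suggestion in its blacklist form: drop only the encodings assumed to become invalid after indexing, keeping things like fill value, scale/offset, and time units. The set of problematic keys is an illustrative guess, not xarray's actual list:

# Hypothetical blacklist of encodings that stop making sense once an array
# has been indexed (illustrative only).
PROBLEMATIC_ENCODINGS = {"chunksizes", "contiguous", "original_shape"}

def drop_problematic_encodings(encoding):
    """Remove only the encodings that no longer apply after indexing."""
    return {k: v for k, v in encoding.items() if k not in PROBLEMATIC_ENCODINGS}

print(drop_problematic_encodings({
    "chunksizes": (1000,),
    "_FillValue": -9999,
    "scale_factor": 0.1,
    "units": "days since 2000-01-01",
}))
# {'_FillValue': -9999, 'scale_factor': 0.1, 'units': 'days since 2000-01-01'}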

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
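
For reference, a short sketch of reproducing this page's query (4 rows where issue = 35762823, sorted by updated_at descending) against the underlying SQLite database; the filename github.db is an assumption:

import sqlite3

# Assumes the Datasette instance is backed by a local "github.db" file.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 35762823
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user_id, created, updated, association, body in rows:
    print(comment_id, updated, association)
conn.close()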