issue_comments

4 rows where issue = 28445412 sorted by updated_at descending
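
The query behind this view can be approximated directly against the issue_comments table defined at the bottom of this page. This is a sketch of the equivalent statement, not necessarily the exact SQL Datasette generates:

select *
from issue_comments
where [issue] = 28445412
order by updated_at desc;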

Facets for these rows:

user (3 values)
  • shoyer 2
  • akleeman 1
  • ebrevdo 1

author_association (2 values)
  • CONTRIBUTOR 2
  • MEMBER 2

issue (1 value)
  • Allow the ability to add/persist details of how a dataset is stored. 4
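
Each facet is just a count over the same filtered rows. A sketch of the user facet as a GROUP BY (this groups by the numeric user id stored in the table; resolving ids to login names would require joining the users table referenced in the schema below):

select [user], count(*) as n
from issue_comments
where [issue] = 28445412
group by [user]
order by n desc;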
id: 36477970
html_url: https://github.com/pydata/xarray/issues/26#issuecomment-36477970
issue_url: https://api.github.com/repos/pydata/xarray/issues/26
node_id: MDEyOklzc3VlQ29tbWVudDM2NDc3OTcw
user: shoyer (1217238)
created_at: 2014-03-03T02:54:16Z
updated_at: 2014-03-03T02:54:16Z
author_association: MEMBER
body:
    I'm marking this issue as "Closed" for now since #20 added the "encoding" attribute to XArrays. But we could certainly do more to make sure of encodings (e.g., for per-variable compression).
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Allow the ability to add/persist details of how a dataset is stored. (28445412)

id: 36280018
html_url: https://github.com/pydata/xarray/issues/26#issuecomment-36280018
issue_url: https://api.github.com/repos/pydata/xarray/issues/26
node_id: MDEyOklzc3VlQ29tbWVudDM2MjgwMDE4
user: shoyer (1217238)
created_at: 2014-02-27T19:21:19Z
updated_at: 2014-02-27T19:22:00Z
author_association: MEMBER
body:
    Per our in-person discussion, I am a fan of this solution. The next version of #20 will include a preliminary version.

    My feeling is that non-relevant encoding details can be ignored if a format doesn't know what to do with them. "units" should only be moved to encoding (from attributes) if the XArray object really no longer has sensible units (i.e., it was decoded into a pandas.DatetimeIndex).
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Allow the ability to add/persist details of how a dataset is stored. (28445412)

id: 36279915
html_url: https://github.com/pydata/xarray/issues/26#issuecomment-36279915
issue_url: https://api.github.com/repos/pydata/xarray/issues/26
node_id: MDEyOklzc3VlQ29tbWVudDM2Mjc5OTE1
user: akleeman (514053)
created_at: 2014-02-27T19:20:25Z
updated_at: 2014-02-27T19:20:25Z
author_association: CONTRIBUTOR
body:
    Yeah I think keeping them transparent to the user except when reading/writing is the way to go. Two datasets with the same data but different encodings should still be equal when compared, and operations beyond slicing should probably destroy encodings. Not sure how to handle the various file formats, like you said it could be all part of the store, or we could just throw warnings/fail if encodings aren't feasible.
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Allow the ability to add/persist details of how a dataset is stored. (28445412)

id: 36279285
html_url: https://github.com/pydata/xarray/issues/26#issuecomment-36279285
issue_url: https://api.github.com/repos/pydata/xarray/issues/26
node_id: MDEyOklzc3VlQ29tbWVudDM2Mjc5Mjg1
user: ebrevdo (1794715)
created_at: 2014-02-27T19:14:56Z
updated_at: 2014-02-27T19:14:56Z
author_association: CONTRIBUTOR
body:
    Some of these are specific to the datastore. nc3/nc4 may care about integer packing and masking, but grib format may not. maybe that's where these things should really reside. as aspects of the datastore object. not sure about units though. either way, ideally these would be transparent to the user of the xarray/dataset objects, except as parameters when reading/writing?
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Allow the ability to add/persist details of how a dataset is stored. (28445412)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
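
Because [user] and [issue] are declared as foreign keys, comments can be joined back to their author records. A sketch, assuming the users table (whose schema is not shown on this page) has a login column:

select
    issue_comments.id,
    users.login,
    issue_comments.created_at,
    issue_comments.body
from issue_comments
join users on users.id = issue_comments.[user]
where issue_comments.[issue] = 28445412
order by issue_comments.updated_at desc;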