issue_comments

1 row where issue = 715374721 and user = 226037 sorted by updated_at descending

id: 705594621
html_url: https://github.com/pydata/xarray/issues/4490#issuecomment-705594621
issue_url: https://api.github.com/repos/pydata/xarray/issues/4490
node_id: MDEyOklzc3VlQ29tbWVudDcwNTU5NDYyMQ==
user: alexamici (226037)
created_at: 2020-10-08T14:06:35Z
updated_at: 2020-10-08T14:06:35Z
author_association: MEMBER
issue: Group together decoding options into a single argument (715374721)
reactions: none

@shoyer I favour option 2 as a stable solution, possibly with all arguments moved to keyword-only ones:

  • users don't need to import an additional class
  • users get argument completion on open_dataset
  • xarray does validation and mangling in the class and passes only the non-default values to the backends
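The keyword-only variant of option 2 could be sketched roughly like this (the option names, defaults, and validation logic below are illustrative, not the actual xarray API):

```python
# Hypothetical sketch of "option 2" with keyword-only decoding arguments:
# xarray validates the options and forwards only the non-default values
# to the backend. Names and defaults are illustrative.
_DEFAULTS = {"mask_and_scale": True, "decode_times": True, "decode_coords": True}

def open_dataset(filename, *, engine=None, **decode_options):
    unknown = set(decode_options) - set(_DEFAULTS)
    if unknown:
        raise TypeError(f"unknown decoding options: {sorted(unknown)}")
    # pass to the backend only the options the user actually changed
    non_default = {k: v for k, v in decode_options.items() if v != _DEFAULTS[k]}
    return non_default  # a real implementation would call the backend here

print(open_dataset("file.nc", decode_times=False))  # {'decode_times': False}
```

Because the options are keyword-only, a typo raises a TypeError at the call site rather than being silently forwarded.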

I'm for using the words decode/decoding but I'm against the use of CF.

What a backend does is map the on-disk representation of the data (typically optimised for space) to the in-memory representation preferred by xarray (typically optimised for ease of use). This mapping is especially tricky for time-like variables.

CF is one possible way to specify the map, but it is not the only one. Both the GRIB format and all the spatial formats supported by GDAL/rasterio can encode rich data and decoding has (typically) nothing to do with the CF conventions.

My preferred meaning for the decode_* options is:

  • True: the backend attempts to map the data to xarray's natural data types (np.datetime64, np.float with mask and scale)
  • False: the backend attempts to return a representation of the data as close as possible to the on-disk one
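A toy sketch of these two behaviours for a mask-and-scale variable (the function name and arguments are hypothetical, not any real backend's API):

```python
import numpy as np

# With decode=True, raw on-disk integers (with a fill value and scale factor)
# are mapped to xarray's natural type: np.float with NaN as the mask.
# With decode=False, the raw on-disk values are returned unchanged.
def read_variable(raw, fill_value, scale_factor, decode=True):
    if not decode:
        return raw  # on-disk representation, as close as possible
    data = raw.astype("float64")
    data[raw == fill_value] = np.nan  # apply the mask
    return data * scale_factor       # apply the scale

raw = np.array([10, 20, -999], dtype="int16")
print(read_variable(raw, fill_value=-999, scale_factor=0.1))  # 1.0, 2.0, nan
print(read_variable(raw, fill_value=-999, scale_factor=0.1, decode=False))
```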

Typically, when a user asks the backend not to decode, they intend to inspect the content of the data file to understand why something is not mapping in the expected way.

As an example: in the case of GRIB, time-like values are represented as integers like 20190101, but cfgrib is at the moment forced to convert them into a fake CF representation before passing them to xarray, so when using decode_times=False a GRIB user is presented with something that has nothing to do with the on-disk representation.
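For instance, a backend could map such an integer date straight to np.datetime64, and return the true on-disk integer when decoding is off (a simplified toy parser, not cfgrib's actual logic):

```python
import numpy as np

# Toy illustration of the GRIB case: on disk, a date is an integer like
# 20190101. With decode=True it is parsed directly into np.datetime64;
# with decode=False the user sees the real on-disk value.
def decode_grib_date(value, decode=True):
    if not decode:
        return value  # the true on-disk representation: 20190101
    s = str(value)
    return np.datetime64(f"{s[:4]}-{s[4:6]}-{s[6:8]}")

print(decode_grib_date(20190101))                # 2019-01-01
print(decode_grib_date(20190101, decode=False))  # 20190101
```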


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 20.707ms · About: xarray-datasette