
issue_comments

2 rows where issue = 503578688 sorted by updated_at descending

id: 541315926
html_url: https://github.com/pydata/xarray/issues/3378#issuecomment-541315926
issue_url: https://api.github.com/repos/pydata/xarray/issues/3378
node_id: MDEyOklzc3VlQ29tbWVudDU0MTMxNTkyNg==
user: crusaderky (6213168)
created_at: 2019-10-12T11:27:52Z
updated_at: 2019-10-12T11:38:11Z
author_association: MEMBER
body:

https://docs.dask.org/en/latest/custom-collections.html#implementing-deterministic-hashing

```python
# Assumed imports for this sketch: normalize_token is dask's
# tokenization dispatcher from dask.base.
from dask.base import normalize_token
from xarray import DataArray, Dataset, IndexVariable, Variable

@normalize_token.register(Dataset)
def tokenize_dataset(ds):
    return Dataset, ds._variables, ds._coord_names, ds._attrs

@normalize_token.register(DataArray)
def tokenize_dataarray(da):
    return DataArray, da._variable, da._coords, da._name

# Note: the @singledispatch for IndexVariable must be defined
# before the one for Variable
@normalize_token.register(IndexVariable)
def tokenize_indexvariable(v):
    # Don't waste time converting pd.Index to np.ndarray
    return IndexVariable, v._dims, v._data.array, v._attrs

@normalize_token.register(Variable)
def tokenize_variable(v):
    # Note: it's v.data, not v._data, in order to cope with the
    # wrappers around NetCDF and the like
    return Variable, v._dims, v.data, v._attrs
```
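
With registrations like these in place, `dask.base.tokenize` should return the same token for the same object; a minimal usage sketch (the example dataset is hypothetical, and assumes the registrations above are active):

```python
import dask.base
import xarray as xr

# Hypothetical example dataset
ds = xr.Dataset({"v": ("x", [1, 2, 3])}, attrs={"units": "m"})
# Tokenizing the same object twice yields the same deterministic token
assert dask.base.tokenize(ds) == dask.base.tokenize(ds)
```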

You'll need to write a dummy normalize_token for when dask is not installed.
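
A minimal sketch of that fallback, assuming the registrations live in a module that must import cleanly without dask:

```python
try:
    from dask.base import normalize_token
except ImportError:
    import functools

    # Dummy dispatcher: keeps the @normalize_token.register
    # decorators valid when dask is not installed.
    @functools.singledispatch
    def normalize_token(obj):
        return obj
```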

Unit tests (the first two are sketched below):
- running tokenize() twice on the same object returns the same result
- changing the content of a data_var (or the variable, for a DataArray) changes the output
- changing the content of a coord changes the output
- changing attrs, name, or dimension names changes the output
- whether a variable is a data_var or a coord changes the output
- dask arrays aren't computed
- non-numpy, non-dask NEP18 data is not converted to numpy
- works with xarray's fancy wrappers around NetCDF and the like
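
A sketch of how the first two checks might look with pytest (test names are hypothetical):

```python
import dask.base
import xarray as xr

def test_tokenize_twice_matches():
    ds = xr.Dataset({"v": ("x", [1, 2, 3])})
    assert dask.base.tokenize(ds) == dask.base.tokenize(ds)

def test_tokenize_detects_data_var_change():
    ds = xr.Dataset({"v": ("x", [1, 2, 3])})
    before = dask.base.tokenize(ds)
    ds["v"].values[0] = 99  # mutate the data_var's content in place
    assert dask.base.tokenize(ds) != before
```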

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: implement normalize_token (503578688)

id: 541121326
html_url: https://github.com/pydata/xarray/issues/3378#issuecomment-541121326
issue_url: https://api.github.com/repos/pydata/xarray/issues/3378
node_id: MDEyOklzc3VlQ29tbWVudDU0MTEyMTMyNg==
user: dcherian (2448579)
created_at: 2019-10-11T15:55:23Z
updated_at: 2019-10-11T15:55:23Z
author_association: MEMBER
body:

How should this be implemented?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: implement normalize_token (503578688)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
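
For reference, the row selection shown above (2 rows where issue = 503578688, sorted by updated_at descending) can be reproduced against this schema; a minimal sqlite3 sketch, with a hypothetical database filename:

```python
import sqlite3

# "github.db" is a hypothetical filename for this Datasette database
conn = sqlite3.connect("github.db")
rows = conn.execute(
    "select id, user, created_at, updated_at, body "
    "from issue_comments where issue = ? "
    "order by updated_at desc",
    (503578688,),
).fetchall()
for row in rows:
    print(row[:4])  # id, user, created_at, updated_at
```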