
issue_comments


1 row where author_association = "MEMBER", issue = 323839238 and user = 1217238 sorted by updated_at descending

id: 391114129
html_url: https://github.com/pydata/xarray/issues/2145#issuecomment-391114129
issue_url: https://api.github.com/repos/pydata/xarray/issues/2145
node_id: MDEyOklzc3VlQ29tbWVudDM5MTExNDEyOQ==
user: shoyer (1217238)
created_at: 2018-05-22T19:34:48Z
updated_at: 2018-05-22T19:47:09Z
author_association: MEMBER
body:

This is not really desirable behavior, but it's an implication of how xarray implements `ds.resample(time='1M').mean()`:

  • Resample is converted into a groupby call, e.g., `ds.groupby(time_starts).mean('time')`
  • `.mean('time')` for each grouped dataset averages over the `'time'` dimension, resulting in a dataset with only a `'space'` dimension, e.g.,

```
>>> list(ds.resample(time='1M'))[0][1].mean('time')
<xarray.Dataset>
Dimensions:        (space: 10)
Coordinates:
  * space          (space) int64 0 1 2 3 4 5 6 7 8 9
Data variables:
    var_withtime1  (space) float64 0.008982 -0.09879 0.1361 -0.2485 -0.023 ...
    var_withtime2  (space) float64 0.2621 0.06009 -0.1686 0.07397 0.1095 ...
    var_timeless1  (space) float64 0.8519 -0.4253 -0.8581 0.9085 -0.4797 ...
    var_timeless2  (space) float64 0.8006 1.954 -0.5349 0.3317 1.778 -0.7954 ...
```

  • `concat()` is used to combine grouped datasets into the final result, but it doesn't know anything about which variables were aggregated, so every data variable gets the "time" dimension added.
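The mechanics above can be sketched with plain NumPy (a minimal illustration of the split/reduce/recombine pattern, not xarray's actual implementation): reducing each group over time and then stacking the per-group results reintroduces a leading group dimension for every array, including ones that never had a time axis.

```python
import numpy as np

# Hypothetical data: one time-dependent variable and one "timeless" variable.
var_withtime = np.random.rand(6, 10)   # dims: (time, space)
var_timeless = np.random.rand(10)      # dims: (space,)

# Resample-as-groupby: split the time axis into two groups of 3.
groups = np.split(var_withtime, 2, axis=0)

# Each group is averaged over 'time', leaving only 'space'.
means = [g.mean(axis=0) for g in groups]        # each has shape (10,)

# Recombining the groups adds the new resampled 'time' dimension back --
# and a concat that doesn't track which variables were aggregated does
# the same to the timeless variable by broadcasting it into each group.
result_withtime = np.stack(means)               # shape (2, 10) -- expected
result_timeless = np.stack([var_timeless] * 2)  # shape (2, 10) -- unwanted!
```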

To fix this I would suggest three steps:

1. Add a `keep_dims` argument to xarray reductions like `mean()`, indicating that a dimension should be preserved with length 1, like `keepdims=True` for numpy reductions (https://github.com/pydata/xarray/issues/2170).
2. Fix `concat` to only concatenate variables that already have the concatenated dimension, as discussed in https://github.com/pydata/xarray/issues/2064.
3. Use `keep_dims=True` in groupby reductions. Then the result should automatically only include aggregated dimensions. This would conveniently allow us to remove existing logic in `groupby()` for restoring the original order of aggregated dimensions (see `_restore_dim_order()`).
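Step 1 mirrors NumPy's existing `keepdims` option, which preserves the reduced axis with length 1 so downstream code can still tell which dimension was aggregated:

```python
import numpy as np

a = np.random.rand(4, 10)            # dims: (time, space)

# An ordinary reduction drops the 'time' axis entirely.
dropped = a.mean(axis=0)             # shape (10,)

# keepdims=True preserves the reduced axis with length 1 -- the behavior
# the proposed keep_dims argument would bring to xarray reductions.
kept = a.mean(axis=0, keepdims=True) # shape (1, 10)
```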

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Dataset.resample() adds time dimension to independant variables (323839238)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 4013.489ms · About: xarray-datasette