
issue_comments

5 rows where issue = 90658514 sorted by updated_at descending

Facets:
  • user: shoyer (3), j08lue (2)
  • author_association: MEMBER (3), CONTRIBUTOR (2)
  • issue: NetCDF attributes like `long_name` and `units` lost on `.mean()` (5)
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sorted descending), author_association, body, reactions, performed_via_github_app, issue
id: 115561175 · user: j08lue (3404817) · author_association: CONTRIBUTOR
created_at: 2015-06-26T07:28:54Z · updated_at: 2015-06-26T07:28:54Z
html_url: https://github.com/pydata/xarray/issues/442#issuecomment-115561175
issue_url: https://api.github.com/repos/pydata/xarray/issues/442
node_id: MDEyOklzc3VlQ29tbWVudDExNTU2MTE3NQ==

That makes sense. Great that there is an option to keep_attrs. Closing this issue.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: NetCDF attributes like `long_name` and `units` lost on `.mean()` (90658514)
id: 115356215 · user: shoyer (1217238) · author_association: MEMBER
created_at: 2015-06-25T18:30:45Z · updated_at: 2015-06-25T18:50:08Z
html_url: https://github.com/pydata/xarray/issues/442#issuecomment-115356215
issue_url: https://api.github.com/repos/pydata/xarray/issues/442
node_id: MDEyOklzc3VlQ29tbWVudDExNTM1NjIxNQ==

Ah. So this is intentional. There is an optional parameter that lets you control this -- try .mean(keep_attrs=True).

The basic problem is that it's ambiguous how to handle attributes like units after doing computation. I don't want to inspect attributes and choose some to preserve and others to remove, so we have a choice of either preserving all attributes in an operation or removing all of them.

Obviously, for some aggregations (e.g., sum or var) it doesn't make sense to preserve attributes (which commonly include units). I suppose we could make an exception for aggregations like mean/median/std, but it's also weird to have some aggregations that preserve attributes and others that don't.

reactions:
{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: NetCDF attributes like `long_name` and `units` lost on `.mean()` (90658514)
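
A minimal sketch (not part of the thread) of the behaviour described in the comment above: xarray aggregations drop attrs by default, and passing keep_attrs=True preserves them. The variable names and values are made up for illustration.

import numpy as np
import xarray as xr

# A DataArray carrying CF-style metadata in .attrs
da = xr.DataArray(
    np.random.rand(4, 3),
    dims=("time", "x"),
    attrs={"long_name": "sea surface temperature", "units": "degC"},
)

print(da.mean(dim="time").attrs)                   # {} -- attributes dropped by default
print(da.mean(dim="time", keep_attrs=True).attrs)  # long_name and units preserved
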
id: 115177414 · user: j08lue (3404817) · author_association: CONTRIBUTOR
created_at: 2015-06-25T09:10:26Z · updated_at: 2015-06-25T09:10:38Z
html_url: https://github.com/pydata/xarray/issues/442#issuecomment-115177414
issue_url: https://api.github.com/repos/pydata/xarray/issues/442
node_id: MDEyOklzc3VlQ29tbWVudDExNTE3NzQxNA==

Sorry for the confusion! The loss of attributes actually occurs when applying .mean() (rather than .load()).

See this notebook (same in nbviewer) for an example with some opendap-hosted data.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: NetCDF attributes like `long_name` and `units` lost on `.mean()` (90658514)
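
A small self-contained check (not from the thread) of the distinction drawn above, assuming a NetCDF backend such as netCDF4 or scipy is installed; the file name example.nc is arbitrary. Opening and loading keep variable attributes, while the aggregation drops them.

import numpy as np
import xarray as xr

# Write a tiny dataset whose variable carries long_name/units attributes
ds = xr.Dataset(
    {"sst": (("time",), np.arange(5.0),
             {"long_name": "sea surface temperature", "units": "degC"})}
)
ds.to_netcdf("example.nc")

reopened = xr.open_dataset("example.nc")
print(reopened["sst"].attrs)          # attrs intact after open_dataset
print(reopened.load()["sst"].attrs)   # still intact after .load()
print(reopened["sst"].mean().attrs)   # {} -- dropped by .mean() without keep_attrs
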
id: 114969596 · user: shoyer (1217238) · author_association: MEMBER
created_at: 2015-06-24T18:20:57Z · updated_at: 2015-06-24T18:20:57Z
html_url: https://github.com/pydata/xarray/issues/442#issuecomment-114969596
issue_url: https://api.github.com/repos/pydata/xarray/issues/442
node_id: MDEyOklzc3VlQ29tbWVudDExNDk2OTU5Ng==

Could you post an example dataset/code for which this occurs? I'm struggling to reproduce this.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: NetCDF attributes like `long_name` and `units` lost on `.mean()` (90658514)
id: 114949306 · user: shoyer (1217238) · author_association: MEMBER
created_at: 2015-06-24T17:32:05Z · updated_at: 2015-06-24T17:32:05Z
html_url: https://github.com/pydata/xarray/issues/442#issuecomment-114949306
issue_url: https://api.github.com/repos/pydata/xarray/issues/442
node_id: MDEyOklzc3VlQ29tbWVudDExNDk0OTMwNg==

Hmm. This is definitely a bug -- load should preserve all metadata.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: NetCDF attributes like `long_name` and `units` lost on `.mean()` (90658514)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
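
For reference, a hypothetical Python snippet that reproduces this page's query ("5 rows where issue = 90658514 sorted by updated_at descending") against the underlying SQLite database; the file name github.db is an assumption.

import sqlite3

conn = sqlite3.connect("github.db")
query = """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 90658514
    ORDER BY updated_at DESC
"""
for row in conn.execute(query):
    print(row)
conn.close()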