
issue_comments


2 rows where author_association = "NONE" and issue = 28575097 sorted by updated_at descending


Comment 36472462 · https://github.com/pydata/xarray/issues/32#issuecomment-36472462
ms8r (6509590) · created/updated 2014-03-02T23:55:51Z · author_association: NONE · issue: Dataset.__delitem__() kills dimensions dictionary (28575097)

Many thanks! I hadn't realized the indexed_by behavior for integer indexers - that's great. In that case my __delitem__ suggestion becomes superfluous anyway, since what I described can apparently be achieved with indexed_by.
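The integer-indexer behavior mentioned above can be illustrated in plain NumPy (the array and its shape are hypothetical, and this assumes indexed_by follows NumPy's integer-indexing semantics):

```python
import numpy as np

# Hypothetical 3-D field with a single value along the middle axis.
field = np.zeros((4, 1, 10))

# An integer index drops the indexed axis entirely...
print(field[:, 0, :].shape)    # (4, 10)

# ...while a length-1 slice keeps it.
print(field[:, 0:1, :].shape)  # (4, 1, 10)
```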

The new polyglot aka xray looks very impressive - and the name is cool...

Comment 36467822 · https://github.com/pydata/xarray/issues/32#issuecomment-36467822
ms8r (6509590) · created/updated 2014-03-02T21:33:17Z · author_association: NONE · issue: Dataset.__delitem__() kills dimensions dictionary (28575097)

Many thanks for the clarification. np.squeeze was used in slocum to remove a dimension that had been shrunk down to one value (originally via views, now via indexed_by). The idea was to make the resulting Dataset as small as possible before dumping it and sending it over a very low-bandwidth email link. If a dimension with only one element (like height_above_ground in the example) is negligible in terms of size impact, it's not worth the trouble with np.squeeze; otherwise it would be nice to have it back. Thanks in any case.
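The np.squeeze step described above can be sketched as follows (the array and its shape are hypothetical, chosen to mirror the height_above_ground example):

```python
import numpy as np

# Hypothetical forecast array: (time, height_above_ground, station),
# where height_above_ground has been shrunk down to one value.
temps = np.zeros((4, 1, 10))

# np.squeeze removes every length-1 axis, shrinking the array
# before it is dumped and sent over the low-bandwidth link.
small = np.squeeze(temps)
print(small.shape)  # (4, 10)
```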

I think the right behavior would be to delete every variable that uses the dimension.

How about deleting the dimension from every variable that uses it, keeping only index 0 if there were multiple values along that dimension? That would seem closer to what happens in an n-dimensional coordinate system if I get rid of one axis.
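A minimal NumPy sketch of this proposal, applied to a single hypothetical variable (np.take here stands in for whatever Dataset-level operation would implement it):

```python
import numpy as np

# Hypothetical variable with 3 values along the axis to be deleted (axis 1).
var = np.arange(24).reshape(2, 3, 4)

# Keep only index 0 along that axis; the axis itself disappears.
reduced = np.take(var, 0, axis=1)
print(reduced.shape)  # (2, 4)
```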


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
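The filter shown at the top of the page (rows where author_association = "NONE" and issue = 28575097, sorted by updated_at descending) can be reproduced against this schema; the sketch below uses an in-memory SQLite copy of the table, trimmed to the columns the query needs and seeded with the two rows above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Trimmed-down issue_comments table (subset of the schema above).
con.execute(
    "CREATE TABLE issue_comments ("
    " [id] INTEGER PRIMARY KEY, [author_association] TEXT,"
    " [updated_at] TEXT, [issue] INTEGER)"
)
con.executemany(
    "INSERT INTO issue_comments VALUES (?, ?, ?, ?)",
    [
        (36472462, "NONE", "2014-03-02T23:55:51Z", 28575097),
        (36467822, "NONE", "2014-03-02T21:33:17Z", 28575097),
    ],
)
# The page's filter: ISO-8601 timestamps sort correctly as text.
rows = con.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'NONE' AND issue = 28575097"
    " ORDER BY updated_at DESC"
).fetchall()
print([r[0] for r in rows])  # [36472462, 36467822]
```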
Powered by Datasette · Queries took 14.513ms · About: xarray-datasette