issue_comments
5 rows where author_association = "MEMBER", issue = 314764258 and user = 2448579 sorted by updated_at descending
Each record below lists the columns id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app and issue.
id: 531818131
html_url: https://github.com/pydata/xarray/issues/2064#issuecomment-531818131
issue_url: https://api.github.com/repos/pydata/xarray/issues/2064
node_id: MDEyOklzc3VlQ29tbWVudDUzMTgxODEzMQ==
user: dcherian 2448579
created_at: 2019-09-16T15:03:12Z
updated_at: 2019-09-16T15:03:12Z
author_association: MEMBER
body: #3239 has been merged. What's left now is to change the defaults to implement @shoyer's comment.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: concat_dim getting added to *all* variables of multifile datasets 314764258
id: 524021001
html_url: https://github.com/pydata/xarray/issues/2064#issuecomment-524021001
issue_url: https://api.github.com/repos/pydata/xarray/issues/2064
node_id: MDEyOklzc3VlQ29tbWVudDUyNDAyMTAwMQ==
user: dcherian 2448579
created_at: 2019-08-22T18:22:37Z
updated_at: 2019-08-22T18:22:37Z
author_association: MEMBER
body: Thanks for your input @bonnland. We do have a [...]. What's under discussion here is what to do about variables duplicated across datasets, or indeed, how we know that these variables are duplicated across datasets when concatenating other variables.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: concat_dim getting added to *all* variables of multifile datasets 314764258
id: 523960862
html_url: https://github.com/pydata/xarray/issues/2064#issuecomment-523960862
issue_url: https://api.github.com/repos/pydata/xarray/issues/2064
node_id: MDEyOklzc3VlQ29tbWVudDUyMzk2MDg2Mg==
user: dcherian 2448579
created_at: 2019-08-22T15:42:10Z
updated_at: 2019-08-22T15:42:10Z
author_association: MEMBER
body: I have a draft solution in #3239. It adds a new mode called "sensible" that acts like "all" when the concat dimension doesn't exist in the dataset and like "minimal" when the dimension is present. We can decide whether this is the right way (i.e. adding a new mode), but the more fundamental problem is below. The issue is dealing with variables that should not be concatenated in "minimal" mode (e.g. time-invariant non-dim coords when concatenating in time). In this case, we want to skip the equality checks in [...]. I thought the clean way to do this would be to add the [...]. However, [...]. So do we want to support all the other [...]? @shoyer What do you think?
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: concat_dim getting added to *all* variables of multifile datasets 314764258
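The trade-off this comment describes can be sketched with xarray's public `xr.concat` API. This is a minimal, hypothetical example (the names `temp`, `time` and `lon` are invented): it shows the existing `coords="minimal"` vs `coords="all"` behaviour for a time-invariant coordinate, not the draft "sensible" mode from #3239.

```python
# Sketch of the behaviour under discussion: concatenating two datasets that
# share a time-invariant scalar coordinate ("lon") along "time".
# Variable names are invented for illustration.
import xarray as xr

ds1 = xr.Dataset(
    {"temp": ("time", [1.0, 2.0])},
    coords={"time": [0, 1], "lon": 10.0},
)
ds2 = xr.Dataset(
    {"temp": ("time", [3.0, 4.0])},
    coords={"time": [2, 3], "lon": 10.0},
)

# coords="minimal": only coordinates that already carry the concat dimension
# are concatenated; "lon" is checked for equality and kept as a scalar.
minimal = xr.concat([ds1, ds2], dim="time", coords="minimal")
print(minimal["lon"].ndim)   # lon stays scalar

# coords="all": every coordinate, including "lon", gains the "time"
# dimension -- the surprising behaviour this issue is about.
all_mode = xr.concat([ds1, ds2], dim="time", coords="all")
print(all_mode["lon"].dims)
```

With `coords="all"`, the time-invariant `lon` is duplicated once per input dataset along the new dimension, which is exactly the "concat_dim getting added to *all* variables" symptom in the issue title.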
id: 519149757
html_url: https://github.com/pydata/xarray/issues/2064#issuecomment-519149757
issue_url: https://api.github.com/repos/pydata/xarray/issues/2064
node_id: MDEyOklzc3VlQ29tbWVudDUxOTE0OTc1Nw==
user: dcherian 2448579
created_at: 2019-08-07T15:32:16Z
updated_at: 2019-08-07T15:32:16Z
author_association: MEMBER
body: I'm in favour of this. What should we name this mode? One comment on "existing dimensions" mode: for variables without the dimension, this will still raise a [...].
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: concat_dim getting added to *all* variables of multifile datasets 314764258
id: 511468454
html_url: https://github.com/pydata/xarray/issues/2064#issuecomment-511468454
issue_url: https://api.github.com/repos/pydata/xarray/issues/2064
node_id: MDEyOklzc3VlQ29tbWVudDUxMTQ2ODQ1NA==
user: dcherian 2448579
created_at: 2019-07-15T16:15:51Z
updated_at: 2019-07-15T16:15:51Z
author_association: MEMBER
body: @bonnland I don't think you want to change the default [...].
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: concat_dim getting added to *all* variables of multifile datasets 314764258
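The proposed "sensible" mode from comment 523960862 boils down to a per-variable decision: concatenate a variable only if it already carries the concat dimension, otherwise keep a single copy and check the copies for equality. The toy, dependency-free sketch below illustrates that rule; the function name is invented and this is not xarray's actual implementation.

```python
# Toy sketch of the per-variable rule behind the proposed "sensible" mode.
# Illustrative only; NOT xarray's real internal logic.

def sensible_action(var_dims, concat_dim):
    """Return what to do with one variable when concatenating along concat_dim."""
    if concat_dim in var_dims:
        return "concatenate"         # behaves like "all" for this variable
    return "compare-and-keep-one"    # behaves like "minimal": no new dim added

# A time-varying variable gets concatenated; a time-invariant one does not,
# avoiding the issue title's symptom of concat_dim being added everywhere.
print(sensible_action(("time", "lat"), "time"))  # concatenate
print(sensible_action((), "time"))               # compare-and-keep-one
```

The open question in the thread is what the equality check should cost for the "compare-and-keep-one" branch, and whether it can be skipped for variables known to be time-invariant.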
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
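The filter that produced this page can be reproduced against the schema above with Python's stdlib sqlite3. This is a self-contained sketch: the inserted row uses values from the records above, and the foreign-key REFERENCES clauses are omitted so the snippet runs without the users and issues tables.

```python
# Reproduce this page's query: MEMBER comments by user 2448579 on issue
# 314764258, ordered by updated_at descending. REFERENCES clauses dropped
# so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (531818131, 2448579, '2019-09-16T15:03:12Z', 'MEMBER', 314764258)"
)
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = 314764258"
    " AND user = 2448579 ORDER BY updated_at DESC"
).fetchall()
print(rows)  # [(531818131,)]
```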