issue_comments
5 rows where issue = 157545837 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
457945747 | https://github.com/pydata/xarray/issues/862#issuecomment-457945747 | https://api.github.com/repos/pydata/xarray/issues/862 | MDEyOklzc3VlQ29tbWVudDQ1Nzk0NTc0Nw== | stale[bot] 26384082 | 2019-01-27T19:18:39Z | 2019-01-27T19:18:39Z | NONE | In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here; otherwise it will be marked as closed automatically | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | decode_cf not concatenating string arrays 157545837 |
224949305 | https://github.com/pydata/xarray/issues/862#issuecomment-224949305 | https://api.github.com/repos/pydata/xarray/issues/862 | MDEyOklzc3VlQ29tbWVudDIyNDk0OTMwNQ== | mogismog 6079398 | 2016-06-09T16:26:36Z | 2016-06-09T16:26:36Z | NONE | Yeah, that's a fair point. I'll put together something that uses an optional list of dimensions to concatenate over. Thanks! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | decode_cf not concatenating string arrays 157545837 |
224920993 | https://github.com/pydata/xarray/issues/862#issuecomment-224920993 | https://api.github.com/repos/pydata/xarray/issues/862 | MDEyOklzc3VlQ29tbWVudDIyNDkyMDk5Mw== | shoyer 1217238 | 2016-06-09T14:55:24Z | 2016-06-09T14:55:24Z | MEMBER | This seems a little too magical to me. How would we know if the dataset dimension was added intentionally or not? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | decode_cf not concatenating string arrays 157545837 |
224787831 | https://github.com/pydata/xarray/issues/862#issuecomment-224787831 | https://api.github.com/repos/pydata/xarray/issues/862 | MDEyOklzc3VlQ29tbWVudDIyNDc4NzgzMQ== | mogismog 6079398 | 2016-06-09T02:51:11Z | 2016-06-09T02:51:52Z | NONE | Hey @shoyer, Sorry for the delayed response. Passing a list of dimensions over which to concatenate over seems like it would be the easiest workaround with the fewest questions asked. As you mentioned, every dimension gets a variable by the time it is a dataset, so another option (that I'll admit I haven't thought all the way through and may not even work) would be to first check if Either way, I can put something together this week and open up a PR. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | decode_cf not concatenating string arrays 157545837 |
222873438 | https://github.com/pydata/xarray/issues/862#issuecomment-222873438 | https://api.github.com/repos/pydata/xarray/issues/862 | MDEyOklzc3VlQ29tbWVudDIyMjg3MzQzOA== | shoyer 1217238 | 2016-06-01T02:05:42Z | 2016-06-01T02:05:42Z | MEMBER | The heuristic we use for determining if you can concatenate over a dimension includes checking if it's included as a variable: https://github.com/pydata/xarray/blob/v0.7.2/xarray/conventions.py#L800 If a potential dummy dimension is also a variable, we don't concatenate over it. Of course, every dimension gets a variable by the time you've turned it into a dataset, so this never works on datasets, only data stores. I'm certainly open to ideas on how to improve this. Possibly accepting an explicit lists of dimensions to concatenate over (and remove) would be the way to go. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | decode_cf not concatenating string arrays 157545837 |
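The thread above describes `decode_cf` declining to concatenate CF-style character arrays once the data lives in a `Dataset`, because by that point the dummy string-length dimension is also known to the object (and, in xarray v0.7.x, was backed by a default coordinate variable). Below is a minimal sketch, not taken from the thread, of the situation being discussed; the names `station_name`, `station`, and `string4` are made up for illustration, and the non-concatenating outcome is the v0.7.2-era behaviour @shoyer describes, which may differ in later xarray releases.

```python
# Sketch of the behaviour discussed in this thread (xarray ~v0.7.2).
# Variable and dimension names here are illustrative only.
import numpy as np
import xarray as xr

# CF conventions store fixed-width strings as char arrays with a trailing
# string-length dimension (here "string4").
chars = np.array([list("ab  "), list("cd  ")], dtype="S1")
ds = xr.Dataset({"station_name": (("station", "string4"), chars)})

# On a raw data store, decode_cf(concat_characters=True) joins the characters
# along "string4". On a Dataset, every dimension is already known (and, in
# xarray v0.7.x, received a default coordinate variable), so the heuristic can
# treat "string4" as intentional and leave the characters unconcatenated.
decoded = xr.decode_cf(ds, concat_characters=True)
print(decoded["station_name"].dims)  # per the thread, "string4" may still be present
```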
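The check @shoyer points to in conventions.py can be paraphrased roughly as follows. This is a simplification for illustration, not the actual xarray source, and the function name is invented.

```python
# Rough paraphrase of the heuristic described above (not the real xarray code):
# a variable's last dimension is only treated as a dummy string-length
# dimension, and concatenated away, if that dimension is not itself a variable.
def looks_like_dummy_string_dim(var_dims, variable_names):
    if not var_dims:
        return False
    return var_dims[-1] not in variable_names

# Because a Dataset (in xarray v0.7.x) carried a variable for every dimension,
# this check always failed there, which is why concatenation "never works on
# datasets, only data stores".
```

The fix floated in the thread, accepting an explicit list of dimensions to concatenate over (and remove), would sidestep this guesswork entirely.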
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);