issue_comments
7 rows where issue = 788534915 and user = 35968931 sorted by updated_at descending
issue: combine_by_coords can succeed when it shouldn't (#4824, 788534915) · 7 comments
Comment 840680285 · TomNicholas (MEMBER) · 2021-05-13T16:35:16Z
https://github.com/pydata/xarray/issues/4824#issuecomment-840680285

Thanks @dcherian, that helps. It does seem silly to be like "I'm going to use the coordinates to decide how everything should be concatenated, but I'm also happy to change some of those coordinate values whilst I'm combining". Does that mean we should just not allow …
Comment 840632567 · TomNicholas (MEMBER) · 2021-05-13T15:18:20Z
https://github.com/pydata/xarray/issues/4824#issuecomment-840632567

I honestly don't really understand …
Comment 810476560 · TomNicholas (MEMBER) · 2021-03-30T18:20:16Z
https://github.com/pydata/xarray/issues/4824#issuecomment-810476560

Thanks @dcherian. So passing …

As for your other example, @mathause, of allowing certain ragged arrays to pass through: presumably we still need a new check of some kind to disallow that?
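The check asked for above could take several forms; a minimal sketch in plain Python, not xarray's internals (the helper name and the "list of coord values per dataset" layout are assumptions for illustration): flag the input as ragged when two datasets share a start value along the concat dimension but differ in length.

```python
# Hypothetical helper, not part of xarray's API: detect a "ragged
# hypercube", i.e. two datasets whose coord along the concat dimension
# starts at the same value but has a different length.

def has_ragged_rows(coords_per_dataset):
    """coords_per_dataset: one list of coord values per dataset."""
    length_by_start = {}
    for coord in coords_per_dataset:
        start, n = coord[0], len(coord)
        # The first dataset seen with this start fixes the expected length.
        if length_by_start.setdefault(start, n) != n:
            return True
    return False

# Two rows of equal length: a valid 2x2 hypercube.
print(has_ragged_rows([[0, 1], [2, 3], [0, 1], [2, 3]]))  # False
# Same start, different lengths: ragged.
print(has_ragged_rows([[0, 1, 2], [0, 1]]))  # True
```

Only the coord starts and lengths are inspected, so a check like this would stay cheap even for lazily loaded files.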
Comment 771583218 · TomNicholas (MEMBER) · 2021-02-02T11:52:57Z (edited 2021-02-02T12:02:14Z)
https://github.com/pydata/xarray/issues/4824#issuecomment-771583218

I don't actually know - this behaviour of …

Yes, exactly. Well, more specifically: "if they have the same start they need to be equal in length, otherwise it's a ragged hypercube; and if they have the same start and equal length but different values in the middle, then it's a valid hypercube but an inconsistent dataset". It is currently assumed that the user passes a valid hypercube with consistent data; see this comment in the source:

    Assume that any two datasets whose coord along dim starts with the same value have the same coord values throughout.

Though I realise now that I don't think this assumption is made explicit in the docs anywhere; instead they just talk about coords being monotonic.

If people pass "dodgy" hypercubes then this could currently fail in multiple ways (including silently), but the reason we didn't just check that the coords were completely equal throughout was that you would then have to load all the actual values from the files, which could incur a significant performance cost. Adding a check of just the last value of each coord would help considerably (it should solve #4077), but unless we check every value there will always be a way to silently produce a nonsense result by feeding it inconsistent data. We might consider some kind of flag for whether or not these checks should be done, which defaults to on, and which users can turn off if they trust their data but want more speed.
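The trade-off described above, comparing only the endpoints of each coord rather than every value, can be sketched as follows (the function name, argument layout, and `check_all` flag are illustrative assumptions, not xarray's actual API):

```python
def coords_consistent(coord_a, coord_b, check_all=False):
    """Consistency check between two datasets' coords along a dimension.

    By default only the endpoints are compared, which avoids loading all
    of the coordinate values; check_all=True trades speed for safety,
    since inconsistent interior values otherwise pass silently.
    """
    if len(coord_a) != len(coord_b):
        return False
    if check_all:
        return list(coord_a) == list(coord_b)
    # Cheap check: first and last value only.
    return coord_a[0] == coord_b[0] and coord_a[-1] == coord_b[-1]

# Same endpoints, different interior: the cheap check passes silently,
# and only the full check catches the inconsistency.
print(coords_consistent([0, 1, 2], [0, 5, 2]))                  # True
print(coords_consistent([0, 1, 2], [0, 5, 2], check_all=True))  # False
```

This makes the comment's point concrete: any scheme short of comparing every value leaves room for a silently nonsense result.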
Comment 770975217 · TomNicholas (MEMBER) · 2021-02-01T16:18:26Z
https://github.com/pydata/xarray/issues/4824#issuecomment-770975217

tl;dr: I currently think there are two issues: 1) …
Comment 770973948 · TomNicholas (MEMBER) · 2021-02-01T16:16:38Z
https://github.com/pydata/xarray/issues/4824#issuecomment-770973948

Thanks for these examples @mathause, these are useful.

Not sure if this is what you meant, but to be clear: …

As far as I can see here then …

This is a good question, but I'm 99% sure I didn't intend for either combine function to be able to handle this case. The problem with this case is that it's not order-invariant: you could concat … Try comparing it to the behaviour of …
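The order-dependence mentioned above can be seen with plain lists (a toy illustration of overlapping coords, not xarray code): concatenating in either order gives a different, non-monotonic result, so there is no unique combined dataset to return.

```python
# Two overlapping coordinate ranges, as might come from two datasets.
a, b = [0, 1, 2], [1, 2, 3]

ab, ba = a + b, b + a
print(ab)  # [0, 1, 2, 1, 2, 3]
print(ba)  # [1, 2, 3, 0, 1, 2]

def is_monotonic_increasing(seq):
    """True if every value is strictly less than the next."""
    return all(x < y for x, y in zip(seq, seq[1:]))

# Neither concatenation order yields monotonic coords.
print(is_monotonic_increasing(ab), is_monotonic_increasing(ba))  # False False
```

Since combine_by_coords uses the coords themselves to infer the order, a case where no order produces monotonic coords arguably has no well-defined answer.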
Comment 770469187 · TomNicholas (MEMBER) · 2021-01-31T23:15:15Z
https://github.com/pydata/xarray/issues/4824#issuecomment-770469187

Thanks @mathause. I'm not sure exactly how it ends up with that erroneous result, but I think it should be caught by adding the same check that would fix #4077? I.e. when it realises that 4 is not < 1e-06 it would throw an error.
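A sketch of the proposed check, in plain Python rather than xarray's internals (the function name and error wording are assumptions): after ordering datasets by the first value of their coord, each dataset's last value must come before the next dataset's first value, or the combined coord cannot be monotonic.

```python
# Hypothetical version of the check proposed above, not xarray's code.

def check_no_overlap(coords_per_dataset):
    """Raise if ordered datasets' coords overlap along the dimension."""
    ordered = sorted(coords_per_dataset, key=lambda c: c[0])
    for prev, nxt in zip(ordered, ordered[1:]):
        if not prev[-1] < nxt[0]:
            raise ValueError(
                f"resulting coords are not monotonic: "
                f"{prev[-1]} is not < {nxt[0]}"
            )

check_no_overlap([[1e-06, 2e-06], [5, 6]])  # passes silently

# The case from the comment: one coord ends at 4, the next starts at
# 1e-06, so the check raises instead of silently combining.
try:
    check_no_overlap([[0, 4], [1e-06, 2e-06]])
except ValueError as err:
    print(err)  # resulting coords are not monotonic: 4 is not < 1e-06
```

This is the endpoint comparison discussed earlier in the thread: cheap, because only the first and last value of each coord is ever touched.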