issue_comments
3 rows where issue = 1424732975 and user = 39069044 sorted by updated_at descending
id 1320983738 · node_id IC_kwDOAMm_X85OvJy6
https://github.com/pydata/xarray/pull/7229#issuecomment-1320983738
user: slevang (39069044) · author_association: CONTRIBUTOR
created_at: 2022-11-19T22:32:51Z · updated_at: 2022-11-19T22:32:51Z
issue: Fix coordinate attr handling in `xr.where(..., keep_attrs=True)` (1424732975) · reactions: none

Yeah I think this would be worth doing eventually. Trying to index a list of attributes of unpredictable length doesn't feel very xarray-like. Any further refinements to the current approach of reconstructing attributes after …
id 1306498356 · node_id IC_kwDOAMm_X85N35U0
https://github.com/pydata/xarray/pull/7229#issuecomment-1306498356
user: slevang (39069044) · author_association: CONTRIBUTOR
created_at: 2022-11-08T01:45:59Z · updated_at: 2022-11-08T01:45:59Z
issue: Fix coordinate attr handling in `xr.where(..., keep_attrs=True)` (1424732975) · reactions: none

The latest commit should do what we want, consistently taking attrs of `x`. The only way it deviates from this (spelled out in the tests) is to pull coord attrs from x, then y, then cond if any of these are scalars. I think this makes sense because if I pass …
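The attrs precedence described in that comment can be sketched as follows. This is an illustrative example, not code from the PR, and it assumes an xarray version in which the #7229 fix has landed (the toy arrays and `units` values are made up):

```python
import xarray as xr

x = xr.DataArray([1, 2], dims="t", attrs={"units": "K"})
y = xr.DataArray([3, 4], dims="t", attrs={"units": "degC"})
cond = xr.DataArray([True, False], dims="t")

# With keep_attrs=True, the result's attrs are taken from x, the first
# data argument, rather than being merged from all three inputs.
out = xr.where(cond, x, y, keep_attrs=True)
print(out.attrs)
```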
id 1304244938 · node_id IC_kwDOAMm_X85NvTLK
https://github.com/pydata/xarray/pull/7229#issuecomment-1304244938
user: slevang (39069044) · author_association: CONTRIBUTOR
created_at: 2022-11-04T20:55:02Z · updated_at: 2022-11-05T03:37:35Z
issue: Fix coordinate attr handling in `xr.where(..., keep_attrs=True)` (1424732975) · reactions: none

I considered the … As far as passing bare arrays, despite what the docstrings say it seems like you can actually do this with …:

```
<xarray.DataArray (x: 2)>
array([1, 2])
Coordinates:
  * x        (x) int64 0 1
```

After poking around I agree that this isn't easy to totally fix. I sort of started to go down the route of … I'm just keen to get this merged in some form because the regression of #6461 is pretty bad. For example:

```python
ds = xr.tutorial.load_dataset('air_temperature')
xr.where(ds.air > 10, ds.air, 10, keep_attrs=True).to_netcdf('foo.nc')
```

completely fails because the time attrs have been overwritten by `ds.air` attrs:

```
ValueError: failed to prevent overwriting existing key units in attrs on variable 'time'. This is probably an encoding field used by xarray to describe how a variable is serialized. To proceed, remove this key from the variable's attributes manually.
```

I hit exactly this issue on some existing scripts so this is preventing me from upgrading beyond …
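A minimal, hypothetical workaround for the regression described above, on versions where `keep_attrs=True` clobbers coordinate attrs: compute without `keep_attrs` and copy back only the data variable's own attrs, so coordinate attrs (such as a time coordinate's CF-style `units`) are never touched. The toy array stands in for the tutorial dataset; all names and values here are illustrative:

```python
import numpy as np
import xarray as xr

# Toy stand-in for ds.air: a time coordinate carries attrs that must
# not be overwritten by the data variable's attrs.
air = xr.DataArray(
    np.array([[5.0, 15.0]]),
    dims=("time", "x"),
    coords={"time": ("time", [0], {"units": "hours since 2000-01-01"})},
    attrs={"units": "K"},
)

# Leave keep_attrs at its default, then restore only air's own attrs.
clipped = xr.where(air > 10, air, 10)
clipped.attrs = dict(air.attrs)
print(clipped.values.tolist())  # values at or below 10 are clamped to 10
```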
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);