issue_comments
4 rows where user = 18488 sorted by updated_at descending
id: 748491929
html_url: https://github.com/pydata/xarray/issues/1553#issuecomment-748491929
issue_url: https://api.github.com/repos/pydata/xarray/issues/1553
node_id: MDEyOklzc3VlQ29tbWVudDc0ODQ5MTkyOQ==
user: batterseapower (18488)
created_at: 2020-12-19T16:00:00Z
updated_at: 2020-12-19T16:00:00Z
author_association: NONE
issue: Multidimensional reindex (254927382)
reactions: { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

For the case of a simple vectorized

```
def reindex_vectorized(da, indexers, method=None, tolerance=None, dim=None, fill_value=None):
    # Reindex does not presently support vectorized lookups: https://github.com/pydata/xarray/issues/1553
    # Sel does (e.g. https://github.com/pydata/xarray/issues/4630) but can't handle missing keys
```

Example:

```
sensor_data = xr.DataArray(np.arange(6).reshape((3, 2)), coords=[
    ('time', [0, 2, 3]),
    ('sensor', ['A', 'C']),
])
reindex_vectorized(sensor_data, {
    'sensor': ['A', 'A', 'A', 'B', 'C'],
    'time': [0, 1, 2, 0, 0],
}, method={'time': 'ffill'})
# [0, 0, 2, nan, 1]

reindex_vectorized(xr.DataArray(coords=[
    ('sensor', []),
    ('time', [0, 2])
]), {
    'sensor': ['A', 'A', 'A', 'B', 'C'],
    'time': [0, 1, 2, 0, 0],
}, method={'time': 'ffill'})
# [nan, nan, nan, nan, nan]
```
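The body of `reindex_vectorized` is cut off in the export. As a rough illustration of the kind of lookup the comment describes (not the author's actual implementation, and deliberately avoiding the xarray API), here is a plain-NumPy sketch that resolves (time, sensor) pairs with forward-fill on time and NaN for sensors that are missing from the index; the helper name and signature are hypothetical:

```python
import numpy as np

def lookup_pairs(values, times, sensors, time_targets, sensor_targets):
    """Vectorized (time, sensor) lookup (hypothetical helper):
    forward-fill on the sorted time axis, NaN for missing sensor labels."""
    times = np.asarray(times)
    # Forward-fill: index of the last time <= each target (-1 if none).
    t_idx = np.searchsorted(times, time_targets, side='right') - 1
    # Exact match on sensor labels; -1 marks a missing key.
    sensor_list = list(sensors)
    s_idx = np.array([sensor_list.index(s) if s in sensor_list else -1
                      for s in sensor_targets])
    # Gather with clipped indices, then blank out the invalid lookups.
    out = values[np.clip(t_idx, 0, None), np.clip(s_idx, 0, None)].astype(float)
    out[(t_idx < 0) | (s_idx < 0)] = np.nan
    return out

# Same example as the comment: 3 times x 2 sensors, values 0..5.
values = np.arange(6).reshape(3, 2)
result = lookup_pairs(values, [0, 2, 3], ['A', 'C'],
                      [0, 1, 2, 0, 0], ['A', 'A', 'A', 'B', 'C'])
# result: [0., 0., 2., nan, 1.]  -- matches the [0, 0, 2, nan, 1] in the comment
```

The forward-fill step relies on the time coordinate being sorted, which `np.searchsorted` assumes; a production version would also need the `tolerance` and `fill_value` handling the truncated signature hints at.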
id: 748486801
html_url: https://github.com/pydata/xarray/issues/4714#issuecomment-748486801
issue_url: https://api.github.com/repos/pydata/xarray/issues/4714
node_id: MDEyOklzc3VlQ29tbWVudDc0ODQ4NjgwMQ==
user: batterseapower (18488)
created_at: 2020-12-19T15:13:36Z
updated_at: 2020-12-19T15:14:59Z
author_association: NONE
issue: Allow sel's method and tolerance to vary per-dimension (771382653)
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

Thanks for the response. I think

Is not equivalent to this code:

So if I understand your

I guess this works but it's a bit cumbersome and unlikely to be fast. I think there must be something I'm not understanding here - I'm not familiar with all the nuances of the

Your idea of

In general
id: 748479287
html_url: https://github.com/pydata/xarray/issues/4714#issuecomment-748479287
issue_url: https://api.github.com/repos/pydata/xarray/issues/4714
node_id: MDEyOklzc3VlQ29tbWVudDc0ODQ3OTI4Nw==
user: batterseapower (18488)
created_at: 2020-12-19T14:06:36Z
updated_at: 2020-12-19T14:06:36Z
author_association: NONE
issue: Allow sel's method and tolerance to vary per-dimension (771382653)
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

Thanks for the suggestion. One issue with this alternative is it creates a potentially large intermediate object. If you have T times and S sensors, and want to sample them at N (time, sensor) pairs, then the intermediate object with your approach has size
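The size argument this comment is making can be spelled out with illustrative numbers (the figures below are hypothetical, not from the thread): selecting with the full lists of requested labels materializes one cell per combination of distinct times and distinct sensors, while a vectorized lookup only ever touches the N requested pairs.

```python
# Hypothetical sizes: N requested (time, sensor) pairs, with n_times distinct
# times and n_sensors distinct sensors among them.
N = 1_000
n_times, n_sensors = 1_000, 500

intermediate_cells = n_times * n_sensors  # cross-product intermediate object
needed_cells = N                          # cells a vectorized lookup touches

print(intermediate_cells)  # 500000
print(needed_cells)        # 1000
```

In the worst case (all requested labels distinct) the intermediate grows toward N x N cells for N needed values, which is the overhead being objected to.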
id: 748477889
html_url: https://github.com/pydata/xarray/issues/4714#issuecomment-748477889
issue_url: https://api.github.com/repos/pydata/xarray/issues/4714
node_id: MDEyOklzc3VlQ29tbWVudDc0ODQ3Nzg4OQ==
user: batterseapower (18488)
created_at: 2020-12-19T13:53:53Z
updated_at: 2020-12-19T13:53:53Z
author_association: NONE
issue: Allow sel's method and tolerance to vary per-dimension (771382653)
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

I guess it would also make sense to have this in
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
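The listing above is the result of a query of the form `WHERE user = 18488 ORDER BY updated_at DESC` against this schema, served by the `idx_issue_comments_user` index. A minimal stdlib sqlite3 sketch of that query follows; the inserted rows reuse ids and timestamps from the table above, but only a few columns are filled in, and the exact SQL the serving tool generates may differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY, [login] TEXT);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY, [title] TEXT);
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY, [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
conn.execute("INSERT INTO [users] ([id], [login]) VALUES (18488, 'batterseapower')")
conn.executemany(
    "INSERT INTO [issue_comments] ([id], [user], [updated_at], [issue]) VALUES (?, ?, ?, ?)",
    [(748491929, 18488, '2020-12-19T16:00:00Z', 254927382),
     (748486801, 18488, '2020-12-19T15:14:59Z', 771382653)])

# The query behind "rows where user = 18488 sorted by updated_at descending":
rows = conn.execute("""
    SELECT [id], [updated_at] FROM [issue_comments]
    WHERE [user] = 18488
    ORDER BY [updated_at] DESC
""").fetchall()
# Most recently updated comment (748491929) comes first.
```

Because the timestamps are ISO 8601 UTC strings, plain lexicographic `ORDER BY` on the TEXT column sorts them chronologically.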