issue_comments
6 rows where user = 8238804 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
586078165 | https://github.com/pydata/xarray/issues/3768#issuecomment-586078165 | https://api.github.com/repos/pydata/xarray/issues/3768 | MDEyOklzc3VlQ29tbWVudDU4NjA3ODE2NQ== | ivirshup 8238804 | 2020-02-14T03:15:47Z | 2020-02-14T03:15:47Z | NONE |
Setup:
```python
import xarray as xr
import numpy as np

da = xr.DataArray(
    np.arange(56).reshape((7, 8)),
    coords={'x': list('abcdefg'), 'y': 10 * np.arange(8)},
    dims=['x', 'y']
)
```
```python
xidx = np.array([1, 2, 3])
yidx = np.array([1, 2, 3])

da.isel(x=xidx, y=yidx)
# <xarray.DataArray (x: 3, y: 3)>
# array([[ 9, 10, 11],
#        [17, 18, 19],
#        [25, 26, 27]])
# Coordinates:
#   * x        (x) <U1 'b' 'c' 'd'
#   * y        (y) int64 10 20 30

da.isel(x=xr.DataArray(xidx), y=xr.DataArray(yidx))
# <xarray.DataArray (dim_0: 3)>
# array([ 9, 18, 27])
# Coordinates:
#     x        (dim_0) <U1 'b' 'c' 'd'
#     y        (dim_0) int64 10 20 30
# Dimensions without coordinates: dim_0
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing 564555854 | |
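The difference shown in the comment above hinges on the indexer's dimension names: plain NumPy arrays index orthogonally, while DataArray indexers that share a dimension name index pointwise along that shared dimension. A minimal sketch of the same setup (the `points` dimension name is an illustrative choice, not from the comment):

```python
import numpy as np
import xarray as xr

# Same example array as in the comment above.
da = xr.DataArray(
    np.arange(56).reshape((7, 8)),
    coords={"x": list("abcdefg"), "y": 10 * np.arange(8)},
    dims=["x", "y"],
)

# Wrapping both indexers in DataArrays with a shared dimension name
# selects the diagonal elements (1,1), (2,2), (3,3) pointwise.
pts = xr.DataArray(np.array([1, 2, 3]), dims="points")
result = da.isel(x=pts, y=pts)
```

Giving both indexers the same explicit dimension name avoids the auto-generated `dim_0` seen above and makes the pointwise intent explicit.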
586072916 | https://github.com/pydata/xarray/issues/3768#issuecomment-586072916 | https://api.github.com/repos/pydata/xarray/issues/3768 | MDEyOklzc3VlQ29tbWVudDU4NjA3MjkxNg== | ivirshup 8238804 | 2020-02-14T02:52:34Z | 2020-02-14T02:52:34Z | NONE |
Thanks! I must have missed this; I suspect that's because my use case was actually setting the values at some coordinates. Is there an efficient way to do that? I'd be happy to add some notes to the documentation about that. The |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing 564555854 | |
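For the setting-values use case mentioned in the comment above, xarray also supports assignment through vectorized indexers. A sketch, reusing the setup array from the earlier comment (the `points` dimension name is illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(56).reshape((7, 8)),
    coords={"x": list("abcdefg"), "y": 10 * np.arange(8)},
    dims=["x", "y"],
)

# DataArray indexers with a shared dimension assign pointwise:
# only the (1,1), (2,2), (3,3) elements are overwritten.
pts = xr.DataArray(np.array([1, 2, 3]), dims="points")
da[{"x": pts, "y": pts}] = -1
```

Elements off the selected points are untouched, so this is the assignment counterpart of the pointwise `isel` shown earlier.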
580999532 | https://github.com/pydata/xarray/issues/3731#issuecomment-580999532 | https://api.github.com/repos/pydata/xarray/issues/3731 | MDEyOklzc3VlQ29tbWVudDU4MDk5OTUzMg== | ivirshup 8238804 | 2020-02-01T06:22:37Z | 2020-02-01T06:22:37Z | NONE | This has also come up over in DimensionalData.jl, which I think is going for behavior I like. What I think would happen:
- Selection applies over all dimensions of that name.
- If the command is to reduce over dimensions of that name, the reduction is performed over all dimensions with that name. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Repeated coordinates leads to unintuitive (broken?) indexing behaviour 557257598 | |
580534808 | https://github.com/pydata/xarray/issues/3731#issuecomment-580534808 | https://api.github.com/repos/pydata/xarray/issues/3731 | MDEyOklzc3VlQ29tbWVudDU4MDUzNDgwOA== | ivirshup 8238804 | 2020-01-31T01:07:28Z | 2020-01-31T01:07:52Z | NONE | Why not allow multiple dimensions with the same name? They can be disambiguated with positional indexing for when it matters. I think support for this would be useful for pairwise measures. Here's a fun example of the current buggy behaviour:
```python
import numpy as np
import xarray as xr
from string import ascii_letters

idx1 = xr.IndexVariable("dim1", [f"dim1-{i}" for i in ascii_letters[:10]])
idx2 = xr.IndexVariable("dim2", [f"dim2-{i}" for i in ascii_letters[:5]])
da1 = xr.DataArray(np.random.random_sample((10, 5)), coords=(idx1, idx2))
da2 = xr.DataArray(np.random.random_sample((5, 10)), coords=(idx2, idx1))

da1 @ da2
# <xarray.DataArray ()>
# array(13.06261098)
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Repeated coordinates leads to unintuitive (broken?) indexing behaviour 557257598 | |
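The scalar result above arises because `@` contracts over *every* shared dimension name, so both `dim1` and `dim2` are summed away. Until repeated dimension names are supported, one workaround for pairwise measures is to rename one of the clashing dimensions before contracting; a sketch (the `dim1b` name is an illustrative choice):

```python
import numpy as np
import xarray as xr

da1 = xr.DataArray(np.random.random_sample((10, 5)), dims=("dim1", "dim2"))
da2 = xr.DataArray(np.random.random_sample((5, 10)), dims=("dim2", "dim1"))

# `da1 @ da2` would contract over both shared dims and return a scalar;
# renaming one occurrence contracts only over dim2 and keeps the
# pairwise (10, 10) matrix instead.
pairwise = da1 @ da2.rename({"dim1": "dim1b"})
```

The rename is purely a disambiguation step; the two `dim1` axes still refer to the same labels conceptually.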
521596825 | https://github.com/pydata/xarray/issues/3213#issuecomment-521596825 | https://api.github.com/repos/pydata/xarray/issues/3213 | MDEyOklzc3VlQ29tbWVudDUyMTU5NjgyNQ== | ivirshup 8238804 | 2019-08-15T10:34:30Z | 2019-08-15T10:34:30Z | NONE | That's fair. I just think it would be useful to have an assurance that indices are sorted when you read them. I don't see how to express this within the CF specs while still looking like a COO array, though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How should xarray use/support sparse arrays? 479942077 | |
521530770 | https://github.com/pydata/xarray/issues/3213#issuecomment-521530770 | https://api.github.com/repos/pydata/xarray/issues/3213 | MDEyOklzc3VlQ29tbWVudDUyMTUzMDc3MA== | ivirshup 8238804 | 2019-08-15T06:28:24Z | 2019-08-15T07:33:04Z | NONE | Would it be feasible to use the contiguous ragged array spec or the gathering based compression when the COO coordinates are sorted? I think this could be very helpful for read efficiency, though I'm not sure if random writes were a requirement here. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How should xarray use/support sparse arrays? 479942077 |
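The read-efficiency point in the comments above is that sorted COO coordinates along the first axis are equivalent to a CSR-style offset array, which is what the CF "contiguous ragged array" representation stores via a per-row count variable. A small sketch of that correspondence (the array shapes are illustrative, not from the discussion):

```python
import numpy as np

# Sorted first-axis COO coordinates for a sparse array with 4 rows.
rows = np.array([0, 0, 1, 3, 3, 3])

# A CF contiguous ragged array stores per-row counts; their cumulative
# sums give CSR-style offsets, so row r's entries occupy the slice
# indptr[r]:indptr[r+1] without any searching.
counts = np.bincount(rows, minlength=4)
indptr = np.concatenate([[0], np.cumsum(counts)])
```

With the sorted-indices assurance, a reader can slice any row in O(1) after one pass to build `indptr`; without it, every lookup has to scan or sort the coordinates first.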
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```