issue_comments
where issue = 324350248 and user = 1217238 sorted by updated_at descending
Columns: id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
435733418 | https://github.com/pydata/xarray/issues/2159#issuecomment-435733418 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDQzNTczMzQxOA== | shoyer 1217238 | 2018-11-05T02:03:00Z | 2018-11-05T02:03:00Z | MEMBER
body:
    Yes, this seems totally fine to me.
    Sure, no opposition from me if you want to do it! 👍
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
435544853 | https://github.com/pydata/xarray/issues/2159#issuecomment-435544853 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDQzNTU0NDg1Mw== | shoyer 1217238 | 2018-11-03T00:22:46Z | 2018-11-03T00:22:46Z | MEMBER
body:
    @TomNicholas I agree with your steps 1/2/3 for […]. My concern with a single […]: currently we always do (2) and never do (1). We definitely want an option to disable (2) for speed, and also want an option to support (1) (what you propose here). But these are distinct use cases -- we probably want to support all permutations of 1/2.
    I'm not sure we need to support this yet -- it would be enough to have a keyword argument for falling back to the existing behavior that only supports 1D concatenation in the order provided.
    Agreed, not important unless someone really wants/needs it.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
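The fallback described in the comment above -- a keyword argument that skips coordinate inspection and simply concatenates in the order the caller provided -- can be sketched in pure Python. This is a toy illustration, not xarray's actual API: `combine_1d` and the dict-of-lists "datasets" are hypothetical stand-ins, and a real implementation would operate on `xarray.Dataset` objects via `xarray.concat`.

```python
def combine_1d(datasets, coord_name, infer_order=True):
    """Concatenate dataset-like dicts along one dimension.

    If infer_order is True, sort the pieces by the first value of their
    1D coordinate before concatenating; otherwise trust the order the
    caller provided (the cheap fallback behavior discussed above).
    """
    if infer_order:
        datasets = sorted(datasets, key=lambda ds: ds[coord_name][0])
    combined = {coord_name: [], "data": []}
    for ds in datasets:
        combined[coord_name].extend(ds[coord_name])
        combined["data"].extend(ds["data"])
    return combined

# Two "datasets" supplied out of order along a time coordinate:
a = {"time": [2, 3], "data": [20, 30]}
b = {"time": [0, 1], "data": [0, 10]}

ordered = combine_1d([a, b], "time")                      # sorted by coordinate
as_given = combine_1d([a, b], "time", infer_order=False)  # order as provided
```

With `infer_order=True` the pieces come out in coordinate order (`time` becomes `[0, 1, 2, 3]`); with `infer_order=False` the result preserves the caller's ordering (`time` is `[2, 3, 0, 1]`), mirroring the existing 1D behavior the comment refers to.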
418476055 | https://github.com/pydata/xarray/issues/2159#issuecomment-418476055 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDQxODQ3NjA1NQ== | shoyer 1217238 | 2018-09-04T18:44:35Z | 2018-09-04T22:16:34Z | MEMBER
body:
    NumPy's handling of object arrays is unfortunately inconsistent. So maybe it isn't the best idea to use NumPy arrays for this. Python's built-in list/dict might be better choices here. Something like: […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
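The code snippet in the comment above was lost in this export, so the following is only a guess at the shape of the list/dict idea, not shoyer's original code: a nested Python list of datasets, reduced one dimension at a time by ordinary 1D concatenations. `combine_nested` and the string "datasets" are toy stand-ins; with xarray, each `concat(...)` call would be `xarray.concat(...)`.

```python
def combine_nested(pieces, dims, concat):
    """Reduce a nested list of datasets one dimension at a time.

    `pieces` is nested to the same depth as `dims`; the outermost level
    corresponds to dims[0]. Inner levels are combined first, so every
    step is an ordinary 1D concatenation.
    """
    if len(dims) == 1:
        return concat(pieces, dims[0])
    return concat([combine_nested(sub, dims[1:], concat) for sub in pieces],
                  dims[0])

def toy_concat(datasets, dim):
    # Stand-in for xarray.concat: just record the call structure.
    return dim + "[" + "+".join(datasets) + "]"

# A 2x2 grid of tiles: rows vary along "t", columns along "x".
grid = [["t0x0", "t0x1"],
        ["t1x0", "t1x1"]]
result = combine_nested(grid, dims=["t", "x"], concat=toy_concat)
```

The recorded call structure shows each row being concatenated along `x` first, then the row results along `t` -- exactly the "bunch of 1D calls" decomposition discussed in this thread.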
416389795 | https://github.com/pydata/xarray/issues/2159#issuecomment-416389795 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDQxNjM4OTc5NQ== | shoyer 1217238 | 2018-08-27T22:29:22Z | 2018-08-27T22:29:22Z | MEMBER
body:
    @TomNicholas I think your analysis is correct here. I suspect that in most cases we could figure out how to tile datasets by looking at 1D coordinates along each dimension (e.g., indexes for each dataset), e.g., to find a "chunk id" along each concatenated dimension. These could be used to build something like a NumPy object array of xarray.Dataset/DataArray objects, which could be split up into a bunch of 1D calls to […]. I would rather avoid using the […]
    We could potentially just encourage using the existing […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
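The "chunk id" idea from the comment above -- rank each dataset along every concatenated dimension by the first value of its 1D coordinate, and use the resulting tuple as a grid position -- might look like the sketch below. `infer_grid` and the dict-based "datasets" are hypothetical illustrations; a real version would read `Dataset.indexes` instead.

```python
def infer_grid(datasets, dims):
    """Place datasets in an N-D grid keyed by per-dimension chunk ids.

    The chunk id along a dimension is the rank of a dataset's first
    coordinate value among all distinct first values seen for that
    dimension across the inputs.
    """
    starts = {d: sorted({ds[d][0] for ds in datasets}) for d in dims}
    grid = {}
    for ds in datasets:
        chunk_id = tuple(starts[d].index(ds[d][0]) for d in dims)
        grid[chunk_id] = ds
    return grid

# Four tiles covering a 2x2 region in (t, x), given in arbitrary order:
tiles = [
    {"t": [2, 3], "x": [1], "name": "D"},
    {"t": [0, 1], "x": [0], "name": "A"},
    {"t": [2, 3], "x": [0], "name": "C"},
    {"t": [0, 1], "x": [1], "name": "B"},
]
grid = infer_grid(tiles, dims=["t", "x"])
```

Each tile lands at its grid position regardless of input order -- `grid[(0, 0)]` is tile A, `grid[(1, 1)]` is tile D -- after which the grid can be collapsed with 1D concatenations along each dimension in turn.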
393626605 | https://github.com/pydata/xarray/issues/2159#issuecomment-393626605 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDM5MzYyNjYwNQ== | shoyer 1217238 | 2018-05-31T18:19:32Z | 2018-05-31T18:19:32Z | MEMBER
body:
    @aluhamaa I don't think you're missing anything here. I agree that it would be pretty straightforward, it just would take a bit of work.
reactions: { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
391509672 | https://github.com/pydata/xarray/issues/2159#issuecomment-391509672 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDM5MTUwOTY3Mg== | shoyer 1217238 | 2018-05-23T21:57:56Z | 2018-05-23T21:57:56Z | MEMBER
body:
    @TomNicholas I think you could use the existing […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
391499524 | https://github.com/pydata/xarray/issues/2159#issuecomment-391499524 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDM5MTQ5OTUyNA== | shoyer 1217238 | 2018-05-23T21:17:42Z | 2018-05-23T21:17:42Z | MEMBER
body:
    I agree with @jhamman that it would take effort from an interested developer to do this, but in principle it's quite doable. I think our logic in auto_combine (which powers open_mfdataset) could probably be extended to handle concatenation across multiple dimensions. The main implementation would need to look at coordinates along concatenated dimensions to break the operation into multiple calls […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: Concatenate across multiple dimensions with open_mfdataset (324350248)
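Putting the thread's pieces together, the extension sketched in the comment above -- inspect coordinates along each concatenated dimension, group datasets accordingly, then break the operation into multiple 1D concatenations -- can be illustrated end to end. Everything here (`concat_1d`, `combine_2d`, the dict "datasets") is a toy under assumed names, not xarray's implementation.

```python
def concat_1d(parts, dim):
    """Toy 1D concat: append coordinate values along `dim`; all other
    keys are assumed to already agree and are taken from the first part."""
    out = dict(parts[0])
    out[dim] = [v for p in parts for v in p[dim]]
    return out

def combine_2d(datasets, outer, inner):
    """Sort tiles by their starting coordinates, group into rows along
    the outer dimension, then reduce with repeated 1D concatenations."""
    rows = {}
    for ds in sorted(datasets, key=lambda d: (d[outer][0], d[inner][0])):
        rows.setdefault(ds[outer][0], []).append(ds)
    row_sets = [concat_1d(parts, inner) for _, parts in sorted(rows.items())]
    return concat_1d(row_sets, outer)

# Four tiles in scrambled order, covering a 2x2 (t, x) region:
tiles = [{"t": [2], "x": [1]}, {"t": [0], "x": [0]},
         {"t": [0], "x": [1]}, {"t": [2], "x": [0]}]
combined = combine_2d(tiles, outer="t", inner="x")
```

The result carries the full coordinate range along both dimensions (`t` is `[0, 2]`, `x` is `[0, 1]`), and every step was a plain 1D concatenation -- the decomposition this thread converges on.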
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);