issue_comments
4 rows where issue = 1052736383 sorted by updated_at descending
Each comment below is shown as: id | node_id | user (user id) | author_association | created_at (updated_at is identical), followed by its html_url, body, and reactions. All four rows share issue_url https://api.github.com/repos/pydata/xarray/issues/5983, belong to issue 1052736383 (preserve chunked data when creating DataArray from itself), and none were performed via a GitHub App.
969192355 | IC_kwDOAMm_X845xLOj | FabianHofmann (19226431) | CONTRIBUTOR | 2021-11-15T18:23:57Z
https://github.com/pydata/xarray/issues/5983#issuecomment-969192355

Not sure, but I'd argue to keep the […]. Perhaps it is better to raise an error when ambiguities occur? Meaning, don't allow passing […].

Reactions: none
969145579 | IC_kwDOAMm_X845w_zr | dcherian (2448579) | MEMBER | 2021-11-15T17:33:05Z
https://github.com/pydata/xarray/issues/5983#issuecomment-969145579

IMO we should raise an error asking the user to pass […].

Reactions: +1 (1)
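The argument dcherian suggests asking users to pass did not survive this export. As a hedged illustration only (not necessarily what was meant), one way to avoid the ambiguity explicitly is to hand the constructor the underlying array rather than another DataArray:

```python
# Illustration only: one explicit alternative to wrapping a DataArray directly.
# Names and values here are invented for the example.
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(4), dims="x", coords={"x": [10, 20, 30, 40]})

# Passing the raw array plus dims/coords leaves no question about which
# coordinates apply; nothing is silently inherited from another DataArray.
explicit = xr.DataArray(da.data, dims=("x",), coords={"x": da.coords["x"]})
```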
969086661 | IC_kwDOAMm_X845wxbF | FabianHofmann (19226431) | CONTRIBUTOR | 2021-11-15T16:31:11Z
https://github.com/pydata/xarray/issues/5983#issuecomment-969086661

Ah yes, this is indeed ambiguous. On the other hand, as long as creating DataArrays from DataArrays is still supported, they should at least preserve the data format. I need this because I am creating a subclass of xarray.DataArray (see https://github.com/PyPSA/linopy/blob/8ac34d9fdbddc1fec0c7b4781f3d49e9c5ae064e/linopy/constraints.py#L18). When I convert a lazy DataArray to my custom class, the chunked data is immediately computed, which seems a bit weird...

Reactions: none
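A minimal sketch of the behaviour described above, assuming dask is installed; this reproduces what the issue reports, and whether re-wrapping still computes the chunks eagerly depends on the xarray version:

```python
# Sketch of the reported behaviour: re-wrapping a chunked DataArray
# (e.g. to build a DataArray subclass) computes the dask-backed data.
import numpy as np
import xarray as xr

lazy = xr.DataArray(np.ones((4, 4)), dims=("x", "y")).chunk({"x": 2})
print(type(lazy.data))      # dask array: still lazy

wrapped = xr.DataArray(lazy)
print(type(wrapped.data))   # reported here as numpy.ndarray: chunks were computed
```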
969073542 | IC_kwDOAMm_X845wuOG | dcherian (2448579) | MEMBER | 2021-11-15T16:17:35Z
https://github.com/pydata/xarray/issues/5983#issuecomment-969073542

Can you give us a little more context about why this might be useful? IIRC we disallowed creating DataArrays from DataArrays in some other place because it leads to ambiguous situations like the following: […]

Reactions: none
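The code block that originally followed this comment is missing from the export. A hedged reconstruction of the kind of ambiguity being described (my illustration, not the original snippet):

```python
# Illustration of the ambiguity: wrapping an existing DataArray while also
# passing new coords. Should the new coords silently replace da's coords,
# be aligned/checked against them, or raise an error?
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3), dims="x", coords={"x": [1, 2, 3]})
wrapped = xr.DataArray(da, coords={"x": [10, 20, 30]})  # which "x" should win?
```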
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);