issue_comments
5 rows where author_association = "MEMBER" and issue = 1465047346 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1421254445 | https://github.com/pydata/xarray/pull/7323#issuecomment-1421254445 | https://api.github.com/repos/pydata/xarray/issues/7323 | IC_kwDOAMm_X85Utp8t | dcherian 2448579 | 2023-02-07T18:25:17Z | 2023-02-07T18:25:17Z | MEMBER | Thanks @adanb13 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | (Issue #7324) added functions that return data values in memory efficient manner 1465047346 |
| 1411223051 | https://github.com/pydata/xarray/pull/7323#issuecomment-1411223051 | https://api.github.com/repos/pydata/xarray/issues/7323 | IC_kwDOAMm_X85UHY4L | jhamman 2443309 | 2023-01-31T23:41:29Z | 2023-01-31T23:41:29Z | MEMBER | @adanb13 - do you have plans to revisit this PR? If not, do you mind if we close it for now? Based on the comments above, I think an issue discussing the use case and potential solutions would be a good next step. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | (Issue #7324) added functions that return data values in memory efficient manner 1465047346 |
| 1328331087 | https://github.com/pydata/xarray/pull/7323#issuecomment-1328331087 | https://api.github.com/repos/pydata/xarray/issues/7323 | IC_kwDOAMm_X85PLLlP | Illviljan 14371165 | 2022-11-27T20:15:53Z | 2022-11-27T20:16:24Z | MEMBER | How about converting the dataset to a dask dataframe? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | (Issue #7324) added functions that return data values in memory efficient manner 1465047346 |
| 1328156723 | https://github.com/pydata/xarray/pull/7323#issuecomment-1328156723 | https://api.github.com/repos/pydata/xarray/issues/7323 | IC_kwDOAMm_X85PKhAz | shoyer 1217238 | 2022-11-27T02:31:51Z | 2022-11-27T02:31:51Z | MEMBER | For what it's worth, I think your users will have a poor experience with encoded JSON data for very large arrays. It will be slow to compress and transfer this data. In the long term, you would probably do better to transmit the data in some binary form (e.g., by calling …) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | (Issue #7324) added functions that return data values in memory efficient manner 1465047346 |
| 1328156304 | https://github.com/pydata/xarray/pull/7323#issuecomment-1328156304 | https://api.github.com/repos/pydata/xarray/issues/7323 | IC_kwDOAMm_X85PKg6Q | shoyer 1217238 | 2022-11-27T02:27:07Z | 2022-11-27T02:27:07Z | MEMBER | Thanks for the report and the PR! This really needs a "minimal complete verifiable" example (e.g., by creating and loading a Zarr array with random data) so others can verify the reported performance gains: https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports and https://stackoverflow.com/help/minimal-reproducible-example. To be honest, this fix looks a little funny to me, because NumPy's own implementation of … If you can reproduce the issue only using NumPy, it could also make more sense to file this as an upstream bug report to NumPy. The NumPy maintainers are in a better position to debug tricky memory allocation issues involving NumPy. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | (Issue #7324) added functions that return data values in memory efficient manner 1465047346 |
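Illviljan's comment above ("How about converting the dataset to a dask dataframe?") originally included a code block that was lost in this export. A minimal sketch of that idea, assuming dask is installed; the dataset, variable names, and chunk sizes below are illustrative and not taken from the PR:

```python
import numpy as np
import xarray as xr

# Illustrative dataset standing in for the one discussed in the PR.
ds = xr.Dataset(
    {"temperature": (("time", "x"), np.random.rand(1_000, 100))},
    coords={"time": np.arange(1_000), "x": np.arange(100)},
)

# Chunk first, then convert lazily to a dask DataFrame: each partition is
# built from the underlying chunks, so the whole array never has to be
# materialised in memory at once.
ddf = ds.chunk({"time": 100}).to_dask_dataframe()

# Stream the data one partition (a plain pandas DataFrame) at a time.
for i in range(ddf.npartitions):
    chunk = ddf.get_partition(i).compute()
    records = chunk.to_dict(orient="records")
    # ... serialise and send `records` here ...
```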
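shoyer's point about transmitting the data in binary form rather than JSON can be illustrated with xarray's in-memory netCDF serialisation: `Dataset.to_netcdf()` returns the file as bytes when no path is given (this relies on the scipy backend being available). A rough sketch with an arbitrary example dataset:

```python
import io

import numpy as np
import xarray as xr

# Arbitrary example data; any dataset works the same way.
ds = xr.Dataset({"a": ("x", np.random.rand(10_000))})

# With no target path, to_netcdf() returns the serialised file as bytes,
# which is far more compact than a JSON-encoded list of Python floats.
payload = ds.to_netcdf()

# The receiving side can rebuild the dataset without touching disk.
roundtrip = xr.open_dataset(io.BytesIO(payload))
```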
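shoyer also asks for a "minimal complete verifiable" example built around a Zarr array of random data. One possible shape for such a reproduction, assuming zarr is installed; the array sizes and the `example.zarr` store path are arbitrary, and the PR's own helper functions are not shown here:

```python
import tracemalloc

import numpy as np
import xarray as xr

# Round-trip random data through a Zarr store so others can reproduce
# the setup exactly.
ds = xr.Dataset({"a": (("x", "y"), np.random.rand(2_000, 2_000))})
ds.to_zarr("example.zarr", mode="w")
loaded = xr.open_zarr("example.zarr")

# Measure the memory cost of the code path the PR describes as expensive:
# pulling the values out as nested Python lists.
tracemalloc.start()
as_lists = loaded["a"].values.tolist()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"peak traced memory: {peak / 1e6:.1f} MB")
```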
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);