html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/7323#issuecomment-1421254445,https://api.github.com/repos/pydata/xarray/issues/7323,1421254445,IC_kwDOAMm_X85Utp8t,2448579,2023-02-07T18:25:17Z,2023-02-07T18:25:17Z,MEMBER,"Thanks @adanb13","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1465047346
https://github.com/pydata/xarray/pull/7323#issuecomment-1411223051,https://api.github.com/repos/pydata/xarray/issues/7323,1411223051,IC_kwDOAMm_X85UHY4L,2443309,2023-01-31T23:41:29Z,2023-01-31T23:41:29Z,MEMBER,"@adanb13 - do you have plans to revisit this PR? If not, do you mind if we close it for now? Based on the comments above, I think an issue discussing the use case and potential solutions would be a good next step.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1465047346
https://github.com/pydata/xarray/pull/7323#issuecomment-1328331087,https://api.github.com/repos/pydata/xarray/issues/7323,1328331087,IC_kwDOAMm_X85PLLlP,14371165,2022-11-27T20:15:53Z,2022-11-27T20:16:24Z,MEMBER,"How about converting the dataset to a dask dataframe?

```python
ddf = ds.to_dask_dataframe()
ddf.to_json(filename)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1465047346
https://github.com/pydata/xarray/pull/7323#issuecomment-1328156723,https://api.github.com/repos/pydata/xarray/issues/7323,1328156723,IC_kwDOAMm_X85PKhAz,1217238,2022-11-27T02:31:51Z,2022-11-27T02:31:51Z,MEMBER,"> Use cases would be in any web service that would like to provide the final data values back to a user in JSON.

For what it's worth, I think your users will have a poor experience with encoded JSON data for very large arrays. It will be slow to compress and transfer this data. In the long term, you would probably do better to transmit the data in some binary form (e.g., by calling `tobytes()` on the underlying np.ndarray objects, or by using Xarray's `to_netcdf`).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1465047346
https://github.com/pydata/xarray/pull/7323#issuecomment-1328156304,https://api.github.com/repos/pydata/xarray/issues/7323,1328156304,IC_kwDOAMm_X85PKg6Q,1217238,2022-11-27T02:27:07Z,2022-11-27T02:27:07Z,MEMBER,"Thanks for the report and the PR! This really needs a ""minimal complete verifiable"" example (e.g., by creating and loading a Zarr array with random data) so others can verify your reported performance gains:

https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
https://stackoverflow.com/help/minimal-reproducible-example

To be honest, this fix looks a little funny to me, because NumPy's own implementation of `tolist()` is so similar. I would love to understand what is going on. If you can reproduce the issue only using NumPy, it could also make more sense to file this as an upstream bug report to NumPy. The NumPy maintainers are in a better position to debug tricky memory allocation issues involving NumPy.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1465047346