issue_comments
6 rows where issue = 1506437087 and user = 720460 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1363988341 | https://github.com/pydata/xarray/issues/7397#issuecomment-1363988341 | https://api.github.com/repos/pydata/xarray/issues/7397 | IC_kwDOAMm_X85RTM91 | benoitespinola 720460 | 2022-12-23T14:15:25Z | 2022-12-23T14:15:53Z | NONE | Because I want a worry-free holiday, I wrote a bit of code that creates a new NetCDF file from scratch. I load the data with xarray, convert it to NumPy arrays, and use the netCDF4 library to write the files (it does what I want). In the process, I also slice the data and drop unwanted variables to keep just the bits I need (unlike my original post). If I call .load() or .compute() on my xarray variable, memory use goes crazy (even if I am dropping unwanted variables, which I would expect to release memory). The same happens for slicing followed by .compute(). Unfortunately, the MCVE will have to wait until I am back from my holidays. Happy holidays to all! |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory issue merging NetCDF files using xarray.open_mfdataset and to_netcdf 1506437087 | |
1362583979 | https://github.com/pydata/xarray/issues/7397#issuecomment-1362583979 | https://api.github.com/repos/pydata/xarray/issues/7397 | IC_kwDOAMm_X85RN2Gr | benoitespinola 720460 | 2022-12-22T09:04:17Z | 2022-12-22T09:04:17Z | NONE | By the way, prior to writing this ticket I also tried the following (which did not help): dropping the variables I do not care about, keeping only the dimensions plus toce and soce; I would expect to need less memory after that. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory issue merging NetCDF files using xarray.open_mfdataset and to_netcdf 1506437087 | |
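The variable-dropping step mentioned above can be illustrated like this (a minimal sketch with synthetic data; only the names `toce` and `soce` come from the issue, the rest is made up):

```python
import numpy as np
import xarray as xr

# Synthetic dataset: toce and soce are from the issue, 'unwanted' is made up.
ds = xr.Dataset(
    {
        "toce": (("time", "x"), np.zeros((2, 3))),
        "soce": (("time", "x"), np.zeros((2, 3))),
        "unwanted": (("time",), np.zeros(2)),
    }
)

# Keep only toce and soce (their dimensions/coordinates come along
# automatically); equivalently: ds.drop_vars(["unwanted"])
slim = ds[["toce", "soce"]]
```

Note that with lazily opened data, dropping variables only removes references to chunks that were never loaded; it does not free memory that `.load()` or `.compute()` has already materialized.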
1362564754 | https://github.com/pydata/xarray/issues/7397#issuecomment-1362564754 | https://api.github.com/repos/pydata/xarray/issues/7397 | IC_kwDOAMm_X85RNxaS | benoitespinola 720460 | 2022-12-22T08:44:06Z | 2022-12-22T08:44:06Z | NONE | Answering the question 'Did you do some processing with the data, changing attributes/encoding etc?': no processing. I do ask xarray to load the data (and I also tried loading + computing), and the final outcome is the same. I will now try to build an MCVE with dummy data. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory issue merging NetCDF files using xarray.open_mfdataset and to_netcdf 1506437087 | |
1362562275 | https://github.com/pydata/xarray/issues/7397#issuecomment-1362562275 | https://api.github.com/repos/pydata/xarray/issues/7397 | IC_kwDOAMm_X85RNwzj | benoitespinola 720460 | 2022-12-22T08:41:21Z | 2022-12-22T08:41:21Z | NONE | Just tested with to_zarr and it goes through.

I did an extra run using a memory profiler as such:

```python
import xarray as xr
import zarr
from memory_profiler import profile

@profile
def main():
    path = './data/data_*.nc'  # files are: data_1.nc data_2.nc data_3.nc data_4.nc data_5.nc
    data = xr.open_mfdataset(path)

if __name__ == '__main__':
    main()
```

Here is the outcome for the memory profiling:

```
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     5    156.9 MiB    156.9 MiB           1   @profile
     6                                         def main():
     7    156.9 MiB      0.0 MiB           1       path = './data/data_*.nc'  # files are: data_1.nc data_2.nc data_3.nc data_4.nc data_5.nc
```

PS: in this test I just realized I loaded 8 files instead of 5. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory issue merging NetCDF files using xarray.open_mfdataset and to_netcdf 1506437087 | |
1362544813 | https://github.com/pydata/xarray/issues/7397#issuecomment-1362544813 | https://api.github.com/repos/pydata/xarray/issues/7397 | IC_kwDOAMm_X85RNsit | benoitespinola 720460 | 2022-12-22T08:21:31Z | 2022-12-22T08:21:31Z | NONE | A single file (from ncdump -h):
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory issue merging NetCDF files using xarray.open_mfdataset and to_netcdf 1506437087 | |
1361621826 | https://github.com/pydata/xarray/issues/7397#issuecomment-1361621826 | https://api.github.com/repos/pydata/xarray/issues/7397 | IC_kwDOAMm_X85RKLNC | benoitespinola 720460 | 2022-12-21T16:28:15Z | 2022-12-21T16:28:15Z | NONE | By the way, Using |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory issue merging NetCDF files using xarray.open_mfdataset and to_netcdf 1506437087 |