html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/5604#issuecomment-881641897,https://api.github.com/repos/pydata/xarray/issues/5604,881641897,IC_kwDOAMm_X840jMmp,5635139,2021-07-16T18:36:45Z,2021-07-16T18:36:45Z,MEMBER,"The memory usage does seem high. Not having the indexes aligned makes this an expensive operation, and I would vote to have it fail by default (ref: https://github.com/pydata/xarray/discussions/5499#discussioncomment-929765).
Can the input files be aligned before attempting to combine the data? Or are you not in control of the input files?
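To get concrete numbers on the memory side, a stdlib-only sketch along these lines may help (the list-building workload is a stand-in I made up; substitute the real combine call):

```python
# Minimal stdlib sketch for measuring peak memory of one combine step.
import tracemalloc

def peak_mib(fn, *args, **kwargs):
    '''Run fn and return (result, peak MiB allocated while it ran).'''
    tracemalloc.start()
    try:
        result = fn(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak / 2**20

# Stand-in workload; in practice pass e.g.
# lambda: xr.open_mfdataset(paths, ...) instead.
_, peak = peak_mib(lambda: [0] * 1_000_000)
print(f'peak ~{peak:.1f} MiB')
```

Running this for 1, 2, 4, ... files should show whether memory grows linearly or worse with file count.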
To debug the memory, you probably need to do something like use `memory_profiler`, and try it with varying numbers of files; unfortunately it's a complex problem, and just looking at `htop` gives very coarse information.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,944996552
https://github.com/pydata/xarray/issues/5604#issuecomment-881111321,https://api.github.com/repos/pydata/xarray/issues/5604,881111321,IC_kwDOAMm_X840hLEZ,5635139,2021-07-16T01:29:19Z,2021-07-16T01:29:19Z,MEMBER,"Again — where are you seeing this 1000GB or 1000x number?
(also have a look at GitHub docs on how to format the code)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,944996552
https://github.com/pydata/xarray/issues/5604#issuecomment-880853826,https://api.github.com/repos/pydata/xarray/issues/5604,880853826,MDEyOklzc3VlQ29tbWVudDg4MDg1MzgyNg==,35968931,2021-07-15T16:44:32Z,2021-07-15T16:44:32Z,MEMBER,"An example which we can reproduce locally would be the most helpful, if
possible!
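
A hedged sketch of the sort of self-contained reproducer that helps (all variable names, shapes, and paths below are invented): write two tiny files with the same layout, then run the same `open_mfdataset` call on them.

```python
# Build two tiny netCDF files and combine them, mirroring the reported
# workflow at toy scale (names and shapes are hypothetical).
import numpy as np
import xarray as xr

paths = []
for i in range(2):
    ds = xr.Dataset(
        {'v2d': (('v2d_time', 'x'), np.zeros((1, 4)))},
        coords={'v2d_time': [float(i)], 'x': np.arange(4.0)},
    )
    path = f'repro_{i}.nc'
    ds.to_netcdf(path)
    paths.append(path)

combined = xr.open_mfdataset(paths, combine='nested', concat_dim='v2d_time')
print(combined.sizes)  # v2d_time should now be 2
```

If the memory blow-up or the odd chunk sizes show up even at this scale, the toy script is enough for us to debug.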
On Thu, 15 Jul 2021, 12:42 tommy307507 wrote:
> I also don't understand how the chunksize of v2d_time is 59 instead of 1
>
> Is v2d_time one of the dimensions being concatenated along by
> open_mfdataset?
>
> Yes, I will try the above tomorrow, and post it back here.
> I did try to pass `concat_dim=[""v2d_time"", ""v3d_time""]` but that still
> causes the problem
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,944996552
https://github.com/pydata/xarray/issues/5604#issuecomment-880809255,https://api.github.com/repos/pydata/xarray/issues/5604,880809255,MDEyOklzc3VlQ29tbWVudDg4MDgwOTI1NQ==,35968931,2021-07-15T15:51:18Z,2021-07-15T15:51:18Z,MEMBER,"> I also don't understand how the chunksize of v2d_time is 59 instead of 1
Is `v2d_time` one of the dimensions being concatenated along by `open_mfdataset`?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,944996552
https://github.com/pydata/xarray/issues/5604#issuecomment-880500336,https://api.github.com/repos/pydata/xarray/issues/5604,880500336,MDEyOklzc3VlQ29tbWVudDg4MDUwMDMzNg==,5635139,2021-07-15T08:24:12Z,2021-07-15T08:24:12Z,MEMBER,"This will likely need much more detail. Though to start: what's the source of the 1000x number?
What happens if you pass `compat=""identical"", coords=""minimal""` to `open_mfdataset`? If that fails, the opening operation may be doing some expensive alignment.
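A quick way to check for that locally (the tiny datasets below are made up) is an exact-join alignment, which fails fast instead of silently reindexing:

```python
# Probe whether two inputs share identical indexes before combining;
# join='exact' raises instead of triggering an expensive reindex.
import numpy as np
import xarray as xr

a = xr.Dataset({'t2m': ('time', np.zeros(3))}, coords={'time': [0, 1, 2]})
b = xr.Dataset({'t2m': ('time', np.ones(3))}, coords={'time': [3, 4, 5]})

try:
    xr.align(a, b, join='exact')
    print('indexes already aligned')
except ValueError:
    print('indexes differ, so combining will reindex')
```

Running the same check on a pair of your real files would tell us whether alignment is the expensive step.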
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,944996552