
issue_comments: 549730000


html_url: https://github.com/pydata/xarray/issues/3096#issuecomment-549730000
issue_url: https://api.github.com/repos/pydata/xarray/issues/3096
id: 549730000
node_id: MDEyOklzc3VlQ29tbWVudDU0OTczMDAwMA==
user: 18643609
created_at: 2019-11-05T09:06:27Z
updated_at: 2019-11-05T09:06:27Z
author_association: NONE

Coming back on this issue in order not to leave it inactive and to provide some feedback to the community.

The problem with the open_mfdataset solution was that the lazy open of a single lead-time dataset still took 150 MB of memory, leading to a minimum memory requirement of 150 MB × 209 = 31.35 GB. When I tried a bigger machine (64 GB of memory), I was then blocked by the rechunking step, which exceeded the machine's resources and crashed the script. So we ended up using a dask cluster, which solved the concurrency and resource limitations.
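For anyone hitting the same wall, here is a minimal sketch of the open_mfdataset-plus-rechunk pattern. The file names, variable name and sizes are made up (tiny synthetic files stand in for the real ~209 lead-time files, each of which held ~150 MB in our case); the point is just the lazy, chunked open followed by a rechunk along the concatenation dimension:

```python
import numpy as np
import xarray as xr

# fabricate a few tiny per-lead-time files to stand in for the real ones
paths = []
for lt in range(3):
    ds = xr.Dataset(
        {"t2m": (("time", "x"), np.random.rand(1, 4))},  # hypothetical variable
        coords={"time": [lt], "x": np.arange(4)},
    )
    path = f"lead_{lt}.nc"  # hypothetical file name
    ds.to_netcdf(path)
    paths.append(path)

# lazy, dask-backed open of all files; each file is one chunk along time
combined = xr.open_mfdataset(paths, combine="by_coords", chunks={"time": 1})

# rechunk along the concatenation dimension; with real data this is the
# step that can exceed a single machine's memory
combined = combined.chunk({"time": -1})
print(combined.t2m.shape)  # (3, 4)
```

With the real data, the same rechunk materialises far more than the lazily-opened chunks, which is why it needs either a large-memory machine or distributed workers.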

My second use case (https://github.com/pydata/xarray/issues/3096#issuecomment-516043946) still remains, though. I am wondering whether it matches the intended use of zarr and whether we want to do something about it; if so, I can open a separate issue documenting it.

All in all, I would say my original problem is no longer relevant. Either you do it with open_mfdataset on a single machine as proposed by @rabernat, provided you have enough memory (and probably much more if you need to rechunk), or you do it with a dask cluster, which is the solution we chose.
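For completeness, a minimal sketch of the cluster route, using a LocalCluster as a stand-in for the real multi-machine cluster (the worker counts and array sizes here are illustrative, not what we actually ran):

```python
import dask.array as da
from dask.distributed import Client, LocalCluster

# in-process stand-in for a real distributed cluster; with remote workers
# the lazy opens and the rechunk are spread across worker memory instead
# of a single machine's RAM
cluster = LocalCluster(n_workers=2, threads_per_worker=1,
                       processes=False, dashboard_address=None)
client = Client(cluster)
n_workers = len(client.scheduler_info()["workers"])

# any dask collection submitted now runs on the cluster's workers
x = da.ones((1000, 1000), chunks=(250, 250))
total = x.sum().compute()  # 1000000.0

client.close()
cluster.close()
```

Once the Client is active, xarray's dask-backed operations (including the rechunk and a final to_zarr) are scheduled on the cluster automatically, which is what removed the single-machine memory ceiling for us.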

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 466994138
Powered by Datasette · About: xarray-datasette