issue_comments: 288829145
| field | value |
| --- | --- |
| html_url | https://github.com/pydata/xarray/issues/463#issuecomment-288829145 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/463 |
| id | 288829145 |
| node_id | MDEyOklzc3VlQ29tbWVudDI4ODgyOTE0NQ== |
| user | 2615433 |
| created_at | 2017-03-23T19:08:37Z |
| updated_at | 2017-03-23T19:08:37Z |
| author_association | NONE |
| body | Not sure whether this is useful feedback, but I wanted to add another problematic case from my end that triggers the "too many files" problem. Note: I have the latest xarray package. I have about 365 NetCDF files of roughly 1.7 MB each that I am trying to read with open_mfdataset(), and it consistently raises the "too many files" error and hangs Jupyter notebooks to the point where I have to Ctrl+C out of it. Each NetCDF file contains a Dataset of shape 195×195×1, so this is clearly not a file-size issue; I am not dealing with multiple gigabytes of data. Should I increase the OS X open max file limit, or will that not solve anything in my case? |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| issue | 94328498 |
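For context on the question in the comment body: the "too many files" error comes from exhausting the per-process open-file-descriptor limit, which open_mfdataset() can hit when it keeps hundreds of files open at once. Below is a minimal sketch, not taken from the issue thread, showing how to inspect and raise that limit from Python using only the standard-library resource module (available on macOS and Linux); the target value 4096 is an arbitrary example.

```python
import resource

# Inspect the current per-process limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Raise the soft limit before calling xarray.open_mfdataset() on many
# files. Without root privileges the soft limit can be raised at most
# to the hard limit, so clamp the requested value to it.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))
```

The shell equivalent is `ulimit -n 4096`, run in the same session that launches the Jupyter notebook, since the limit applies per process and is inherited by child processes.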