issue_comments: 832111396
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1020#issuecomment-832111396 | https://api.github.com/repos/pydata/xarray/issues/1020 | 832111396 | MDEyOklzc3VlQ29tbWVudDgzMjExMTM5Ng== | 27021858 | 2021-05-04T17:24:15Z | 2021-05-04T17:24:15Z | NONE | @shoyer I am having a similar problem. I am reading 80 files totalling 8.3 GB, so each file is around 100 MB. If I understand you correctly, using `open_mfdataset` on such data is not recommended, so the best practice would be to loop over the files? PS: I still tried to use some dask-related operations, but each time I try to access | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 180080354 |
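The comment contrasts opening many NetCDF files at once with `xr.open_mfdataset` against looping over the files one at a time and combining reduced results afterwards. The sketch below illustrates both patterns on a few tiny synthetic files; it is a minimal illustration, not the commenter's actual workflow, and the file names and the `temperature` variable are hypothetical. It assumes `xarray`, `numpy`, `dask`, and a NetCDF backend are installed.

```python
import glob
import os
import tempfile

import numpy as np
import xarray as xr

# Create a few tiny synthetic NetCDF files to stand in for the 80
# real ~100 MB files (names and variable "temperature" are hypothetical).
tmpdir = tempfile.mkdtemp()
for i in range(3):
    ds = xr.Dataset(
        {"temperature": ("time", np.full(4, float(i)))},
        coords={"time": np.arange(4) + 4 * i},
    )
    ds.to_netcdf(os.path.join(tmpdir, f"part_{i}.nc"))

paths = sorted(glob.glob(os.path.join(tmpdir, "*.nc")))

# Pattern 1: open all files at once. Convenient, but operations on the
# combined dataset may touch every file.
with xr.open_mfdataset(paths, combine="by_coords") as ds_all:
    mean_all = float(ds_all["temperature"].mean())

# Pattern 2: loop over the files, reduce each one to a small result in
# memory, and combine only the reduced pieces afterwards.
means = []
for path in paths:
    with xr.open_dataset(path) as ds:
        means.append(float(ds["temperature"].mean()))
mean_loop = sum(means) / len(means)

print(mean_all, mean_loop)
```

For reductions that weight every file equally, the two patterns give the same answer; the loop keeps peak memory bounded by one file plus the accumulated results, which is the trade-off the comment is asking about.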