issue_comments: 704359408
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/4478#issuecomment-704359408 | https://api.github.com/repos/pydata/xarray/issues/4478 | 704359408 | MDEyOklzc3VlQ29tbWVudDcwNDM1OTQwOA== | 1197350 | 2020-10-06T15:40:03Z | 2020-10-06T15:40:03Z | MEMBER | @jhnnsrs: just a tip based on my own experience. In the code above, you are not specifying any chunks explicitly. The default is then to let zarr choose the chunk sizes. Zarr's default chunks are usually much smaller than what is optimal for cloud storage. I would recommend that you explicitly chunk your array and aim for chunks of around 100 MB. Closing this as it appears to be fixed by @martindurant's quick response in fsspec / s3fs. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 712782711 |
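
The comment's advice (explicitly chunk the array and target roughly 100 MB per chunk before writing to zarr) can be sketched as below. This is only an illustration, not code from the issue: the input file, dimension names (`time`, `y`, `x`), chunk sizes, and the `s3://my-bucket/output.zarr` target are all hypothetical placeholders.

```python
import xarray as xr

# Hypothetical input dataset; replace with your own data.
ds = xr.open_dataset("input.nc")

# Chunk explicitly instead of letting zarr pick its (usually much smaller)
# default chunks. For float64 data, ~100 MB per chunk is roughly 12.5 million
# elements, e.g. 100 * 350 * 350 for a (time, y, x) cube.
ds = ds.chunk({"time": 100, "y": 350, "x": 350})

# Write to zarr; the dask chunking chosen above becomes the zarr chunking.
# The S3 path assumes fsspec/s3fs are installed and credentials are configured.
ds.to_zarr("s3://my-bucket/output.zarr", mode="w")
```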