html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/1517#issuecomment-324692881,https://api.github.com/repos/pydata/xarray/issues/1517,324692881,MDEyOklzc3VlQ29tbWVudDMyNDY5Mjg4MQ==,5356122,2017-08-24T16:50:45Z,2017-08-24T16:50:45Z,MEMBER,"Wow, this is great stuff!
What's `rs.randn()`?
When this makes it into the public-facing API, it would be nice to include some guidance on how the chunking scheme affects run time. Imagine a plot of run time as a function of chunk size or number of chunks. Of course, it also depends on the data size and the number of cores available.
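As a rough illustration of the kind of measurement I mean (toy data and sizes are made up here, and it assumes dask is installed):

```python
import time
import numpy as np
import xarray as xr

# Toy DataArray standing in for a real dataset.
data = xr.DataArray(np.random.rand(20000, 500), dims=['time', 'place'])

# Time the same reduction at several chunk sizes along 'place'.
for n in [5, 50, 500]:
    chunked = data.chunk({'place': n})
    start = time.perf_counter()
    chunked.mean('time').compute()
    print(f'{n} places per chunk: {time.perf_counter() - start:.3f} s')
```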
To put it another way, the chunk spec in `array1.chunk({'place': 10})` is a performance tuning parameter; semantically, the result is no different from `array1`.
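For instance (a minimal sketch; `array1` here is just a random stand-in and dask is assumed to be installed), chunking changes how the work is scheduled, not the answer:

```python
import numpy as np
import xarray as xr

array1 = xr.DataArray(np.random.rand(1000, 100), dims=['time', 'place'])

eager = array1.mean('time')                       # plain numpy-backed result
lazy = array1.chunk({'place': 10}).mean('time')   # dask-backed, lazy result

# Same values either way; only the execution strategy (and run time) differs.
assert np.allclose(eager, lazy.compute())
```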
More ambitiously, I could imagine an API such as `array1.chunk('place')` or `array1.chunk('auto')` that defers figuring out a reasonable chunking scheme until `.compute()` is called, so that all the compute steps are known. Maybe this is more specific to dask than to xarray, and I believe it would also be difficult.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,252358450