html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2624#issuecomment-453799948,https://api.github.com/repos/pydata/xarray/issues/2624,453799948,MDEyOklzc3VlQ29tbWVudDQ1Mzc5OTk0OA==,2443309,2019-01-13T03:54:07Z,2019-01-13T03:54:07Z,MEMBER,I'm going to close this as the original issue (error in compression/codecs) has been resolved. @ktyle - I'd be happy to continue this discussion on the Pangeo issue tracker if you'd like to discuss optimal chunk layout more.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,393214032
https://github.com/pydata/xarray/issues/2624#issuecomment-451206728,https://api.github.com/repos/pydata/xarray/issues/2624,451206728,MDEyOklzc3VlQ29tbWVudDQ1MTIwNjcyOA==,2443309,2019-01-03T16:59:06Z,2019-01-03T16:59:06Z,MEMBER,"@ktyle - glad to hear things are moving for you. I'm pretty sure the last chunk in each of your datasets is smaller than the rest. So after concatenation, you end up with a small chunk in the middle and at the end of the time dimension. I bet if you used a chunk size of 172 (divides evenly into 2924), you wouldn't need to rechunk.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,393214032
https://github.com/pydata/xarray/issues/2624#issuecomment-449184291,https://api.github.com/repos/pydata/xarray/issues/2624,449184291,MDEyOklzc3VlQ29tbWVudDQ0OTE4NDI5MQ==,2443309,2018-12-21T00:14:22Z,2018-12-21T00:14:22Z,MEMBER,"You can also rechunk your dataset after the fact using the `chunk` method:
```Python
ds = ds.chunk({'time': 1})
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,393214032
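
The two suggestions in the comments above (pick a time chunk size that divides the record length evenly, or rechunk after the fact with the `chunk` method) can be put together in a short sketch. This is a minimal illustration, not code from the original issue: the file paths are placeholders, and the use of `open_mfdataset` with `combine="by_coords"` is an assumption about how the files were concatenated along the time dimension.

```Python
import xarray as xr

# Hypothetical input paths; the thread describes several NetCDF files that are
# concatenated into a 2924-step time dimension, so real inputs will differ.
paths = ["part1.nc", "part2.nc"]

# Choosing a chunk size that divides the time length evenly (172 * 17 == 2924)
# avoids the small leftover chunks in the middle and at the end of the
# concatenated time dimension described in the second comment.
ds = xr.open_mfdataset(paths, combine="by_coords", chunks={"time": 172})

# Alternatively, rechunk after the fact with the `chunk` method, as shown in
# the last comment; using 172 here keeps all chunks uniform.
ds = ds.chunk({"time": 172})
```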