issue_comments: 400859677

Comment by user 1217238 (MEMBER) on pydata/xarray issue #2254, posted 2018-06-27T23:18:18Z:
https://github.com/pydata/xarray/issues/2254#issuecomment-400859677

Yes, a pull request would be appreciated!

On Wed, Jun 27, 2018 at 1:53 PM Mike Neish notifications@github.com wrote:

So yes, it looks like we could fix this by checking chunks on each array independently like you suggest. There's no reason why all dask arrays need to have the same chunking for storing with to_netcdf().

I could throw together a pull request if that's all that's involved.

This is because you need to indicate chunks for variables separately, via encoding: http://xarray.pydata.org/en/stable/io.html#writing-encoded-data

Thanks! I was able to write chunked output to the netCDF file by adding chunksizes to the encoding attribute of the variables. I found I also had to specify original_shape as a workaround for #2198 https://github.com/pydata/xarray/issues/2198.

