issue_comments: 400813372

html_url: https://github.com/pydata/xarray/issues/2254#issuecomment-400813372
issue_url: https://api.github.com/repos/pydata/xarray/issues/2254
id: 400813372
node_id: MDEyOklzc3VlQ29tbWVudDQwMDgxMzM3Mg==
user: 1217238
created_at: 2018-06-27T20:11:35Z
updated_at: 2018-06-27T20:11:35Z
author_association: MEMBER
issue: 336273865

For reference, here's the full traceback:

```python
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-13-6a835b914234> in <module>()
     12
     13 # Save to a netCDF4 file.
---> 14 dset.to_netcdf("test.nc")

~/dev/xarray/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute)
   1148                          engine=engine, encoding=encoding,
   1149                          unlimited_dims=unlimited_dims,
-> 1150                          compute=compute)
   1151
   1152     def to_zarr(self, store=None, mode='w-', synchronizer=None, group=None,

~/dev/xarray/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, writer, encoding, unlimited_dims, compute)
    701     # handle scheduler specific logic
    702     scheduler = get_scheduler()
--> 703     if (dataset.chunks and scheduler in ['distributed', 'multiprocessing'] and
    704             engine != 'netcdf4'):
    705         raise NotImplementedError("Writing netCDF files with the %s backend "

~/dev/xarray/xarray/core/dataset.py in chunks(self)
   1237             for dim, c in zip(v.dims, v.chunks):
   1238                 if dim in chunks and c != chunks[dim]:
-> 1239                     raise ValueError('inconsistent chunks')
   1240                 chunks[dim] = c
   1241             return Frozen(SortedKeysDict(chunks))

ValueError: inconsistent chunks
```
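
To make the failure mode concrete, here's a minimal reproducer sketch (assuming the behavior shown in the traceback, where `Dataset.chunks` raises as soon as two variables disagree on chunk sizes along a shared dimension):

```python
import numpy as np
import xarray as xr

# Two dask-backed variables that share dim "x" but use different chunk sizes.
a = xr.DataArray(np.arange(100), dims="x").chunk(10)
b = xr.DataArray(np.arange(100), dims="x").chunk(25)
ds = xr.Dataset({"a": a, "b": b})

# to_netcdf() consults ds.chunks in its scheduler check, so this raises
# ValueError: inconsistent chunks before anything is written.
ds.to_netcdf("test.nc")
```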

So yes, it looks like we could fix this by checking chunks on each array independently, as you suggest. There's no reason why all dask arrays need to have the same chunking for storing with `to_netcdf()`.
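
A minimal sketch of that per-variable check, assuming a helper named `have_chunks` (the name quoted below; it is not an existing xarray attribute):

```python
# Instead of testing the truthiness of dataset.chunks (which raises when
# chunk sizes disagree across variables), just ask whether any variable
# is dask-backed at all; v.chunks is None for in-memory numpy variables.
have_chunks = any(
    v.chunks is not None for v in dataset.variables.values()
)
```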

> …and replace the instances of `dataset.chunks` with `have_chunks`, then the netCDF4 file gets written without any problems (although the data seems to be stored contiguously instead of chunked).

This is because you need to specify chunking for each variable separately, via `encoding`: http://xarray.pydata.org/en/stable/io.html#writing-encoded-data
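
For example, a sketch with the netCDF4 backend (the variable name `foo` and the chunk sizes here are made up for illustration):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"foo": (("x", "y"), np.zeros((1000, 1000)))})

# On-disk chunking is requested per variable through `encoding`;
# "chunksizes" is the key the netCDF4 backend understands.
ds.to_netcdf(
    "test.nc",
    encoding={"foo": {"chunksizes": (100, 100)}},
)
```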
