issue_comments: 307524160
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/1225#issuecomment-307524160 | https://api.github.com/repos/pydata/xarray/issues/1225 | 307524160 | MDEyOklzc3VlQ29tbWVudDMwNzUyNDE2MA== | 3496314 | 2017-06-09T23:32:38Z | 2017-08-30T22:26:44Z | NONE |  | 202964277 |

body:

OK, here's my code and the file that it works (fails) on. Code:

```Python
import os.path
import numpy as np
import xarray as xr

ds = xr.open_dataset('veg_hist.0_10n.90_80w.2000_2016.mode_PFT.5dates.nc')
ds_out = ds.isel(lat=slice(0, 16), lon=slice(0, 16))
# ds_out.encoding['unlimited_dims'] = 'time'
ds_out.to_netcdf('test.out.nc')
```

Note that I commented out the attempt to make 'time' unlimited; if I attempt it, I get a slightly different chunk-size error ('NetCDF: Bad chunk sizes'). I realize that for now I can use 'ncks' as a workaround, but it seems to me that xarray should be able to do this too.

File (attached): veg_hist.0_10n.90_80w.2000_2016.mode_PFT.5dates.nc.zip

reactions:

```json
{
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
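For context, a minimal sketch of two ways an unlimited 'time' dimension can be requested when writing with xarray, assuming a version whose `to_netcdf` accepts the `unlimited_dims` keyword (possibly newer than the release discussed in the comment). The file and dimension names are taken from the comment above, and this sketch is not verified against the attached file or the 'Bad chunk sizes' error it reports:

```Python
import xarray as xr

# Open the dataset attached to the comment and take the same 16x16 subset.
ds = xr.open_dataset('veg_hist.0_10n.90_80w.2000_2016.mode_PFT.5dates.nc')
ds_out = ds.isel(lat=slice(0, 16), lon=slice(0, 16))

# Option 1: pass the unlimited dimensions directly to to_netcdf
# (a collection of dimension names, not a bare string).
ds_out.to_netcdf('test.out.nc', unlimited_dims=['time'])

# Option 2: record them in the dataset's encoding before writing;
# xarray stores this as a set of dimension names.
ds_out.encoding['unlimited_dims'] = {'time'}
ds_out.to_netcdf('test.out.unlimited.nc')
```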