html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2624#issuecomment-450692965,https://api.github.com/repos/pydata/xarray/issues/2624,450692965,MDEyOklzc3VlQ29tbWVudDQ1MDY5Mjk2NQ==,1961038,2018-12-31T21:44:39Z,2018-12-31T21:44:39Z,NONE,"Ok, thanks all for the advice. Clearly further subdivisions of the multi-level variables are in order.
However, working with a single level (sea-level pressure) from our CFSR datasets, I find that if I specify a chunk size on the time dimension when using xr.open_mfdataset, to_zarr fails on the resulting dataset with a ""non-uniform chunksize"" error.
If, however, I re-chunk the resulting dataset with the .chunk method, the two datasets ""look identical"", yet the to_zarr write succeeds.
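For concreteness, here is a minimal sketch of the two code paths described above; the file pattern, chunk size, and store path are hypothetical placeholders (the real case is in the notebook linked below):
```
import xarray as xr

# Hypothetical CFSR sea-level pressure files and chunk size.
ds = xr.open_mfdataset('cfsr_slp_*.nc', chunks={'time': 100})

# Writing directly can fail: the chunks argument is applied per input file,
# so the last chunk of each file may be ragged, and zarr requires uniform
# chunk sizes within a variable.
# ds.to_zarr('slp.zarr')  # -> ValueError about non-uniform chunks

# Re-chunking the combined dataset yields uniform chunks; this write succeeds.
ds_rechunked = ds.chunk({'time': 100})
ds_rechunked.to_zarr('slp.zarr')
```
This would also explain why the two datasets ""look identical"": the repr shows only a nominal chunk size, not every chunk boundary.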
Link to notebook:
https://nbviewer.jupyter.org/url/www.atmos.albany.edu/facstaff/ktyle/temp/Xarray_to_zarr_ex1.ipynb
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,393214032
https://github.com/pydata/xarray/issues/2624#issuecomment-449146417,https://api.github.com/repos/pydata/xarray/issues/2624,449146417,MDEyOklzc3VlQ29tbWVudDQ0OTE0NjQxNw==,1961038,2018-12-20T21:49:15Z,2018-12-20T21:59:33Z,NONE,"@rabernat Yeah, I think the chunk size in the time dimension is too large:
```
Dimensions: (lat: 361, lev: 32, lon: 720, time: 2920)
Coordinates:
* lat (lat) float32 -90.0 -89.5 -89.0 -88.5 -88.0 ... 88.5 89.0 89.5 90.0
* lon (lon) float32 -180.0 -179.5 -179.0 -178.5 ... 178.5 179.0 179.5
* lev (lev) float32 1000.0 975.0 950.0 925.0 ... 50.0 30.0 20.0 10.0
* time (time) datetime64[ns] 2013-01-01 ... 2014-12-31T18:00:00
Data variables:
g (time, lev, lat, lon) float32 dask.array
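# Added note (not part of the repr; the chunk layout is an assumption): if a
# single dask chunk spans the whole time dimension, one chunk of variable g
# holds 2920 * 32 * 361 * 720 float32 values, i.e. ~97 GB, far too large for
# one worker. Re-chunking along time, e.g. ds.chunk({'time': 8}), would give
# chunks of ~266 MB instead.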
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,393214032
https://github.com/pydata/xarray/issues/1882#issuecomment-363587681,https://api.github.com/repos/pydata/xarray/issues/1882,363587681,MDEyOklzc3VlQ29tbWVudDM2MzU4NzY4MQ==,1961038,2018-02-06T22:31:31Z,2018-02-06T22:31:31Z,NONE,"Although I am not active on the Xarray GitHub, I am an early adopter and active user of the software, and I am looking for a good excuse to go to SciPy for the first time... I would be glad to assist!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,293913247
https://github.com/pydata/xarray/issues/1020#issuecomment-250777441,https://api.github.com/repos/pydata/xarray/issues/1020,250777441,MDEyOklzc3VlQ29tbWVudDI1MDc3NzQ0MQ==,1961038,2016-09-30T15:38:01Z,2016-09-30T15:38:01Z,NONE,"Good to know. Since the system I'm running on has 96 GB of RAM, I think your statement about pandas is correct too: I also get the memory error when running on a smaller (18 GB) dataset.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,180080354