issue_comments
4 rows where issue = 202964277 and user = 3496314 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
326146218 | https://github.com/pydata/xarray/issues/1225#issuecomment-326146218 | https://api.github.com/repos/pydata/xarray/issues/1225 | MDEyOklzc3VlQ29tbWVudDMyNjE0NjIxOA== | tbohn 3496314 | 2017-08-30T23:23:16Z | 2017-08-30T23:23:16Z | NONE | OK, thanks Joe and Stephan. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | “ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf 202964277
307524160 | https://github.com/pydata/xarray/issues/1225#issuecomment-307524160 | https://api.github.com/repos/pydata/xarray/issues/1225 | MDEyOklzc3VlQ29tbWVudDMwNzUyNDE2MA== | tbohn 3496314 | 2017-06-09T23:32:38Z | 2017-08-30T22:26:44Z | NONE | OK, here's my code and the file that it works (fails) on. Code:
```Python
import os.path
import numpy as np
import xarray as xr

ds = xr.open_dataset('veg_hist.0_10n.90_80w.2000_2016.mode_PFT.5dates.nc')
ds_out = ds.isel(lat=slice(0, 16), lon=slice(0, 16))
# ds_out.encoding['unlimited_dims'] = 'time'
ds_out.to_netcdf('test.out.nc')
```
Note that I commented out the attempt to make 'time' unlimited; if I attempt it, I get a slightly different chunk size error ('NetCDF: Bad chunk sizes'). I realize that for now I can use 'ncks' as a workaround, but it seems to me that xarray should be able to do this too. File (attached): veg_hist.0_10n.90_80w.2000_2016.mode_PFT.5dates.nc.zip | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | “ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf 202964277
307524406 | https://github.com/pydata/xarray/issues/1225#issuecomment-307524406 | https://api.github.com/repos/pydata/xarray/issues/1225 | MDEyOklzc3VlQ29tbWVudDMwNzUyNDQwNg== | tbohn 3496314 | 2017-06-09T23:34:44Z | 2017-06-09T23:34:44Z | NONE | (Note also that for the example nc file I provided, the slice that my example code makes contains nothing but null values, but that's irrelevant: the error happens for other slices that do contain non-null values.) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | “ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf 202964277
307518173 | https://github.com/pydata/xarray/issues/1225#issuecomment-307518173 | https://api.github.com/repos/pydata/xarray/issues/1225 | MDEyOklzc3VlQ29tbWVudDMwNzUxODE3Mw== | tbohn 3496314 | 2017-06-09T22:55:20Z | 2017-06-09T22:55:20Z | NONE | I've been encountering this as well, and I don't want to use the scipy engine workaround. If you can tell me what a "self-contained" example means, I can also try to provide one. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | “ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf 202964277
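Two workarounds come up in the comments above. Here is a minimal sketch of the first, assuming xarray with the default netcdf4 engine and the same file and slice as in comment 307524160: variables sliced from a netCDF file keep the source file's per-variable 'chunksizes' encoding, which after slicing can exceed the new, smaller dimension sizes; clearing that stale encoding before writing avoids the ValueError. The encoding loop is illustrative, not part of the reporter's code.
```Python
import xarray as xr

ds = xr.open_dataset('veg_hist.0_10n.90_80w.2000_2016.mode_PFT.5dates.nc')
ds_out = ds.isel(lat=slice(0, 16), lon=slice(0, 16))

# Drop per-variable chunk sizes inherited from the source file: after slicing,
# they can exceed the new, smaller dimensions, which is exactly what the
# "chunksize cannot exceed dimension size" check rejects.
for var in ds_out.variables.values():
    var.encoding.pop('chunksizes', None)

# unlimited_dims is safest given as a collection of dimension names,
# not a bare string.
ds_out.encoding['unlimited_dims'] = {'time'}
ds_out.to_netcdf('test.out.nc')
```
The second is the scipy-engine workaround that comment 307518173 declines: a one-line change that sidesteps the problem because scipy writes netCDF3, a format with no chunking, at the cost of netCDF4-only features such as compression.
```Python
# The scipy engine writes netCDF3, which has no chunking, so the chunk-size
# check never applies; compression and groups are unavailable in this format.
ds_out.to_netcdf('test.out.nc', engine='scipy', format='NETCDF3_64BIT')
```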
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
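For illustration, the row selection described at the top of this page maps directly onto this schema. A sketch using Python's standard sqlite3 module; the database filename 'github.db' is an assumption, not taken from the page.
```Python
import sqlite3

# 'github.db' is a placeholder; substitute the SQLite file behind this page.
conn = sqlite3.connect('github.db')
rows = conn.execute(
    "SELECT id, created_at, updated_at, author_association, body "
    "FROM issue_comments "
    "WHERE issue = ? AND [user] = ? "
    "ORDER BY updated_at DESC",
    (202964277, 3496314),
).fetchall()
print(len(rows))  # expect 4, matching the row count shown above
```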