issue_comments
4 rows where issue = 437418525 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
492713273 | https://github.com/pydata/xarray/issues/2921#issuecomment-492713273 | https://api.github.com/repos/pydata/xarray/issues/2921 | MDEyOklzc3VlQ29tbWVudDQ5MjcxMzI3Mw== | mathause 10194086 | 2019-05-15T15:52:12Z | 2019-05-15T15:52:12Z | MEMBER | Today @lukasbrunner and me ran into this problem. Opening an mfdataset and saving it with `to_netcdf` creates a file with inconsistent `time:units` and `time_bounds:units`. Opening the saved dataset again, the first date is then decoded incorrectly.

Workaround:

``` python
import xarray as xr

filenames = ['file1.nc', 'file2.nc']
ds = xr.open_mfdataset(filenames)
ds.load()

# make sure the encoding is really empty
assert not ds.time.encoding

# assign encoding, such that they are equal
ds.time.encoding.update(ds.time_bnds.encoding)

# save
ds.to_netcdf('~/F/test.nc')
```

Btw. thanks to @klindsay28 for the nice error report. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_netcdf with decoded time can create file with inconsistent time:units and time_bounds:units 437418525 | |
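The mismatch the workaround above papers over can also be detected up front. A minimal check in the same spirit (the `time_bnds` name and the filenames are assumptions carried over from that comment):

``` python
import xarray as xr

ds = xr.open_mfdataset(['file1.nc', 'file2.nc'])

# after open_mfdataset, ds.time.encoding may come back empty while
# ds.time_bnds.encoding still holds the original units/calendar
for key in ('units', 'calendar'):
    t = ds.time.encoding.get(key)
    b = ds.time_bnds.encoding.get(key)
    if t != b:
        print(f'encoding mismatch for {key!r}: time={t!r}, time_bnds={b!r}')
```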
490177469 | https://github.com/pydata/xarray/issues/2921#issuecomment-490177469 | https://api.github.com/repos/pydata/xarray/issues/2921 | MDEyOklzc3VlQ29tbWVudDQ5MDE3NzQ2OQ== | spencerkclark 6628425 | 2019-05-07T17:37:28Z | 2019-05-07T17:37:28Z | MEMBER | +1 I didn't fully appreciate how strict the CF conventions were regarding this at the time of #2571. Where it is unambiguous, I agree we should make an effort to comply (or preserve compliance). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_netcdf with decoded time can create file with inconsistent time:units and time_bounds:units 437418525 | |
490143871 | https://github.com/pydata/xarray/issues/2921#issuecomment-490143871 | https://api.github.com/repos/pydata/xarray/issues/2921 | MDEyOklzc3VlQ29tbWVudDQ5MDE0Mzg3MQ== | dcherian 2448579 | 2019-05-07T16:04:58Z | 2019-05-07T16:06:39Z | MEMBER | Thanks for the great report @klindsay28. Looks like CF recommends that a bounds variable not carry `units` or `calendar` attributes of its own; those are taken from the parent coordinate variable.

We already have special treatment for `bounds` variables when decoding. @klindsay28 Any interest in putting together a PR that would avoid setting these attributes on `time_bounds`?

Ping @fmaussion for feedback. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_netcdf with decoded time can create file with inconsistent time:units and time_bounds:units 437418525 | |
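Until a change along those lines lands, CF compliance of a written file can be verified directly. A sketch using plain netCDF4 (`combined.nc` is a hypothetical output filename; per CF, a bounds variable should either carry no `units` attribute at all or one identical to its parent's):

``` python
import netCDF4

with netCDF4.Dataset('combined.nc') as nc:
    time_var = nc.variables['time']
    bounds_var = nc.variables[time_var.getncattr('bounds')]
    time_units = time_var.getncattr('units')
    bounds_units = getattr(bounds_var, 'units', None)
    # no units on the bounds variable, or units matching the parent, is compliant
    assert bounds_units in (None, time_units), (time_units, bounds_units)
```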
489752548 | https://github.com/pydata/xarray/issues/2921#issuecomment-489752548 | https://api.github.com/repos/pydata/xarray/issues/2921 | MDEyOklzc3VlQ29tbWVudDQ4OTc1MjU0OA== | klindsay28 15570875 | 2019-05-06T19:52:47Z | 2019-05-06T19:52:47Z | NONE | It looks like `open_mfdataset` drops the `encoding` of the `time` variable when combining multiple files, while keeping it on `time_bounds`. The possibility of this is alluded to in a comment in #2436, which relates the issue to #1614.

``` python
import numpy as np
import xarray as xr

# create time and time_bounds DataArrays for Jan-1850 and Feb-1850
time_bounds_vals = np.array([[0.0, 31.0], [31.0, 59.0]])
time_vals = time_bounds_vals.mean(axis=1)
time_var = xr.DataArray(time_vals, dims='time', coords={'time': time_vals})
time_bounds_var = xr.DataArray(time_bounds_vals, dims=('time', 'd2'), coords={'time': time_vals})

# create Dataset of time and time_bounds
ds = xr.Dataset(coords={'time': time_var}, data_vars={'time_bounds': time_bounds_var})
ds.time.attrs = {'bounds': 'time_bounds', 'calendar': 'noleap', 'units': 'days since 1850-01-01'}

# write Jan-1850 values to file
ds.isel(time=slice(0, 1)).to_netcdf('Jan-1850.nc', unlimited_dims='time')

# write Feb-1850 values to file
ds.isel(time=slice(1, 2)).to_netcdf('Feb-1850.nc', unlimited_dims='time')

# use open_mfdataset to read in files, combining into 1 Dataset
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc'], decode_cf=decode_cf, decode_times=decode_times)
print('time and time_bounds encoding, single-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)

# use open_mfdataset to read in files, combining into 1 Dataset
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'], decode_cf=decode_cf, decode_times=decode_times)
print('--------------------')
print('time and time_bounds encoding, multi-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)
```

produces

```
time and time_bounds encoding, single-file open_mfdataset
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (512,), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1,), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
--------------------
time and time_bounds encoding, multi-file open_mfdataset
{}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_netcdf with decoded time can create file with inconsistent time:units and time_bounds:units 437418525 |
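Continuing the reproduction above, writing the combined dataset back out is what yields the inconsistent file named in the issue title. A sketch (the `combined.nc` filename is an assumption; with `time.encoding` empty, xarray infers fresh units for `time` on write, while `time_bounds` keeps the original ones):

``` python
import xarray as xr
import netCDF4

ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'])
ds.to_netcdf('combined.nc')

# the two units attributes are expected to differ in the written file
with netCDF4.Dataset('combined.nc') as nc:
    print('time:units       ', nc.variables['time'].units)
    print('time_bounds:units', nc.variables['time_bounds'].units)
```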
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
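Given that schema, the query behind this page ("4 rows where issue = 437418525 sorted by updated_at descending") can be reproduced with sqlite3 (a sketch; `issues.db` is a placeholder for the database file):

``` python
import sqlite3

conn = sqlite3.connect('issues.db')
rows = conn.execute(
    'SELECT id, user, created_at, updated_at, body'
    ' FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC',
    (437418525,),
).fetchall()

# expect the 4 comments listed above, newest update first
for comment_id, user, created_at, updated_at, body in rows:
    print(comment_id, updated_at)
```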