issue_comments
1 row where author_association = "NONE" and issue = 437418525 sorted by updated_at descending
id: 489752548
html_url: https://github.com/pydata/xarray/issues/2921#issuecomment-489752548
issue_url: https://api.github.com/repos/pydata/xarray/issues/2921
node_id: MDEyOklzc3VlQ29tbWVudDQ4OTc1MjU0OA==
user: klindsay28 (15570875)
created_at: 2019-05-06T19:52:47Z
updated_at: 2019-05-06T19:52:47Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: to_netcdf with decoded time can create file with inconsistent time:units and time_bounds:units (437418525)

body:

It looks like `open_mfdataset` drops the `encoding` of the `time` variable when combining multiple files. The possibility of this is alluded to in a comment in #2436, which relates the issue to #1614.

```
import numpy as np
import xarray as xr

# create time and time_bounds DataArrays for Jan-1850 and Feb-1850
time_bounds_vals = np.array([[0.0, 31.0], [31.0, 59.0]])
time_vals = time_bounds_vals.mean(axis=1)
time_var = xr.DataArray(time_vals, dims='time', coords={'time': time_vals})
time_bounds_var = xr.DataArray(time_bounds_vals, dims=('time', 'd2'), coords={'time': time_vals})

# create Dataset of time and time_bounds
ds = xr.Dataset(coords={'time': time_var}, data_vars={'time_bounds': time_bounds_var})
ds.time.attrs = {'bounds': 'time_bounds', 'calendar': 'noleap', 'units': 'days since 1850-01-01'}

# write Jan-1850 values to file
ds.isel(time=slice(0, 1)).to_netcdf('Jan-1850.nc', unlimited_dims='time')

# write Feb-1850 values to file
ds.isel(time=slice(1, 2)).to_netcdf('Feb-1850.nc', unlimited_dims='time')

# use open_mfdataset to read in files, combining into 1 Dataset
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc'], decode_cf=decode_cf, decode_times=decode_times)
print('time and time_bounds encoding, single-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)

# use open_mfdataset to read in files, combining into 1 Dataset
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'], decode_cf=decode_cf, decode_times=decode_times)
print('--------------------')
print('time and time_bounds encoding, multi-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)
```

produces

```
time and time_bounds encoding, single-file open_mfdataset
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (512,), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1,), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
--------------------
time and time_bounds encoding, multi-file open_mfdataset
{}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
```
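Because `ds.time.encoding` comes back empty from the multi-file `open_mfdataset`, a later `to_netcdf` call picks default time units for `time` while `time_bounds` keeps its original `'days since 1850-01-01'` encoding, which is the inconsistency the issue title describes. A minimal workaround sketch, not from the original comment (the output filename is hypothetical): copy the units and calendar encoding from `time_bounds` back onto `time` before writing.

```
import xarray as xr

# Re-open the combined dataset; ds.time.encoding is {} at this point.
ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'])

# Copy the time-related encoding from time_bounds back onto time so both
# variables are written with the same reference date and calendar.
for key in ('units', 'calendar'):
    if key in ds.time_bounds.encoding:
        ds.time.encoding.setdefault(key, ds.time_bounds.encoding[key])

# 'Jan-Feb-1850.nc' is a hypothetical output path for this sketch.
ds.to_netcdf('Jan-Feb-1850.nc', unlimited_dims='time')
```

This only restores the units/calendar pair; other encoding keys such as chunk sizes or compression settings would need the same treatment if they matter downstream.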
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
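For reference, this page's filter can be reproduced against a local SQLite copy of the database using Python's standard sqlite3 module; a sketch, assuming a local file named github.db (the filename is an assumption):

```
import sqlite3

# Reproduce this page's filter: comments on issue 437418525 whose author
# has no association with the repository, newest update first.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND issue = 437418525
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user, created_at, body in rows:
    print(comment_id, user, created_at, body[:60])
conn.close()
```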