issue_comments


1 row where author_association = "NONE", issue = 437418525 and user = 15570875 sorted by updated_at descending

id: 489752548
html_url: https://github.com/pydata/xarray/issues/2921#issuecomment-489752548
issue_url: https://api.github.com/repos/pydata/xarray/issues/2921
node_id: MDEyOklzc3VlQ29tbWVudDQ4OTc1MjU0OA==
user: klindsay28 (15570875)
created_at: 2019-05-06T19:52:47Z
updated_at: 2019-05-06T19:52:47Z
author_association: NONE

It looks like ds.time.encoding is not getting set when open_mfdataset opens multiple files. I suspect that this leads to the surprising units for time when the combined ds is written out. The code below demonstrates that ds.time.encoding is set by open_mfdataset in the single-file case but not in the multi-file case. However, ds.time_bounds.encoding is set in both the single- and multi-file cases.

The possibility of this is alluded to in a comment in #2436, which relates the issue to #1614.

```
import numpy as np
import xarray as xr

# create time and time_bounds DataArrays for Jan-1850 and Feb-1850
time_bounds_vals = np.array([[0.0, 31.0], [31.0, 59.0]])
time_vals = time_bounds_vals.mean(axis=1)

time_var = xr.DataArray(time_vals, dims='time', coords={'time': time_vals})
time_bounds_var = xr.DataArray(time_bounds_vals, dims=('time', 'd2'),
                               coords={'time': time_vals})

# create Dataset of time and time_bounds
ds = xr.Dataset(coords={'time': time_var}, data_vars={'time_bounds': time_bounds_var})
ds.time.attrs = {'bounds': 'time_bounds', 'calendar': 'noleap',
                 'units': 'days since 1850-01-01'}

# write Jan-1850 values to file
ds.isel(time=slice(0, 1)).to_netcdf('Jan-1850.nc', unlimited_dims='time')

# write Feb-1850 values to file
ds.isel(time=slice(1, 2)).to_netcdf('Feb-1850.nc', unlimited_dims='time')

# use open_mfdataset to read in a single file
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc'], decode_cf=decode_cf, decode_times=decode_times)

print('time and time_bounds encoding, single-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)

# use open_mfdataset to read in multiple files, combining into 1 Dataset
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'], decode_cf=decode_cf,
                       decode_times=decode_times)

print('--------------------')
print('time and time_bounds encoding, multi-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)
```

produces

```
time and time_bounds encoding, single-file open_mfdataset
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (512,), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1,), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
--------------------
time and time_bounds encoding, multi-file open_mfdataset
{}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
```
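
Since ds.time_bounds.encoding does survive the multi-file open, one conceivable workaround (a minimal sketch, not part of the original report; file names and the combined.nc output follow the example above) is to copy the units and calendar keys from time_bounds back onto time before writing the combined dataset out:

```
import xarray as xr

# Hypothetical workaround sketch: after a multi-file open_mfdataset,
# ds.time.encoding comes back empty while ds.time_bounds.encoding still
# carries the on-disk units/calendar. Copying those keys over should keep
# time:units consistent with time_bounds:units in the written file.
ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'])
if not ds.time.encoding:
    ds.time.encoding.update(
        {key: ds.time_bounds.encoding[key]
         for key in ('units', 'calendar')
         if key in ds.time_bounds.encoding}
    )
ds.to_netcdf('combined.nc', unlimited_dims='time')
```

This only papers over the symptom for this write path; whether encoding should survive open_mfdataset automatically is the question raised in #2436 and #1614.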

reactions: none (all counts 0)
issue: to_netcdf with decoded time can create file with inconsistent time:units and time_bounds:units (437418525)
