issue_comments

3 rows where author_association = "CONTRIBUTOR", issue = 185709414 and user = 1997005 sorted by updated_at descending

id: 359839979
html_url: https://github.com/pydata/xarray/issues/1064#issuecomment-359839979
issue_url: https://api.github.com/repos/pydata/xarray/issues/1064
node_id: MDEyOklzc3VlQ29tbWVudDM1OTgzOTk3OQ==
user: NotSqrt (1997005)
created_at: 2018-01-23T16:07:44Z
updated_at: 2018-01-23T16:07:44Z
author_association: CONTRIBUTOR

FYI, `merged.time.encoding = {}` before calling `to_netcdf` seems to avoid the RuntimeWarning.

reactions:
{
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
issue: Differences on datetime values appears after writing reindexed variable on netCDF file (185709414)
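A minimal, hypothetical sketch of the workaround from the comment above: clearing the stale `time` encoding before `to_netcdf` so that xarray re-infers units fine enough for the data. The dataset construction and the output path below are placeholders, not part of the original comment.

```python
import numpy
import pandas
import xarray

# Hypothetical stand-in for a merged dataset whose time coordinate carries
# an encoding inherited from a previously written file ('minutes since ...'),
# while the timestamps now need seconds-level resolution.
merged = xarray.Dataset(
    {'foo': ('time', numpy.random.rand(3))},
    coords={'time': pandas.to_datetime(['2018-01-01 00:00:30',
                                        '2018-01-01 00:01:30',
                                        '2018-01-01 00:02:30'])},
)
merged.time.encoding = {'calendar': 'proleptic_gregorian',
                        'dtype': numpy.dtype('int64'),
                        'units': 'minutes since 2018-01-01 00:00:00'}

# The workaround: drop the inherited encoding so to_netcdf picks units
# that can represent the seconds, avoiding the RuntimeWarning.
merged.time.encoding = {}
merged.to_netcdf('merged.nc')  # placeholder output path
```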
id: 358570582
html_url: https://github.com/pydata/xarray/issues/1064#issuecomment-358570582
issue_url: https://api.github.com/repos/pydata/xarray/issues/1064
node_id: MDEyOklzc3VlQ29tbWVudDM1ODU3MDU4Mg==
user: NotSqrt (1997005)
created_at: 2018-01-18T08:16:06Z
updated_at: 2018-01-18T08:16:06Z
author_association: CONTRIBUTOR

There you go!

```python
import numpy
import pandas
import tempfile
import warnings
import xarray

array1 = xarray.DataArray(
    numpy.random.rand(5),
    dims=['time'],
    coords={'time': pandas.to_datetime(['2018-01-01', '2018-01-01 00:01', '2018-01-01 00:02', '2018-01-01 00:03', '2018-01-01 00:04'])},
    name='foo'
)

array2 = xarray.DataArray(
    numpy.random.rand(5),
    dims=['time'],
    coords={'time': pandas.to_datetime(['2018-01-01 00:05', '2018-01-01 00:05:10', '2018-01-01 00:05:20', '2018-01-01 00:05:30', '2018-01-01 00:05:40'])},
    name='foo'
)

with tempfile.NamedTemporaryFile() as tmp:
    # save first array
    array1.to_netcdf(tmp.name)
    # reload it
    array1_reloaded = xarray.open_dataarray(tmp.name)

    # the time encoding stores minutes as int, so seconds won't be allowed at next call of to_netcdf
    assert array1_reloaded.time.encoding['dtype'] == numpy.int64
    assert array1_reloaded.time.encoding['units'] == 'minutes since 2018-01-01 00:00:00'

    merged = xarray.merge([array1_reloaded, array2])
    array1_reloaded.close()

    with warnings.catch_warnings():
        warnings.filterwarnings('error', category=RuntimeWarning)
        merged.to_netcdf(tmp.name)
```

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Differences on datetime values appears after writing reindexed variable on netCDF file (185709414)
id: 358324488
html_url: https://github.com/pydata/xarray/issues/1064#issuecomment-358324488
issue_url: https://api.github.com/repos/pydata/xarray/issues/1064
node_id: MDEyOklzc3VlQ29tbWVudDM1ODMyNDQ4OA==
user: NotSqrt (1997005)
created_at: 2018-01-17T14:41:14Z
updated_at: 2018-01-17T14:41:14Z
author_association: CONTRIBUTOR

I faced this issue when switching from a `concat` to a `merge`.

The first merged dataset had a time dimension whose encoding said `{'calendar': 'proleptic_gregorian', 'dtype': dtype('int64'), 'units': 'minutes since 2017-08-20 00:00:00'}`, which meant that data from the second merged dataset could not be stored with a finer resolution than minutes.

If I try to store values like '2017-08-20 00:00:30', I get the warning `xarray\conventions.py:1092: RuntimeWarning: saving variable time with floating point data as an integer dtype without any _FillValue to use for NaNs`.

Maybe it is similar in your case: netCDF stored the data as 'hours since XXXX', so you lose the minutes.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Differences on datetime values appears after writing reindexed variable on netCDF file (185709414)
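A short sketch, using hypothetical minute-aligned data and a temporary file, of how to inspect the encoding that a netCDF round trip attaches to the time coordinate; if those units are coarser than the timestamps you later merge in (for example 'minutes since ...' versus seconds-level values), writing the merge will raise the RuntimeWarning quoted above.

```python
import numpy
import pandas
import tempfile
import xarray

# Minute-aligned timestamps, so xarray can encode time as whole minutes.
ds = xarray.Dataset(
    {'foo': ('time', numpy.random.rand(3))},
    coords={'time': pandas.to_datetime(['2017-08-20 00:00',
                                        '2017-08-20 00:01',
                                        '2017-08-20 00:02'])},
)

with tempfile.NamedTemporaryFile(suffix='.nc') as tmp:
    ds.to_netcdf(tmp.name)
    reloaded = xarray.open_dataset(tmp.name)
    # Typically something like:
    # {'calendar': 'proleptic_gregorian',
    #  'units': 'minutes since 2017-08-20 00:00:00',
    #  'dtype': dtype('int64'), ...}
    print(reloaded.time.encoding)
    reloaded.close()
```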

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
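For reference, a hedged sketch of reproducing the filter described at the top of this page ("3 rows where author_association = ...") with Python's sqlite3 module against a local copy of the database; the file name github.db is a placeholder.

```python
import sqlite3

# Placeholder path to a local copy of the database behind this page.
conn = sqlite3.connect('github.db')
rows = conn.execute(
    """
    SELECT [id], [user], [created_at], [updated_at], [author_association], [body]
    FROM [issue_comments]
    WHERE [author_association] = 'CONTRIBUTOR'
      AND [issue] = 185709414
      AND [user] = 1997005
    ORDER BY [updated_at] DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```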