issue_comments

5 rows where author_association = "MEMBER", issue = 322591813 and user = 6628425 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
388907899 https://github.com/pydata/xarray/issues/2127#issuecomment-388907899 https://api.github.com/repos/pydata/xarray/issues/2127 MDEyOklzc3VlQ29tbWVudDM4ODkwNzg5OQ== spencerkclark 6628425 2018-05-14T17:59:56Z 2018-05-14T17:59:56Z MEMBER

Awesome, thanks for tracking that down in NumPy so quickly! I updated #2128 accordingly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cftime.datetime serialization example failing in latest doc build 322591813
388847354 https://github.com/pydata/xarray/issues/2127#issuecomment-388847354 https://api.github.com/repos/pydata/xarray/issues/2127 MDEyOklzc3VlQ29tbWVudDM4ODg0NzM1NA== spencerkclark 6628425 2018-05-14T14:56:46Z 2018-05-14T14:56:46Z MEMBER

Huh...my test is still triggering some failures due to this issue in #2128. Oddly on my laptop the original bug doesn't appear to exist if I use python version 3.6.5 and numpy version 1.14.3 (the versions on Travis where I'm getting a failure):

```
$ python
Python 3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:44:09)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'1.14.3'
>>> import datetime
>>> np.array([datetime.timedelta(7)], dtype='timedelta64[D]')
array([7], dtype='timedelta64[D]')
```

and not surprisingly the test passes:

```
$ pytest -vv test_coding_times.py -k test_infer_cftime_datetime_units
========================================================= test session starts ==========================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0 -- //anaconda/envs/xarray-docs/bin/python
cachedir: ../../.pytest_cache
rootdir: /Users/spencerclark/xarray-dev/xarray, inifile: setup.cfg
collected 269 items / 268 deselected

test_coding_times.py::test_infer_cftime_datetime_units PASSED [100%]
```

So it appears to be platform dependent. Trying this out on a linux machine with these versions I can reproduce the issue (which seems to persist even for individual timedeltas, explaining the test failure):

```
$ python
Python 3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:39:56)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'1.14.3'
>>> import datetime
>>> np.array([datetime.timedelta(7)], dtype='timedelta64[D]')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Cannot cast datetime.timedelta object from metadata [Y] to [D] according to the rule 'same_kind'
>>> np.timedelta64(datetime.timedelta(7), 'D')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Cannot cast datetime.timedelta object from metadata [Y] to [D] according to the rule 'same_kind'
>>> np.timedelta64(datetime.timedelta(1), 'D')
numpy.timedelta64(1,'D')
```

It's not ideal, but should we try to go with pandas to do the type conversion? It seems to work on the linux platform:

```
>>> import pandas as pd
>>> pd.to_timedelta([datetime.timedelta(7)])
TimedeltaIndex(['7 days'], dtype='timedelta64[ns]', freq=None)
>>> pd.to_timedelta([datetime.timedelta(7)]).values
array([604800000000000], dtype='timedelta64[ns]')
```
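
For reference, here is a minimal, self-contained sketch of the pandas-based conversion floated in this comment. The list name `deltas` and the sample values are illustrative only; this is not the code that ended up in #2128.

```
import datetime

import pandas as pd

# Hypothetical input: the kind of objects np.diff(dates) produces for the
# doc example in question.
deltas = [datetime.timedelta(days=7), datetime.timedelta(days=31)]

# Building a timedelta64 array directly from these objects (as in the REPL
# session above) raised TypeError on the affected NumPy 1.14.3 / Linux
# combination. Routing the conversion through pandas sidesteps that casting
# path and yields a timedelta64[ns] array.
converted = pd.to_timedelta(deltas).values
print(converted.dtype)  # timedelta64[ns]
print(converted)        # [ 604800000000000 2678400000000000]
```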

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cftime.datetime serialization example failing in latest doc build 322591813
388817354 https://github.com/pydata/xarray/issues/2127#issuecomment-388817354 https://api.github.com/repos/pydata/xarray/issues/2127 MDEyOklzc3VlQ29tbWVudDM4ODgxNzM1NA== spencerkclark 6628425 2018-05-14T13:31:56Z 2018-05-14T13:31:56Z MEMBER

> Any multiple of 7 days (one week) seems to trigger it

Interesting, thanks for investigating things further and confirming that it likely is a NumPy bug. I put up a fix following your suggestion in #2128 and also included a test.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cftime.datetime serialization example failing in latest doc build 322591813
388662640 https://github.com/pydata/xarray/issues/2127#issuecomment-388662640 https://api.github.com/repos/pydata/xarray/issues/2127 MDEyOklzc3VlQ29tbWVudDM4ODY2MjY0MA== spencerkclark 6628425 2018-05-13T23:07:15Z 2018-05-13T23:07:15Z MEMBER

It's confusing to me, because I don't see where NumPy is getting years or months metadata from the datetime.timedelta objects formed by np.diff(dates):

    In [12]: np.diff(dates)
    Out[12]:
    array([datetime.timedelta(31), datetime.timedelta(28), datetime.timedelta(31),
           datetime.timedelta(30), datetime.timedelta(31), datetime.timedelta(30),
           datetime.timedelta(31), datetime.timedelta(31), datetime.timedelta(30),
           datetime.timedelta(31), datetime.timedelta(30), datetime.timedelta(31),
           datetime.timedelta(31), datetime.timedelta(28), datetime.timedelta(31),
           datetime.timedelta(30), datetime.timedelta(31), datetime.timedelta(30),
           datetime.timedelta(31), datetime.timedelta(31), datetime.timedelta(30),
           datetime.timedelta(31), datetime.timedelta(30)], dtype=object)

Unlike np.timedelta64 objects, datetime.timedelta objects cannot be composed of units which have a varying length depending on the year (the coarsest internal resolution is days). The problem seems to occur only after calling np.unique; maybe the solution is to do the type conversion before calling np.unique?

    In [19]: np.unique(np.diff(dates).astype('timedelta64[ns]'))
    Out[19]: array([2419200000000000, 2592000000000000, 2678400000000000], dtype='timedelta64[ns]')
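
Below is a minimal sketch of the "convert before calling np.unique" idea described in this comment. It substitutes ordinary datetime.datetime values for the cftime.datetime dates used in the doc example (an assumption made to keep the sketch self-contained); np.diff yields datetime.timedelta objects in either case.

```
import datetime

import numpy as np

# Stand-in for the monthly dates in the doc example; the real example used
# cftime.datetime objects.
dates = np.array([datetime.datetime(2001, month, 1) for month in range(1, 13)],
                 dtype=object)

# np.diff on an object array of datetimes produces datetime.timedelta objects.
diffs = np.diff(dates)

# Casting to timedelta64[ns] *before* np.unique means np.unique never returns
# an object-dtype array that then has to go through NumPy's timedelta casting.
unique_deltas = np.unique(diffs.astype('timedelta64[ns]'))
print(unique_deltas)  # [2419200000000000 2592000000000000 2678400000000000]
```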

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cftime.datetime serialization example failing in latest doc build 322591813
388661408 https://github.com/pydata/xarray/issues/2127#issuecomment-388661408 https://api.github.com/repos/pydata/xarray/issues/2127 MDEyOklzc3VlQ29tbWVudDM4ODY2MTQwOA== spencerkclark 6628425 2018-05-13T22:42:57Z 2018-05-13T22:42:57Z MEMBER

With the dates specified on line 5 of my example, one can reproduce the error (see line 9 in the problem description). Line 10 shows that casting the result of np.unique(np.diff(dates)) as an array seems to make this type conversion work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cftime.datetime serialization example failing in latest doc build 322591813

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);