issue_comments

12 rows where issue = 229807027 sorted by updated_at descending


Facets

user (3 values)

  • cchwala: 7
  • jhamman: 4
  • shoyer: 1

author_association (2 values)

  • CONTRIBUTOR: 7
  • MEMBER: 5

issue (1 value)

  • Speed up `decode_cf_datetime`: 12

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
317814176 https://github.com/pydata/xarray/pull/1414#issuecomment-317814176 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNzgxNDE3Ng== jhamman 2443309 2017-07-25T17:42:58Z 2017-07-25T17:43:19Z MEMBER

Thanks @cchwala! I think the Pandas issue is sufficient on this one.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
317786250 https://github.com/pydata/xarray/pull/1414#issuecomment-317786250 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNzc4NjI1MA== cchwala 102827 2017-07-25T16:03:46Z 2017-07-25T16:03:46Z CONTRIBUTOR

@jhamman @shoyer This should be ready to merge.

Should I open an xarray issue concerning the bug with pandas.to_timedelta(), or is it enough to have the issue I submitted for pandas? I think the bug should be resolved in xarray once it is resolved in pandas, because then the overflow check here should catch the cases I discovered.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
317055124 https://github.com/pydata/xarray/pull/1414#issuecomment-317055124 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNzA1NTEyNA== jhamman 2443309 2017-07-21T16:56:52Z 2017-07-21T16:56:52Z MEMBER

Agreed. Let's leave the overflow fix for later. I'll give this one more review and we'll try to get this merged.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
317036076 https://github.com/pydata/xarray/pull/1414#issuecomment-317036076 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNzAzNjA3Ng== shoyer 1217238 2017-07-21T15:44:15Z 2017-07-21T15:44:15Z MEMBER

> Do you want to merge this PR, knowing that there still is the overflow issue that was in the code before?

This still sounds like an improvement to me!

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
316963228 https://github.com/pydata/xarray/pull/1414#issuecomment-316963228 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNjk2MzIyOA== cchwala 102827 2017-07-21T10:10:54Z 2017-07-21T10:10:54Z CONTRIBUTOR

Hmm... it's still complicated. To avoid the NaTs in my code, I tried to extend the current overflow check so that it switches to _decode_datetime_with_netcdf4() earlier. This was my attempt:

    (pd.to_timedelta(flat_num_dates.min(), delta) - pd.to_timedelta(1, 'd') + ref_date)
    (pd.to_timedelta(flat_num_dates.max(), delta) + pd.to_timedelta(1, 'd') + ref_date)

But unfortunately, as shown in my notebook above, pandas.to_timedelta() has a bug and does not detect the overflow in the esoteric cases that I have identified. I have filed this issue, pandas-dev/pandas/issues/17037, because it should be solved there.

Since I do not think this will be fixed soon (I would gladly look at it, but have no time and probably not enough knowledge about the pandas core stuff), I am not sure what to do.

Do you want to merge this PR, knowing that there still is the overflow issue that was in the code before? Or should I continue to try to fix the current overflow bug in this PR?

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
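
The widened bounds check cchwala describes above can be sketched as a small, self-contained Python example. This is only an illustration of the idea, not xarray's actual code: the helper name check_timestamp_range and the sample values are invented here, and the except clause assumes pandas reports the overflow via OutOfBoundsDatetime or OverflowError (which, per the comment above, it did not reliably do for these edge cases at the time).

import numpy as np
import pandas as pd
from pandas.errors import OutOfBoundsDatetime


def check_timestamp_range(flat_num_dates, delta, ref_date):
    """Hypothetical helper: True if both extremes of the data, padded by one
    day in each direction, fit into pandas' nanosecond datetime range."""
    try:
        # Probe the min/max, widened by one day, so values that land just
        # past the datetime64[ns] range also trigger the fallback path.
        pd.to_timedelta(flat_num_dates.min(), delta) - pd.to_timedelta(1, 'd') + ref_date
        pd.to_timedelta(flat_num_dates.max(), delta) + pd.to_timedelta(1, 'd') + ref_date
        return True
    except (OutOfBoundsDatetime, OverflowError):
        return False


# Illustrative values only: seconds since the reference date, with the maximum
# landing past the year 2262 and therefore outside the ns-precision range.
num_dates = np.array([0.0, 9.0e9])
print(check_timestamp_range(num_dates, 's', pd.Timestamp('2000-01-01')))  # expect False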
315859360 https://github.com/pydata/xarray/pull/1414#issuecomment-315859360 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNTg1OTM2MA== jhamman 2443309 2017-07-17T19:38:03Z 2017-07-17T19:38:03Z MEMBER

@cchwala - thanks for keeping this moving. Once you've taken another pass at the code and added a Whatsnew note, I'll give it a final review.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
315643209 https://github.com/pydata/xarray/pull/1414#issuecomment-315643209 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNTY0MzIwOQ== cchwala 102827 2017-07-16T22:41:50Z 2017-07-16T22:41:50Z CONTRIBUTOR

...but wait. The NaTs that my code produces beyond the int64 overflow should actually be valid dates, produced using _decode_datetime_with_netcdf4, right?

Hence, I should still add a check for NaT results and then fall back to the netCDF4-based version.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
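
The pattern described in the comment above (decode with the fast path, then re-decode anything that came back as NaT) can be sketched with the toy example below. The two decoder functions are invented stand-ins for illustration only, not xarray's _decode_datetime_with_netcdf4 or its pandas-based path; the point is just the NaT-check-and-fallback structure.

import numpy as np

# Approximate range of years representable at nanosecond precision.
NS_SAFE_YEARS = (1678, 2261)


def fast_decode(years):
    """Toy stand-in for the fast pandas-based decoder: years outside the
    nanosecond-precision range come back as NaT instead of valid dates."""
    return np.array(
        [np.datetime64('%d-01-01' % y, 's')
         if NS_SAFE_YEARS[0] <= y <= NS_SAFE_YEARS[1]
         else np.datetime64('NaT', 's')
         for y in years],
        dtype='datetime64[s]')


def slow_decode(years):
    """Toy stand-in for the slower, wider-range netCDF4-based decoder."""
    return np.array(['%d-01-01' % y for y in years], dtype='datetime64[s]')


def decode_with_fallback(years):
    # Try the fast path first; any NaT in the result means a value silently
    # overflowed, so re-decode the whole array with the wider-range decoder.
    dates = fast_decode(years)
    if np.isnat(dates).any():
        dates = slow_decode(years)
    return dates


print(decode_with_fallback(np.array([2000, 2500])))
# -> ['2000-01-01T00:00:00' '2500-01-01T00:00:00']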
315637844 https://github.com/pydata/xarray/pull/1414#issuecomment-315637844 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNTYzNzg0NA== cchwala 102827 2017-07-16T21:15:04Z 2017-07-16T21:34:12Z CONTRIBUTOR

@jhamman - I found some differences between the old code in master and my code when decoding values close to the np.datetime64 overflow. My code produces NaT where the old code returned some date.

First, I wanted to test and fix that. However, I may have found that the old implementation did not behave correctly when crossing the "overflow" line just slightly.

I have summed that up in a notebook here.

My conclusion would be that the code in this PR is not only faster but also more correct than the old one. However, since it is quite late in the evening and my head needs some rest, I would like to get a second (or third) opinion...

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
315322859 https://github.com/pydata/xarray/pull/1414#issuecomment-315322859 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNTMyMjg1OQ== cchwala 102827 2017-07-14T10:05:04Z 2017-07-14T10:05:04Z CONTRIBUTOR

@jhamman - Sorry, I was away from the office (and everything related to work) for more than a month and had to catch up on a lot of things. I will sum up my findings and post them here, hopefully after today's lunch break.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
315206339 https://github.com/pydata/xarray/pull/1414#issuecomment-315206339 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMxNTIwNjMzOQ== jhamman 2443309 2017-07-13T21:24:02Z 2017-07-13T21:24:02Z MEMBER

@cchwala - Any update on the testing / overflow issue you mentioned?

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
305469383 https://github.com/pydata/xarray/pull/1414#issuecomment-305469383 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMwNTQ2OTM4Mw== cchwala 102827 2017-06-01T11:43:27Z 2017-06-01T11:43:27Z CONTRIBUTOR

Just a short notice: sorry for the delay. I am still working on this PR, but I am too busy right now to finish the overflow testing. I think I found some edge cases that have to be handled. I will provide more details soon.

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027
302943727 https://github.com/pydata/xarray/pull/1414#issuecomment-302943727 https://api.github.com/repos/pydata/xarray/issues/1414 MDEyOklzc3VlQ29tbWVudDMwMjk0MzcyNw== cchwala 102827 2017-05-21T15:28:15Z 2017-05-21T15:28:15Z CONTRIBUTOR

Thanks @shoyer and @jhamman for the feedback. I will change things accordingly.

Concerning tests, I will think again about additional checks for correct handling of overflow. I must admit that I am not 100% sure that every case is handled correctly by the current code and covered by the current tests. I will have to think about it a little when I find time within the next few days...

  reactions: none (total_count 0)
  Speed up `decode_cf_datetime` 229807027

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
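
This table lives in a SQLite database served by Datasette, so the view captured on this page (issue_comments rows where issue = 229807027, ordered by updated_at descending) corresponds to a plain query against the schema above. Below is a minimal sketch using Python's sqlite3 module; the filename github.db is a placeholder, since the actual database file is not named on this page.

import sqlite3

# Placeholder path: substitute the SQLite file behind this Datasette instance.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 229807027
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user_id, created, updated, association, body in rows:
    print(comment_id, updated, association)

conn.close()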