
issue_comments


6 rows from issue_comments where issue = 33307883 (Only copy datetime64 data if it is using non-nanosecond precision.) and user = 1217238 (shoyer, MEMBER), sorted by updated_at descending

43396609 · shoyer · MEMBER · 2014-05-17T03:39:56Z · 0 reactions
https://github.com/pydata/xarray/pull/125#issuecomment-43396609

I've been playing around with this but don't have a full fix yet. Here is my regression test for your bug:

    def test_index_and_concat_datetime64(self):
        # regression test for #125
        expected = Variable('t', pd.date_range('2011-09-01', periods=10))
        # list-of-one indexing keeps each element as a length-1 Variable
        times = [expected[[i]] for i in range(10)]
        actual = Variable.concat(times, 't')
        self.assertArrayEqual(expected, actual)

43392029 · shoyer · MEMBER · 2014-05-17T00:24:16Z · 0 reactions
https://github.com/pydata/xarray/pull/125#issuecomment-43392029

Did you end up adding your example (concatenating the time objects) as a regression test? That seems like a good idea to ensure that this stays fixed!

43364230 · shoyer · MEMBER · 2014-05-16T18:30:49Z · 0 reactions
https://github.com/pydata/xarray/pull/125#issuecomment-43364230

Maybe best to open a new PR so it's clear what is new and what is just from the rebase?

43302438 · shoyer · MEMBER · 2014-05-16T06:52:23Z · 0 reactions
https://github.com/pydata/xarray/pull/125#issuecomment-43302438

Let me know how this is coming along!

I'd like to release v0.1.1 within the next few days, since #129 means that ReadTheDocs wasn't able to build versioned docs for v0.1, and we should have a static version of our docs built for the current release. At this point, adding and/or documenting new features means that the docs get out of sync with the latest release on pypi, which is obviously non-ideal. For example, the tutorial now mentions loading groups from NetCDF files even though that's not in v0.1.

I'm going to save #128 and any other highly visible changes for v0.2, but if you think you're close to a fix for this issue I'd love to get it into v0.1.1. If not, it's no big deal to wait for v0.2, which I'm guessing will follow within a month or so.

42981862 · shoyer · MEMBER · 2014-05-13T16:56:33Z · 0 reactions
https://github.com/pydata/xarray/pull/125#issuecomment-42981862

Indeed, it would be nice to make that consistent!

I was making some effort to not automatically convert python or netCDF4 datetime objects into numpy.datetime64, for the hypothetical situation where people care about dates before 1677 or aren't using standard calendars (but this will change with #126). But if that's too complicated, feel free to insist on the pandas approach of converting everything to nanosecond datetime64.
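
To make the trade-off concrete, here is a minimal sketch, assuming nothing about the actual xray/xarray code, of the middle ground named in the PR title: cast datetime64 data to nanosecond precision only when it is not already there, so arrays that are already datetime64[ns] pass through without a copy. The helper name as_nanosecond is made up for illustration.

    import numpy as np

    def as_nanosecond(values):
        # Hypothetical helper, not the PR's implementation: copy/cast only
        # when the input is not already nanosecond-precision datetime64.
        values = np.asarray(values)
        if values.dtype != np.dtype('datetime64[ns]'):
            values = values.astype('datetime64[ns]')  # astype makes a copy
        return values

    days = np.array(['2011-09-01', '2011-09-02'], dtype='datetime64[D]')
    already_ns = days.astype('datetime64[ns]')
    print(as_nanosecond(days).dtype)                # datetime64[ns]
    print(as_nanosecond(already_ns) is already_ns)  # True: no copy was needed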

42863294 · shoyer · MEMBER · 2014-05-12T17:39:35Z · 0 reactions
https://github.com/pydata/xarray/pull/125#issuecomment-42863294

Wow, that is nasty! As you can see, we currently have a lot of awkward hacks to work around numpy's semi-broken datetime64, and it looks like this fix broke some of them -- hence the failing Travis builds.

Maybe we need some special logic in Variable.concat instead? My guess is that the trouble here is related to all these datetime64 objects (plain np.datetime64 objects, not arrays with dtype='datetime64', because 0-dimensional datetime64 arrays are broken) being put into an array directly, which at some point implicitly converts them into integers.
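
A minimal sketch of the failure mode being described, using only numpy behavior that still holds today (the original bug dates to numpy ~1.8, whose datetime64 handling differed): once a datetime64 value is handled as a scalar and boxed into an object array, the dtype information is gone, and a numeric conversion yields a bare integer.

    import numpy as np

    # A 0-dimensional datetime64 array collapses to a np.datetime64 scalar
    # on item access.
    arr = np.array('2011-09-01', dtype='datetime64[ns]')  # 0-d array
    scalar = arr[()]                                      # np.datetime64 scalar

    # Boxing such scalars into an object array discards the datetime64 dtype,
    # and converting onward to a number exposes the raw nanosecond count.
    boxed = np.empty(1, dtype=object)
    boxed[0] = scalar
    print(boxed.dtype)   # object, not datetime64[ns]
    print(int(scalar))   # nanoseconds since the Unix epoch, as a plain int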


Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
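
For reference, the filtered view above corresponds to a query like the following, shown here as a Python sketch; the database filename github.db is an assumption (substitute your own SQLite export of this table):

    import sqlite3

    conn = sqlite3.connect("github.db")  # hypothetical local copy
    rows = conn.execute(
        """
        SELECT id, created_at, author_association, body
        FROM issue_comments
        WHERE issue = 33307883 AND user = 1217238
        ORDER BY updated_at DESC
        """
    ).fetchall()
    for comment_id, created_at, association, body in rows:
        print(comment_id, created_at, association, body[:60])
    conn.close()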