issue_comments
3 rows where author_association = "MEMBER" and issue = 148876551 sorted by updated_at descending
Columns: id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
212029772 · jhamman (2443309) · MEMBER · created 2016-04-19T17:32:45Z · updated 2016-04-19T17:32:45Z
https://github.com/pydata/xarray/issues/827#issuecomment-212029772 · node MDEyOklzc3VlQ29tbWVudDIxMjAyOTc3Mg== · issue_url https://api.github.com/repos/pydata/xarray/issues/827

this is too bad.

This seems easy enough. It would be nice if we always had I can try to take a hack at this later this week (unless someone gets there first).

Reactions: none. Issue: Issue with GFS time reference (148876551)
211138272 · shoyer (1217238) · MEMBER · created 2016-04-18T00:25:00Z · updated 2016-04-18T00:25:00Z
https://github.com/pydata/xarray/issues/827#issuecomment-211138272 · node MDEyOklzc3VlQ29tbWVudDIxMTEzODI3Mg== · issue_url https://api.github.com/repos/pydata/xarray/issues/827

Ah, I finally figured out what's going on. We use pandas to clean up time units in an attempt to always write ISO-8601 compatible reference times. Unfortunately, pandas interprets dates like:

```
In [21]: pd.Timestamp('1-1-1 00:00:0.0')
Out[21]: Timestamp('2001-01-01 00:00:00')

In [25]: pd.Timestamp('01-JAN-0001 00:00:00')
Out[25]: Timestamp('2001-01-01 00:00:00')
```

One might argue this is a bug in pandas, but nonetheless that's what it does. xarray can currently handle datetimes outside the range of dates handled by pandas (roughly 1700-2300), but only if pandas raises an OutOfBoundsDatetime error. Two fixes that we need for this:

- use netCDF4's reference time decoding (if available) before trying to use pandas in decode_cf_datetime. Note that it is important to decode only the one reference time with netCDF4 if possible, because it's a lot faster to parse dates with vectorized pandas/numpy operations.
- stop using _cleanup_netcdf_time_units, since apparently it can go wrong.

cc @jhamman who has some experience with these issues

Reactions: none. Issue: Issue with GFS time reference (148876551)
210926466 · shoyer (1217238) · MEMBER · created 2016-04-17T00:09:06Z · updated 2016-04-17T00:09:15Z
https://github.com/pydata/xarray/issues/827#issuecomment-210926466 · node MDEyOklzc3VlQ29tbWVudDIxMDkyNjQ2Ng== · issue_url https://api.github.com/repos/pydata/xarray/issues/827

When you're writing the data back to disk with to_netcdf, try writing something like:

But I'm a little surprised this doesn't work by default. xarray does use '2001-01-01' as a default reference time, but if you pulled the data from an existing dataset (rather than creating the time variable directly yourself, such as with numpy or pandas), then it should save the original units in

Reactions: none. Issue: Issue with GFS time reference (148876551)
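For context on what a reference-time units string controls: a CF-style units string such as 'hours since 2001-01-01' stores each time as a numeric offset from the reference date, and a bad reference date shifts every decoded time. A minimal stdlib sketch of that round trip (hypothetical helper names, not xarray's implementation):

```python
from datetime import datetime, timedelta

# Reference taken from a CF units string such as "hours since 2001-01-01".
REF = datetime(2001, 1, 1)

def encode_times(times):
    """Hours elapsed since REF -- the numbers actually stored in the file."""
    return [(t - REF).total_seconds() / 3600.0 for t in times]

def decode_times(offsets):
    """Invert the encoding back to datetimes."""
    return [REF + timedelta(hours=h) for h in offsets]

times = [datetime(2016, 4, 17), datetime(2016, 4, 17, 6)]
offsets = encode_times(times)
assert decode_times(offsets) == times  # lossless round trip
```

In xarray itself, an explicit reference time can be requested at write time through `to_netcdf`'s `encoding` argument, e.g. `ds.to_netcdf(path, encoding={'time': {'units': 'hours since 2001-01-01'}})`; the units string here is an arbitrary example.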
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
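The view at the top of this page (3 rows where author_association = "MEMBER" and issue = 148876551, sorted by updated_at descending) corresponds to a straightforward query against this schema. A self-contained sketch with Python's sqlite3, populating only the scalar fields the query touches and dropping the foreign-key clauses since the referenced tables aren't shown here:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
    [author_association] TEXT, [body] TEXT, [reactions] TEXT,
    [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")

# The three comments shown on this page (id, user, created_at, updated_at,
# author_association, issue):
rows = [
    (212029772, 2443309, '2016-04-19T17:32:45Z', '2016-04-19T17:32:45Z', 'MEMBER', 148876551),
    (211138272, 1217238, '2016-04-18T00:25:00Z', '2016-04-18T00:25:00Z', 'MEMBER', 148876551),
    (210926466, 1217238, '2016-04-17T00:09:06Z', '2016-04-17T00:09:15Z', 'MEMBER', 148876551),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, created_at, updated_at, author_association, issue) "
    "VALUES (?, ?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as plain text, so ORDER BY works directly:
ids = [r[0] for r in conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = 148876551 "
    "ORDER BY updated_at DESC")]
print(ids)  # [212029772, 211138272, 210926466]
```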