issue_comments
2 rows where issue = 33272937 and user = 2443309 sorted by updated_at descending
id: 42890839
html_url: https://github.com/pydata/xarray/issues/121#issuecomment-42890839
issue_url: https://api.github.com/repos/pydata/xarray/issues/121
node_id: MDEyOklzc3VlQ29tbWVudDQyODkwODM5
user: jhamman (2443309)
created_at: 2014-05-12T21:28:44Z
updated_at: 2014-05-12T21:28:44Z
author_association: MEMBER
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: virtual variables not available when using open_dataset (33272937)
body:

Ok, I just spent a few minutes working through a possible (although not ideal) solution for this. It works, although it is a bit ugly and quite a bit slower than the standard calendar option. This option returns a `datetime64[ns]` array:

```python
In [1]: import pandas as pd
        from netCDF4 import num2date, date2num
        import datetime
        import numpy as np
        from xray.conventions import decode_cf_datetime as decode

        units = 'days since 0001-01-01'

        # pandas time range
        times = pd.date_range('2001-01-01-00', end='2001-06-30-23', freq='H')

        # numpy array of numeric dates on noleap calendar
        noleap_time = date2num(times.to_pydatetime(), units, calendar='noleap')

        # numpy array of numeric dates on standard calendar
        std_time = date2num(times.to_pydatetime(), units, calendar='standard')

        # decoding function using datetime intermediary
        def nctime_to_nptime(times):
            new = np.empty(len(times), dtype='M8[ns]')
            for i, t in enumerate(times):
                new[i] = np.datetime64(datetime.datetime(*t.timetuple()[:6]))
            return new

In [2]: # decode noleap_time
        %timeit nctime_to_nptime(decode(noleap_time, units, calendar='noleap'))
        noleap_datetimes = nctime_to_nptime(num2date(noleap_time, units, calendar='noleap'))
        print 'dtype:', noleap_datetimes.dtype
        print noleap_datetimes
10 loops, best of 3: 38.8 ms per loop
dtype: datetime64[ns]
['2000-12-31T16:00:00.000000000-0800' '2000-12-31T17:00:00.000000000-0800'
 '2000-12-31T18:00:00.000000000-0800' ...,
 '2001-06-30T14:00:00.000000000-0700' '2001-06-30T15:00:00.000000000-0700'
 '2001-06-30T16:00:00.000000000-0700']

In [3]: # decode std_time using vectorized converter
        %timeit decode(std_time, units, calendar='standard')
        standard_datetimes = decode(std_time, units, calendar='standard')
        print 'dtype:', standard_datetimes.dtype
        print standard_datetimes
1000 loops, best of 3: 243 µs per loop
dtype: datetime64[ns]
['2000-12-31T16:00:00.000000000-0800' '2000-12-31T16:59:59.000000000-0800'
 '2000-12-31T17:59:59.000000000-0800' ...,
 '2001-06-30T13:59:59.000000000-0700' '2001-06-30T14:59:59.000000000-0700'
 '2001-06-30T15:59:59.000000000-0700']
```

Two things to notice here:

- the …
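The loop inside `nctime_to_nptime` above is the crux of the workaround: each time value is passed through a `datetime.datetime` intermediary, since numpy cannot build `datetime64` values from netCDF calendar objects directly. A minimal, self-contained sketch of that conversion, using plain `datetime` objects as stand-ins for `num2date` output (an assumption for illustration; both expose the same `timetuple()` interface):

```python
import datetime
import numpy as np

def nctime_to_nptime(times):
    # Convert date-like objects to datetime64[ns] via a
    # datetime.datetime intermediary built from timetuple().
    new = np.empty(len(times), dtype='M8[ns]')
    for i, t in enumerate(times):
        new[i] = np.datetime64(datetime.datetime(*t.timetuple()[:6]))
    return new

# Stand-in input: plain datetimes in place of netCDF4.num2date output.
times = [datetime.datetime(2001, 1, 1) + datetime.timedelta(hours=h)
         for h in range(4)]
out = nctime_to_nptime(times)
print(out.dtype)  # datetime64[ns]
```

As the comment's timings show, this element-by-element loop is far slower than the vectorized standard-calendar path, which is the trade-off being pointed out.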
id: 42793512
html_url: https://github.com/pydata/xarray/issues/121#issuecomment-42793512
issue_url: https://api.github.com/repos/pydata/xarray/issues/121
node_id: MDEyOklzc3VlQ29tbWVudDQyNzkzNTEy
user: jhamman (2443309)
created_at: 2014-05-12T03:25:35Z
updated_at: 2014-05-12T03:25:35Z
author_association: MEMBER
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: virtual variables not available when using open_dataset (33272937)
body:

I think the simplest option would be to develop a function to cast the … Does … I've run into issues like this repeatedly, and I think it would be really nice if the …
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
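The schema can be exercised directly with Python's built-in `sqlite3` module. The sketch below is a trimmed copy (the `REFERENCES` clauses are dropped because the `users` and `issues` tables are not created here) loaded with just the metadata of the two comments on this page, and it reproduces the query behind the page: rows where issue = 33272937 and user = 2443309, sorted by updated_at descending.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed schema: REFERENCES clauses omitted for a standalone example.
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY, [node_id] TEXT, [user] INTEGER,
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Metadata of the two comments shown on this page.
conn.executemany(
    "INSERT INTO issue_comments (id, [user], issue, updated_at) VALUES (?, ?, ?, ?)",
    [
        (42890839, 2443309, 33272937, "2014-05-12T21:28:44Z"),
        (42793512, 2443309, 33272937, "2014-05-12T03:25:35Z"),
    ],
)

# The query behind this page's row listing.
ids = [row[0] for row in conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE issue = ? AND [user] = ? ORDER BY updated_at DESC",
    (33272937, 2443309),
)]
print(ids)  # [42890839, 42793512]
```

The indexes on `issue` and `user` are what make this filtered lookup cheap even on a large comments table.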