issue_comments
2 rows where issue = 62242132 and user = 167164 sorted by updated_at descending
---

id: 82035634
html_url: https://github.com/pydata/xarray/issues/374#issuecomment-82035634
issue_url: https://api.github.com/repos/pydata/xarray/issues/374
node_id: MDEyOklzc3VlQ29tbWVudDgyMDM1NjM0
user: naught101 (167164)
created_at: 2015-03-17T02:09:15Z
updated_at: 2015-03-17T02:09:30Z
author_association: NONE
reactions: none (all counts 0)
issue: Set coordinate resolution in ds.to_netcdf (62242132)

body:

blegh... just noticed the different dates. Never mind :P Thanks for the help again

---

id: 82034427
html_url: https://github.com/pydata/xarray/issues/374#issuecomment-82034427
issue_url: https://api.github.com/repos/pydata/xarray/issues/374
node_id: MDEyOklzc3VlQ29tbWVudDgyMDM0NDI3
user: naught101 (167164)
created_at: 2015-03-17T02:04:17Z
updated_at: 2015-03-17T02:04:17Z
author_association: NONE
reactions: none (all counts 0)
issue: Set coordinate resolution in ds.to_netcdf (62242132)

body:

Ah, cool, that looks good, however, I think there might be a bug somewhere. Here's what I was getting originally:

```
In [62]: new_data['time'].encoding
Out[62]: {}

In [60]: new_data.to_netcdf('data/tumba_site_mean_2_year.nc')
```

resulted in:

```
$ ncdump ../projects/synthetic_forcings/data/tumba_site_mean_2_year.nc | grep time --context=2
netcdf tumba_site_mean_2_year {
dimensions:
        time = 35088 ;
        y = 1 ;
        x = 1 ;
variables:
...
        float time(time) ;
                time:calendar = "proleptic_gregorian" ;
                time:units = "minutes since 2000-01-01 00:30:00" ;
...
 time = 0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, 360, 390,
    420, 450, 480, 510, 540, 570, 600, 630, 660, 690, 720, 750, 780, 810, 840,
```

And if I copy the encoding from the original file, as it was loaded:

```
In [65]: new_data['time'].encoding = data['time'].encoding

In [66]: new_data['time'].encoding
Out[66]: {'dtype': dtype('>f8'), 'units': 'seconds since 2002-01-01 00:30:00'}

In [67]: new_data.to_netcdf('data/tumba_site_mean_2_year.nc')
```

results in:

```
$ ncdump ../projects/synthetic_forcings/data/tumba_site_mean_2_year.nc | grep time --context=1
dimensions:
        time = 35088 ;
        y = 1 ;
--
variables:
...
        double time(time) ;
                time:calendar = "proleptic_gregorian" ;
                time:units = "seconds since 2002-01-01T00:30:00" ;
...
 time = -63158400, -63156600, -63154800, -63153000, -63151200, -63149400,
    -63147600, -63145800, -63144000, -63142200, -63140400, -63138600,
```

Now the units are right, but the values are way off. I can't see anything obvious missing from the encoding, compared to the xray docs, but I'm not sure how it works. Also, since seconds are the base SI unit for time, I think it would be sensible to use seconds by default, if no encoding is given.

---
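As the follow-up comment ("just noticed the different dates") suggests, the "way off" values are actually self-consistent: the copied encoding references 2002-01-01 00:30:00, while the data's first timestamp appears to be 2000-01-01 00:30:00 (the original units were "minutes since 2000-01-01 00:30:00", starting at 0), so the first value serializes as a large negative offset. A quick stdlib check of that arithmetic, using the two reference dates from the ncdump output:

```python
from datetime import datetime

# Reference date in the copied encoding: "seconds since 2002-01-01 00:30:00"
reference = datetime(2002, 1, 1, 0, 30)

# Apparent first timestamp of the data, per the original file's units
# ("minutes since 2000-01-01 00:30:00", first value 0)
first_timestamp = datetime(2000, 1, 1, 0, 30)

# 731 days earlier (2000 is a leap year), expressed in seconds
offset = (first_timestamp - reference).total_seconds()
print(offset)  # -63158400.0, matching the first value in the ncdump output
```

Note also that consecutive ncdump values differ by 1800 seconds, i.e. the same 30-minute step as the original minutes-based encoding.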
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
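The schema above can be exercised directly with Python's built-in sqlite3 module. The sketch below rebuilds the table in memory (foreign-key clauses dropped so it is self-contained), inserts two stub rows standing in for the comments on this page, and runs the query behind the listing "2 rows where issue = 62242132 and user = 167164 sorted by updated_at descending":

```python
import sqlite3

# In-memory database using the issue_comments schema shown above;
# REFERENCES clauses are omitted so no users/issues tables are needed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Stub rows with the ids/timestamps from this page; bodies abbreviated.
conn.executemany(
    "INSERT INTO issue_comments (id, user, issue, updated_at, body) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (82034427, 167164, 62242132, "2015-03-17T02:04:17Z", "Ah, cool..."),
        (82035634, 167164, 62242132, "2015-03-17T02:09:30Z", "blegh..."),
    ],
)

# The filter and sort that produced this listing:
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE issue = ? AND user = ? ORDER BY updated_at DESC",
    (62242132, 167164),
).fetchall()
print(rows)  # most recently updated comment (82035634) first
```

The ISO 8601 timestamps stored as TEXT sort correctly with plain lexicographic ORDER BY, which is why the schema gets away without a datetime column type.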