issues
1 row where type = "issue" and user = 8363752 sorted by updated_at descending
column | value
---|---
id | 148876551
node_id | MDU6SXNzdWUxNDg4NzY1NTE=
number | 827
title | Issue with GFS time reference
user | caiostringari 8363752
state | closed
locked | 0
assignee |
milestone |
comments | 7
created_at | 2016-04-16T18:14:33Z
updated_at | 2022-01-12T14:48:24Z
closed_at | 2019-02-27T01:48:20Z
author_association | NONE
active_lock_reason |
draft |
pull_request |
reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app |
state_reason | completed
repo | xarray 13221727
type | issue

body:

I am currently translating some old ferret code into Python. However, when downloading GFS operational data there was an issue: when downloaded from ferret, the GFS file has the following time reference (using ncdump -h):

When using xarray to access the OPeNDAP server and writing to disk with ds.to_netcdf(), the file has this time reference instead.

This is not really an issue while using the data inside Python, because the dates are translated correctly. However, in my workflow I need this file to be read by other models such as WW3. For instance, trying to read it from WW3 results in:

```
Processing data
```

Looking at the reference time, ferret gives TIME:time_origin = "01-JAN-0001 00:00:00" while xarray gives string time:units = "days since 2001-01-01". Well, there are 2000 years missing... I tried to fix it using something like:

But the reference time didn't actually get updated. Is there an easy way to make the reference time match what is in NOAA's OPeNDAP server?
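The body above comes down to the time coordinate's on-disk encoding: xarray decodes the dates correctly on read, but on write it picks its own "days since ..." reference, which downstream tools such as WW3 then misinterpret. Below is a minimal sketch of forcing an explicit reference date through the encoding argument of to_netcdf; the OPeNDAP URL, output file name, and units string are illustrative assumptions, not taken from the issue, and whether WW3 accepts the result is not verified here.

```python
import xarray as xr

# Placeholder OPeNDAP endpoint; the issue does not include the actual URL.
url = "https://nomads.ncep.noaa.gov/dods/..."

ds = xr.open_dataset(url)  # the time coordinate is decoded to datetime64 on read

# Write the time coordinate with an explicit reference date instead of
# letting xarray choose its own "days since ..." origin.
# A reference date this far in the past may require the cftime package.
ds.to_netcdf(
    "gfs_subset.nc",
    encoding={
        "time": {
            "units": "days since 0001-01-01 00:00:00",  # assumed target origin
            "calendar": "standard",
        }
    },
)
```

Setting the same keys on ds["time"].encoding before calling to_netcdf() is an equivalent route; the point in either case is that the reference date has to go into the variable's encoding, not its attributes.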
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
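The single row shown above is just a filter over this schema. For anyone working against a local copy of the database rather than the web view, a minimal sketch of the equivalent query using Python's built-in sqlite3 module; the file name github.db is an assumption.

```python
import sqlite3

# "github.db" is a placeholder name for a local copy of this database.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Same filter and ordering as the page: issues by user 8363752,
# newest update first.
rows = conn.execute(
    """
    SELECT id, number, title, state, comments, updated_at
    FROM issues
    WHERE type = 'issue' AND user = ?
    ORDER BY updated_at DESC
    """,
    (8363752,),
).fetchall()

for row in rows:
    print(row["number"], row["title"], row["updated_at"])
```

The idx_issues_user index defined in the schema is what serves the user = ? filter here; the ORDER BY on updated_at is resolved with a sort over the matching rows.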