issue_comments


3 rows where issue = 223440405 sorted by updated_at descending

Issue: open_mfdataset and add time dimension (3 comments)
Comment by snowman2 (CONTRIBUTOR) · created 2017-04-21T18:49:57Z · updated 2017-04-21T19:44:00Z
https://github.com/pydata/xarray/issues/1380#issuecomment-296274069

Thank you @spencerahill and @shoyer. That was brilliant.

Here is the solution:

```python
import numpy as np
import pandas as pd
import xarray as xr

path_to_files = '*.grib2'

def extract_date(ds):
    """Derive a scalar 'time' value from the pynio/GRIB2 time attributes."""
    for var in ds.variables:
        if 'initial_time' in ds[var].attrs:
            grid_time = pd.to_datetime(ds[var].attrs['initial_time'],
                                       format="%m/%d/%Y (%H:%M)")
            # Add the forecast offset, if present (default unit: hours).
            if 'forecast_time' in ds[var].attrs:
                time_units = 'h'
                if 'forecast_time_units' in ds[var].attrs:
                    time_units = str(ds[var].attrs['forecast_time_units'][0])
                grid_time += np.timedelta64(
                    int(ds[var].attrs['forecast_time'][0]), time_units)
            return ds.assign(time=grid_time)
    raise ValueError("Time attribute 'initial_time' missing")

with xr.open_mfdataset(path_to_files, concat_dim='time',
                       preprocess=extract_date, engine='pynio') as xd:
    print(xd)
```
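The timestamp arithmetic at the heart of `extract_date` can be checked in isolation. This sketch uses hypothetical attribute values in the format the comment assumes for pynio/GRIB2 files (an `initial_time` string plus a `forecast_time` offset array):

```python
import numpy as np
import pandas as pd

# Hypothetical attribute values mimicking the pynio/GRIB2 conventions above:
# an analysis time string plus a 6-hour forecast offset.
attrs = {'initial_time': '04/21/2017 (18:00)',
         'forecast_time': np.array([6]),
         'forecast_time_units': np.array(['h'])}

# Parse the analysis time, then shift it by the forecast offset.
grid_time = pd.to_datetime(attrs['initial_time'], format="%m/%d/%Y (%H:%M)")
grid_time += np.timedelta64(int(attrs['forecast_time'][0]),
                            str(attrs['forecast_time_units'][0]))
print(grid_time)  # → 2017-04-22 00:00:00
```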

Comment by shoyer (MEMBER) · 2017-04-21T17:57:07Z
https://github.com/pydata/xarray/issues/1380#issuecomment-296260389

You can use the `preprocess` argument to `open_mfdataset` to create a new dimension from the attribute value.

Comment by spencerahill (CONTRIBUTOR) · 2017-04-21T17:29:30Z
https://github.com/pydata/xarray/issues/1380#issuecomment-296253682

`open_mfdataset` has a `concat_dim` optional keyword argument, where you can specify the name of a new dimension to concatenate your files over. You can read more about this in the API reference for `open_mfdataset`.

You could then overwrite the coordinate of that new dimension with your desired time coordinate. Does that help?
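The two steps described above can be sketched with toy in-memory datasets standing in for files on disk (the variable names and dates here are hypothetical, not from the original issue):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Two single-snapshot datasets, as if each had been read from one file.
parts = [xr.Dataset({'temp': (('x',), np.arange(3.0) + i)}) for i in range(2)]

# Step 1: concatenate along a new 'time' dimension (what concat_dim does
# for open_mfdataset).
combined = xr.concat(parts, dim='time')

# Step 2: overwrite the new dimension's coordinate with real timestamps.
combined = combined.assign_coords(time=pd.date_range('2017-04-21', periods=2))
print(combined['time'].values)
```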


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 16.659ms · About: xarray-datasette