issue_comments

2 rows where issue = 239918314 and user = 1217238 sorted by updated_at descending

Comment 312998916 · shoyer (user 1217238) · MEMBER
created_at: 2017-07-05T05:21:45Z · updated_at: 2017-07-05T05:21:45Z
html_url: https://github.com/pydata/xarray/pull/1469#issuecomment-312998916
issue_url: https://api.github.com/repos/pydata/xarray/issues/1469
node_id: MDEyOklzc3VlQ29tbWVudDMxMjk5ODkxNg==

OK, I think I finally understand the nuance of the return value -- thanks for describing that fully for me.

In theory (after #974 is implemented), the current return value from indexes_min should work for indexing, e.g.,

```
indexes = da.indexes_min(dims='y')
indexes
<xarray.Dataset>
Dimensions:  (x: 3)
Coordinates:
  * x        (x) <U1 'c' 'b' 'a'
Data variables:
    y        (x) int64 0 1 0

da.sel(x=indexes.x, y=indexes.y)  # or ds.sel(**indexes)
<xarray.DataArray (x: 3)>
array([ 1, 40,  5])
Coordinates:
  * x        (x) <U1 'c' 'b' 'a'
```

So maybe that is the right choice, though I'm not entirely certain yet.
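For reference, the round trip described above can be approximated today with argmin plus vectorized isel; the toy array below is invented to reproduce the output shapes shown, and indexes_min itself is not an existing method:

```
import xarray as xr

# Toy array invented so the minima along 'y' sit at positions [0, 1, 0]
# with values [1, 40, 5], matching the output shown above.
da = xr.DataArray(
    [[1, 2, 3], [50, 40, 60], [5, 7, 9]],
    coords={"x": ["c", "b", "a"], "y": [10, 20, 30]},
    dims=("x", "y"),
)

positions = da.argmin(dim="y")  # per-'x' integer position of the minimum along 'y'
minima = da.isel(y=positions)   # vectorized indexing recovers the minima themselves
print(positions.values)         # [0 1 0]
print(minima.values)            # [ 1 40  5]
```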

Side note: I'm still not super happy with the names idxmin and indexes_min. They look too different for methods that are only a small variation on each other. Maybe idxmin_dataset or idxmin_dict?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Argmin indexes (239918314)

Comment 312745886 · shoyer (user 1217238) · MEMBER
created_at: 2017-07-03T22:49:39Z · updated_at: 2017-07-03T22:49:39Z
html_url: https://github.com/pydata/xarray/pull/1469#issuecomment-312745886
issue_url: https://api.github.com/repos/pydata/xarray/issues/1469
node_id: MDEyOklzc3VlQ29tbWVudDMxMjc0NTg4Ng==

A few quick thoughts on API design:

  • The most similar pandas method is called idxmin. We may not want to use the exact same name here, but it's something to keep in mind.
  • We might want two separate methods, one like this that returns an OrderedDict/Dataset and another that returns just one DataArray (for use when reducing over only one axis). I might pick idxmin and indexes_min.
  • A keep_dims=True argument like numpy is a nice way to preserve dimensions if desired.
  • I'm a little surprised that it doesn't work to unpack a Dataset with ** in isel_points -- in theory I think it should.
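To make the proposed two-method split concrete, here is a rough sketch emulated with today's argmin/isel; the names idxmin_ and indexes_min_ are placeholders for the proposal, not existing xarray methods:

```
import xarray as xr

# Rough sketch of the two-method split discussed above, built on today's
# argmin/isel. The names idxmin_ and indexes_min_ are placeholders for the
# proposed API, not existing xarray methods.
def idxmin_(da: xr.DataArray, dim: str) -> xr.DataArray:
    """Coordinate labels of the minima along a single dimension."""
    return da[dim].isel({dim: da.argmin(dim=dim)})

def indexes_min_(da: xr.DataArray, dim: str) -> xr.Dataset:
    """Dataset holding the integer positions of the minima along `dim`."""
    return da.argmin(dim=dim).rename(dim).to_dataset()

da = xr.DataArray(
    [[1, 2, 3], [50, 40, 60], [5, 7, 9]],
    coords={"x": ["c", "b", "a"], "y": [10, 20, 30]},
    dims=("x", "y"),
)
print(idxmin_(da, "y").values)  # labels of the per-'x' minima: [10 20 10]
print(indexes_min_(da, "y"))    # Dataset with data variable 'y' = positions [0 1 0]
```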
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Argmin indexes (239918314)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
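For completeness, the filtered view above ("issue = 239918314 and user = 1217238 sorted by updated_at descending") corresponds to a query like the following against this schema; the "github.db" filename is an assumption about a local SQLite copy of the database, not something stated on this page:

```
import sqlite3

# Assumes a local SQLite copy of this database; "github.db" is a placeholder filename.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 239918314 AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user, created_at, updated_at, association, body in rows:
    print(comment_id, user, updated_at, association)
conn.close()
```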