issue_comments

2 rows where author_association = "MEMBER" and issue = 115979105 sorted by updated_at descending

id: 155470414
html_url: https://github.com/pydata/xarray/issues/652#issuecomment-155470414
issue_url: https://api.github.com/repos/pydata/xarray/issues/652
node_id: MDEyOklzc3VlQ29tbWVudDE1NTQ3MDQxNA==
user: max-sixty (5635139)
created_at: 2015-11-10T16:18:24Z
updated_at: 2015-11-10T16:18:24Z
author_association: MEMBER

Wonderful, thanks @shoyer.

FWIW, reduce doesn't seem well documented. It also looks very similar to apply from pandas, albeit with the requirement (rather than the option) to lose a dimension.

I also found .groupby('x').apply(func), which I think is broadly equivalent to a pandas .apply(func, axis=0).

Closing for now, cheers.

reactions: none (total_count: 0)
issue: ENH: Apply numpy function to named axes (115979105)
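
As a quick illustration of the groupby/apply analogy mentioned in the comment above, here is a minimal sketch. It is not taken from the thread: it assumes the modern xarray package name (the discussion predates the rename from xray), and the demeaning function is purely illustrative.

```
import numpy as np
import xarray as xr

# Sketch only: .groupby('x').apply(func) runs func on each slice along 'x' and
# stitches the results back together, broadly analogous to pandas'
# df.apply(func, axis=0) running column by column.
data_array = xr.DataArray(
    np.random.randn(3, 4),
    dims=['x', 'y'],
    coords={'x': [0, 1, 2]},  # groupby needs a coordinate to group on
)

# Demean each 'x' slice; in recent xarray versions this method is spelled .map().
demeaned = data_array.groupby('x').apply(lambda a: a - a.mean())
print(demeaned.dims)  # ('x', 'y')
```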
id: 155293177
html_url: https://github.com/pydata/xarray/issues/652#issuecomment-155293177
issue_url: https://api.github.com/repos/pydata/xarray/issues/652
node_id: MDEyOklzc3VlQ29tbWVudDE1NTI5MzE3Nw==
user: shoyer (1217238)
created_at: 2015-11-10T05:08:05Z
updated_at: 2015-11-10T05:08:05Z
author_association: MEMBER

Yes, use the reduce method -- which should probably be called something more appropriately generic such as aggregate, given that it doesn't only do reductions -- e.g.,

```
In [5]: data_array = xray.DataArray(np.random.randn(3, 4, 5), dims=['x', 'y', 'z'])

In [14]: data_array.reduce(np.nanpercentile, q=95, dim=['x', 'y'])
Out[14]:
<xray.DataArray (z: 5)>
array([ 1.79306097,  0.9788271 ,  0.9385694 ,  1.30198262,  1.78693993])
Coordinates:
  * z        (z) int64 0 1 2 3 4
```

To apply an operation along an axis that doesn't do an aggregation, you can use get_axis_num, e.g.,

```
In [27]: xray.DataArray(np.argsort(data_array, axis=data_array.get_axis_num('x')), data_array.coords)
Out[27]:
<xray.DataArray (x: 3, y: 4, z: 5)>
array([[[2, 1, 1, 2, 0],
        [0, 2, 2, 1, 0],
        [1, 1, 2, 1, 0],
        [2, 0, 0, 0, 0]],

       [[1, 2, 2, 0, 2],
        [2, 1, 1, 0, 1],
        [2, 0, 1, 0, 2],
        [1, 1, 2, 2, 1]],

       [[0, 0, 0, 1, 1],
        [1, 0, 0, 2, 2],
        [0, 2, 0, 2, 1],
        [0, 2, 1, 1, 2]]])
Coordinates:
  * x        (x) int64 0 1 2
  * y        (y) int64 0 1 2 3
  * z        (z) int64 0 1 2 3 4
```

We don't currently have a convenience method for the latter, but that might be a good idea. I'm not entirely sure it should be called DataArray.apply, but it could reuse maybe_wrap_array from Dataset.apply and add auto-conversion from dim to axis arguments.

reactions: +1 × 1 (total_count: 1)
issue: ENH: Apply numpy function to named axes (115979105)
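
A dim-to-axis wrapper along the lines suggested at the end of the comment above might look like the following. This is a hypothetical sketch, not the library's API: the name apply_along_dim is invented, and instead of the internal maybe_wrap_array helper it simply re-wraps the result when the shape is unchanged.

```
import numpy as np
import xarray as xr

def apply_along_dim(da, func, dim, **kwargs):
    """Hypothetical helper: call an axis-based numpy function using a dimension name."""
    axis = da.get_axis_num(dim)                # translate dim name -> positional axis
    result = func(da.values, axis=axis, **kwargs)
    if getattr(result, 'shape', None) == da.shape:
        # Shape preserved, so the original dims and coords still describe the result.
        return xr.DataArray(result, coords=da.coords, dims=da.dims)
    return result                              # otherwise return the raw array

data_array = xr.DataArray(np.random.randn(3, 4, 5), dims=['x', 'y', 'z'])
ranks = apply_along_dim(data_array, np.argsort, dim='x')  # same shape as the input
```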

Table schema

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
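
Given the schema above, the filter on this page (author_association = 'MEMBER' and issue = 115979105, sorted by updated_at descending) maps to a straightforward query. The sketch below assumes a local SQLite copy of the data saved as github.db; that filename is a guess, not something stated on this page.

```
import sqlite3

# Assumed filename for a local copy of this Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 115979105              -- the issue filter can use idx_issue_comments_issue
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user_id, created, updated, association, body in rows:
    print(comment_id, user_id, updated, association)
conn.close()
```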