issue_comments

3 rows where author_association = "MEMBER", issue = 224878728 and user = 6815844 sorted by updated_at descending

id: 362902669
html_url: https://github.com/pydata/xarray/issues/1388#issuecomment-362902669
issue_url: https://api.github.com/repos/pydata/xarray/issues/1388
node_id: MDEyOklzc3VlQ29tbWVudDM2MjkwMjY2OQ==
user: fujiisoup (6815844)
created_at: 2018-02-04T12:20:33Z
updated_at: 2018-02-04T12:52:29Z
author_association: MEMBER

@gajomi

Sorry for my late response and thank you for the proposal.

But aside from my previous proposal, I was wondering whether such aggregation methods (including argmin) should propagate the coordinate at all. For example, as you pointed out, in theory we could track the x coordinate at the argmin index after da.argmin(dim='x'). But that is not reasonable for da.mean(dim='x'), and it may be reasonable for da.max(dim='x') but not for da.median(dim='x').

Such method-specific rules could be confusing and would add complexity. I think the rule that we do not track coordinates after aggregations would be much simpler and easier to understand.

If we adopt the above rule, I think argmin would give just an array of indices:

```python
In [1]: import xarray as xr
   ...: da = xr.DataArray([[0, 3, 2], [2, 1, 4]], dims=['x', 'y'],
   ...:                   coords={'x': [1, 2], 'y': ['a', 'b', 'c']})
   ...:

In [4]: da.argmin(dim='x')
Out[4]:
<xarray.DataArray (y: 3)>
array([0, 1, 0])
Coordinates:
  * y        (y) <U1 'a' 'b' 'c'

In [3]: da.isel(x=da.argmin(dim='x'))
Out[3]:
<xarray.DataArray (y: 3)>
array([0, 1, 2])
Coordinates:
    x        (y) int64 1 2 1
  * y        (y) <U1 'a' 'b' 'c'
```

I think your logic would be useful even if we do not track the coordinate.
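A minimal sketch (assuming the vectorized indexing of #1639, with illustrative variable names) of how the x coordinate at each minimum can still be recovered from the plain indices under this rule:

```python
import xarray as xr

da = xr.DataArray([[0, 3, 2], [2, 1, 4]], dims=['x', 'y'],
                  coords={'x': [1, 2], 'y': ['a', 'b', 'c']})

idx = da.argmin(dim='x')        # plain integer indices along y: [0, 1, 0]
min_vals = da.isel(x=idx)       # the minima themselves: [0, 1, 2]
x_of_min = da['x'].isel(x=idx)  # x coordinate at each minimum: [1, 2, 1]
```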

I would appreciate any feedback.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: argmin / argmax behavior doesn't match documentation (224878728)
id: 338397633
html_url: https://github.com/pydata/xarray/issues/1388#issuecomment-338397633
issue_url: https://api.github.com/repos/pydata/xarray/issues/1388
node_id: MDEyOklzc3VlQ29tbWVudDMzODM5NzYzMw==
user: fujiisoup (6815844)
created_at: 2017-10-21T13:51:04Z
updated_at: 2017-10-21T13:51:04Z
author_association: MEMBER

I am thinking again about how argmin should work with our new vectorized indexing (#1639). It would be great if arr.isel(**arr.argmin(dim)) == arr.min(dim) could be satisfied even for a multi-dimensional array, although the behavior differs from numpy.argmin. (Maybe our current min should be replaced by arr.isel(**arr.argmin(dim)) so that it preserves the coordinates.)

(We discussed the name for this new method in #1469, but here I just use argmin for simplicity.)
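For the single-dim case, the desired identity can already be checked with vectorized indexing; a minimal sketch, writing the indexer out explicitly (argmin with a single dim returns a DataArray of integer indices, not a dict, so the ** form is spelled out):

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(np.random.randn(4, 3, 2), dims=['x', 'y', 'z'])

# arr.argmin(dim='x') is a (y, z)-shaped array of integer indices;
# vectorized isel consumes it directly and picks out the minima.
lhs = arr.isel(x=arr.argmin(dim='x'))
rhs = arr.min(dim='x')
assert bool((lhs == rhs).all())
```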

For example, with a three-dimensional array with dims=['x', 'y', 'z'], such as arr = xr.DataArray(np.random.randn(4, 3, 2), dims=['x', 'y', 'z']), I am thinking that arr.argmin() would return an xr.Dataset which contains 'x', 'y', 'z' as its data_vars:

  1. ds = arr.argmin(dims=None) case:
     - ds['x'], ds['y'], ds['z'] would be 0d integers.
  2. ds = arr.argmin(dims=['x', 'y']) case:
     - ds['x'], ds['y'], ds['z'] would be 1d integer arrays.
     - The dimension of these three arrays would be 'z_argmin', where ds['z_argmin'] == arr['z'].
  3. ds = arr.argmin(dims='x') case (see the sketch below):
     - ds['x'], ds['y'], ds['z'] would be 2d integer arrays.
     - The dimensions of these three arrays are 'y_argmin' and 'z_argmin', where ds['y_argmin'] == arr['y'] and ds['z_argmin'] == arr['z'].
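A hand-rolled sketch of case 3 (ds = arr.argmin(dims='x')); argmin does not build such a Dataset today, so ds is assembled manually here just to illustrate the proposed layout and how it would feed back into isel:

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(np.random.randn(4, 3, 2), dims=['x', 'y', 'z'])

# index of the minimum along 'x', renamed to the proposed output dims
ix = arr.argmin(dim='x').rename({'y': 'y_argmin', 'z': 'z_argmin'})

# index grids for the untouched dims, on the same ('y_argmin', 'z_argmin') dims
iy, iz = np.meshgrid(np.arange(arr.sizes['y']),
                     np.arange(arr.sizes['z']), indexing='ij')
ds = xr.Dataset({'x': ix,
                 'y': xr.DataArray(iy, dims=['y_argmin', 'z_argmin']),
                 'z': xr.DataArray(iz, dims=['y_argmin', 'z_argmin'])})

# feeding the three index arrays back into isel recovers the minima over 'x'
result = arr.isel(x=ds['x'], y=ds['y'], z=ds['z'])
assert bool((result == arr.min(dim='x')
             .rename({'y': 'y_argmin', 'z': 'z_argmin'})).all())
```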

The above proposal for case 2 (and case 3) is not quite clean: if the result is used as an argument of isel, it appends a new coordinate 'z_argmin', which is just a duplicate of arr['z'], i.e. arr.isel(**arr.argmin(dims=['x', 'y']))['z_argmin'] == arr['z'].

Any thoughts are welcome.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: argmin / argmax behavior doesn't match documentation (224878728)
id: 309280411
html_url: https://github.com/pydata/xarray/issues/1388#issuecomment-309280411
issue_url: https://api.github.com/repos/pydata/xarray/issues/1388
node_id: MDEyOklzc3VlQ29tbWVudDMwOTI4MDQxMQ==
user: fujiisoup (6815844)
created_at: 2017-06-18T14:18:40Z
updated_at: 2017-06-19T12:56:03Z
author_association: MEMBER

I'm working to fix this, and I would like to make some design decisions:

  1. What should max() look like? I guess this method should also work for multi-dimensional data. To satisfy the relation arr.isel_points(**arr.argmin_indices(dim)) == arr.min(dim), should the result array have proper coordinates?

  2. Multiple dim arguments: the doc currently says argmin accepts multiple axes, but np.argmin does not (see the sketch below). Can we limit argmin's argument to a single str rather than a sequence of strs?

Edit:

  1. Multi-dimensional arrays passed to isel_points: currently, isel_points only accepts 1-dimensional arrays, while the result of argmin_indexes can be multi-dimensional, e.g. xr.DataArray(np.random.randn(4, 3, 2), dims=['x', 'y', 'z']).argmin_indexes(dims=['x']). Do we need special treatment for this (maybe in isel_points), or should we just raise an error (the current behavior)?
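On point 2, a minimal NumPy-only sketch of the mismatch (the array and axes are illustrative): np.argmin takes a single integer axis and rejects a tuple of axes.

```python
import numpy as np

a = np.random.randn(4, 3, 2)

print(np.argmin(a, axis=0).shape)  # (3, 2): a single axis is fine

try:
    np.argmin(a, axis=(0, 1))      # a tuple of axes is rejected
except TypeError as err:
    print("TypeError:", err)
```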
reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: argmin / argmax behavior doesn't match documentation (224878728)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
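A minimal sketch of reproducing the query shown at the top of this page against the schema above, using Python's sqlite3 on a local copy of the database (the filename github.db is an assumption):

```python
import sqlite3

# Local copy of the Datasette SQLite database; the path is an assumption.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    SELECT [id], [created_at], [updated_at], [author_association], [body]
    FROM [issue_comments]
    WHERE [author_association] = 'MEMBER'
      AND [issue] = 224878728
      AND [user] = 6815844
    ORDER BY [updated_at] DESC
    """
).fetchall()

for row in rows:
    # id, updated_at, author_association for each of the 3 matching comments
    print(row[0], row[2], row[3])
```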