issue_comments


4 rows where issue = 375126758 and user = 1197350 sorted by updated_at descending

rabernat (MEMBER) · comment 439766587 · created 2018-11-19T04:13:37Z · updated 2018-11-19T04:13:37Z
https://github.com/pydata/xarray/issues/2525#issuecomment-439766587

> What would the coordinates look like?
>
>   1. apply `func` also for coordinate
>   2. always apply `mean` to coordinate

If I think about my applications, I would probably always want to apply `mean` to dimension coordinates, but would like to be able to choose for non-dimension coordinates.
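For concreteness, a minimal sketch of option 2 (always reducing the coordinate with a mean), using scikit-image's `block_reduce` on a 1-D dimension coordinate; the values and block size here are invented for illustration:

```python
import numpy as np
from skimage.measure import block_reduce

# made-up cell-center coordinate: 0.5, 1.5, ..., 7.5
lat = np.arange(8) + 0.5

# option 2: reduce the coordinate with a mean -> centers of the new blocks
lat_agg = block_reduce(lat, (2,), func=np.mean)
print(lat_agg)  # [1. 3. 5. 7.]
```

For a non-dimension coordinate one might swap in `np.min`, `np.max`, or whatever reduction fits, which is exactly the choice being asked for above.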

rabernat (MEMBER) · comment 434480457 · created 2018-10-30T21:41:17Z · updated 2018-10-30T21:41:25Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434480457

> I would lean towards a coordinate based representation since it's a little more usable/certain to be correct.

I feel that this could become too complex in the case of irregularly spaced coordinates. I slightly favor the index-based approach (as in my function above), which one calls like

```python
aggregate_da(da, {'lat': 2, 'lon': 2})
```

If we do that, we can just use scikit-image's `block_reduce` function, which is vectorized and works great with `apply_ufunc`.
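For reference, a minimal sketch of what `block_reduce` does on a plain numpy array; the array and block size are invented for illustration:

```python
import numpy as np
from skimage.measure import block_reduce

arr = np.arange(16.0).reshape(4, 4)

# each non-overlapping 2x2 block collapses to its mean -> shape (2, 2)
print(block_reduce(arr, (2, 2), func=np.mean))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```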

rabernat (MEMBER) · comment 434294356 · created 2018-10-30T13:10:16Z · updated 2018-10-30T13:10:39Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434294356

FYI, I do this often in my work with this sort of function:

```python
import numpy as np  # needed for np.mean below; missing from the original snippet
import xarray as xr
from skimage.measure import block_reduce


def aggregate_da(da, agg_dims, suf='_agg'):
    # agg_dims maps dimension name -> block size, e.g. {'lat': 2, 'lon': 2}
    input_core_dims = list(agg_dims)
    n_agg = len(input_core_dims)
    core_block_size = tuple([agg_dims[k] for k in input_core_dims])
    # non-aggregated (broadcast) dimensions get a block size of 1
    block_size = (da.ndim - n_agg) * (1,) + core_block_size
    output_core_dims = [dim + suf for dim in input_core_dims]
    output_sizes = {(dim + suf): da.shape[da.get_axis_num(dim)] // agg_dims[dim]
                    for dim in input_core_dims}
    output_dtypes = da.dtype
    da_out = xr.apply_ufunc(block_reduce, da,
                            kwargs={'block_size': block_size},
                            input_core_dims=[input_core_dims],
                            output_core_dims=[output_core_dims],
                            output_sizes=output_sizes,
                            output_dtypes=[output_dtypes],
                            dask='parallelized')
    # reduce each dimension coordinate with a plain mean
    for dim in input_core_dims:
        new_coord = block_reduce(da[dim].data, (agg_dims[dim],), func=np.mean)
        da_out.coords[dim + suf] = (dim + suf, new_coord)
    return da_out
```
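A hypothetical usage sketch for the function above; the grid, coordinates, and block sizes are made up. (Note: newer xarray releases expect `output_sizes` to be passed inside `dask_gufunc_kwargs` rather than directly to `apply_ufunc`.)

```python
import numpy as np
import xarray as xr

# made-up 4x4 grid with simple dimension coordinates
da = xr.DataArray(np.arange(16.0).reshape(4, 4),
                  dims=['lat', 'lon'],
                  coords={'lat': np.arange(4.0), 'lon': np.arange(4.0)})

da_agg = aggregate_da(da, {'lat': 2, 'lon': 2})
print(da_agg.dims)               # ('lat_agg', 'lon_agg')
print(da_agg['lat_agg'].values)  # [0.5 2.5] -- block means of the original lat
```

Since `block_reduce` defaults to `func=np.sum`, the data values here are block sums; passing a different `func` through `kwargs` would select another reduction.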

rabernat (MEMBER) · comment 434294114 · created 2018-10-30T13:09:25Z · updated 2018-10-30T13:09:25Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434294114

This is being discussed in #1192 under a different name.

Yes, we need this feature.


Table schema
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
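For reference, the filtered view above ("4 rows where issue = 375126758 and user = 1197350 sorted by updated_at descending") corresponds to a query along these lines; this sketch assumes a local SQLite copy of the database, and the filename `github.db` is made up:

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at
    FROM issue_comments
    WHERE issue = 375126758 AND user = 1197350
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user_id, created, updated in rows:
    print(comment_id, user_id, created, updated)
```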