issue_comments


2 rows where author_association = "MEMBER", issue = 333248242 and user = 1217238 sorted by updated_at descending

comment 398931150 · shoyer (1217238) · MEMBER · created_at 2018-06-20T23:42:04Z · updated_at 2018-06-20T23:42:04Z
html_url: https://github.com/pydata/xarray/pull/2236#issuecomment-398931150
issue_url: https://api.github.com/repos/pydata/xarray/issues/2236 · node_id: MDEyOklzc3VlQ29tbWVudDM5ODkzMTE1MA==

A module of bottleneck/numpy functions that act on numpy arrays only. A module of functions that act on numpy or dask arrays (or these could be moved into duck_array_ops).

Could you explain this idea in more detail?

OK, let me try:

  1. On numpy arrays, we use bottleneck equivalents of numpy functions when possible, because bottleneck is faster than numpy.
  2. On dask arrays, we use dask equivalents of numpy functions.
  3. We also want to add some extra features on top of what numpy/dask/bottleneck provide, e.g., handling of min_count
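The three rules above amount to a dispatching wrapper. A minimal sketch (hypothetical, not xarray's actual code; the module-name check and float-dtype gate are assumptions):

```python
import numpy as np

try:
    import bottleneck as bn
    _HAS_BOTTLENECK = True
except ImportError:
    _HAS_BOTTLENECK = False


def nansum(values, axis=None):
    """NaN-skipping sum that picks a backend based on the input type."""
    if type(values).__module__.startswith("dask."):
        import dask.array as da
        return da.nansum(values, axis=axis)  # lazy, chunked reduction
    if _HAS_BOTTLENECK and values.dtype.kind == "f":
        # bottleneck only helps for NaN handling on float arrays
        return bn.nansum(values, axis=axis)
    return np.nansum(values, axis=axis)
```

The same pattern would repeat for nanmean, nanmax, etc., which is what motivates collecting these wrappers into a dedicated module.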

We could implement this with:

  • nputils.nansum() is equivalent to numpy.nansum(), but uses bottleneck.nansum() internally when possible.
  • duck_array_ops.nansum() uses nputils.nansum() or dask.array.nansum(), based upon the type of the inputs.
  • duck_array_ops.sum() uses numpy.sum() or dask.array.sum(), based upon the type of the inputs.
  • duck_array_ops.sum_with_mincount() adds min_count and skipna support and is used in the Dataset.sum() implementation. It is written using duck_array_ops.nansum(), duck_array_ops.sum(), duck_array_ops.where() and duck_array_ops.isnull().
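The top layer of that stack could look like the following sketch, with plain numpy standing in for the duck_array_ops primitives (the function name and signature follow this comment's proposal, not xarray's final API):

```python
import numpy as np


def sum_with_mincount(values, axis=None, skipna=True, min_count=0):
    """Sum with optional NaN skipping and a minimum-valid-count threshold."""
    if not skipna:
        return np.sum(values, axis=axis)         # duck_array_ops.sum()
    mask = np.isnan(values)                      # duck_array_ops.isnull()
    result = np.nansum(values, axis=axis)        # duck_array_ops.nansum()
    if min_count > 0:
        valid = np.sum(~mask, axis=axis)         # count of non-NaN entries
        # fewer valid points than min_count -> result becomes NaN
        result = np.where(valid >= min_count, result, np.nan)
    return result
```

Because it is written only in terms of the duck-array primitives, the same function body works unchanged for numpy and dask inputs.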

reactions: none (total_count 0) · issue: Refactor nanops (333248242)
comment 398150942 · shoyer (1217238) · MEMBER · created_at 2018-06-18T18:28:58Z · updated_at 2018-06-18T18:28:58Z
html_url: https://github.com/pydata/xarray/pull/2236#issuecomment-398150942
issue_url: https://api.github.com/repos/pydata/xarray/issues/2236 · node_id: MDEyOklzc3VlQ29tbWVudDM5ODE1MDk0Mg==

Very nice!

In my implementation, bottleneck is not used when skipna=False. bottleneck would be advantageous when skipna=True as numpy needs to copy the entire array once, but I think numpy's method is still OK if skipna=False.

I think this is correct -- bottleneck does not speed up non-NaN skipping functions.
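The copy cost being discussed can be illustrated with plain numpy (a minimal sketch; the explicit np.where mask stands in for the temporary that numpy's NaN-skipping reductions materialize, which is the work bottleneck avoids):

```python
import numpy as np

x = np.array([1.0, np.nan, 2.0, np.nan])

# skipna=False path: a plain sum, no temporary array, NaNs propagate
assert np.isnan(np.sum(x))

# skipna=True path: numpy effectively builds a zero-filled copy first,
# an extra O(n) allocation and pass over the data, then reduces it
masked = np.where(np.isnan(x), 0.0, x)
assert np.sum(masked) == np.nansum(x) == 3.0
```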

reactions: none (total_count 0) · issue: Refactor nanops (333248242)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 228.311ms · About: xarray-datasette