issue_comments


3 rows where author_association = "MEMBER", issue = 272004812 and user = 6213168 sorted by updated_at descending

id: 609650053
html_url: https://github.com/pydata/xarray/issues/1699#issuecomment-609650053
issue_url: https://api.github.com/repos/pydata/xarray/issues/1699
node_id: MDEyOklzc3VlQ29tbWVudDYwOTY1MDA1Mw==
user: crusaderky (6213168)
created_at: 2020-04-06T08:26:13Z
updated_at: 2020-04-06T08:26:13Z
author_association: MEMBER

still relevant

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: apply_ufunc(dask='parallelized') output_dtypes for datasets (272004812)
id: 386721846
html_url: https://github.com/pydata/xarray/issues/1699#issuecomment-386721846
issue_url: https://api.github.com/repos/pydata/xarray/issues/1699
node_id: MDEyOklzc3VlQ29tbWVudDM4NjcyMTg0Ng==
user: crusaderky (6213168)
created_at: 2018-05-04T20:18:48Z
updated_at: 2018-05-04T20:21:10Z
author_association: MEMBER

The key thing is that for most people it would be extremely elegant and practical to be able to duck-type wrappers around numpy, scipy, and numba kernels that automagically work with Variable, DataArray, and Dataset (see my example above). You'll agree on how ugly my one-liner above would become:

    def myfunc(x):
        if isinstance(x, xarray.Dataset):
            dtype = x.dtypes
        else:  # DataArray and Variable
            dtype = x.dtype
        return apply_ufunc(numpy_kernel, x, dask='parallelized', output_dtypes=[dtype])

If you don't like Dataset.dtype, then maybe we could add both Dataset.dtypes and DataArray.dtypes (which would just be an alias to DataArray.dtype)? I still like the former more, though - I find it less confusing.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: apply_ufunc(dask='parallelized') output_dtypes for datasets (272004812)
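The type-dependent branching that the comment above calls ugly can be sketched without xarray itself. `FakeDataArray` and `FakeDataset` below are hypothetical stand-ins that model only the `dtype`/`dtypes` attributes under discussion, not the real xarray API:

```python
# Hypothetical stand-ins for xarray.DataArray and xarray.Dataset: they
# model only the attributes discussed in the thread, not the real API.
class FakeDataArray:
    def __init__(self, dtype):
        self.dtype = dtype          # a single dtype, as on DataArray/Variable

class FakeDataset:
    def __init__(self, dtypes):
        self.dtypes = dtypes        # proposed: dict of variable name -> dtype

def pick_output_dtypes(x):
    # The type check the comment objects to: the caller must know whether
    # it holds a Dataset (dict of dtypes) or a DataArray (one dtype).
    if isinstance(x, FakeDataset):
        return [x.dtypes]
    else:  # DataArray and Variable
        return [x.dtype]

print(pick_output_dtypes(FakeDataArray("float64")))
# -> ['float64']
print(pick_output_dtypes(FakeDataset({"a": "float64", "b": "int32"})))
# -> [{'a': 'float64', 'b': 'int32'}]
```

With a uniform `dtypes` attribute on both classes, `pick_output_dtypes` would collapse to a single line, which is the point the comment is making.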
id: 384073275
html_url: https://github.com/pydata/xarray/issues/1699#issuecomment-384073275
issue_url: https://api.github.com/repos/pydata/xarray/issues/1699
node_id: MDEyOklzc3VlQ29tbWVudDM4NDA3MzI3NQ==
user: crusaderky (6213168)
created_at: 2018-04-24T20:42:57Z
updated_at: 2018-04-24T20:42:57Z
author_association: MEMBER

@shoyer that seems counter-intuitive to me - you are returning two datasets after all. If we go with the list(dict) notation, we could also add a Dataset.dtype property, which (coherently with dims and chunks) would return a dict. This would be very handy as, 99% of the time, people will want to write:

    def myfunc(x):
        return apply_ufunc(numpy_kernel, x, dask='parallelized', output_dtypes=[x.dtype])

which would magically work both when x is a DataArray and when it's a Dataset.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: apply_ufunc(dask='parallelized') output_dtypes for datasets (272004812)
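The dict-returning property proposed in the comment above - coherent with how `Dataset.dims` and `Dataset.chunks` return mappings - can be sketched as follows. `Variable` and `FakeDataset` are hypothetical stand-ins for illustration, not the real xarray classes:

```python
# Hypothetical sketch of the proposed dict-valued dtypes property: like
# the real Dataset.dims and Dataset.chunks, it returns a mapping, here
# variable name -> dtype, computed from the per-variable dtype attributes.
class Variable:
    def __init__(self, dtype):
        self.dtype = dtype

class FakeDataset:
    def __init__(self, variables):
        self.variables = variables  # dict: name -> Variable

    @property
    def dtypes(self):
        return {name: var.dtype for name, var in self.variables.items()}

ds = FakeDataset({"temp": Variable("float64"), "mask": Variable("bool")})
print(ds.dtypes)  # -> {'temp': 'float64', 'mask': 'bool'}
```

Because the property is derived from the variables rather than stored, it stays consistent as variables are added or dropped, mirroring how the real `dims` and `chunks` mappings behave.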

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
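The schema above can be exercised directly with Python's stdlib `sqlite3` module. A minimal sketch, with the foreign-key targets omitted so it is self-contained, inserting the first row shown on this page and re-running the page's filter:

```python
import sqlite3

# Build the issue_comments table from the CREATE TABLE statement above
# (REFERENCES clauses dropped so the sketch needs no other tables).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issue_comments (
        html_url TEXT,
        issue_url TEXT,
        id INTEGER PRIMARY KEY,
        node_id TEXT,
        user INTEGER,
        created_at TEXT,
        updated_at TEXT,
        author_association TEXT,
        body TEXT,
        reactions TEXT,
        performed_via_github_app TEXT,
        issue INTEGER
    )
""")
conn.execute("CREATE INDEX idx_issue_comments_issue ON issue_comments (issue)")
conn.execute("CREATE INDEX idx_issue_comments_user ON issue_comments (user)")

# Insert the first row shown on this page (other columns left NULL).
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, body, issue) "
    "VALUES (?, ?, ?, ?, ?)",
    (609650053, 6213168, "MEMBER", "still relevant", 272004812),
)

# The filter behind this page: MEMBER comments by user 6213168 on
# issue 272004812, sorted by updated_at descending.
rows = conn.execute(
    "SELECT id, body FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = ? AND user = ? "
    "ORDER BY updated_at DESC",
    (272004812, 6213168),
).fetchall()
print(rows)  # -> [(609650053, 'still relevant')]
```

The `idx_issue_comments_issue` and `idx_issue_comments_user` indexes let the `issue = ?` and `user = ?` predicates avoid a full table scan.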
Powered by Datasette · About: xarray-datasette