issue_comments

5 rows where issue = 467771005 and user = 1217238, sorted by updated_at descending

All 5 comments are by shoyer (user 1217238, author_association MEMBER) on the issue "Support for __array_function__ implementers (sparse arrays) [WIP]" (467771005).

Comment 518320289 · shoyer (1217238) · MEMBER · updated 2019-08-05T17:14:15Z
https://github.com/pydata/xarray/pull/3117#issuecomment-518320289

@nvictus are we good to go ahead and merge, and do follow-ups in other PRs?

Reactions: +1 × 2

Comment 518074867 · shoyer (1217238) · MEMBER · updated 2019-08-05T03:43:22Z
https://github.com/pydata/xarray/pull/3117#issuecomment-518074867

At the moment, Variables and DataArrays behave such that .data provides the duck array and .values coerces to numpy, following the original behavior for dask arrays -- which made me realize: we never asked whether this behavior is desired in general.

I think the right behavior is probably for .values to be implemented by calling np.asarray() on .data. That means it should raise on sparse arrays.

Reactions: +1 × 1

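A minimal sketch of the semantics proposed in that comment, using a toy stand-in class (not xarray's actual Variable) so the .data/.values split is concrete:

import numpy as np

class DuckVariable:
    """Toy stand-in for xarray.Variable, for illustration only."""

    def __init__(self, data):
        self._data = data

    @property
    def data(self):
        # hand back the wrapped duck array (numpy, dask, sparse, ...) unchanged
        return self._data

    @property
    def values(self):
        # always coerce to numpy via np.asarray(); with pydata/sparse this
        # raises a RuntimeError by default rather than densifying silently
        return np.asarray(self._data)

Dask arrays compute to numpy under np.asarray(), so .values keeps working there, while sparse arrays surface a loud error instead of a silent densification.
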
Comment 517400198 · shoyer (1217238) · MEMBER · updated 2019-08-01T18:14:38Z
https://github.com/pydata/xarray/pull/3117#issuecomment-517400198

> 2. Operations not supported by the duck type. This happens in a few cases with pydata/sparse, and would have to be solved upstream, unless it's a special case where it might be okay to coerce. E.g., what happens with binary operations that mix array types?

This is totally fine for now, as long as there are clear errors when attempting an unsupported operation. We can write unit tests with expected failures, which should provide a clear roadmap for things to fix upstream in sparse.

We could attempt to define a minimum required implementation, but in practice I suspect this will be hard to nail down definitively. The ultimate determinant of what works will be xarray's implementation.

Reactions: +1 × 2

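A sketch of the expected-failure pattern mentioned above, assuming pytest; the exact operation that fails, and whether it raises or coerces, depends on the sparse release, so this test is illustrative only:

import numpy as np
import pytest
import sparse
import xarray as xr

@pytest.mark.xfail(reason="mixed sparse/dense arithmetic not supported upstream yet")
def test_binary_op_mixing_array_types():
    s = xr.DataArray(sparse.random((4, 4), density=0.5), dims=("x", "y"))
    d = xr.DataArray(np.ones((4, 4)), dims=("x", "y"))
    result = s + d
    # fails either because the operation raises upstream or because the
    # duck array was silently coerced to dense numpy
    assert isinstance(result.data, sparse.COO)
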
Comment 512583158 · shoyer (1217238) · MEMBER · updated 2019-07-17T21:53:50Z
https://github.com/pydata/xarray/pull/3117#issuecomment-512583158

> Would it make sense to just assume that all non-DataArray NEP-18 compliant arrays do not contain an xarray-compliant coords attribute?

Yes, let's switch coords = getattr(data, 'coords', None) to:

if isinstance(data, DataArray):
    coords = data.coords

Reactions: +1 × 1

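For context, sparse.COO itself has a .coords attribute (the integer indices of its nonzero entries), which is exactly why duck-typing on the attribute name misfires here. A sketch of the change, with a hypothetical helper name:

from xarray import DataArray

def _infer_coords(data):  # hypothetical name for the affected helper
    # Before: anything with a `.coords` attribute was treated as carrying
    # xarray coordinates, which wrongly matched sparse.COO:
    #     coords = getattr(data, 'coords', None)
    # After: only genuine DataArrays contribute coordinates.
    coords = None
    if isinstance(data, DataArray):
        coords = data.coords
    return coords
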
Comment 511180633 · shoyer (1217238) · MEMBER · updated 2019-07-14T07:35:45Z
https://github.com/pydata/xarray/pull/3117#issuecomment-511180633

> Even though it failed when I tried applying an operation on the dataset, this is still awesome!

Yes, it really is!

For this specific failure, we should think about adding an option for the default skipna value, or maybe making the semantics depend on the array type.

If someone is using xarray to wrap a computation-oriented library like CuPy, they probably almost always want to set skipna=False (along with join='exact'). I don't think I've seen any deep learning library that has bothered to implement nanmean.

Reactions: none

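Both knobs mentioned there are existing xarray API; a small usage sketch with a plain numpy array standing in for the wrapped duck array:

import numpy as np
import xarray as xr

arr = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("x", "y"))

# skipna=False dispatches the plain reduction (mean) rather than the
# nan-aware one (nanmean), which duck array libraries often lack
m = arr.mean(dim="x", skipna=False)

# join="exact" raises on misaligned indexes instead of outer-joining
# and introducing NaNs
a, b = xr.align(arr, arr + 1, join="exact")
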


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
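
Given that schema, the query behind this page can be reproduced directly; a sketch using Python's sqlite3, with a hypothetical database filename:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical path to the Datasette database
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE [issue] = 467771005 AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 5 rows for this page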