issue_comments: 294250748

html_url: https://github.com/pydata/xarray/issues/1375#issuecomment-294250748
issue_url: https://api.github.com/repos/pydata/xarray/issues/1375
id: 294250748
node_id: MDEyOklzc3VlQ29tbWVudDI5NDI1MDc0OA==
user: 1217238
created_at: 2017-04-14T22:46:10Z
updated_at: 2017-04-14T22:47:01Z
author_association: MEMBER

Yes, I would say this is in scope, as long as we can keep most of the data-type specific logic out of xarray's core (which seems doable).

Currently, we define most of our operations on duck arrays in https://github.com/pydata/xarray/blob/master/xarray/core/duck_array_ops.py
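For context, the pattern in that module is essentially type-based dispatch: call the dask.array implementation when the input is a dask array and fall back to numpy otherwise. Here is a minimal sketch of that idea (not xarray's actual code; the helper name is made up for illustration):

import numpy as np

try:
    import dask.array
    dask_array_type = (dask.array.Array,)
except ImportError:
    dask_array_type = ()

def _dask_or_numpy_func(name):
    # Return a function that calls dask.array.<name> for dask inputs
    # and np.<name> for everything else.
    def wrapper(values, *args, **kwargs):
        if isinstance(values, dask_array_type):
            module = dask.array
        else:
            module = np
        return getattr(module, name)(values, *args, **kwargs)
    return wrapper

mean = _dask_or_numpy_func("mean")
nansum = _dask_or_numpy_func("nansum")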

There are a few other hacks throughout the codebase, which you can find by searching for "dask_array_type": https://github.com/pydata/xarray/search?p=1&q=dask_array_type&type=&utf8=%E2%9C%93

It's pretty crude, but basically this would need to be extended to implement many of these methods for sparse arrays, too. Ideally, we would move xarray's adapter logic into more cleanly separated submodules, perhaps using multiple dispatch. Even better, we would make this a public API, so you could write something like xarray.register_data_type(MySparseArray) to register a type as valid for xarray's .data attribute.
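To make that concrete, here is a hypothetical sketch of what such a registration API could look like; register_data_type does not exist in xarray today, and all names below are invented for illustration:

import numpy as np

# Hypothetical registry of array types accepted for the .data attribute.
_registered_array_types = []

def register_data_type(cls):
    # Record cls as a valid duck-array type; also usable as a class decorator.
    _registered_array_types.append(cls)
    return cls

def is_valid_data(obj):
    # numpy arrays are always accepted; registered types are accepted as well.
    return isinstance(obj, (np.ndarray, *tuple(_registered_array_types)))

# Usage (MySparseArray is a placeholder for a real sparse array class):
# register_data_type(MySparseArray)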

It looks like __array_ufunc__ will actually finally land in NumPy 1.13, which might make this easier.
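For reference, that protocol lets an array-like class intercept NumPy ufunc calls on its instances. A minimal illustration with a toy wrapper type (not a sparse array, just enough to show the hook):

import numpy as np

class Wrapped:
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap any Wrapped inputs, apply the ufunc, and re-wrap the result.
        unwrapped = [x.data if isinstance(x, Wrapped) else x for x in inputs]
        result = getattr(ufunc, method)(*unwrapped, **kwargs)
        return Wrapped(result)

print(np.add(Wrapped([1, 2]), 1).data)  # -> [2 3]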

See also https://github.com/pydata/xarray/pull/1118
