issue_comments: 513044413


html_url: https://github.com/pydata/xarray/issues/3147#issuecomment-513044413
issue_url: https://api.github.com/repos/pydata/xarray/issues/3147
id: 513044413
node_id: MDEyOklzc3VlQ29tbWVudDUxMzA0NDQxMw==
user: 3019665
created_at: 2019-07-19T00:33:55Z
updated_at: 2019-07-19T00:42:03Z
author_association: NONE

body:

Another approach for the `split_by_chunks` implementation would be:

```python
import dask.array as da


def split_by_chunks(a):
    for sl in da.core.slices_from_chunks(a.chunks):
        yield (sl, a[sl])
```
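
As a quick usage sketch (the array shape and chunking below are made up purely for illustration), this yields `(slice, chunk)` pairs covering the whole array:

```python
import dask.array as da

x = da.ones((4, 6), chunks=(2, 3))

for sl, chunk in split_by_chunks(x):
    # Each item pairs the slices into the original array with the matching
    # chunk-sized dask array, e.g. (slice(0, 2), slice(0, 3)) with shape (2, 3).
    print(sl, chunk.shape)
```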

While a little more cumbersome to write, this could also be implemented with `.blocks`, which may be a bit more performant.

```python
import numpy as np


def split_by_chunks(a):
    for i, sl in zip(np.ndindex(a.numblocks), da.core.slices_from_chunks(a.chunks)):
        yield (sl, a.blocks[i])
```

If the slices are not strictly needed, this could be simplified a bit more.

```python
def split_by_chunks(a):
    for i in np.ndindex(a.numblocks):
        yield a.blocks[i]
```
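
A minimal consumption sketch of this simplified variant (the array and the per-block operation here are placeholders, not anything from the original issue):

```python
import numpy as np
import dask.array as da

x = da.arange(12, chunks=4)

for block in split_by_chunks(x):
    # Each `block` is a dask array covering exactly one chunk of `x`;
    # compute it here, or hand it to whatever per-chunk writer you need.
    print(block.compute())
```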

Admittedly, `slices_from_chunks` is an internal utility function, though it is unlikely to change. We could consider exposing it as part of the public API if that would be useful.
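
For context, here's roughly what `slices_from_chunks` produces; the chunk layout below is just an illustrative example, and the output is shown conceptually (the actual slice reprs include the `None` step):

```python
import dask.array as da

# Per-dimension chunk sizes: two chunks of 2 along axis 0, two chunks of 3 along axis 1.
da.core.slices_from_chunks(((2, 2), (3, 3)))
# -> [(slice(0, 2), slice(0, 3)),
#     (slice(0, 2), slice(3, 6)),
#     (slice(2, 4), slice(0, 3)),
#     (slice(2, 4), slice(3, 6))]
```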

We could also consider other things, like making `.blocks` iterable, which could make this more friendly as well. I've raised an issue ( https://github.com/dask/dask/issues/5117 ) on this point.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: 470024896