
issue_comments: 457794714


Comment on pydata/xarray#2714: https://github.com/pydata/xarray/issues/2714#issuecomment-457794714
Author association: MEMBER · Created: 2019-01-26T02:48:01Z · Updated: 2019-01-26T03:40:29Z

The notion of "core dimensions" in apply_ufunc() is definitely quite tricky to understand.

That said, I think this is (mostly) doing the right thing:
- Your inputs have dimensions ['row_a', 'dim_1'] and ['row_b', 'dim_1'].
- Xarray broadcasts over dimensions that aren't included in the "core dimensions", so the inputs are broadcast to have dimensions like ['row_a', 'row_b', 'dim_1'] and ['row_a', 'row_b', 'dim_1'].

This is probably especially confusing because the unlabeled versions of da and db are given "broadcastable" shapes (1000, 1, 100) and (1000, 100) rather than the fully "broadcast" shapes of (1000, 1000, 100) and (1000, 1000, 100), which would make it more obvious what is going on.
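A small sketch of this behavior, using made-up sizes (4, 5, 3) in place of (1000, 1000, 100) and a hypothetical sum-of-products function (the names `da` and `db` follow the discussion above):

```python
import numpy as np
import xarray as xr

# Hypothetical inputs with the dimension names from the discussion above.
da = xr.DataArray(np.arange(12.0).reshape(4, 3), dims=["row_a", "dim_1"])
db = xr.DataArray(np.arange(15.0).reshape(5, 3), dims=["row_b", "dim_1"])

def pairwise_sum(a, b):
    # a and b arrive as plain NumPy arrays with "broadcastable"
    # (not fully broadcast) shapes over the non-core dimensions.
    return (a * b).sum(axis=-1)

# 'dim_1' is the core dimension for both inputs, so 'row_a' and
# 'row_b' are broadcast against each other.
result = xr.apply_ufunc(
    pairwise_sum, da, db, input_core_dims=[["dim_1"], ["dim_1"]]
)
print(result.dims)   # ('row_a', 'row_b')
print(result.shape)  # (4, 5)
```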

For your specific use case: maybe you meant to specify input_core_dims=[['row_a'], ['row_b']] instead? That version would give inputs with dimensions like ['dim_1', 'row_a'] and ['dim_1', 'row_b'].
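A sketch of that alternative, with the same hypothetical `da`/`db` shapes as above and an illustrative outer-product function (core dimensions are moved to the end of each input's shape):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12.0).reshape(4, 3), dims=["row_a", "dim_1"])
db = xr.DataArray(np.arange(15.0).reshape(5, 3), dims=["row_b", "dim_1"])

def outer(a, b):
    # Core dims come last: a has shape (..., row_a), b has shape
    # (..., row_b); 'dim_1' is the broadcast dimension here.
    return a[..., :, None] * b[..., None, :]

result = xr.apply_ufunc(
    outer, da, db,
    input_core_dims=[["row_a"], ["row_b"]],
    output_core_dims=[["row_a", "row_b"]],
)
print(result.dims)  # ('dim_1', 'row_a', 'row_b')
```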

More generally: I think we really need a version of apply() that doesn't do this confusing broadcasting and dimension reordering. See https://github.com/pydata/xarray/issues/1130 for discussion about that.
