issue_comments: 1117786528

html_url: https://github.com/pydata/xarray/issues/6517#issuecomment-1117786528
issue_url: https://api.github.com/repos/pydata/xarray/issues/6517
id: 1117786528
node_id: IC_kwDOAMm_X85CoBGg
user: 16100116
created_at: 2022-05-04T19:46:09Z
updated_at: 2022-05-04T19:46:09Z
author_association: NONE

> Overall, I just don't think this is a reliable way to trace memory allocation with NumPy. Maybe you could do better by also tracing back to source arrays with .base?

You may be right that the OWNDATA flag is more of an internal NumPy detail for its memory management, and that there is no general requirement or guarantee that higher-level libraries avoid creating "unnecessary" layers of views.
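For reference, the view/ownership relationship discussed here can be seen directly on plain NumPy arrays:

```python
import numpy as np

a = np.arange(10)    # freshly allocated: owns its buffer
v = a[2:5]           # basic slicing: a view into a's buffer

print(a.flags["OWNDATA"])  # True
print(v.flags["OWNDATA"])  # False
print(v.base is a)         # True: the view points back at its source

c = a[2:5].copy()    # an explicit copy owns its data again
print(c.base is None)      # True
```

A view of a view chains further (`v2.base.base` eventually reaches the owning array), which is what motivates following `.base` recursively rather than checking OWNDATA once.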

I had just gotten used to nice behaviour from the other xarray operations I was using: isel() and []-slicing created views as expected, while e.g. sel() and mean(), which create array copies, did not create any unnecessary view on top of those.

While not creating extra view objects when viewing the entire array could also be seen as an optimization, the net benefit is not obvious, since the extra checks in the if-branches of my patch add some work too. (And there is of course a risk that a change deep down in the indexing methods has unintended consequences.)

I would thus be OK with closing this issue as "won't fix", which I suppose you were heading towards unless demand from others appears.

I followed your suggestion and changed my memory_size() function so that it no longer just checks whether OWNDATA is True or False (or, probably equivalently, whether ndarray.base is None), but instead recursively follows the ndarray.base.base... chain, tracking the id() of objects so that the same buffer is not counted more than once. The new version behaves differently: when called on a single DataArray whose data was defined by slicing something else, it counts the size of the full base array instead of 0 (or about 100 bytes of overhead) as before, but within a Dataset (or optionally a set of multiple Datasets) any other reference to the same base array won't be counted again. I can live with this new, more "relative" than "absolute", definition of where memory is considered "shared".
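The patched memory_size() itself isn't shown in this thread; a minimal NumPy-only sketch of the base-chasing, id()-deduplicating idea (function name and signature hypothetical, operating on a plain list of arrays rather than DataArray/Dataset objects) might look like:

```python
import numpy as np

def memory_size(arrays):
    """Sum the buffer sizes behind `arrays`, following each .base chain
    to the array that actually owns the memory and counting each
    underlying buffer only once (deduplicated by id())."""
    seen = set()
    total = 0
    for arr in arrays:
        # Walk .base until we reach the owning array at the end of the chain.
        while isinstance(arr.base, np.ndarray):
            arr = arr.base
        if id(arr) not in seen:
            seen.add(id(arr))
            total += arr.nbytes
    return total
```

With this "relative" accounting, two slices of the same array contribute the full base buffer once, not twice; a slice counted alone is charged the whole base array rather than ~0 bytes as under the old OWNDATA-only check.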
