
issue_comments


2 rows where author_association = "NONE" and issue = 1216517115 sorted by updated_at descending




id: 1117786528 · node_id: IC_kwDOAMm_X85CoBGg · user: erik-mansson (16100116)
created_at: 2022-05-04T19:46:09Z · updated_at: 2022-05-04T19:46:09Z · author_association: NONE
html_url: https://github.com/pydata/xarray/issues/6517#issuecomment-1117786528
issue_url: https://api.github.com/repos/pydata/xarray/issues/6517

> Overall, I just don't think this is a reliable way to trace memory allocation with NumPy. Maybe you could do better by also tracing back to source arrays with .base?

You may be right that the OWNDATA-flag is more of an internal numpy thing for its memory management, and that there is no general requirement or guarantee that higher-level libraries should avoid creating "unnecessary" layers of views.
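For reference, a minimal NumPy sketch of the behaviour under discussion: even a "full" slice creates a view object, which clears the OWNDATA flag and sets .base, despite no new memory being involved.

```python
import numpy as np

a = np.arange(6)             # freshly allocated: owns its buffer
print(a.flags["OWNDATA"])    # True
print(a.base is None)        # True

v = a[:]                     # a full slice still creates a view object
print(v.flags["OWNDATA"])    # False
print(v.base is a)           # True: the view points back at the original
```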

I had just gotten used to nice behaviour from the other xarray operations I was using (isel() and []-slicing created views as expected, while e.g. sel() and mean(), which create array copies, did not create any unnecessary view on top of those).

While not creating extra view-objects for viewing the entire array could also be seen as an optimization, the net benefit is not obvious since the extra checks in the if-cases of my patch add some work too. (And of course a risk that a change deep down in the indexing methods has unintended consequences.)

I would thus be OK with closing this issue as "won't fix", which I suppose you were heading towards unless demand from others appears.

I followed your suggestion and changed my memory_size()-function to not just care about whether OWNDATA is True/False (or, probably equivalently, whether ndarray.base is None or not), but to recursively follow ndarray.base.base... while tracking the id() of objects to avoid counting the same buffer more than once. The new version behaves differently: when called on a single DataArray whose data was defined by slicing something else, it counts the size of the full base array instead of 0 (or about 100 bytes of overhead) as before, but within a Dataset (or optionally a set of multiple Datasets) any other reference to the same base array won't be counted again. I can live with this new, more "relative" than "absolute", definition of where memory is considered "shared".
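The recursive approach described above might look something like this sketch (my own minimal reconstruction for illustration, not the actual memory_size() from the comment): follow each array's .base chain back to the owning array and count each distinct base buffer only once, using id() to deduplicate.

```python
import numpy as np

def memory_size(arrays):
    """Rough shared-aware byte count: follow each ndarray's .base chain
    back to the array that owns the memory, and count every distinct
    base buffer only once."""
    seen = set()
    total = 0
    for arr in arrays:
        base = arr
        # Walk base.base... until we reach the owning array.
        while isinstance(base.base, np.ndarray):
            base = base.base
        if id(base) not in seen:
            seen.add(id(base))
            total += base.nbytes
    return total

big = np.zeros(1000)   # 8000 bytes
view1 = big[:10]
view2 = big[500:]
# Both views trace back to the same base, so it is counted once.
print(memory_size([view1, view2]))   # 8000
```

Note how this matches the "relative" definition above: a lone view is charged for its full base array, but further references to the same base add nothing.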

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Loading from NetCDF creates unnecessary numpy.ndarray-views that clears the OWNDATA-flag 1216517115
id: 1110681951 · node_id: IC_kwDOAMm_X85CM6lf · user: erik-mansson (16100116)
created_at: 2022-04-27T08:04:45Z · updated_at: 2022-04-27T08:22:28Z · author_association: NONE
html_url: https://github.com/pydata/xarray/issues/6517#issuecomment-1110681951
issue_url: https://api.github.com/repos/pydata/xarray/issues/6517

> Would we take this as a PR? Is there a simpler way to express that logic?

One small simplification I realized later is that `slice(None)` can be used in place of `slice(None, None, None)`. The `isinstance(i, slice) and` part of the condition seemed necessary to avoid a case where `i` was array-like and thus gave an array of booleans with the comparison operator.
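Both points can be checked in plain NumPy; a quick sketch:

```python
import numpy as np

# slice(None) compares equal to (and behaves the same as) slice(None, None, None):
print(slice(None) == slice(None, None, None))   # True

key = (slice(None), np.array([0, 2]), Ellipsis)

# The isinstance(i, slice) guard short-circuits before the == comparison,
# so array-like indices never reach the slice equality test:
full_slices = [isinstance(i, slice) and i == slice(None) for i in key]
print(full_slices)   # [True, False, False]
```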

The `len(key) == len(array.shape) + 1 and key[-1] is ...` part handles the case where (at least) NumpyIndexingAdapter._indexing_array_and_key() appends an extra Ellipsis at the end of a tuple that may already have one slice per array dimension. This is actually the only case I noticed while debugging, and a test now shows that I get the desired outcome even if the `len(key) == len(array.shape)` alternative is skipped (although allowing it would look intuitive). Thus an alternative patch could be

```diff
diff --git "indexing.original.py" "indexing.patched.py"
--- "indexing.original.py"
+++ "indexing.patched.py"
@@ -709,9 +709,12 @@ def explicit_indexing_adapter(
     """
     raw_key, numpy_indices = decompose_indexer(key, shape, indexing_support)
     result = raw_indexing_method(raw_key.tuple)
-    if numpy_indices.tuple:
-        # index the loaded np.ndarray
-        result = NumpyIndexingAdapter(np.asarray(result))[numpy_indices]
+    if numpy_indices.tuple and (not isinstance(result, np.ndarray)
+                                or not all(i == slice(None) for i in numpy_indices.tuple)):
+        # The conditions within parentheses are to avoid unnecessary array slice/view-creation
+        # that would set flags['OWNDATA'] to False for no reason.
+        # Index the loaded np.ndarray.
+        result = NumpyIndexingAdapter(np.asarray(result))[numpy_indices]
     return result

@@ -1156,6 +1160,11 @@ class NumpyIndexingAdapter(ExplicitlyIndexedNDArrayMixin):

     def __getitem__(self, key):
         array, key = self._indexing_array_and_key(key)
+        if (len(key) == len(array.shape) + 1 and key[-1] is ...
+                and all(isinstance(i, slice) and i == slice(None) for i in key[:len(array.shape)])
+                and isinstance(array, np.ndarray)):  # (This isinstance-check is because nputils.NumpyVIndexAdapter() has not been tested.)
+            # Avoid unnecessary array slice/view-creation that would set flags['OWNDATA'] to False for no reason.
+            return array
         return array[key]

     def setitem(self, key, value):
```

Here I also corrected the fact that my old diff was made against an "original" file missing the line `# index the loaded np.ndarray`.
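The condition from the second hunk can be exercised on its own; here is a standalone sketch (the helper name is mine, not xarray's) showing that skipping the redundant indexing keeps OWNDATA intact:

```python
import numpy as np

def is_redundant_full_slice_key(array, key):
    """True when `key` is one full slice per dimension plus a trailing
    Ellipsis, i.e. indexing with it would only create a needless view."""
    return (
        isinstance(array, np.ndarray)
        and len(key) == array.ndim + 1
        and key[-1] is ...
        and all(isinstance(i, slice) and i == slice(None) for i in key[:array.ndim])
    )

a = np.zeros((3, 4))
key = (slice(None), slice(None), ...)

print(is_redundant_full_slice_key(a, key))   # True

# Skipping the indexing returns the original array, so OWNDATA stays True:
result = a if is_redundant_full_slice_key(a, key) else a[key]
print(result.flags["OWNDATA"])               # True
```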

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Loading from NetCDF creates unnecessary numpy.ndarray-views that clears the OWNDATA-flag 1216517115

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 23.22ms · About: xarray-datasette