issue_comments
25 comments where author_association = "MEMBER", issue = 241578773 (WIP: indexing with broadcasting) and user = 1217238 (shoyer), sorted by updated_at descending

337969983 · shoyer · 2017-10-19T16:53:28Z · https://github.com/pydata/xarray/pull/1473#issuecomment-337969983

I closed this intentionally since I think there is a good chance GitHub won't let you open a new PR otherwise.

337969765 · shoyer · 2017-10-19T16:52:44Z · https://github.com/pydata/xarray/pull/1473#issuecomment-337969765

@fujiisoup Can you open a new pull request with this branch? I'd like to give you credit on GitHub for this (since you did most of the work), but I think if I merge this with "Squash and Merge" everything will get credited to me.

You can also try doing your own rebase to clean up the history into fewer commits if you like (or I could "squash and merge" locally in git), but I think a new PR would do a better job of preserving history for anyone who wants to look at this later.

337771698 · shoyer · 2017-10-19T01:16:48Z · https://github.com/pydata/xarray/pull/1473#issuecomment-337771698

I think this is ready to go in. @jhamman @fujiisoup any reason to wait?

335281238 · shoyer · 2017-10-09T20:46:13Z · https://github.com/pydata/xarray/pull/1473#issuecomment-335281238

I will take a look at the multi-index issues. I suspect that many of these will be hard to resolve until we complete the refactor making indexes an explicit part of the data model (https://github.com/pydata/xarray/issues/1603). It is really tricky to make things work reliably when MultiIndex levels are supposed to work like coordinates but use an entirely different mechanism. I would be OK with adding a warning that there are still a few unresolved edge cases involving MultiIndex.

335279657 · shoyer · 2017-10-09T20:42:53Z · https://github.com/pydata/xarray/pull/1473#issuecomment-335279657

@fujiisoup Thanks again for all your hard work on this, and apologies for my slow response. I've made another PR with tweaks to your logic for conflicting coordinates: https://github.com/fujiisoup/xarray/pull/5

Mostly, my PR is about simplifying the logic by removing the special-case workarounds you added that check object identity (things like `this_arr is self._variables[k]` and `v.variable is cv.variable`). My concern is that nothing else in xarray relies on these types of identity checks, so adding these additional rules will make the logic harder to understand and rely on programmatically. If we try to make behavior "intuitive" for 80% of use cases, it only makes the remaining 20% more baffling and error-prone. This is similar to the problem with dimension re-ordering and NumPy's mixed integer/slice indexing. So I would rather leave these out for now, at the cost of making the API slightly more cumbersome. As I think we discussed previously, it is easier to relax error conditions in the future than to add new errors.

330023143 · shoyer · 2017-09-17T05:52:35Z · https://github.com/pydata/xarray/pull/1473#issuecomment-330023143

> If k == self.name, drop the conflicted coordinate silently.

I appreciate the goal here, but this makes me a little nervous. Xarray doesn't check names for other functionality, aside from deciding how to propagate names and the cases where names indicate how to convert a DataArray into a Dataset. So users aren't used to checking names to understand how code works.

329366409 · shoyer · 2017-09-14T04:16:22Z · https://github.com/pydata/xarray/pull/1473#issuecomment-329366409

It occurs to me now that we actually have a pre-existing merge feature (priority_vars) that allows us to handle merges with some variables taking precedence. This feature is currently used for cases like `ds.coords['x'] = data_array` when data_array already has a (potentially conflicting) coordinate 'x'.

The rule we could use would be:

  • For .sel()/.loc[], indexed coordinates from the indexed object take precedence in the result ([obj.coords[k] for k in kwargs] for obj.sel(**kwargs)). Conflicts with indexed coordinates on indexing objects are silently ignored.
  • For reindex(), indexing coordinates take precedence in the result ([kwargs[k] for k in kwargs] for obj.reindex(**kwargs)). Conflicts with indexed coordinates on the indexed object are silently ignored.
  • For isel()/[], neither set of indexed coordinates takes precedence.

We would combine this with the normal rules for dimension/non-dimension coordinates:

  • Conflicts between dimension coordinates (except for precedence) result in an error.
  • Conflicts between non-dimension coordinates result in silently dropping the conflicting variable.
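A priority_vars-style merge can be sketched in plain Python. This is a hypothetical helper (real xarray merges Variable objects, not plain values); it only illustrates the three precedence modes described above:

```python
def merge_coords(indexed, indexer, priority):
    """Merge two coordinate dicts, letting one side win on conflict.

    priority: 'indexed' (sel-style), 'indexer' (reindex-style),
    or None (isel-style: conflicts are errors).
    """
    result = dict(indexed)
    for name, value in indexer.items():
        if name in result and result[name] != value:
            if priority == "indexer":
                result[name] = value   # indexing coordinates win
            elif priority == "indexed":
                pass                   # indexed object's coordinates win
            else:
                raise ValueError(f"conflicting values for coordinate {name!r}")
        else:
            result[name] = value
    return result

# sel-style: the indexed object's 'x' survives
print(merge_coords({"x": [1, 2]}, {"x": [3, 4]}, priority="indexed"))
# reindex-style: the indexer's 'x' wins
print(merge_coords({"x": [1, 2]}, {"x": [3, 4]}, priority="indexer"))
```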

328438707 · shoyer · 2017-09-11T07:14:55Z · https://github.com/pydata/xarray/pull/1473#issuecomment-328438707

> I thought we had agreed to simply neglect the coordinate conflict (comment).

To be honest, it's still not clear to me which is the right choice.

Some considerations:

  • Coordinates are likely to differ by only a small amount in some practical settings, e.g., when using method='nearest'. It will be annoying to need to ensure coordinate alignment in such cases. For example, ds.reindex_like(other, method='nearest') would no longer work.
  • Dropping conflicting coordinates is not too difficult, but it is somewhat annoying, because it requires users to look up a new method (e.g., reset_index()). Even for me, it took a little experimentation to pick the right method. reset_index() does not default to resetting all indexes, which makes this slightly more annoying still (though that would not be hard to fix).
  • There are situations where silently ignoring a conflict could result in silently corrupted results. This seems most likely to me with boolean or integer (isel()) indexing, where the indexer could have entries in the wrong order. However, this is unlikely with label-based indexing (sel or reindex), because the labels are already (redundantly) specified in the indexer values.

One possible resolution is to require exactly matching dimension coordinates only for isel() but not sel. However, this could be tricky to implement (sel is written in terms of isel) and could also be surprising to users, who expect sel() and isel() to work exactly the same except for taking labels vs. integer positions.

328401215 · shoyer · 2017-09-11T02:42:26Z · https://github.com/pydata/xarray/pull/1473#issuecomment-328401215

Two issues I encountered when testing this locally:

  1. Conflicting coordinates on indexers seem to be ignored:

     ```
     In [4]: ds = xr.Dataset({'bar': ('x', [1, 2])}, {'x': [1, 2]})

     In [5]: ds.isel(x=xr.DataArray([0, 1], [('x', [3, 4])]))
     Out[5]:
     <xarray.Dataset>
     Dimensions:  (x: 2)
     Coordinates:
       * x        (x) int64 1 2
     Data variables:
         bar      (x) int64 1 2
     ```

     I thought we agreed that these cases should raise an error, i.e., to require exact alignment? It's one thing to drop non-dimension coordinates, but dimension coordinates should not be ignored.

  2. The interaction with MultiIndex indexing seems to be somewhat off. Compare:

     ```
     In [15]: midx = pd.MultiIndex.from_product([list('abc'), [0, 1]],
        ...:                                    names=('one', 'two'))
        ...: mda = xr.DataArray(np.random.rand(6, 3),
        ...:                    [('x', midx), ('y', range(3))])
     ```

     The multi-index remains with the name "x":

     ```
     In [16]: mda.isel(x=xr.DataArray(np.arange(3), dims='z'))
     Out[16]:
     <xarray.DataArray (z: 3, y: 3)>
     array([[ 0.990021,  0.371052,  0.996406],
            [ 0.384432,  0.605875,  0.361161],
            [ 0.367431,  0.339736,  0.816142]])
     Coordinates:
         x        (z) object ('a', 0) ('a', 1) ('b', 0)
       * y        (y) int64 0 1 2
     Dimensions without coordinates: z
     ```

     The multi-index is now called "z":

     ```
     In [17]: mda.sel(x=xr.DataArray(mda.indexes['x'][:3], dims='z'))
     Out[17]:
     <xarray.DataArray (z: 3, y: 3)>
     array([[ 0.990021,  0.371052,  0.996406],
            [ 0.384432,  0.605875,  0.361161],
            [ 0.367431,  0.339736,  0.816142]])
     Coordinates:
         x        (z) object ('a', 0) ('a', 1) ('b', 0)
       * y        (y) int64 0 1 2
       * z        (z) MultiIndex
       - one      (z) object 'a' 'a' 'b'
       - two      (z) int64 0 1 0
     ```

328396835 · shoyer · 2017-09-11T02:04:42Z · https://github.com/pydata/xarray/pull/1473#issuecomment-328396835

Just sent out a bunch of doc edits: https://github.com/fujiisoup/xarray/pull/4

327670299 · shoyer · 2017-09-07T03:02:18Z · https://github.com/pydata/xarray/pull/1473#issuecomment-327670299

Thinking about boolean indexing again. I think we should possibly only allow using unlabeled boolean arrays, or boolean arrays defined along the dimension they are indexing.

My concern is that otherwise, we will rule out the possibility of making data_array[boolean_key] equivalent to data_array.where(boolean_key, drop=True). For example, consider the current behavior with your branch:

```
In [29]: da = xr.DataArray(np.arange(100).reshape(10, 10), dims=['x', 'y'])

In [30]: da[da.x > -1]
Out[30]:
<xarray.DataArray (x: 10, y: 10)>
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
       [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
       [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
       [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
       [70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
       [80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
       [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
Dimensions without coordinates: x, y

In [31]: da[da.y > -1]
Out[31]:
<xarray.DataArray (y: 10)>
array([ 0, 11, 22, 33, 44, 55, 66, 77, 88, 99])
Dimensions without coordinates: y
```

The only way these can be guaranteed to be consistent with where(drop=True) is if we only allow the first indexing argument to index along the first dimension (outer/orthogonal indexing style).

I can see some potential use for boolean indexing with a different dimension name, but I suspect it would be pretty rare. There is also an easy workaround (using an integer indexer instead, which is arguably also clearer).
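The proposed restriction can be emulated with plain NumPy: route a 1-D boolean mask to the axis whose dimension name matches the mask's own dimension, which is exactly the outer/orthogonal style described above. `sel_bool` is a hypothetical helper for illustration, not xarray API:

```python
import numpy as np

def sel_bool(data, dims, mask, mask_dim):
    # Apply a 1-D boolean mask along the axis whose dimension name
    # matches the mask's own dimension (outer/orthogonal style).
    mask = np.asarray(mask)
    axis = dims.index(mask_dim)
    if mask.shape[0] != data.shape[axis]:
        raise IndexError("boolean index size must match the dimension size")
    return np.compress(mask, data, axis=axis)

da = np.arange(100).reshape(10, 10)
# a boolean array "along y" selects columns, never a diagonal
out = sel_bool(da, ("x", "y"), np.arange(10) > 6, "y")
print(out.shape)  # (10, 3)
```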

326818830 · shoyer · 2017-09-03T17:28:46Z · https://github.com/pydata/xarray/pull/1473#issuecomment-326818830

> I considered the rasm data, where the original object stays on the logical coordinates x and y. If we have conversion DataArrays, such as a table of x and y values as a function of the target coordinates lat and lon, then the coordinate projection from (x, y) to (lat, lon) can be done by .sel(x=x, y=y, method='nearest'). This might be a kind of multi-dimensional reindex?
>
> In such a use case, it would be better for sel (or multi-dimensional reindex) to return NaN than to raise an error.

I agree, this feels closer to a use-case for multi-dimensional reindex rather than sel.

Let's recall the use cases for these methods:

  • sel is for selecting data based on its existing coordinates
  • reindex is for imposing new coordinates on data

So one possible way to define multi-dimensional reindexing would be as follows:

  • Given reindex arguments of the form dim=array where array is a 1D unlabeled array/list, convert them into DataArray(array, [(dim, array)]).
  • Do multi-dimensional indexing with broadcasting like sel, but fill in NaN for missing values (we could allow customizing this with a fill_value argument).
  • Join coordinates like for sel, but coordinates from the indexers take precedence over coordinates from the object being indexed.
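As a rough sketch of the fill-in-NaN step for the 1-D case, in pure NumPy: labels with no exact match get fill_value instead of raising the way sel would. `reindex_1d` is a hypothetical helper that assumes a sorted index; fill_value is the customization argument mentioned above:

```python
import numpy as np

def reindex_1d(old_index, values, new_index, fill_value=np.nan):
    # Match new labels against the old (sorted) index; positions with
    # no exact match are filled with fill_value instead of erroring.
    old_index = np.asarray(old_index)
    new_index = np.asarray(new_index)
    values = np.asarray(values, dtype=float)
    pos = np.searchsorted(old_index, new_index)
    pos = np.clip(pos, 0, old_index.size - 1)
    hit = old_index[pos] == new_index   # exact matches only
    result = np.full(new_index.shape, fill_value)
    result[hit] = values[pos[hit]]
    return result

print(reindex_1d([1, 2, 4], [10.0, 20.0, 40.0], [1, 2, 3]))  # [10. 20. nan]
```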

In practice, multi-dimensional reindex and sel are very similar if there is no overlap between coordinates on the indexer/indexed objects.

> I also would like to try that, but it might be a bit tough and it would be better to do after the next release.

:+1:

So for now, let's raise a FutureWarning if supplying a DataArray with `array.coords[dim].values != array.values`.

326776669 · shoyer · 2017-09-03T00:27:05Z · https://github.com/pydata/xarray/pull/1473#issuecomment-326776669

> API question. Should .sel(x=[0.0, 1.0], method='nearest', tolerance=0.1) work exactly the same as .reindex(x=[0.0, 1.0], method='nearest', tolerance=0.1)?

There are two key differences between sel and reindex:

  • reindex inserts NaN when there is not a match, whereas sel raises an error
  • for inexact indexing (e.g., method='nearest'), the result of reindex copies the index from the indexers, whereas the result of sel copies the index from the object being indexed

> My preference is to make sel work as reindex currently does and to gradually deprecate the reindex method, because the difference between these two methods is now very small.

I'm not sure this is desirable, because it's nice to have a way to do indexing that is guaranteed not to introduce missing values.

Currently, reindex only supports indexing with 1D arguments, and the values of those arguments are taken to be the new index coordinates. I don't know quite what it would mean to reindex with a multi-dimensional indexer -- I guess the result would gain multi-dimensional coordinate indexes? Also, when reindexing like ds.reindex(x=indexer), which coordinates take precedence on the result for x -- indexer.coords['x'] or indexer.values?

I do think there is a valid concern about consistency between sel() and reindex(). Right now, coordinates and dimensions on arguments to reindex are entirely ignored. If we are ever going to allow reindexing with multi-dimensional arguments (and broadcasting), we should consider raising an error or warning now when passed indexers with inconsistent dimensions/coordinates.

From a practical perspective, writing a version of vectorized indexing that fills in NaN could be non-trivial. To enable this under the hood, I think we would need a version of ndarray.__getitem__ that treats a sentinel value (e.g., -1) as "fill in NaN" instead of indexing normally. I guess this could probably be done with a combination of NumPy's advanced indexing plus a mask.
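That sentinel-plus-mask idea can be sketched with NumPy advanced indexing. `masked_getitem` is a hypothetical helper for illustration, not what xarray actually ships:

```python
import numpy as np

def masked_getitem(arr, indexer, fill_value=np.nan):
    # Vectorized __getitem__ where -1 is a sentinel meaning "no match":
    # index with a safe dummy position, then overwrite sentinel slots.
    indexer = np.asarray(indexer)
    safe = np.where(indexer == -1, 0, indexer)
    result = arr[safe].astype(float)
    result[indexer == -1] = fill_value
    return result

data = np.array([10.0, 20.0, 30.0])
print(masked_getitem(data, [2, -1, 0]))  # [30. nan 10.]
```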

326367356 · shoyer · 2017-08-31T17:32:02Z · https://github.com/pydata/xarray/pull/1473#issuecomment-326367356

I agree that this is pretty close! I will do another review shortly when I have time.

This is a really exciting feature and I am super excited to get it into v0.10 -- thanks again for your hard work on it! This goes a long way to completing xarray's labeled data model.

The consensus over in #1535 seems to be that we can go ahead without a deprecation warning.

Also: I agree that it would be great if others can test this, but we should also definitely make a release candidate for 0.10 to help iron out any bugs before the final release.

325066185 · shoyer · 2017-08-26T00:52:03Z · https://github.com/pydata/xarray/pull/1473#issuecomment-325066185

Thinking about this more: I still think I would prefer including all coordinates from indexers unless there is already an existing coordinate of the same name.

Reasons why I like this rule:

  1. It's simple and easy to explain, without any special cases.
  2. If the coordinates are descriptive of the indexers, I think they're almost certainly still descriptive of the indexed results.
  3. Users seem to be happier keeping metadata (e.g., attrs) around, even when it may be slightly outdated, than having to propagate it manually. One reason may be that you only need to drop irrelevant metadata once from results, whereas keeping it around through operations that drop it requires updating each operation.

324960486 · shoyer · 2017-08-25T15:50:22Z · https://github.com/pydata/xarray/pull/1473#issuecomment-324960486

> If `ind.dims == (k,)` (the indexing DataArray has the same dimension as the dimension being indexed along), we neglect `ind.coords[k]`. If `ind.dims != (k,)` and `ind.dims not in da.dims`, then we attach a new coordinate `ind.coords[ind.dims]`. If `ind.dims != (k,)` and `ind.dims in da.dims`, then raise an error.

I like these rules.
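The three rules above can be written out as a small decision function (hypothetical names, pure Python; the real logic operates on Variable objects):

```python
def indexer_coord_rule(ind_dims, k, da_dims):
    # Decide what to do with the indexing DataArray's coordinates when
    # indexing dimension `k` of an array with dimensions `da_dims`.
    if ind_dims == (k,):
        return "drop ind.coords[k]"
    if not set(ind_dims) & set(da_dims):
        return "attach ind's coordinates as new coordinates"
    raise ValueError("indexer dimensions conflict with da.dims")

print(indexer_coord_rule(("x",), "x", ("x", "y")))
print(indexer_coord_rule(("z",), "x", ("x", "y")))
```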

321148108 · shoyer · 2017-08-09T04:16:03Z · https://github.com/pydata/xarray/pull/1473#issuecomment-321148108

@fujiisoup you're asking good questions about how to handle coordinates.

I don't have a lot of time to think about this right now, but really briefly:

  • Overwriting da['x'] with ind['x'] seems bad. This seems contrary to how indexing is supposed to work. I don't think we ever want to override/change coordinates in the indexed object.
  • My inclination would be that it's OK to add coordinates from the indexers as long as they aren't conflicting (with either other indexers or the indexed object). This will cause some increase in the number of coordinates, but I don't think it would be too bad.

320491761 · shoyer · 2017-08-06T07:51:30Z · https://github.com/pydata/xarray/pull/1473#issuecomment-320491761

I opened another PR to your branch for consolidating the logic between dask and numpy vindex: https://github.com/fujiisoup/xarray/pull/3

You will need to install dask master to run the full test suite, but it appears to be working! The logic is slightly tricky because even with vindex we need to reorder the sliced dimensions sometimes.

319113096 · shoyer · 2017-07-31T15:55:27Z · https://github.com/pydata/xarray/pull/1473#issuecomment-319113096

> I think a similar logic (flatten -> lookup -> reshape) will be necessary to improve the .sel method (or the indexing.get_indexer() function), as our new sel should support multi-dimensional lookup.

Yes, agreed!
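The flatten -> lookup -> reshape pattern can be sketched with NumPy alone. This assumes a sorted index for simplicity (pandas' Index.get_indexer would handle the general case); `multidim_get_indexer` is a hypothetical helper:

```python
import numpy as np

def multidim_get_indexer(index, labels):
    # Map an N-d array of labels to integer positions in a sorted
    # 1-D index: flatten, look up, then restore the original shape.
    index = np.asarray(index)
    labels = np.asarray(labels)
    flat = labels.ravel()
    pos = np.searchsorted(index, flat).clip(max=index.size - 1)
    if not np.array_equal(index[pos], flat):
        raise KeyError("some labels not found in index")
    return pos.reshape(labels.shape)

index = np.array([10, 20, 30, 40])
print(multidim_get_indexer(index, [[20, 40], [10, 30]]))  # positions [[1, 3], [0, 2]]
```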

318975595 · shoyer · 2017-07-31T05:59:46Z (edited 2017-07-31T06:00:03Z) · https://github.com/pydata/xarray/pull/1473#issuecomment-318975595

I added a few more commits to my PR to your branch (https://github.com/fujiisoup/xarray/pull/2):

  • Reorganized test_variable.py. TestVariable_withDask is a good idea, but it needs to inherit from VariableSubclassTestCases, not TestVariable. Otherwise many of the base-class tests get run twice.
  • Added pointwise indexing support for dask using vindex. The logic is somewhat convoluted, but I think it works (review would be appreciated!). This will let us deprecate isel_points/sel_points.

318111741 · shoyer · 2017-07-26T16:41:40Z · https://github.com/pydata/xarray/pull/1473#issuecomment-318111741

> I'm slightly hesitating to deprecate this indexing in this PR. I guess it should go with another issue. (Some tests assume this indexing behavior.)

OK, we can certainly discuss this more broadly. But I think this test was just broken.

317795244 · shoyer · 2017-07-25T16:35:09Z · https://github.com/pydata/xarray/pull/1473#issuecomment-317795244

> As you suggested, I prepared our own indexer class. I think the code became much cleaner.

Thanks! I'll take a look shortly.

> Maybe this behavior was deprecated in numpy?

Yes, I think NumPy has deprecated/removed this sort of indexing.

> In my current PR, the boolean index is simply converted to an integer array by the .nonzero() method, so xarray works with such boolean arrays of a different size. Is this what we want?

No, for 1D boolean arrays I think we should insist that sizes match exactly. There is no obvious map between boolean indexer positions and array values otherwise.
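In NumPy terms, the size check plus the .nonzero() conversion looks like this (a sketch with a hypothetical helper, not xarray's actual code path):

```python
import numpy as np

def bool_to_int_indexer(mask, dim_size):
    # Require an exact size match before converting a 1-D boolean mask
    # to integer positions with .nonzero().
    mask = np.asarray(mask)
    if mask.size != dim_size:
        raise IndexError(
            f"boolean index has size {mask.size} but the dimension has size {dim_size}"
        )
    return mask.nonzero()[0]

arr = np.arange(5) * 10
idx = bool_to_int_indexer([True, False, True, False, True], arr.size)
print(arr[idx])  # [ 0 20 40]
```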

317465968 · shoyer · 2017-07-24T15:48:33Z · https://github.com/pydata/xarray/pull/1473#issuecomment-317465968

With the current logic, we normalize everything into a standard indexer tuple in Variable.__getitem__. I think we should explicitly create different kinds of indexers, and then handle them explicitly in the various backends/array wrappers, e.g.,

```python
# in core/indexing.py

class IndexerTuple(tuple):
    """Base class for xarray indexing tuples."""

    def __repr__(self):
        """Repr that shows the type name."""
        return type(self).__name__ + tuple.__repr__(self)


class BasicIndexer(IndexerTuple):
    """Tuple for basic indexing."""


class OuterIndexer(IndexerTuple):
    """Tuple for outer/orthogonal indexing (.oindex)."""


class VectorizedIndexer(IndexerTuple):
    """Tuple for vectorized indexing (.vindex)."""


# in core/variable.py

class Variable(...):
    def _broadcast_indexes(self, key):
        # return a BasicIndexer if possible, otherwise an OuterIndexer
        # if possible, and finally a VectorizedIndexer
        ...


# in adapters for various backends/storage types

class DaskArrayAdapter(...):
    def __getitem__(self, key):
        if isinstance(key, VectorizedIndexer):
            raise IndexError("dask doesn't yet support vectorized indexing")
        ...
```

This is a little more work at the outset because we have to handle each indexer type in each backend, but it avoids the error prone broadcasting/un-broadcasting logic.

315659641 · shoyer · 2017-07-17T02:58:57Z · https://github.com/pydata/xarray/pull/1473#issuecomment-315659641

Let's not worry about supporting every indexing type with dask. I think that with my patch we can do everything we currently do. We'll want vindex support eventually as well so we can remove isel_points(), but that can come later.

I think we'll also want to make a "vectorized to orthogonal" indexing adapter that we can use with netCDF4.Variable.
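One way such an adapter could work, sketched with NumPy (the `read_outer` callback and helper name are hypothetical): read the smallest outer selection covering the requested points, then apply the vectorized indexer to the in-memory block:

```python
import numpy as np

def vindex_via_outer(read_outer, indexers):
    # Backends like netCDF4.Variable support outer indexing only, so
    # fetch the unique positions along each axis (one orthogonal read)
    # and remap the vectorized indexer onto that smaller block.
    uniques, remapped = [], []
    for idx in indexers:
        idx = np.asarray(idx)
        unique, inverse = np.unique(idx, return_inverse=True)
        uniques.append(unique)
        remapped.append(inverse.reshape(idx.shape))
    block = read_outer(tuple(uniques))   # one outer/orthogonal read
    return block[tuple(remapped)]        # vectorized, in memory

data = np.arange(16).reshape(4, 4)
read_outer = lambda key: data[np.ix_(*key)]  # stand-in for a backend read
i, j = np.array([3, 1, 3]), np.array([0, 2, 0])
print(vindex_via_outer(read_outer, (i, j)))  # same as data[i, j]
```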

315631661 · shoyer · 2017-07-16T19:35:40Z · https://github.com/pydata/xarray/pull/1473#issuecomment-315631661

dask.array does support a limited form of fancy indexing via .vindex. I think we already use it in .isel_points(). It would certainly be better to use that and error in edge cases rather than silently converting to numpy arrays, which we never want to do.


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);