issue_comments: 191315935


html_url: https://github.com/pydata/xarray/issues/767#issuecomment-191315935
issue_url: https://api.github.com/repos/pydata/xarray/issues/767
id: 191315935 · issue: 134359597
user: 1217238 (MEMBER)
created_at: 2016-03-02T16:32:43Z · updated_at: 2016-03-02T16:32:43Z

The good news about writing our own custom way to select levels is that because we can avoid the stack/unstack, we can simply omit unused levels without worrying about doing `dropna` after `unstack`. So as long as we are implementing this in our own method (e.g., `sel` or `xs`), we can default to `drop_level=True`.
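
For concreteness, here is a small pandas-only illustration of the point above; the data and level names are invented and this is not xarray code:

```python
import pandas as pd

# Selecting from a MultiIndexed series by unstacking leaves NaN holes that
# would need a dropna, whereas a direct level-based selection simply omits
# the unused labels and drops the level.
idx = pd.MultiIndex.from_tuples(
    [("foo", 1.0), ("foo", 2.0), ("bar", 3.0)], names=["band", "wavenumber"]
)
s = pd.Series([10.0, 20.0, 30.0], index=idx)

unstacked = s.unstack("band")       # 3x2 frame with NaNs for missing pairs
direct = s.xs("bar", level="band")  # no NaNs; the 'band' level is dropped
```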

I would be OK with `xs`, but `da.xs('bar', dim='band_wavenumber', level='band')` feels much more verbose to me than `da.sel(band_wavenumber={'band': 'bar'})`. The latter solution involves inventing no new API, and because dictionaries are not hashable there is no potential conflict with existing functionality.
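
As a sketch of what the dictionary-based spelling would look like on a toy DataArray (the dimension and level names here are invented; treat this as an illustration of the proposal rather than a statement about any particular xarray version):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy MultiIndex with levels 'band' and 'wavenumber' stacked into one dimension.
idx = pd.MultiIndex.from_product(
    [["foo", "bar"], [1.0, 2.0, 3.0]], names=["band", "wavenumber"]
)
da = xr.DataArray(
    np.arange(6), dims="band_wavenumber", coords={"band_wavenumber": idx}
)

# Proposed spelling: the dict maps level names to labels for that dimension,
# so no new method or keyword needs to be invented.
subset = da.sel(band_wavenumber={"band": "bar"})
```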

Last year at the SciPy conference sprints, @jonathanrocher was working on adding similar dictionary support into `.loc` in pandas (i.e., `da.loc[{'band': 'band'}]`). I don't think he ever finished up that PR, but he might have a branch worth looking at as a starting point.

> I think that this solution is better than, e.g., directly providing index level names as arguments of the `sel` method. This may be confusing and there may be conflicts when different dimensions have the same index level names.

This is a fair point, but such scenarios are unlikely to appear in practice. We might be able to, for example, update our handling of MultiIndexes to guarantee that level names cannot conflict with other variables. This might be done by inserting dummy variables of some sort into the `_coords` dict whenever a MultiIndex is added. It would take some work to ensure this works smoothly, though.
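
A minimal sketch of what such a guard might look like; the function name and arguments are hypothetical, not part of xarray:

```python
import pandas as pd

def assert_no_level_name_conflicts(coord_names, dim, index):
    """Hypothetical guard (not xarray's implementation): refuse to attach a
    MultiIndex whose level names collide with existing coordinate names."""
    if not isinstance(index, pd.MultiIndex):
        return
    conflicts = set(index.names) & (set(coord_names) - {dim})
    if conflicts:
        raise ValueError(
            f"MultiIndex level name(s) {sorted(conflicts)} conflict with "
            f"existing coordinate names"
        )
```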

> Besides this, it would be nice if the `drop_level=True` behavior could be applied by default to any selection (i.e., also when using `loc`, `sel`, etc.), as in pandas. I don't know how pandas does this (I'll look into it), but at first glance this would imply checking, for each dimension, whether it has a multi-index and then checking the labels for each index level.

Yes, agreed. Unfortunately the pandas code that handles this is a complete mess of spaghetti code (see pandas/core/indexing.py). So you are welcome to try decoding it, but in my opinion you might be better off starting from scratch. In xarray, the function `convert_label_indexer` would need an updated interface that allows it to possibly return a new `pandas.Index` object to replace the existing index.
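
To make the suggested interface change concrete, here is a rough sketch; the function name `convert_dict_label` is made up and this is not xarray's actual `convert_label_indexer`, but it shows how a label converter could hand back both a positional indexer and a replacement index with the selected levels dropped:

```python
import pandas as pd

def convert_dict_label(index, labels, drop_level=True):
    """Hypothetical sketch: turn a {level: label} dict into a positional
    indexer plus (possibly) a new index with the selected levels dropped."""
    if not isinstance(index, pd.MultiIndex):
        raise TypeError("dictionary labels require a pandas.MultiIndex")
    indexer, new_index = index.get_loc_level(
        tuple(labels.values()), level=list(labels.keys()), drop_level=drop_level
    )
    return indexer, new_index

# Example: select 'band' == 'bar' and get back an index over 'wavenumber' only.
idx = pd.MultiIndex.from_product(
    [["foo", "bar"], [1.0, 2.0]], names=["band", "wavenumber"]
)
indexer, new_index = convert_dict_label(idx, {"band": "bar"})
```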
