issues

Table actions
  • GraphQL API for issues

1 row where state = "open" and user = 8995328 sorted by updated_at descending

✎ View and edit SQL

This data as json, CSV (advanced)

Suggested facets: created_at (date), updated_at (date)

type 1

  • issue 1

state 1

  • open · 1 ✖

repo 1

  • xarray 1
#7964: DataArray.sel methods: "inner" and "outer"

  • id: 1791218322 (node_id I_kwDOAMm_X85qw9KS)
  • user: clbarnes (8995328)
  • state: open
  • comments: 3
  • created_at: 2023-07-06T09:57:15Z
  • updated_at: 2023-07-19T13:14:50Z
  • author_association: NONE
  • repo: xarray
  • type: issue

Is your feature request related to a problem?

Currently, in the case of inexact matches, you can select the nearest coordinate (method="nearest"), the next coordinate (method="backfill"), or the previous coordinate (method="pad"). This works well for single matches, but not for slices. If I want to extract the ROI that definitely contains all the data from 5.5 to 10.5, I don't want 5 to 10, or 6 to 11: I want 5 to 11 (or possibly 6 to 10).

It would be helpful to be able to treat the start and stop boundary differently when it comes to inexact matches for selection.

Describe the solution you'd like

The addition of method="inner" and method="outer" to the DataArray.sel method (names up for debate). When paired with slice selections, "inner" would take the next coordinate to the right of the left boundary and the next to the left of the right boundary; "outer" would take the next to the left of the left boundary and the next to the right of the right boundary.
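The proposed semantics could be sketched with `numpy.searchsorted` on a monotonically increasing coordinate (the `slice_bounds` helper and the mode names here are hypothetical, not an existing xarray API):

```python
import numpy as np

def slice_bounds(coords, start, stop, method):
    """Return positional bounds (lo, hi) into sorted `coords` for [start, stop].

    method="inner": tightest slice lying fully inside [start, stop]
    method="outer": smallest slice fully containing [start, stop]
    Sketch for a 1-D monotonically increasing coordinate only.
    """
    if method == "inner":
        lo = np.searchsorted(coords, start, side="left")    # first coord >= start
        hi = np.searchsorted(coords, stop, side="right")    # one past last coord <= stop
    elif method == "outer":
        lo = max(np.searchsorted(coords, start, side="right") - 1, 0)       # last coord <= start
        hi = min(np.searchsorted(coords, stop, side="left") + 1, len(coords))  # one past first coord >= stop
    else:
        raise ValueError(f"unknown method: {method!r}")
    return lo, hi

coords = np.array([5, 6, 7, 8, 9, 10, 11])
lo, hi = slice_bounds(coords, 5.5, 10.5, "inner")  # coords[lo:hi] -> 6 .. 10
lo, hi = slice_bounds(coords, 5.5, 10.5, "outer")  # coords[lo:hi] -> 5 .. 11
```

For the 5.5–10.5 example above, "inner" yields 6 to 10 and "outer" yields 5 to 11, matching the motivating example.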

This wouldn't be compatible with scalar or vector indexing; for those, the method could either default to None or to "nearest". It might also be nice to treat different dimensions differently, but that would be a separate feature, and can probably already be achieved using successive calls to .sel.

"inner" is currently the default behaviour for indexing with a slice (for monotonic increasing coordinates). This seems like it could be extended for non-monotonic coordinates and for decreasing coordinates (reversing the dimension depending on the stride of the slice).

Describe alternatives you've considered

Something like numpy.logical_and(coords >= left, coords <= right) and boolean indexing for "inner"; for "outer", the same mask widened by one coordinate on each side. This isn't very efficient for large arrays.
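That mask-based workaround could look like the following sketch; the edge handling is exactly the fiddly part a built-in method would hide:

```python
import numpy as np

coords = np.array([5, 6, 7, 8, 9, 10, 11])
left, right = 5.5, 10.5

# "inner": every coordinate inside the requested interval -- an O(n) scan
inner = np.logical_and(coords >= left, coords <= right)

# "outer": widen the inner mask by one element on each side,
# shifting explicitly so nothing wraps past the array edges
outer = inner.copy()
outer[:-1] |= inner[1:]   # pull in the coordinate just left of the interval
outer[1:] |= inner[:-1]   # pull in the coordinate just right of the interval

coords[inner]  # array([ 6,  7,  8,  9, 10])
coords[outer]  # array([ 5,  6,  7,  8,  9, 10, 11])
```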

Additional context

I have 3D images and need to extract an ROI around a particular point. I want to make sure that the extracted array is at least as large as a certain padding around the point. I could just add the resolution to the padding, but this feature seems like it would be useful in other contexts.
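As a 1-D illustration of that use case (the coordinate values and resolution here are assumed, not from the issue): with an "outer"-style selection, the extracted window is guaranteed to cover the full point ± pad interval, even when the bounds fall between grid points.

```python
import numpy as np

z = np.arange(0.0, 100.0, 4.0)   # one image axis at 4-unit resolution (assumed)
point, pad = 50.0, 5.0

# "outer" bounds: last coordinate <= point - pad, first coordinate >= point + pad
lo = max(np.searchsorted(z, point - pad, side="right") - 1, 0)
hi = min(np.searchsorted(z, point + pad, side="left") + 1, len(z))
z[lo:hi]  # array([44., 48., 52., 56.]) -- covers [45, 55] entirely
```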

