issue_comments

4 rows where issue = 1327380960 and user = 2448579 sorted by updated_at descending

Facets:
  • user: dcherian (4)
  • issue: Avoid calling np.asarray on lazy indexing classes (4)
  • author_association: MEMBER (4)
Columns: id · html_url · issue_url · node_id · user · created_at · updated_at (sorted descending) · author_association · body · reactions · performed_via_github_app · issue
id: 1484205746 · html_url: https://github.com/pydata/xarray/pull/6874#issuecomment-1484205746 · issue_url: https://api.github.com/repos/pydata/xarray/issues/6874 · node_id: IC_kwDOAMm_X85Ydy6y · user: dcherian (2448579) · created_at: 2023-03-26T19:58:05Z · updated_at: 2023-03-26T19:58:05Z · author_association: MEMBER

I'd like to merge this at the end of next week.

It now has tests and should be backwards compatible with external backends.

A good next step would be to finish up #7020

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Avoid calling np.asarray on lazy indexing classes (1327380960)
id: 1433777395 · html_url: https://github.com/pydata/xarray/pull/6874#issuecomment-1433777395 · issue_url: https://api.github.com/repos/pydata/xarray/issues/6874 · node_id: IC_kwDOAMm_X85VdbTz · user: dcherian (2448579) · created_at: 2023-02-16T22:03:22Z · updated_at: 2023-02-16T22:03:22Z · author_association: MEMBER

T_ExplicitlyIndexed may be a different thing to add

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Avoid calling np.asarray on lazy indexing classes (1327380960)
id: 1433629099 · html_url: https://github.com/pydata/xarray/pull/6874#issuecomment-1433629099 · issue_url: https://api.github.com/repos/pydata/xarray/issues/6874 · node_id: IC_kwDOAMm_X85Vc3Gr · user: dcherian (2448579) · created_at: 2023-02-16T19:52:38Z · updated_at: 2023-02-16T19:52:38Z · author_association: MEMBER

@Illviljan feel free to push any typing changes to this PR. I think that would really help clarify the interface. I tried adding a DuckArray type but that didn't go too far.
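
One possible shape for such a DuckArray type, purely as an illustrative sketch (this Protocol is an assumption, not the interface xarray adopted):

from typing import Any, Protocol, Tuple, runtime_checkable

import numpy as np

@runtime_checkable
class DuckArray(Protocol):
    # Hypothetical minimal duck-array surface: just enough for indexing code.
    @property
    def dtype(self) -> np.dtype: ...
    @property
    def shape(self) -> Tuple[int, ...]: ...
    def __getitem__(self, key: Any) -> Any: ...

# isinstance() on a runtime_checkable Protocol only checks attribute presence.
print(isinstance(np.arange(3), DuckArray))  # True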

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Avoid calling np.asarray on lazy indexing classes (1327380960)
id: 1385842347 · html_url: https://github.com/pydata/xarray/pull/6874#issuecomment-1385842347 · issue_url: https://api.github.com/repos/pydata/xarray/issues/6874 · node_id: IC_kwDOAMm_X85Smkar · user: dcherian (2448579) · created_at: 2023-01-17T18:21:01Z · updated_at: 2023-01-17T18:26:51Z · author_association: MEMBER

background

(Moving the convo out of a comment for visibility)

For reference, the code we would like is:

    array = as_indexable(self.array)
    array = array[self.key]
    array = array.get_duck_array()

problem

So far I have found two failure cases:

1. Wrapping a backend array instance

This method removes the LazilyIndexedArray wrapper. When we do that, we sometimes end up with another ExplicitlyIndexed array (_ElementwiseFunctionArray) and sometimes with a BackendArray. We then apply array[self.key], which returns an ExplicitlyIndexed array for the former but forces a load from disk for the latter.

One way to solve this would be to return another ExplicitlyIndexed array from __getitem__ on the BackendArrays; that call is what currently forces a load from disk.

I tried this by making __getitem__ on a BackendArrayWrapper return an ExplicitlyIndexed array (commit). This breaks all existing backends, and is a little complicated to add to the Zarr and Scipy backends.
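
A minimal, self-contained sketch of that idea (all class names below are hypothetical stand-ins, not xarray's actual classes): the backend-style array loads eagerly on __getitem__, while the wrapper's __getitem__ stays lazy and only touches the data in get_duck_array():

import numpy as np

class ExplicitlyIndexedSketch:
    """Stand-in marker for a lazy, explicitly indexed wrapper."""

class EagerBackendArray:
    """Plays the role of a BackendArray: __getitem__ loads data eagerly."""
    def __init__(self, data):
        self._data = np.asarray(data)

    def __getitem__(self, key):
        # Returns an in-memory duck array -- the "forces a load from disk" case.
        return self._data[key]

class LazyBackendArrayWrapper(ExplicitlyIndexedSketch):
    """__getitem__ returns another lazy wrapper instead of loading data."""
    def __init__(self, array, key=(slice(None),)):
        self.array = array
        self.key = key

    def __getitem__(self, key):
        # Remember the key and stay lazy (a real implementation would
        # compose successive keys rather than replace them).
        return LazyBackendArrayWrapper(self.array, key)

    def get_duck_array(self):
        # Only here do we actually index the underlying backend array.
        return self.array[self.key]

lazy = LazyBackendArrayWrapper(EagerBackendArray(np.arange(10)))
still_lazy = lazy[2:5]              # no load yet
print(still_lazy.get_duck_array())  # [2 3 4]

The cost described above is that every existing backend would have to return such wrappers, which is the compatibility break this comment is pointing at.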

2. Wrapping an IndexingAdapter instance

The other case is when as_indexable returns an IndexingAdapter instance. Again, these return duck arrays from __getitem__, so we can't call get_duck_array on the result.

https://github.com/pydata/xarray/blob/522ee2210499cfa434a0ca0825790949a8e08ff0/xarray/core/indexing.py#L671-L690
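
To make this failure concrete, here is a hedged sketch (the adapter below is a hypothetical stand-in, not the linked xarray code): its __getitem__ hands back a plain numpy array, so the final get_duck_array() call in the pattern above has nothing to attach to.

import numpy as np

class IndexingAdapterSketch:
    # Hypothetical adapter: __getitem__ returns a plain duck array directly.
    def __init__(self, array):
        self.array = np.asarray(array)

    def __getitem__(self, key):
        return self.array[key]

adapter = IndexingAdapterSketch(np.arange(6))
indexed = adapter[1:4]

try:
    indexed.get_duck_array()
except AttributeError as err:
    print(err)  # 'numpy.ndarray' object has no attribute 'get_duck_array'

# One defensive option (an assumption, not necessarily what the PR does):
# fall back to the value itself when it is already a duck array.
duck = indexed.get_duck_array() if hasattr(indexed, "get_duck_array") else indexed
print(duck)  # [1 2 3]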

how to proceed?

@shoyer I'm not sure how to proceed.

1. For BackendArrays we could wrap them in another lazy indexing class when the Variable is created, so we don't break our backend entrypoint contract (that __getitem__ can return a duck array); a rough sketch follows below. We might need something like this for TensorStore anyway.
2. I don't know what to do about the indexing adapters.
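
A rough sketch of option 1, with hypothetical names throughout (make_variable_data is an invented hook, not an xarray function): the backend keeps returning duck arrays from __getitem__, and the lazy wrapping happens where the Variable's data is assembled:

import numpy as np

class BackendArraySketch:
    # Plays the role of a BackendArray: __getitem__ returns a duck array,
    # exactly as the backend entrypoint contract allows today.
    def __init__(self, data):
        self._data = np.asarray(data)

    def __getitem__(self, key):
        return self._data[key]

class LazilyWrappedBackendArray:
    # Applied outside the backend, so backends themselves stay unchanged.
    def __init__(self, backend_array, key=(slice(None),)):
        self.backend_array = backend_array
        self.key = key

    def __getitem__(self, key):
        # Stay lazy; a real implementation would compose successive keys.
        return LazilyWrappedBackendArray(self.backend_array, key)

    def get_duck_array(self):
        return self.backend_array[self.key]

def make_variable_data(backend_array):
    # Hypothetical hook standing in for "when the Variable is created".
    return LazilyWrappedBackendArray(backend_array)

data = make_variable_data(BackendArraySketch(np.arange(8)))
print(data[::2].get_duck_array())  # [0 2 4 6]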

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Avoid calling np.asarray on lazy indexing classes (1327380960)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
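
For reference, the 4 rows above correspond to a query along these lines; a minimal sketch using Python's sqlite3 module, assuming the table lives in a local SQLite file (the filename github.db is an assumption):

import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (1327380960, 2448579),
).fetchall()
print(len(rows))  # 4 for this issue/user combination
conn.close()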