issue_comments


2 rows where issue = 659129613 and user = 2448579 sorted by updated_at descending


id: 739094107
html_url: https://github.com/pydata/xarray/issues/4234#issuecomment-739094107
issue_url: https://api.github.com/repos/pydata/xarray/issues/4234
node_id: MDEyOklzc3VlQ29tbWVudDczOTA5NDEwNw==
user: dcherian (2448579)
created_at: 2020-12-05T00:48:49Z
updated_at: 2020-12-05T00:48:49Z
author_association: MEMBER
body:

The indexes story will change soon, we may even have our own index classes.

We should have pretty decent support for NEP-18 arrays in DataArray.data though, so IMO that's the best thing to try out and see where the issues remain.

NEP-35 is cool; looks like we should use it in our *_like functions.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Add ability to change underlying array type (659129613)
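
The "decent support for NEP-18 arrays in DataArray.data" suggestion above can be probed directly. A minimal sketch, assuming the sparse package as the NEP-18 duck array (the comment does not name a specific library):

# Wrap a NEP-18 compliant array in a DataArray and check which operations
# preserve the underlying array type. sparse is only an example backend here.
import numpy as np
import sparse
import xarray as xr

coo = sparse.COO.from_numpy(np.eye(4))
da = xr.DataArray(coo, dims=("x", "y"), name="example")

print(type(da.data))         # the sparse array, not numpy.ndarray
print(type((da + da).data))  # arithmetic dispatches via NEP-18 and typically stays sparse

Operations that still coerce to numpy internally are exactly the places "where the issues remain" that the comment suggests looking for.
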
id: 660180546
html_url: https://github.com/pydata/xarray/issues/4234#issuecomment-660180546
issue_url: https://api.github.com/repos/pydata/xarray/issues/4234
node_id: MDEyOklzc3VlQ29tbWVudDY2MDE4MDU0Ng==
user: dcherian (2448579)
created_at: 2020-07-17T15:46:15Z
updated_at: 2020-07-17T15:46:15Z
author_association: MEMBER
body:

See similar discussion for sparse here: https://github.com/pydata/xarray/issues/3245

asarray makes sense to me.

I think we are also open to special as_sparse, as_dense, as_cupy methods that return xarray objects with converted arrays.

A to_numpy_data method (or as_numpy?) would always coerce to numpy appropriately.

IIRC there's some way to read from disk to GPU, isn't there? So it makes sense to expose that in our open_* functions.

Re: index variables. Can we avoid this for now? Or are there going to be performance issues? The general problem will be handled as part of the index refactor (we've deferred pint support for indexes for this reason).

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Add ability to change underlying array type (659129613)
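
The as_sparse, as_dense, as_cupy and as_numpy methods floated above are proposals, not existing API at the time of this comment. A rough sketch of what such conversions could look like, assuming DataArray.copy(data=...) as the mechanism and sparse as the non-numpy backend (both assumptions):

# Hypothetical conversion helpers in the spirit of the proposed as_sparse /
# as_numpy methods; the names and behaviour here are illustrative only.
import numpy as np
import sparse
import xarray as xr

def as_sparse(da: xr.DataArray) -> xr.DataArray:
    # Return a copy of da backed by a sparse.COO array.
    return da.copy(data=sparse.COO.from_numpy(np.asarray(da.data)))

def as_numpy(da: xr.DataArray) -> xr.DataArray:
    # Coerce the underlying array back to a plain numpy.ndarray.
    data = da.data
    if hasattr(data, "todense"):  # sparse-style duck arrays
        data = data.todense()
    return da.copy(data=np.asarray(data))

da = xr.DataArray(np.eye(3), dims=("x", "y"))
print(type(as_sparse(da).data))            # sparse COO
print(type(as_numpy(as_sparse(da)).data))  # numpy.ndarray

Index variables are left untouched here, which matches the suggestion above to defer that part to the index refactor.
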

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
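
For reference, the filter summarized at the top of the page ("2 rows where issue = 659129613 and user = 2448579 sorted by updated_at descending") maps onto this schema roughly as follows; the database filename below is an assumption, not part of the page.

# Reproduce the page's filter against a local copy of the SQLite database.
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename; adjust to the real file
rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 659129613 AND [user] = 2448579
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, updated_at, association, body in rows:
    print(comment_id, updated_at, association, body[:60])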