
issue_comments

2 rows where issue = 484240082 and user = 35968931 sorted by updated_at descending

id: 873480044
html_url: https://github.com/pydata/xarray/issues/3245#issuecomment-873480044
issue_url: https://api.github.com/repos/pydata/xarray/issues/3245
node_id: MDEyOklzc3VlQ29tbWVudDg3MzQ4MDA0NA==
user: TomNicholas (35968931)
created_at: 2021-07-03T22:41:39Z
updated_at: 2021-07-03T22:41:39Z
author_association: MEMBER
body:

Also I can add a `.to_pint()` method, but I don't really understand what the purpose would be. To be any use to users it would need to accept a units argument, but then it's basically just a less powerful version of pint_xarray's `.quantify()` accessor method. Can you see any reason to add it @keewis?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: sparse and other duck array issues (484240082)
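
As background for the `.to_pint()` question in the comment above: pint-xarray already provides a `.pint.quantify()` accessor that attaches units taken either from an explicit argument or from the `units` attribute, which is the comparison being drawn. A minimal sketch, assuming pint-xarray is installed; the `to_pint` method itself is hypothetical and only referenced in a comment:

```python
import xarray as xr
import pint_xarray  # noqa: F401 -- importing registers the .pint accessor

da = xr.DataArray([1.0, 2.0, 3.0], dims="x", attrs={"units": "m"})

# pint-xarray's existing accessor: wraps the underlying data in a
# pint.Quantity, reading the units from attrs (or from an argument).
quantified = da.pint.quantify()        # uses attrs["units"]
# quantified = da.pint.quantify("m")   # or pass the units explicitly

# A hypothetical DataArray.to_pint(units) would need the same units
# argument to be useful, which is why it would largely duplicate quantify().
```
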
id: 873222015
html_url: https://github.com/pydata/xarray/issues/3245#issuecomment-873222015
issue_url: https://api.github.com/repos/pydata/xarray/issues/3245
node_id: MDEyOklzc3VlQ29tbWVudDg3MzIyMjAxNQ==
user: TomNicholas (35968931)
created_at: 2021-07-02T19:43:04Z
updated_at: 2021-07-02T20:18:11Z
author_association: MEMBER
body:

I'm trying to implement this (I wanted pint-aware plotting to work) in #5568, but not 100% sure if I'm doing it right.

~~1) Do we want these `as_*`, `to_*` methods to live in `Variable` or `DataArray`?~~

2) What about multiply-wrapped arrays? At the moment I have essentially

```python
def as_numpy(self) -> T_DataArray:
    """
    Coerces wrapped data into a numpy array, and returns it wrapped in a DataArray.
    """
    data = self.data
    if isinstance(data, dask_array_type):
        return self.load()
    elif isinstance(data, cupy_array_type):
        return self.copy(data=data.get())
    elif isinstance(data, pint_array_type):
        return self.copy(data=data.magnitude)
    elif isinstance(data, sparse_array_type):
        return self.copy(data=data.to_dense())
    else:
        return self.copy(data=np.array(data))
```

but do I actually want multiple passes, like

```python
data = self.data
if isinstance(data, dask_array_type):
    data = self.load().data
if isinstance(data, cupy_array_type):
    data = data.get()
if isinstance(data, pint_array_type):
    data = data.magnitude
if isinstance(data, sparse_array_type):
    data = data.to_dense()

return self.copy(data=np.array(data))
```

3) @jacobtomlinson currently `.values` will extract the values of a cupy array by special-casing it, but based on the above discussion do we actually want to break that in favour of getting them via `.to_numpy()` instead?

4) There are `_as_sparse` and `_to_dense` methods already, but they are private and only present on `Variable`. Should I expose them?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: sparse and other duck array issues (484240082)
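
On the "multiply-wrapped arrays" question in point 2 of the comment above: a pint `Quantity` can itself wrap a dask or sparse array, so a single `if`/`elif` chain that returns after the first match only strips the outermost layer. One way to picture the multiple-pass idea is a loop that keeps unwrapping until no recognised layer remains. This is an illustrative sketch, not xarray's implementation; the helper name and the duck-typed attribute checks are assumptions:

```python
import numpy as np


def unwrap_to_numpy(data, max_layers=4):
    """Hypothetical helper (not xarray API): strip known duck-array layers
    one at a time, so that e.g. a pint.Quantity wrapping a dask array still
    ends up as a plain numpy array."""
    for _ in range(max_layers):
        if hasattr(data, "magnitude"):      # pint.Quantity -> drop the units
            data = data.magnitude
        elif hasattr(data, "todense"):      # sparse.COO -> densify
            data = data.todense()
        elif hasattr(data, "compute"):      # dask array -> materialise the graph
            data = data.compute()
        elif type(data).__module__.startswith("cupy"):
            data = data.get()               # cupy -> copy device memory to host
        else:
            break                           # no recognised wrapper left
    return np.asarray(data)
```

The order of the checks is exactly the kind of detail the comment is asking about, since it determines which layer gets peeled off first.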

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
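
The filtered view at the top of the page ("2 rows where issue = 484240082 and user = 35968931 sorted by updated_at descending") maps directly onto this schema. A sketch of reproducing it against a local SQLite export with Python's sqlite3 module; the `github.db` filename is an assumption:

```python
import sqlite3

# Hypothetical local export of this Datasette instance; the filename is an
# assumption, but the table and column names come from the schema above.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    select id, created_at, updated_at, author_association, body
    from issue_comments
    where issue = ? and user = ?
    order by updated_at desc
    """,
    (484240082, 35968931),
).fetchall()

for comment_id, created, updated, assoc, body in rows:
    print(comment_id, updated, assoc, body[:60])
```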