issue_comments


8 rows where issue = 484240082 and user = 1217238 sorted by updated_at descending




Facets:
  • user: shoyer (8)
  • issue: sparse and other duck array issues (8)
  • author_association: MEMBER (8)
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue
662783599 https://github.com/pydata/xarray/issues/3245#issuecomment-662783599 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDY2Mjc4MzU5OQ== shoyer 1217238 2020-07-23T01:58:00Z 2020-07-23T01:58:00Z MEMBER

Agreed! Sparse also needs a special case.

On Wed, Jul 22, 2020 at 6:35 PM Deepak Cherian notifications@github.com wrote:

IMO a good first pass for as_numpy would be to use np.asarray with a special case for cupy where it would use .get.


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
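
A minimal sketch of the conversion described in this comment, assuming np.asarray as the default path with special cases for cupy (device-to-host copy via .get) and for sparse (explicit densification). The helper name to_numpy_data and the type checks are illustrative, not xarray's actual implementation:

import numpy as np

def to_numpy_data(data):
    """Coerce a duck array to a plain numpy.ndarray (illustrative sketch)."""
    if type(data).__module__.startswith("cupy"):
        # cupy arrays copy device memory back to the host with .get()
        return data.get()
    if hasattr(data, "todense"):
        # sparse arrays (e.g. sparse.COO) need explicit densification
        return np.asarray(data.todense())
    # default path: anything numpy can coerce directly
    return np.asarray(data)
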
662583820 https://github.com/pydata/xarray/issues/3245#issuecomment-662583820 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDY2MjU4MzgyMA== shoyer 1217238 2020-07-22T17:26:59Z 2020-07-22T17:26:59Z MEMBER

3. for now we are OK with special as_sparse, as_cupy, as_pint methods in xarray as long as all that logic is contained in one place: as_duck_array.py?

+1 for as_numpy(), too

One question about as_numpy: should it convert pint arrays into NumPy by stripping units? Or should it convert the arrays underlying pint and keep the units? I guess the first would probably make more sense for DataArray.as_numpy(). The second behavior could be achieved with DataArray.pint.as_numpy().

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
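
To make the two options concrete, here is a small sketch with hypothetical helper names (not xarray API), assuming pint is installed; in practice the magnitude could itself be another duck array such as dask or cupy:

import numpy as np
import pint

ureg = pint.UnitRegistry()
q = ureg.Quantity(np.arange(3.0), "m")

def strip_units(quantity):
    # first option: as_numpy() drops the units and returns plain numpy
    return np.asarray(quantity.magnitude)

def keep_units(quantity):
    # second option: convert only the wrapped array to numpy, keep the units
    return ureg.Quantity(np.asarray(quantity.magnitude), quantity.units)

print(strip_units(q))   # [0. 1. 2.]  (plain numpy array, units gone)
print(keep_units(q))    # a pint Quantity wrapping a numpy array, units kept
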
555590777 https://github.com/pydata/xarray/issues/3245#issuecomment-555590777 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDU1NTU5MDc3Nw== shoyer 1217238 2019-11-19T16:30:18Z 2019-11-19T16:30:18Z MEMBER

Also isn't the function astype not as_type?

Yes, but I think that mostly works because it's so short. In general the rule is to use underscores when it improves clarity. assparse could read in at least two different ways, one of which is not really appropriate.

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 3
}
  sparse and other duck array issues 484240082
555588678 https://github.com/pydata/xarray/issues/3245#issuecomment-555588678 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDU1NTU4ODY3OA== shoyer 1217238 2019-11-19T16:25:55Z 2019-11-19T16:25:55Z MEMBER

I like as_sparse and as_dense, because it makes clear that the objects are still xarray objects. I agree with @fujiisoup that to_sparse/todense could be mistaken for methods that return sparse arrays directly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
524488632 https://github.com/pydata/xarray/issues/3245#issuecomment-524488632 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDUyNDQ4ODYzMg== shoyer 1217238 2019-08-23T23:08:28Z 2019-08-23T23:08:28Z MEMBER

Xarray's .data is basically the equivalent of pandas's .array.

On Fri, Aug 23, 2019 at 2:44 PM Maximilian Roos notifications@github.com wrote:

I do think the pandas approach is pretty good here - don't break anything relying on .values - instead have .array to get the backing array (in whatever form it might be), and to_numpy() coercing to a raw numpy array

ref https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#accessing-the-values-in-a-series-or-index


{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
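
The analogy, illustrated with public pandas (0.24 or later) and xarray attributes:

import numpy as np
import pandas as pd
import xarray as xr

s = pd.Series(pd.Categorical(["a", "b", "a"]))
print(type(s.array))       # the backing Categorical ExtensionArray, no coercion
print(type(s.to_numpy()))  # numpy.ndarray, explicit coercion

da = xr.DataArray(np.arange(3))
print(type(da.data))       # the underlying array as stored (numpy, dask, sparse, ...)
print(type(da.values))     # always a numpy.ndarray
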
524410610 https://github.com/pydata/xarray/issues/3245#issuecomment-524410610 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDUyNDQxMDYxMA== shoyer 1217238 2019-08-23T18:09:25Z 2019-08-23T18:09:25Z MEMBER

The main downside of to_numpy() is that it isn't obvious whether the result should be a DataArray containing a NumPy array or a raw NumPy array. In particular, pandas uses to_numpy() for converting into a raw NumPy array, which is the opposite of our intent here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
524407909 https://github.com/pydata/xarray/issues/3245#issuecomment-524407909 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDUyNDQwNzkwOQ== shoyer 1217238 2019-08-23T18:00:45Z 2019-08-23T18:00:45Z MEMBER
  • Should we formalize the convention that .values always returns a numpy array, i.e. it will call todense() implicitly, and that .data will always return the underlying container (sparse, dask, etc.)?

I think .values should either return a NumPy array or raise an exception, based upon whether or not the underlying duck array supports coercion with np.array().

We should have a separate API (maybe .to_dense() or .to_numpy_data()?) for explicitly converting into NumPy arrays. This should not be called automatically inside xarray.

Basically, we should leave the decision about whether automatic coercion is safe up to the authors of duck array libraries.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082
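
A minimal sketch of that principle: a toy duck array (not any real library) opts out of automatic coercion by raising from __array__, similar in spirit to how sparse refuses implicit densification by default, and a .values-style accessor simply surfaces that decision:

import numpy as np

class NoAutoCoerce:
    """Toy duck array that refuses automatic conversion to numpy."""

    def __init__(self, shape):
        self.shape = shape

    def __array__(self, dtype=None, copy=None):
        raise RuntimeError(
            "automatic conversion to a dense numpy array is disabled; "
            "call an explicit densification method instead"
        )

try:
    np.asarray(NoAutoCoerce(shape=(3,)))  # what a .values-style accessor would do
except RuntimeError as err:
    print("coercion refused:", err)
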
524125410 https://github.com/pydata/xarray/issues/3245#issuecomment-524125410 https://api.github.com/repos/pydata/xarray/issues/3245 MDEyOklzc3VlQ29tbWVudDUyNDEyNTQxMA== shoyer 1217238 2019-08-23T00:19:15Z 2019-08-23T00:19:15Z MEMBER

I think it is intentional for automatic coercion to NumPy arrays to fail. Making plot() and .values work should probably require explicitly writing dataarray.to_dense() first.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  sparse and other duck array issues 484240082


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
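
For reference, the filter described at the top of this page can be reproduced against a local copy of this database with Python's sqlite3 module; the file name below is a placeholder:

import sqlite3

conn = sqlite3.connect("github.db")  # placeholder path to a local copy of this database
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE [issue] = 484240082 AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 8 rows for this page's filter
conn.close()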