issue_comments

5 rows where author_association = "MEMBER" and issue = 427644858 sorted by updated_at descending

Facets:
  • user (3 values): fmaussion (3), shoyer (1), max-sixty (1)
  • issue (1 value): WHERE function, problems with memory operations? (5)
  • author_association (1 value): MEMBER (5)
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sorted descending), author_association, body, reactions, performed_via_github_app, issue

id: 1094070525 (node_id IC_kwDOAMm_X85BNjD9)
html_url: https://github.com/pydata/xarray/issues/2861#issuecomment-1094070525
issue_url: https://api.github.com/repos/pydata/xarray/issues/2861
user: max-sixty (5635139) · author_association: MEMBER
created_at: 2022-04-09T15:41:49Z · updated_at: 2022-04-09T15:41:49Z

Closing, please reopen if still an issue

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: WHERE function, problems with memory operations? (427644858)

id: 478750808 (node_id MDEyOklzc3VlQ29tbWVudDQ3ODc1MDgwOA==)
html_url: https://github.com/pydata/xarray/issues/2861#issuecomment-478750808
issue_url: https://api.github.com/repos/pydata/xarray/issues/2861
user: shoyer (1217238) · author_association: MEMBER
created_at: 2019-04-01T21:14:56Z · updated_at: 2019-04-01T21:14:56Z

The usual recommendation is to align all of your separate datasets onto the same grid before combining them. reindex_like() and interp_like() make this pretty easy, e.g., proof.interp_like(ref).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: WHERE function, problems with memory operations? (427644858)
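
The alignment step shoyer recommends can be sketched with made-up data; the ref/proof names and the WSS variable come from the thread, while the grids and the tiny coordinate offset below are invented for illustration:

import numpy as np
import xarray as xr

# Hypothetical stand-ins for the issue's datasets: same physical domain,
# but the y labels differ by a tiny floating-point amount.
ref = xr.Dataset(
    {"WSS": (("y", "x"), np.random.rand(4, 5))},
    coords={"y": np.linspace(0, 1, 4), "x": np.linspace(0, 1, 5)},
)
proof = xr.Dataset(
    {"WSS": (("y", "x"), np.random.rand(4, 5))},
    coords={"y": np.linspace(0, 1, 4) + 1e-9, "x": np.linspace(0, 1, 5)},
)

# Put proof onto ref's grid before combining, as recommended above.
proof_on_ref = proof.interp_like(ref)                      # interpolate to ref's coordinates
proof_nearest = proof.reindex_like(ref, method="nearest")  # or snap to the nearest existing labels

# Once both objects share exactly the same labels, element-wise
# operations no longer trigger any surprising re-alignment.
masked = proof_on_ref["WSS"].where(ref["WSS"].notnull())
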

id: 478570138 (node_id MDEyOklzc3VlQ29tbWVudDQ3ODU3MDEzOA==)
html_url: https://github.com/pydata/xarray/issues/2861#issuecomment-478570138
issue_url: https://api.github.com/repos/pydata/xarray/issues/2861
user: fmaussion (10050469) · author_association: MEMBER
created_at: 2019-04-01T13:03:34Z · updated_at: 2019-04-01T13:03:34Z

Thanks, I could download them. Can you tell us what the problem with these files is that we might have to solve in xarray?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: WHERE function, problems with memory operations? (427644858)

id: 478546784 (node_id MDEyOklzc3VlQ29tbWVudDQ3ODU0Njc4NA==)
html_url: https://github.com/pydata/xarray/issues/2861#issuecomment-478546784
issue_url: https://api.github.com/repos/pydata/xarray/issues/2861
user: fmaussion (10050469) · author_association: MEMBER
created_at: 2019-04-01T11:47:27Z · updated_at: 2019-04-01T11:47:27Z

> Up to now I never thought about that the 'notnull' method is acting on more than only the data itself

All xarray operations return xarray objects, and xarray will try to match coordinates wherever possible.

> However, the coordinates are already mathematically identical

In your example above, they are not. Can you help us reproduce the error with a Minimal Complete Verifiable Example?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: WHERE function, problems with memory operations? (427644858)
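
A small, invented illustration of the coordinate matching fmaussion describes; the 1e-9 offset stands in for coordinates that look "mathematically identical" but are not bit-for-bit equal:

import xarray as xr

a = xr.DataArray([1.0, 2.0, 3.0], coords={"x": [0.0, 1.0, 2.0]}, dims="x")
b = xr.DataArray([1.0, 2.0, 3.0], coords={"x": [0.0, 1.0, 2.0 + 1e-9]}, dims="x")

# Binary operations between xarray objects align on coordinate labels first,
# so the almost-but-not-exactly-equal label is not treated as a match.
print((a + b).sizes["x"])  # 2 -- only the exactly matching labels are paired up
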

id: 478538543 (node_id MDEyOklzc3VlQ29tbWVudDQ3ODUzODU0Mw==)
html_url: https://github.com/pydata/xarray/issues/2861#issuecomment-478538543
issue_url: https://api.github.com/repos/pydata/xarray/issues/2861
user: fmaussion (10050469) · author_association: MEMBER
created_at: 2019-04-01T11:17:46Z · updated_at: 2019-04-01T11:19:28Z

xarray is "coordinate aware", i.e. it will try hard to prevent users doing bad things with non matching coordinates (yes, the fact that your ref and proof are "not entirely consistent somehow regarding coordinates" looks like you are doing bad things ;-).

If I understand what you want, this should do the trick:

python proof["WSS"].where(ref["WSS"].notnull().data) # use .data here to get back to numpy

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: WHERE function, problems with memory operations? (427644858)
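
As a rough sketch of what the suggested .data trick changes (the arrays below are invented; only the WSS name and the ref/proof roles come from the thread):

import numpy as np
import xarray as xr

ref = xr.DataArray(
    [[1.0, np.nan], [3.0, 4.0]],
    coords={"y": [0.0, 1.0], "x": [0.0, 1.0]},
    dims=("y", "x"),
    name="WSS",
)
# proof covers the same domain, but its x labels differ by a tiny amount.
proof = ref.copy(deep=True).assign_coords(x=[0.0, 1.0 + 1e-9])

# With an xarray mask, .where() matches mask and data by coordinate label,
# so values whose labels do not agree exactly are not compared by position.
masked_aligned = proof.where(ref.notnull())

# Passing a plain numpy array skips the coordinate matching entirely and
# masks purely by position, which is what the comment above suggests.
masked_positional = proof.where(ref.notnull().data)
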

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
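
For reference, a minimal sketch of running this page's query directly against the schema above with Python's sqlite3 module; the github.db filename is an assumption, not necessarily the file behind this Datasette instance:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical database file

# The filter behind this page: MEMBER comments on issue 427644858,
# newest update first.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 427644858
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user_id, created, updated, body in rows:
    print(comment_id, user_id, updated, body[:60])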