
issue_comments

2 rows where issue = 1742035781 sorted by updated_at descending

id: 1577528999
html_url: https://github.com/pydata/xarray/issues/7894#issuecomment-1577528999
issue_url: https://api.github.com/repos/pydata/xarray/issues/7894
node_id: IC_kwDOAMm_X85eBy6n
user: chfite (59711987)
created_at: 2023-06-05T21:59:45Z
updated_at: 2023-06-05T21:59:45Z
author_association: NONE
body:

input array

```
array = xr.DataArray([1,3,6,np.nan,19,20,13], dims=['time'], coords=[pd.date_range('2023-06-05 00:00','2023-06-05 06:00',freq='H')])

array
<xarray.DataArray (time: 7)>
array([ 1.,  3.,  6., nan, 19., 20., 13.])
Coordinates:
  * time     (time) datetime64[ns] 2023-06-05 ... 2023-06-05T06:00:00
```

however the integrated value ends up as a NaN

```
array.integrate('time')
<xarray.DataArray ()>
array(nan)
```

if one still wanted to know the integrated values where there were valid values, it would essentially be like integrating the separate chunks where the valid values existed

first chunk

```
array.isel(time=slice(0,3)).integrate('time')
<xarray.DataArray ()>
array(2.34e+13)
```

second chunk

```
array.isel(time=slice(4,7)).integrate('time')
<xarray.DataArray ()>
array(1.296e+14)
```

and then the sum would be the fully integrated area

@dcherian I essentially was wondering whether it was possible for a skipna argument or some kind of NaN handling to be implemented that would allow users to avoid integrating in chunks due to the presence of NaNs. I do not work in dev, so I would not know how to implement this, but I thought I'd see if others had thoughts.

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can a "skipna" argument be added for Dataset.integrate() and DataArray.integrate()? 1742035781
id: 1577474914
html_url: https://github.com/pydata/xarray/issues/7894#issuecomment-1577474914
issue_url: https://api.github.com/repos/pydata/xarray/issues/7894
node_id: IC_kwDOAMm_X85eBlti
user: dcherian (2448579)
created_at: 2023-06-05T21:05:47Z
updated_at: 2023-06-05T21:05:57Z
author_association: MEMBER
body:

> but is it not possible for it to calculate the integrated values where there were regular values?

@chfite Can you provide an example of what you would want it to do, please?

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can a "skipna" argument be added for Dataset.integrate() and DataArray.integrate()? 1742035781

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
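For reference, a minimal sketch of querying this table directly from Python with the standard-library sqlite3 module, reproducing the filter shown above (issue = 1742035781, sorted by updated_at descending). The database filename `github.db` is a placeholder assumption.

```python
import sqlite3

# Placeholder path; the actual Datasette database file may be named differently.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    "select id, user, created_at, updated_at, author_association, body "
    "from issue_comments where issue = ? order by updated_at desc",
    (1742035781,),
).fetchall()
for comment_id, user, created, updated, assoc, body in rows:
    print(comment_id, created, assoc)
```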