issue_comments


2 rows where issue = 594669577 and user = 10194086 sorted by updated_at descending

id: 656060893 · user: mathause (10194086) · created_at: 2020-07-09T11:00:46Z · updated_at: 2020-07-09T11:00:46Z · author_association: MEMBER
html_url: https://github.com/pydata/xarray/issues/3937#issuecomment-656060893 · issue_url: https://api.github.com/repos/pydata/xarray/issues/3937 · node_id: MDEyOklzc3VlQ29tbWVudDY1NjA2MDg5Mw==

No, that won't work. You need to mask the weights where the data is NaN. An untested and not very efficient approach might be:

```python
def coarsen_weighted_mean(da, weights, dims, skipna=None, boundary="exact"):
    # weighted sum over each coarsening window
    weighted_sum = (da * weights).coarsen(dims, boundary=boundary).sum(skipna=skipna)

    # mask the weights where the data is NaN, then sum them per window
    masked_weights = weights.where(da.notnull())
    sum_of_weights = masked_weights.coarsen(dims, boundary=boundary).sum()

    # avoid division by zero where a window contains no valid data
    valid_weights = sum_of_weights != 0
    sum_of_weights = sum_of_weights.where(valid_weights)

    return weighted_sum / sum_of_weights
```
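The masking idea can be shown without xarray. Below is a minimal pure-Python sketch (the helper `masked_weighted_mean` is hypothetical, not an xarray API): a NaN value contributes neither to the weighted sum nor to the sum of weights.

```python
import math

def masked_weighted_mean(values, weights):
    # hypothetical one-dimensional illustration of the masking idea
    num = 0.0  # weighted sum over valid (non-NaN) values
    den = 0.0  # sum of weights over valid values only
    for v, w in zip(values, weights):
        if not math.isnan(v):
            num += v * w
            den += w
    # guard against all-NaN input, mirroring the `sum_of_weights != 0` check
    return num / den if den != 0 else math.nan

print(masked_weighted_mean([1.0, math.nan, 3.0], [0.5, 0.25, 0.25]))  # ≈ 1.667
```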

An example (without NaNs though):

```python
import xarray as xr
import numpy as np

air = xr.tutorial.open_dataset("air_temperature").air
weights = np.cos(np.deg2rad(air.lat))

# we need to rename them from "lat"
weights.name = "weights"

c_w_m = coarsen_weighted_mean(air, weights, dict(lat=2), boundary="trim")

# to compare, do it for one slice
alt = air.isel(lat=slice(0, 2)).weighted(weights).mean("lat")

# compare if it is the same
xr.testing.assert_allclose(c_w_m.isel(lat=0, drop=True), alt)
```
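For readers unfamiliar with `coarsen`, the `boundary="trim"` behavior used above can be sketched in plain Python. The helper `coarsen_sum` is a hypothetical one-dimensional illustration, not xarray's implementation:

```python
def coarsen_sum(xs, block):
    # sum consecutive blocks of length `block`; with boundary="trim",
    # trailing elements that do not fill a complete block are dropped
    n = len(xs) - len(xs) % block
    return [sum(xs[i:i + block]) for i in range(0, n, block)]

print(coarsen_sum([1, 2, 3, 4, 5], 2))  # → [3, 7]; the trailing 5 is trimmed
```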

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: compose weighted with groupby, coarsen, resample, rolling etc. (594669577)
id: 655983882 · user: mathause (10194086) · created_at: 2020-07-09T08:21:39Z · updated_at: 2020-07-09T08:21:39Z · author_association: MEMBER
html_url: https://github.com/pydata/xarray/issues/3937#issuecomment-655983882 · issue_url: https://api.github.com/repos/pydata/xarray/issues/3937 · node_id: MDEyOklzc3VlQ29tbWVudDY1NTk4Mzg4Mg==

That's currently not possible. What you can try is the following:

```python
(ds * coslat_weights).coarsen(lat=2, lon=2).sum() / coslat_weights.coarsen(lat=2, lon=2).sum()
```

but this only works if you don't have any NaNs.
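A small pure-Python sketch of why the ratio fails with missing data (the values and weights here are hypothetical examples): a single NaN propagates through the numerator, whereas masking drops the NaN and its weight from both sums.

```python
import math

values = [1.0, math.nan, 3.0, 5.0]
weights = [0.5, 0.25, 0.25, 1.0]

# naive ratio: the NaN propagates through the weighted sum
naive = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(math.isnan(naive))  # True

# masked: drop each NaN value *and* its weight before summing
pairs = [(v, w) for v, w in zip(values, weights) if not math.isnan(v)]
masked = sum(v * w for v, w in pairs) / sum(w for _, w in pairs)
print(masked)  # 6.25 / 1.75 ≈ 3.571
```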

reactions: {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: compose weighted with groupby, coarsen, resample, rolling etc. (594669577)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
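The "2 rows where issue = 594669577 and user = 10194086 sorted by updated_at descending" view above corresponds to an ordinary SQL filter over this schema. A self-contained sketch using Python's sqlite3 (the REFERENCES clauses are dropped for brevity, and the inserted sample row reuses the ids from the first comment on this page):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# same columns and indexes as the Datasette schema, without foreign keys
con.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
con.execute(
    "INSERT INTO issue_comments (id, [user], issue, updated_at) VALUES (?, ?, ?, ?)",
    (656060893, 10194086, 594669577, "2020-07-09T11:00:46Z"),
)
rows = con.execute(
    "SELECT id FROM issue_comments WHERE issue = ? AND [user] = ? "
    "ORDER BY updated_at DESC",
    (594669577, 10194086),
).fetchall()
print(rows)  # [(656060893,)]
```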
Powered by Datasette · Queries took 2239.228ms · About: xarray-datasette