
issue_comments


3 rows where issue = 588126763 and user = 9312831 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
605456013 https://github.com/pydata/xarray/issues/3896#issuecomment-605456013 https://api.github.com/repos/pydata/xarray/issues/3896 MDEyOklzc3VlQ29tbWVudDYwNTQ1NjAxMw== miniufo 9312831 2020-03-28T14:39:06Z 2020-03-28T14:39:06Z NONE

> Minor comment / nit: you don't really need the `astype(np.float)`, the result should already be of dtype float since there are missing values after the rolling sum.

You're right. That was used for debugging the intermediate results.

Thanks again @keewis @max-sixty.
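
A minimal illustration of the quoted nit (a sketch added here, not part of the thread): because the centered rolling sum pads the edges with NaN, its result is already float64, so the extra `astype` call is redundant.

```python
import xarray as xr

cond = xr.DataArray([False, True, True, True, False], dims="time")
summed = cond.rolling(time=3, center=True).sum()  # NaN padding at both edges

print(summed.dtype)   # float64 -- already float, no astype needed
print(summed.values)  # [nan  2.  3.  2. nan]
```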

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  consecutive time selection 588126763
605395619 https://github.com/pydata/xarray/issues/3896#issuecomment-605395619 https://api.github.com/repos/pydata/xarray/issues/3896 MDEyOklzc3VlQ29tbWVudDYwNTM5NTYxOQ== miniufo 9312831 2020-03-28T05:02:59Z 2020-03-28T05:02:59Z NONE

Hi @keewis, this is really a smart way, using rolling twice. I've refactored the code slightly as:

```python
def continuous_meet(cond, count, dim):
    """
    Continuously meet a given condition along a dimension.
    """
    _found = cond.rolling(dim={dim: count}, center=True).sum().fillna(0).astype(np.float)

    detected = (
        _found.rolling(dim={dim: count}, center=True)
        .reduce(lambda a, axis: (a == count).any(axis=axis))
        .fillna(False)
        .astype(bool)
    )

    if count % 2 == 0:
        return detected.shift({dim: -1}).fillna(False)

    return detected


sst = xr.DataArray(
    np.array(
        [0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0.,
         1., 0., 0., 1., 1., 1., 1., 1., 1., 0., 0., 0.]
    ),
    dims="time",
    coords={"time": np.arange(24)},
    name="sst",
)

ElNino = continuous_meet(sst > 0.5, count=5, dim='time')

sst.plot.step(linewidth=3)
sst.where(ElNino).plot.step(linewidth=2)
```

Note that when `count` is an even number, a truly centered `rolling` cannot be obtained, so we need to `shift` the result by -1. Is this perfect? I didn't check the performance.
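
For comparison, a plain-NumPy cross-check of the same run-detection idea (a sketch added here, not part of the thread; `np.convolve` plays the role of the first rolling sum, and the sample series matches the one above):

```python
import numpy as np

cond = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0.,
                 1., 0., 0., 1., 1., 1., 1., 1., 1., 0., 0., 0.]) > 0.5
count = 5

# Window sums at every start position; a hit means a full run of `count` Trues.
full_starts = np.flatnonzero(
    np.convolve(cond, np.ones(count, dtype=int), mode="valid") == count
)

# Spread each hit back over the `count` positions it covers.
mask = np.zeros_like(cond)
for start in full_starts:
    mask[start:start + count] = True

print(np.flatnonzero(mask))  # [ 5  6  7  8  9 15 16 17 18 19 20]
```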

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  consecutive time selection 588126763
604776072 https://github.com/pydata/xarray/issues/3896#issuecomment-604776072 https://api.github.com/repos/pydata/xarray/issues/3896 MDEyOklzc3VlQ29tbWVudDYwNDc3NjA3Mg== miniufo 9312831 2020-03-27T02:01:54Z 2020-03-27T02:01:54Z NONE

Hi @max-sixty, thanks for your kind help. But I found it does not work as I expected. If the SST has the values [..., 0, 0, 1, 1, 1, 1, 1, 0, 0, ...], then the method you suggested will give [..., F, F, F, F, T, F, F, F, F, ...]. That is only one True for 5 consecutive 1s, but I would expect five Trues, like [..., F, F, T, T, T, T, T, F, F, ...]. Any suggestion?
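
To illustrate the mismatch described above (a sketch added here; the exact snippet suggested upthread is not shown on this page), a single centered rolling-sum check flags only the one window whose five members are all True:

```python
import xarray as xr

sst = xr.DataArray([0., 0., 1., 1., 1., 1., 1., 0., 0.], dims="time")

single = (sst > 0.5).rolling(time=5, center=True).sum() == 5
print(single.values)
# [False False False False  True False False False False]
# desired instead:
# [False False  True  True  True  True  True False False]
```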

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  consecutive time selection 588126763


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
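
Given the schema and indexes above, the query behind this page can be reproduced locally with Python's standard sqlite3 module (a sketch; the database filename `github.db` is an assumption). The two indexes cover exactly the `issue` and `user` filters used here.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed local copy of the database behind this page
rows = conn.execute(
    """
    SELECT id, created_at, author_association, substr(body, 1, 60)
    FROM issue_comments
    WHERE issue = 588126763 AND [user] = 9312831  -- both columns are indexed above
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, created_at, association, snippet in rows:
    print(comment_id, created_at, association, snippet)
```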
Powered by Datasette · About: xarray-datasette