
issue_comments


7 rows where issue = 1403614394 and user = 73678798 sorted by updated_at descending

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sort column, descending), author_association, body, reactions, performed_via_github_app, issue
id: 1292363055 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-26T17:19:20Z · updated_at: 2022-10-26T17:19:20Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1292363055
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85NB-Uv

Thanks @Illviljan, @dcherian, and @keewis so much for the help.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)
id: 1287246401 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-21T17:28:12Z · updated_at: 2022-10-21T17:28:12Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1287246401
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85MudJB

@dcherian Do you think it would be better to finish this PR as the creation of _aggregations.py, giving the cum methods better documentation, and then start a new one to fix #6528?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)
id: 1284713697 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-19T23:52:47Z · updated_at: 2022-10-19T23:52:47Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1284713697
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85Mkyzh

I've merged the cumulative and reduction files into generate_aggregations.py and _aggregations.py. This uses the original version of reductions with an additional statement on the dataset methods that adds the original coordinates back in.

Using apply_ufunc with np.cumsum/np.cumprod has some issues: those functions only compute the cumulative along a single axis, which makes iterating through each dimension necessary. This makes it slower than the original functions and also causes some problems with the groupby method.
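The single-axis limitation can be sketched in plain NumPy. This is an illustration only, not code from the PR, and cumsum_over_dims is an invented name:

```python
import numpy as np

def cumsum_over_dims(a, axes):
    # np.cumsum accumulates along one axis at a time, so a cumulative
    # sum over several dimensions has to be built by applying it once
    # per axis -- the per-dimension iteration described above.
    out = np.asarray(a)
    for ax in axes:
        out = np.cumsum(out, axis=ax)
    return out
```

Each pass adds one more accumulated dimension, which is part of why the looped version is slower than a single fused implementation.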

Happy for any input on how the apply_ufunc method might be made usable, or on ways to improve the current one.

I'm getting a few issues I don't quite understand:

  - When running pytest on my local repository I get no errors, but it's failing the checks here with a NotImplementedError.
  - Black is having an issue with some of the strings in generate_aggregations: it says it cannot parse what should be valid code.

Thanks!

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)
id: 1275334035 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-11T22:12:13Z · updated_at: 2022-10-11T22:12:13Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1275334035
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85MBA2T

```python
def cumsum(..., dim):
    return xr.apply_ufunc(
        np.nancumsum if skipna else np.cumsum,  # nan-variant when skipping missing values
        obj,
        input_core_dims=[dim],
        output_core_dims=[dim],
        kwargs={"axis": -1},
    )
    # now transpose dimensions back to input order
```

I'm running into an issue with variables without the core dimensions. Would it be better to do a workaround in cumsum or in apply_ufunc, like you mentioned in #6391?
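For illustration, the axis handling that input_core_dims/output_core_dims provide (which is why the snippet above can pass kwargs={"axis": -1}) can be mimicked in plain NumPy. cumsum_along is an invented name for this sketch, not an xarray or PR function:

```python
import numpy as np

def cumsum_along(a, axis, skipna=True):
    # apply_ufunc moves each core dimension to the last axis before
    # calling the wrapped function, so axis=-1 always targets it.
    moved = np.moveaxis(a, axis, -1)
    func = np.nancumsum if skipna else np.cumsum
    out = func(moved, axis=-1)
    # ...then move the axis back so the output dimension order
    # matches the input ("transpose dimensions back to input order").
    return np.moveaxis(out, -1, axis)
```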

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)
id: 1275123446 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-11T18:43:59Z · updated_at: 2022-10-11T18:43:59Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1275123446
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85MANb2

Thanks @dcherian, I'll try to work that in. Is there a particular reason why there is no cumprod for GroupBy objects?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)
id: 1273835619 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-10T21:27:46Z · updated_at: 2022-10-10T21:27:46Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1273835619
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85L7TBj

Great, I'll start working on that. It shouldn't take too long.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)
id: 1273829157 · patrick-naylor (73678798) · CONTRIBUTOR
created_at: 2022-10-10T21:17:23Z · updated_at: 2022-10-10T21:17:23Z
html_url: https://github.com/pydata/xarray/pull/7152#issuecomment-1273829157
issue_url: https://api.github.com/repos/pydata/xarray/issues/7152
node_id: IC_kwDOAMm_X85L7Rcl

@Illviljan That is definitely something I could do. Are there any other methods I should be including in this?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Cumulative examples (1403614394)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
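The schema above pairs with the query this page is built from (rows where issue = 1403614394 and user = 73678798, sorted by updated_at descending). Below is a minimal sketch using Python's built-in sqlite3 module; it inserts two illustrative rows only, and drops the REFERENCES clauses because the users and issues tables are not shown here:

```python
import sqlite3

# In-memory copy of the issue_comments schema shown above
# (foreign-key REFERENCES omitted: users/issues tables not included).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue_comments (
   html_url TEXT,
   issue_url TEXT,
   id INTEGER PRIMARY KEY,
   node_id TEXT,
   user INTEGER,
   created_at TEXT,
   updated_at TEXT,
   author_association TEXT,
   body TEXT,
   reactions TEXT,
   performed_via_github_app TEXT,
   issue INTEGER
);
CREATE INDEX idx_issue_comments_issue ON issue_comments (issue);
CREATE INDEX idx_issue_comments_user ON issue_comments (user);
""")

# Two illustrative rows taken from the table on this page.
conn.executemany(
    "INSERT INTO issue_comments (id, user, issue, updated_at) VALUES (?, ?, ?, ?)",
    [
        (1292363055, 73678798, 1403614394, "2022-10-26T17:19:20Z"),
        (1273829157, 73678798, 1403614394, "2022-10-10T21:17:23Z"),
    ],
)

# The page's query: filter by issue and user, newest update first.
# ISO-8601 timestamps sort correctly as plain text.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE issue = ? AND user = ? ORDER BY updated_at DESC",
    (1403614394, 73678798),
).fetchall()
```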
Powered by Datasette · Queries took 13.328ms · About: xarray-datasette