
issue_comments


13 rows where author_association = "MEMBER" and issue = 1403614394 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1292355722 https://github.com/pydata/xarray/pull/7152#issuecomment-1292355722 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85NB8iK Illviljan 14371165 2022-10-26T17:14:12Z 2022-10-26T17:14:12Z MEMBER

@patrick-naylor, feel free to try out a better default example if you want.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287899447 https://github.com/pydata/xarray/pull/7152#issuecomment-1287899447 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85Mw8k3 keewis 14808389 2022-10-22T19:59:16Z 2022-10-22T19:59:48Z MEMBER

well, that's just it: there's no way to discriminate between docstrings and multi-line triple-quoted strings without parsing the python code (which is definitely out of scope), so blackdoc doesn't even attempt to. Instead, any line that begins with >>> is assumed to be a doctest line. So with that, the numbers mean: line 201, character 27.
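
An illustration of the line-based detection described above (the string below is hypothetical, not taken from generate_aggregations.py): a >>>-prefixed line inside an ordinary module-level string is still extracted as a doctest line and handed to black.

```python
# Hypothetical module-level string, for illustration only: blackdoc scans the
# file line by line, so the ">>>" line below is picked up as a doctest line
# even though this is a plain string rather than a docstring.
TEMPLATE = """
Examples
--------
>>> 1 + 2
3
"""
```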

When that file was created we decided to skip the generate_* file, but because the file was renamed that rule does not work anymore. Could you update the pre-commit configuration?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287850448 https://github.com/pydata/xarray/pull/7152#issuecomment-1287850448 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85MwwnQ Illviljan 14371165 2022-10-22T16:40:12Z 2022-10-22T16:40:12Z MEMBER

@keewis What do the numbers mean in Cannot parse: 201:27? I ask because I can't find any functions with docstrings in this file; it's just a bunch of multiline strings assigned to variables, which in my mind shouldn't trigger blackdoc.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287825909 https://github.com/pydata/xarray/pull/7152#issuecomment-1287825909 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85Mwqn1 keewis 14808389 2022-10-22T15:21:44Z 2022-10-22T15:24:45Z MEMBER

do you understand this blackdoc error?

so the reason is the position of the triple quotes: with

```python
def f():
    """
    >>> 1 + 2"""
    pass
```

the extracted line will become 1 + 2""", which when handed to black results in that error.

Technically, that's a bug (the triple quotes mark the end of the docstring), but one that is a bit tricky to fix: blackdoc is implemented using a line parser, which does not work too well if the transition happens somewhere within the line.

My guess is that it would have to start counting quotes, which I've tried to avoid up until now since there are a lot of details to get right (see also keewis/blackdoc#145).

Edit: for now, I guess it would be fine to add something like "The closing quotes are not on their own line." to the error message
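
For illustration (a hedged sketch, not code from the PR): keeping the closing triple quotes on their own line avoids the parsing problem described above, because the extracted doctest line no longer carries the trailing quotes.

```python
def f():
    """
    >>> 1 + 2
    3
    """
    pass
```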

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287768499 https://github.com/pydata/xarray/pull/7152#issuecomment-1287768499 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85Mwcmz Illviljan 14371165 2022-10-22T11:38:03Z 2022-10-22T11:38:03Z MEMBER

@dcherian cumsum for resample fails for some reason; do you have any ideas?

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.array([1, 2, 3, 1, 2, np.nan]),
    dims="time",
    coords=dict(
        time=("time", pd.date_range("01-01-2001", freq="M", periods=6)),
        labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
    ),
)
ds = xr.Dataset(dict(da=da))
a = ds.resample(time="3M")
a.cumsum()

Traceback (most recent call last):

  File "C:\Users\J.W\anaconda3\envs\xarray-tests\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
    exec(code, globals, locals)

  File "c:\users\j.w\documents\github\xarray\xarray\util\untitled2.py", line 20, in <module>
    a.cumsum()

  File "C:\Users\J.W\Documents\GitHub\xarray\xarray\core\_aggregations.py", line 4921, in cumsum
    return self.reduce(

  File "C:\Users\J.W\Documents\GitHub\xarray\xarray\core\resample.py", line 395, in reduce
    return super().reduce(

  File "C:\Users\J.W\Documents\GitHub\xarray\xarray\core\groupby.py", line 1357, in reduce
    return self.map(reduce_dataset)

  File "C:\Users\J.W\Documents\GitHub\xarray\xarray\core\resample.py", line 342, in map
    return combined.rename({self._resample_dim: self._dim})

  File "C:\Users\J.W\Documents\GitHub\xarray\xarray\core\dataset.py", line 3646, in rename
    return self._rename(name_dict=name_dict, **names)

  File "C:\Users\J.W\Documents\GitHub\xarray\xarray\core\dataset.py", line 3587, in _rename
    raise ValueError(

ValueError: cannot rename '__resample_dim__' because it is not a variable or dimension in this dataset
```
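
A hedged reading of the traceback above: the mapped cumsum result apparently no longer carries the temporary resample dimension, so the rename in Resample.map fails. The ValueError itself can be reproduced standalone on any dataset lacking that key (a sketch, not code from the PR):

```python
import xarray as xr

ds = xr.Dataset({"da": ("time", [1.0, 2.0, 3.0])})
# Renaming a key that is neither a variable nor a dimension raises the same error.
ds.rename({"__resample_dim__": "time"})  # ValueError: cannot rename '__resample_dim__' ...
```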

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287768091 https://github.com/pydata/xarray/pull/7152#issuecomment-1287768091 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85Mwcgb Illviljan 14371165 2022-10-22T11:35:14Z 2022-10-22T11:35:14Z MEMBER

@keewis do you understand this blackdoc error?

```
trim trailing whitespace.................................................Passed
fix end of files.........................................................Passed
check yaml...............................................................Passed
debug statements (python)................................................Passed
mixed line ending........................................................Passed
autoflake................................................................Passed
isort....................................................................Passed
pyupgrade................................................................Passed
black....................................................................Passed
black-jupyter............................................................Passed
blackdoc.................................................................Failed
- hook id: blackdoc
- exit code: 123

error: cannot format /code/xarray/util/generate_aggregations.py: Cannot parse: 201:27: EOF in multi-line string
Oh no! 💥 💔 💥
215 files left unchanged, 1 file fails to reformat.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287253517 https://github.com/pydata/xarray/pull/7152#issuecomment-1287253517 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85Mue4N dcherian 2448579 2022-10-21T17:35:03Z 2022-10-21T17:35:03Z MEMBER

Do you think it would be better to finish this PR as the creation of _aggregations.py to give the cum methods better documentation? Then start a new one to fix https://github.com/pydata/xarray/issues/6528?

Sure that would be a good intermediate step. Let us know if you need help.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1287138940 https://github.com/pydata/xarray/pull/7152#issuecomment-1287138940 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85MuC58 dcherian 2448579 2022-10-21T15:42:12Z 2022-10-21T15:42:12Z MEMBER

Thanks for taking this on @patrick-naylor ! This is a decent-sized project!

Using apply_ufunc and np.cumsum/cumprod has some issues as it only finds the cumulative across one axis, which makes iterating through each dimension necessary.

np.cumsum only supports an integer axis, so this is OK?
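
To illustrate the single-axis point (a sketch, not code from the PR): np.cumsum accepts a single integer axis, so accumulating over several dimensions takes one pass per axis.

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)
out = arr
for axis in (0, 1):  # one np.cumsum call per dimension
    out = np.cumsum(out, axis=axis)
```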

flox doesn't support cumsum at the moment (https://github.com/xarray-contrib/flox/issues/91) so we can delete that bit and just have one code path.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1286660760 https://github.com/pydata/xarray/pull/7152#issuecomment-1286660760 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85MsOKY Illviljan 14371165 2022-10-21T08:51:12Z 2022-10-21T08:51:12Z MEMBER

I don't think you have flox installed; if it's not installed, the code will take the old path. Do conda install flox and I think you'll get the NotImplementedError. Then you may have to change the default settings in cumsum so flox is not used.
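
A hedged sketch of the kind of guard being suggested (the helper name and arguments are illustrative, not actual xarray internals): route cumulative aggregations away from the flox path, since flox does not implement them yet.

```python
def use_flox(func_name: str, flox_installed: bool) -> bool:
    """Return True only if flox is installed and supports this aggregation."""
    unsupported = {"cumsum", "cumprod"}  # not implemented in flox (flox#91)
    return flox_installed and func_name not in unsupported
```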

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1275131950 https://github.com/pydata/xarray/pull/7152#issuecomment-1275131950 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85MAPgu dcherian 2448579 2022-10-11T18:51:26Z 2022-10-11T18:51:26Z MEMBER

Is there a particular reason why there is no cumprod for GroupBy objects?

Nope. Just wasn't added in :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1274963421 https://github.com/pydata/xarray/pull/7152#issuecomment-1274963421 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85L_mXd dcherian 2448579 2022-10-11T16:27:15Z 2022-10-11T16:30:32Z MEMBER

Thanks @patrick-naylor !

Instead of using Dataset.reduce I think we want something like

```python
def cumsum(..., dim):
    return xr.apply_ufunc(
        np.cumsum if skipna else np.nancumsum,
        obj,
        input_core_dims=[dim],
        output_core_dims=[dim],
        kwargs={"axis": -1},
    )
    # now transpose dimensions back to input order
```

to fix #6528.

At the moment, this should also work on GroupBy objects quite nicely.
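
A runnable sketch of the apply_ufunc approach above on a concrete DataArray (the core dim and kwargs follow the snippet in the comment; note that np.nancumsum is the NaN-skipping variant, so a real skipna flag would select it when True):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1.0, 2.0, np.nan, 4.0]), dims="time")

cumulative = xr.apply_ufunc(
    np.nancumsum,                 # NaN-aware cumulative sum along the core dim
    da,
    input_core_dims=[["time"]],
    output_core_dims=[["time"]],
    kwargs={"axis": -1},          # core dims are moved to the last axis
)
```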

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1273832790 https://github.com/pydata/xarray/pull/7152#issuecomment-1273832790 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85L7SVW Illviljan 14371165 2022-10-10T21:23:17Z 2022-10-10T21:23:17Z MEMBER

Right now, I think cumsum and cumprod are enough. numpy-groupies has a few more examples that I suppose we could support in the future.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394
1273827112 https://github.com/pydata/xarray/pull/7152#issuecomment-1273827112 https://api.github.com/repos/pydata/xarray/issues/7152 IC_kwDOAMm_X85L7Q8o Illviljan 14371165 2022-10-10T21:14:28Z 2022-10-10T21:14:28Z MEMBER

Very nice, this is something that's been on the TODO list! :)

I believe we wanted to rename generate_reductions.py to generate_aggregations.py so cumsum et al could be included and generated there as well. Is it a lot of work for you to try to merge these into that one?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cumulative examples 1403614394

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);