issue_comments
10 rows where issue = 425320466 ("Allow grouping by dask variables"), sorted by updated_at descending
Each entry below lists: id (node_id) · user · author_association · created_at / updated_at · body · reactions. For every row, issue_url = https://api.github.com/repos/pydata/xarray/issues/2852 and issue = "Allow grouping by dask variables" (425320466).
1101512178 (IC_kwDOAMm_X85Bp73y) · dcherian 2448579 · MEMBER · 2022-04-18T15:45:41Z
https://github.com/pydata/xarray/issues/2852#issuecomment-1101512178
You can do this with flox now. Eventually we can update xarray to support grouping by a dask variable. The limitation will be that the user will have to provide "expected groups" so that we can construct the output coordinate.
Reactions: ❤️ ×2
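The "expected groups" idea is what flox's `flox.xarray.xarray_reduce(..., expected_groups=...)` exposes. A minimal numpy sketch of the concept (function name and data are illustrative, not flox's implementation): because the caller supplies the group set, no eager pass over the labels is needed to discover uniques, which is what makes a lazy (dask-backed) group variable workable.

```python
import numpy as np

def grouped_mean_with_expected(data, labels, expected_groups):
    """Mean per group when the set of groups is known up front.

    The output coordinate comes from `expected_groups`, so the
    labels never need to be scanned for unique values.
    """
    n = len(expected_groups)
    # Map each label to its position in expected_groups; unseen labels -> -1
    lookup = {g: i for i, g in enumerate(expected_groups)}
    idx = np.array([lookup.get(v, -1) for v in labels])
    mask = idx >= 0
    sums = np.bincount(idx[mask], weights=data[mask], minlength=n)
    counts = np.bincount(idx[mask], minlength=n)
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / counts  # NaN for expected groups that never occur

labels = np.array(["a", "b", "a", "c", "b"])
values = np.array([1.0, 2.0, 3.0, 4.0, 6.0])
print(grouped_mean_with_expected(values, labels, ["a", "b", "c"]))
# -> [2. 4. 4.]
```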
1100985429 (IC_kwDOAMm_X85Bn7RV) · stale[bot] 26384082 · NONE · 2022-04-18T00:43:46Z
https://github.com/pydata/xarray/issues/2852#issuecomment-1100985429
In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here or remove the […]
Reactions: none
653016746 (MDEyOklzc3VlQ29tbWVudDY1MzAxNjc0Ng==) · rabernat 1197350 · MEMBER · 2020-07-02T13:48:39Z
https://github.com/pydata/xarray/issues/2852#issuecomment-653016746
👀 cc @chiaral
Reactions: none
652898319 (MDEyOklzc3VlQ29tbWVudDY1Mjg5ODMxOQ==) · C-H-Simpson 20053498 · NONE · 2020-07-02T09:29:32Z (edited 2020-07-02T09:29:55Z)
https://github.com/pydata/xarray/issues/2852#issuecomment-652898319
I'm going to share a code snippet that might be useful to people reading this issue. I wanted to group my data by month and year, and take the mean for each group. I did not want to use […]. My solution was to use […]. Here is the code:

```
def _grouped_mean(
    data: np.ndarray, months: np.ndarray, years: np.ndarray
) -> np.ndarray:
    """Similar to grouping by a year_month MultiIndex, but faster."""
    ...  # function body lost in the export

def _wrapped_grouped_mean(da: xr.DataArray) -> xr.DataArray:
    """Similar to grouping by a year_month MultiIndex, but faster."""
    ...  # function body lost in the export
```
Reactions: 🎉 ×1
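The function bodies above were lost in the export. A hypothetical sketch of one way such a year-month grouped mean can be written (the integer-code encoding is an assumption, not the original code): each (year, month) pair is collapsed to a single integer, so the whole reduction is two `bincount` calls instead of a MultiIndex groupby.

```python
import numpy as np

def grouped_mean(data, months, years):
    """Mean of `data` for each (year, month) pair that occurs."""
    # Encode (year, month) as one integer code per element
    codes = (years - years.min()) * 12 + (months - 1)
    sums = np.bincount(codes, weights=data)
    counts = np.bincount(codes)
    keep = counts > 0  # drop (year, month) slots that never occur
    return sums[keep] / counts[keep]

months = np.array([1, 1, 2, 1])
years = np.array([2000, 2000, 2000, 2001])
data = np.array([1.0, 3.0, 5.0, 7.0])
print(grouped_mean(data, months, years))  # -> [2. 5. 7.]
```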
478624700 (MDEyOklzc3VlQ29tbWVudDQ3ODYyNDcwMA==) · jmichel-otb 10595679 · CONTRIBUTOR · 2019-04-01T15:23:35Z
https://github.com/pydata/xarray/issues/2852#issuecomment-478624700
That's a tough question ;) In the current dataset I have 950 unique labels, but in my use cases it can be a lot more (e.g. agricultural crops) or a lot less (administrative boundaries or regions).
Reactions: none
478621867 (MDEyOklzc3VlQ29tbWVudDQ3ODYyMTg2Nw==) · shoyer 1217238 · MEMBER · 2019-04-01T15:16:30Z
https://github.com/pydata/xarray/issues/2852#issuecomment-478621867
Roughly how many unique labels do you have?
Reactions: none
478563375 (MDEyOklzc3VlQ29tbWVudDQ3ODU2MzM3NQ==) · dcherian 2448579 · MEMBER · 2019-04-01T12:43:03Z
https://github.com/pydata/xarray/issues/2852#issuecomment-478563375
It sounds like there is an apply_ufunc solution to your problem, but I don't know how to write it! ;)
Reactions: none
478488200 (MDEyOklzc3VlQ29tbWVudDQ3ODQ4ODIwMA==) · jmichel-otb 10595679 · CONTRIBUTOR · 2019-04-01T08:37:42Z
https://github.com/pydata/xarray/issues/2852#issuecomment-478488200
Many thanks for your answers @shoyer and @rabernat. I am relatively new to […]. I will give a try to […]. I also had the following idea. Given that:
* I know exactly beforehand which labels (or groups) I want to analyse,
* I do not actually need the discovery of unique labels that […]
Maybe there is already something like that in xarray, or maybe this is something I can derive from the implementation of […].
Reactions: none
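The "known labels beforehand" idea has a nice consequence: each chunk of the array can be reduced independently to a fixed-length vector and the partial results summed, with no global pass to discover uniques. A small numpy sketch of that pattern (names and data are illustrative; with dask the loop would become `map_blocks` plus a sum over blocks):

```python
import numpy as np

def per_label_counts(chunks, num_labels):
    """Count occurrences of labels 0..num_labels-1 across chunks.

    Because the label set is fixed in advance, every chunk reduces
    independently to a length-`num_labels` vector, which is why this
    maps directly onto dask blocks.
    """
    total = np.zeros(num_labels, dtype=np.int64)
    for chunk in chunks:  # in dask: map_blocks(...) then sum
        total += np.bincount(chunk.ravel(), minlength=num_labels)
    return total

chunks = [np.array([[0, 1], [1, 2]]), np.array([[2, 2], [0, 3]])]
print(per_label_counts(chunks, 4))  # -> [2 2 3 1]
```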
478415169 (MDEyOklzc3VlQ29tbWVudDQ3ODQxNTE2OQ==) · shoyer 1217238 · MEMBER · 2019-04-01T02:31:58Z
https://github.com/pydata/xarray/issues/2852#issuecomment-478415169
The current design of […]. This makes operations that group over large keys stored in dask inefficient. This could be done efficiently ([…]).
Reactions: none
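For context on why dask keys are awkward here: xarray's GroupBy (at the time of this thread) eagerly converts the group variable into integer codes plus unique values, roughly what `pandas.factorize` does, and that step forces the labels into memory. A small example with illustrative data:

```python
import numpy as np
import pandas as pd

# Eager factorization: labels -> integer codes + unique values.
# With a dask-backed label array, this step would trigger a compute.
labels = np.array(["forest", "water", "forest", "crop"])
codes, uniques = pd.factorize(labels)
print(codes)    # -> [0 1 0 2]
print(uniques)  # -> ['forest' 'water' 'crop']
```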
476678007 (MDEyOklzc3VlQ29tbWVudDQ3NjY3ODAwNw==) · rabernat 1197350 · MEMBER · 2019-03-26T14:41:59Z
https://github.com/pydata/xarray/issues/2852#issuecomment-476678007
It is very hard to make this sort of groupby lazy, because you are grouping over the variable […]. In this specific example, it sounds like what you want is to compute the histogram of labels. That could be accomplished without groupby. For example, you could use apply_ufunc together with […]. So my recommendation is to think of a way to accomplish what you want that does not involve groupby.
Reactions: none
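A label histogram without any groupby can be as simple as `numpy.histogram` with bins fixed up front; `dask.array.histogram` takes the same bins/range arguments and stays lazy. A numpy sketch with illustrative data:

```python
import numpy as np

# Count integer labels 0..3 without groupby; fixing the bins in
# advance is what lets the dask version avoid discovering uniques.
labels = np.array([3, 0, 0, 2, 3, 3])
counts, edges = np.histogram(labels, bins=np.arange(5))
print(counts)  # -> [2 0 1 3]
```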