issue_comments
10 rows where author_association = "CONTRIBUTOR" and user = 16700639 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1472135607 | https://github.com/pydata/xarray/pull/5873#issuecomment-1472135607 | https://api.github.com/repos/pydata/xarray/issues/5873 | IC_kwDOAMm_X85XvwG3 | bzah 16700639 | 2023-03-16T14:54:51Z | 2023-03-16T14:54:51Z | CONTRIBUTOR | Hey, thanks @dcherian for taking over and merging this PR! (And sorry for not being active on it myself for the past year...) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow indexing unindexed dimensions using dask arrays 1028755077 | |
1033607850 | https://github.com/pydata/xarray/pull/6059#issuecomment-1033607850 | https://api.github.com/repos/pydata/xarray/issues/6059 | IC_kwDOAMm_X849m5qq | bzah 16700639 | 2022-02-09T10:30:40Z | 2022-02-09T10:40:08Z | CONTRIBUTOR | @mathause This PR goes beyond what is currently implemented in numpy. For now, all weighted-quantile PRs on numpy are more or less based on the "linear" method (method 7), and none have been merged. I plan to work on integrating weights with the other interpolation methods but don't have the time right now. I'll probably pick up some ideas from here. As for the numerics here, everything looks good to me.
The only limitations I can see are:
- This only handles sampling weights, which is fine I guess.
- Some interpolation methods are missing; they can be added later.
- ~A |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Weighted quantile 1076265104 | |
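As an illustration of the "linear" (Hyndman & Fan type 7) method mentioned in the comment above, here is a hedged sketch of one way to generalize it to sampling weights. This is not the PR's implementation; the function name and the choice of plotting positions are assumptions, and it matches `numpy.quantile`'s default method only for uniform weights.

```python
import numpy as np

def weighted_quantile_linear(values, q, weights):
    """Weighted quantile with linear interpolation (type-7-style).

    For uniform weights the plotting positions reduce to
    (k - 1) / (n - 1), i.e. numpy's default "linear" method.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cw = np.cumsum(weights)
    # Plotting positions: 0 at the smallest point, 1 at the largest.
    positions = (cw - weights) / (cw[-1] - weights[-1])
    return np.interp(q, positions, values)
```

For example, with uniform weights on `[1, 2, 3, 4, 5]`, `weighted_quantile_linear(values, 0.5, np.ones(5))` gives the ordinary median, 3.0.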
1021975396 | https://github.com/pydata/xarray/pull/6059#issuecomment-1021975396 | https://api.github.com/repos/pydata/xarray/issues/6059 | IC_kwDOAMm_X8486htk | bzah 16700639 | 2022-01-26T08:32:56Z | 2022-02-07T16:57:56Z | CONTRIBUTOR | FYI, the weighted quantiles topic will be discussed in today's numpy triage meeting (17:00 UTC). I'm not a maintainer, but I'm sure you are welcome to join in if you are interested. Meeting information: https://hackmd.io/68i_JvOYQfy9ERiHgXMPvg |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Weighted quantile 1076265104 | |
991632287 | https://github.com/pydata/xarray/pull/6068#issuecomment-991632287 | https://api.github.com/repos/pydata/xarray/issues/6068 | IC_kwDOAMm_X847Gxuf | bzah 16700639 | 2021-12-11T12:49:15Z | 2021-12-11T12:49:15Z | CONTRIBUTOR |
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DOC: Add "auto" to dataarray `chunk` method 1077040836 | |
991611195 | https://github.com/pydata/xarray/pull/6068#issuecomment-991611195 | https://api.github.com/repos/pydata/xarray/issues/6068 | IC_kwDOAMm_X847Gsk7 | bzah 16700639 | 2021-12-11T11:39:43Z | 2021-12-11T11:39:57Z | CONTRIBUTOR | I also noticed the Tuples types ... Would it make sense to move this into Dataset to have the same API for both? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DOC: Add "auto" to dataarray `chunk` method 1077040836 | |
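For context on the PR being discussed: passing `"auto"` to `chunk` lets dask choose chunk sizes for you. A minimal usage sketch (the array shape and dimension names here are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((1000, 1000)), dims=("x", "y"))
chunked = da.chunk("auto")  # dask picks the chunk sizes
print(chunked.chunks)       # tuple of per-dimension chunk sizes
```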
944328081 | https://github.com/pydata/xarray/issues/2511#issuecomment-944328081 | https://api.github.com/repos/pydata/xarray/issues/2511 | IC_kwDOAMm_X844SU2R | bzah 16700639 | 2021-10-15T14:03:21Z | 2021-10-15T14:03:21Z | CONTRIBUTOR | I'll drop a PR; it might be easier to try and play with this than with a piece of code lost in an issue. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Array indexing with dask arrays 374025325 | |
931430066 | https://github.com/pydata/xarray/issues/2511#issuecomment-931430066 | https://api.github.com/repos/pydata/xarray/issues/2511 | IC_kwDOAMm_X843hH6y | bzah 16700639 | 2021-09-30T15:30:02Z | 2021-10-06T09:48:19Z | CONTRIBUTOR | Okay, I could redo my test.
If I manually call ... I'm sorry I cannot share my code as is; the relevant portion is really in the middle of many things. I'll try to get a minimalist version of it to share with you. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Array indexing with dask arrays 374025325 | |
930153816 | https://github.com/pydata/xarray/issues/2511#issuecomment-930153816 | https://api.github.com/repos/pydata/xarray/issues/2511 | IC_kwDOAMm_X843cQVY | bzah 16700639 | 2021-09-29T13:02:15Z | 2021-10-06T09:46:10Z | CONTRIBUTOR | @pl-marasco Ok that's strange. I should have saved my use case :/ I will try to reproduce it and will provide a gist of it soon. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Array indexing with dask arrays 374025325 | |
932229595 | https://github.com/pydata/xarray/issues/2511#issuecomment-932229595 | https://api.github.com/repos/pydata/xarray/issues/2511 | IC_kwDOAMm_X843kLHb | bzah 16700639 | 2021-10-01T13:29:32Z | 2021-10-01T13:29:32Z | CONTRIBUTOR | @pl-marasco Thanks for the example! With it I have the same result as you: it takes the same time with the patch or with compute. However, I could construct an example giving very different results. It is quite close to my original code:

```
import time

import numpy as np
import pandas as pd
import xarray as xr

time_start = time.perf_counter()
COORDS = dict(
    time=pd.date_range("2042-01-01", periods=200, freq=pd.DateOffset(days=1)),
)
da = xr.DataArray(
    np.random.rand(200 * 3500 * 350).reshape((200, 3500, 350)),
    dims=("time", "x", "y"),
    coords=COORDS,
).chunk(dict(time=-1, x=100, y=100))
```

(Basically, I want for each month the first event occurring in it.) Without the patch and uncommenting |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Array indexing with dask arrays 374025325 | |
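The snippet above stops after building `da`; the stated goal ("for each month the first event occurring in it") can be sketched as follows. This is a guess at the shape of the missing code, with a much smaller array, a hypothetical `> 0.9` event threshold, and `resample(...).map` standing in for whatever the original used:

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.random.rand(200, 35, 35),
    dims=("time", "x", "y"),
    coords={"time": pd.date_range("2042-01-01", periods=200, freq="D")},
).chunk({"time": -1, "x": 10, "y": 10})

events = da > 0.9
# For each calendar month, the position (within the month) of the first
# event: argmax over a boolean array returns the first True.
first_event = events.resample(time="MS").map(lambda month: month.argmax("time"))
print(first_event.compute())
```

The point of the thread is that `first_event` (or anything derived from it) is a dask-backed index, so using it to index back into `da` is where lazy vs. eager indexing matters for performance.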
922942743 | https://github.com/pydata/xarray/issues/2511#issuecomment-922942743 | https://api.github.com/repos/pydata/xarray/issues/2511 | IC_kwDOAMm_X843Av0X | bzah 16700639 | 2021-09-20T13:45:56Z | 2021-09-20T13:45:56Z | CONTRIBUTOR | I wrote a very naive fix; it works but seems to perform really slowly. I would appreciate some feedback (I'm a beginner with Dask).
Basically, I added ... The patch:

```
class VectorizedIndexer(ExplicitIndexer):
    """Tuple for vectorized indexing.
```
|
{ "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 } |
Array indexing with dask arrays 374025325 |
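The patch text above is cut off, but the general idea discussed in this thread (eagerly computing a dask-backed indexer so that ordinary integer fancy indexing can proceed) can be illustrated standalone. The helper name below is hypothetical, not from the actual patch:

```python
import dask.array
import numpy as np

def as_computed_index(key):
    """Return an integer ndarray usable for fancy indexing,
    materializing it first if it is a lazy dask array."""
    if isinstance(key, dask.array.Array):
        key = key.compute()  # force the lazy indexer into memory
    return np.asarray(key, dtype=np.intp)

data = np.arange(10) * 10
lazy_idx = dask.array.from_array(np.array([1, 3, 5]), chunks=2)
print(data[as_computed_index(lazy_idx)])
```

Computing the indexer forces part of the task graph to run early, which is why the thread goes on to discuss the performance impact of this kind of fix.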
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```