issues
9 rows where type = "pull" and user = 20629530 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
#8603: Convert 360_day calendars by choosing random dates to drop or add
id: 2075019328 | node_id: PR_kwDOAMm_X85juCQ- | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 3 | draft: 0
created_at: 2024-01-10T19:13:31Z | updated_at: 2024-04-16T14:53:42Z | closed_at: 2024-04-16T14:53:42Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/8603 | repo: xarray (13221727) | type: pull
body: Small PR to add a new "method" to convert to and from 360_day calendars. The current two methods are chosen with the `align_on` argument; this new option randomly chooses the days to drop or add, one for each fifth of the year (72-day period). It emulates the method of the LOCA datasets (see their web page and article). February 29th is always removed/added when the source/target is a leap year. I copied the implementation from xclim (which I wrote).
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/8603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
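A minimal usage sketch of what this PR proposes, assuming the new option is exposed through the existing `align_on` argument of `convert_calendar` (the toy data is mine):

```python
import numpy as np
import xarray as xr

# A toy daily series on the standard calendar.
times = xr.date_range("2000-01-01", periods=365, freq="D",
                      calendar="standard", use_cftime=True)
da = xr.DataArray(np.arange(365), dims=("time",), coords={"time": times})

# Convert to a 360_day calendar; with align_on="random", the days to drop
# (one per 72-day period) are chosen at random, emulating the LOCA method.
converted = da.convert_calendar("360_day", align_on="random")
```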
991544027 | MDExOlB1bGxSZXF1ZXN0NzI5OTkzMTE0 | 5781 | Add encodings to save_mfdataset | aulemahal 20629530 | open | 0 | 1 | 2021-09-08T21:24:13Z | 2022-10-06T21:44:18Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5781 |
Simply add a |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
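Since this PR is still open, the keyword below is hypothetical; a sketch of what per-dataset encodings might look like, assuming each entry mirrors the `encoding` dict accepted by `Dataset.to_netcdf`:

```python
import xarray as xr

ds1 = xr.Dataset({"tas": ("time", [280.0, 281.5])})
ds2 = xr.Dataset({"tas": ("time", [279.2, 282.1])})

# Hypothetical keyword from this open PR: one encoding dict per dataset,
# each shaped like the `encoding` argument of Dataset.to_netcdf.
xr.save_mfdataset(
    [ds1, ds2],
    ["part1.nc", "part2.nc"],
    encoding=[
        {"tas": {"dtype": "float32", "zlib": True}},
        {"tas": {"dtype": "float32", "zlib": True}},
    ],
)
```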
#5402: `dt.to_pytimedelta` to allow arithmetic with cftime objects
id: 906175200 | node_id: MDExOlB1bGxSZXF1ZXN0NjU3MjA1NTM2 | user: aulemahal (20629530) | state: open | locked: 0 | comments: 1 | draft: 0
created_at: 2021-05-28T22:48:50Z | updated_at: 2022-06-09T14:50:16Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/5402 | repo: xarray (13221727) | type: pull
body: When playing with cftime objects, a problem I encountered many times is that I cannot subtract two arrays and then add the result back to another. Subtracting two cftime datetime arrays results in an array of `timedelta64[ns]`. Example:

```python
import xarray as xr

da = xr.DataArray(xr.cftime_range('1900-01-01', freq='D', periods=10), dims=('time',))

dt = da - da[0]  # An array of timedelta64[ns]
da[-1] + dt  # Fails
```

However, if the two arrays were of 'O' dtype, the subtraction would be handled by the objects themselves. The solution here adds a `dt.to_pytimedelta` method to allow arithmetic with cftime objects. The user still has to check whether the data is cftime or numpy to adapt the operation (calling […]). Also, this doesn't work with dask arrays, because loading a dask array triggers the variable constructor and thus recasts the array of `datetime.timedelta` objects back to `timedelta64[ns]`. I realize I maybe should have opened an issue before, but I had this idea and it all rushed along.
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
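A sketch of the workflow this (still open) PR would enable; the `to_pytimedelta` accessor method is the proposal itself, so treat it as hypothetical until merged:

```python
import xarray as xr

da = xr.DataArray(xr.cftime_range("1900-01-01", freq="D", periods=10), dims=("time",))
dt = da - da[0]  # timedelta64[ns]

# Hypothetical accessor from this open PR: convert to datetime.timedelta
# objects, which cftime dates know how to add.
da[-1] + dt.dt.to_pytimedelta()
```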
#5233: Calendar utilities
id: 870312451 | node_id: MDExOlB1bGxSZXF1ZXN0NjI1NTMwMDQ2 | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 16 | draft: 0
created_at: 2021-04-28T20:01:33Z | updated_at: 2021-12-30T22:54:49Z | closed_at: 2021-12-30T22:54:11Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/5233 | repo: xarray (13221727) | type: pull
body: So: this adds utilities for converting data between calendars and for generating date ranges in a given calendar. I'm not sure where to expose the functions. Should the range-generators be accessible directly, like […]? The […]
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
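For context, a short sketch of the calendar utilities as they exist in current xarray (names per the merged feature; the toy data is mine and exact signatures may have evolved since this PR):

```python
import numpy as np
import xarray as xr

# Build a time axis in a non-standard calendar.
times = xr.date_range("2000-01-01", periods=4, freq="D", calendar="noleap")
da = xr.DataArray(np.arange(4), dims=("time",), coords={"time": times})

# Convert the time coordinate to another calendar.
da_std = da.convert_calendar("standard", use_cftime=False)

# Generate a range matching an existing axis, but in another calendar.
times_360 = xr.date_range_like(da.time, calendar="360_day")
```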
#4193: Fix polyfit fail on deficient rank
id: 650044968 | node_id: MDExOlB1bGxSZXF1ZXN0NDQzNjEwOTI2 | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 5 | draft: 0
created_at: 2020-07-02T16:00:21Z | updated_at: 2020-08-20T14:20:43Z | closed_at: 2020-08-20T08:34:45Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/4193 | repo: xarray (13221727) | type: pull
body: Fixes #4190. In cases where the input matrix had deficient rank (matrix rank != order) because of the number of NaN values, polyfit would fail, simply because numpy's lstsq returned an empty array for the residuals (instead of a size-1 array). This fixes the problem by catching that case and returning […]. The other point in the issue was that […].
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
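An illustration (my own, not from the PR) of the numpy behaviour being worked around: for a rank-deficient system, `np.linalg.lstsq` returns an empty residuals array rather than a size-1 one:

```python
import numpy as np

# Degenerate design matrix: an all-zero abscissa makes the first Vandermonde
# column zero, so the matrix rank (1) is below the number of columns (2).
x = np.zeros(5)
A = np.vander(x, 2)
y = np.arange(5.0)

coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print(rank)            # 1
print(residuals.size)  # 0: empty, the case polyfit has to catch
```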
#4135: Correct dask handling for 1D idxmax/min on ND data
id: 635542241 | node_id: MDExOlB1bGxSZXF1ZXN0NDMxODg5NjQ0 | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 1 | draft: 0
created_at: 2020-06-09T15:36:09Z | updated_at: 2020-06-25T16:09:59Z | closed_at: 2020-06-25T03:59:52Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/4135 | repo: xarray (13221727) | type: pull
body: Based on comments on dask/dask#3096, I fixed the dask indexing error that occurred when computing a 1D idxmax/idxmin on N-dimensional data. I believe this doesn't conflict with #3936.
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
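A sketch of the case the fix targets, assuming nothing beyond the public `idxmax` API (requires dask installed; the toy data is mine):

```python
import numpy as np
import xarray as xr

# N-dimensional, dask-backed data with a 1D reduction dimension.
da = xr.DataArray(
    np.random.rand(4, 5),
    dims=("x", "time"),
    coords={"time": np.arange(5)},
).chunk({"x": 2})

# Before this fix, a 1D idxmax over ND dask data hit a dask indexing error.
print(da.idxmax("time").compute())
```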
#4099: Allow non-unique and non-monotonic coordinates in get_clean_interp_index and polyfit
id: 625942676 | node_id: MDExOlB1bGxSZXF1ZXN0NDI0MDQ4Mzg3 | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 0 | draft: 0
created_at: 2020-05-27T18:48:58Z | updated_at: 2020-06-05T15:46:00Z | closed_at: 2020-06-05T15:46:00Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/4099 | repo: xarray (13221727) | type: pull
body: Pull #3733 added `polyfit` and `polyval`. This PR allows non-unique and non-monotonic coordinates in `get_clean_interp_index` and `polyfit`, since least-squares fitting does not require a sorted or unique index.
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
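A small sketch (mine, with hypothetical data) showing a fit along an unsorted coordinate, which this change permits:

```python
import numpy as np
import xarray as xr

# Non-monotonic coordinate: least squares does not need sorted abscissae.
x = np.array([3.0, 1.0, 2.0, 0.0])
da = xr.DataArray(2.0 * x + 1.0, dims=("x",), coords={"x": x})

fit = da.polyfit("x", deg=1)
print(fit.polyfit_coefficients.values)  # approximately [2., 1.]
```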
#4033: xr.infer_freq
id: 612846594 | node_id: MDExOlB1bGxSZXF1ZXN0NDEzNzEzODg2 | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 3 | draft: 0
created_at: 2020-05-05T19:39:05Z | updated_at: 2020-05-30T18:11:36Z | closed_at: 2020-05-30T18:08:27Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/4033 | repo: xarray (13221727) | type: pull
body: This PR adds an `xr.infer_freq` function. Two things are problematic right now, and I would like feedback on how to implement them if this PR gets the devs' approval. 1) […] 2) As of now, […]. Another option, cleaner but longer, would be to reimplement […].
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4033/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
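Basic usage of the function as merged (`xr.infer_freq` accepts a `CFTimeIndex`, a `DatetimeIndex`, or a `DataArray` of dates; the toy index is mine):

```python
import xarray as xr

# Works with cftime-backed indexes, where pandas.infer_freq cannot be used.
times = xr.cftime_range("2000-01-01", periods=5, freq="D", calendar="noleap")
print(xr.infer_freq(times))  # "D"
```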
#3733: Implementation of polyfit and polyval
id: 557627188 | node_id: MDExOlB1bGxSZXF1ZXN0MzY5MTg0Mjk0 | user: aulemahal (20629530) | state: closed | locked: 0 | comments: 9 | draft: 0
created_at: 2020-01-30T16:58:51Z | updated_at: 2020-03-26T00:22:17Z | closed_at: 2020-03-25T17:17:45Z | author_association: CONTRIBUTOR
pull_request: pydata/xarray/pulls/3733 | repo: xarray (13221727) | type: pull
body: Following discussions in #3349, I suggest here an implementation of `polyfit` and `polyval`. My implementation mostly duplicates the code of `numpy.polyfit`. Questions: 1) Are the functions where they should go? 2) Should xarray's implementation really replicate the behaviour of numpy's? A lot of extra code could be removed if we decided to compute and return only the residuals and the coefficients. All the other variables are a few lines of code away for the user who really wants them, and they don't need the power of xarray and dask anyway.
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
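The round trip the pair enables, as merged: a fit with `DataArray.polyfit`, then evaluation with `xr.polyval` (the toy polynomial is mine):

```python
import numpy as np
import xarray as xr

x = np.arange(10.0)
da = xr.DataArray(3.0 * x**2 + 2.0, dims=("x",), coords={"x": x})

# Fit a degree-2 polynomial along "x", then evaluate it back on the coord.
fit = da.polyfit(dim="x", deg=2)
reconstructed = xr.polyval(da.x, fit.polyfit_coefficients)

np.testing.assert_allclose(reconstructed.values, da.values, rtol=1e-6)
```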
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
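To reproduce the filter described at the top of this page against a local copy of the database, a minimal sqlite3 sketch (the filename is hypothetical):

```python
import sqlite3

# Hypothetical local copy of this Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT number, title, state, updated_at
    FROM issues
    WHERE type = 'pull' AND user = 20629530
    ORDER BY updated_at DESC
    """
).fetchall()
for number, title, state, updated_at in rows:
    print(number, state, updated_at, title)
```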