issues
9 rows where user = 3621629 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
448478648 | MDExOlB1bGxSZXF1ZXN0MjgyMjM5MDI0 | 2991 | ENH: str accessor | 0x0L 3621629 | closed | 0 | | | 6 | 2019-05-25T16:10:22Z | 2019-06-10T13:11:14Z | 2019-06-10T13:11:11Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/2991 | Hello, this adds some of the pandas str functionality. Instead of wrapping pandas internals as in #2983, I copy/pasted the code since it's simple and tiny. Currently it's a bit more restrictive than pandas, since it expects all elements to be string-like. | { "url": "https://api.github.com/repos/pydata/xarray/issues/2991/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
447856992 | MDU6SXNzdWU0NDc4NTY5OTI= | 2983 | string accessor | 0x0L 3621629 | closed | 0 | | | 1 | 2019-05-23T20:25:01Z | 2019-06-10T13:11:10Z | 2019-06-10T13:11:10Z | CONTRIBUTOR | | | | Hello, I have written a small wrapper around pandas' internal string methods (https://gist.github.com/0x0L/ef78c80a42892c0f832c91357914a5a4). Missing methods are those involving lists or lists of arrays (join, split, partition, ...). Let me know if there's enough interest to turn this into a full commit. | { "url": "https://api.github.com/repos/pydata/xarray/issues/2983/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
452734140 | MDExOlB1bGxSZXF1ZXN0Mjg1NTY0OTEz | 3001 | BUG: fix safe_cast_to_index | 0x0L 3621629 | closed | 0 | | | 3 | 2019-06-05T21:52:57Z | 2019-06-10T04:48:45Z | 2019-06-10T04:48:45Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/3001 | | { "url": "https://api.github.com/repos/pydata/xarray/issues/3001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
452729969 | MDU6SXNzdWU0NTI3Mjk5Njk= | 3000 | Slowness when cftime is installed | 0x0L 3621629 | closed | 0 | | | 0 | 2019-06-05T21:40:42Z | 2019-06-10T04:48:44Z | 2019-06-10T04:48:44Z | CONTRIBUTOR | | | | With `import numpy as np; import pandas as pd; import xarray as xr; da = xr.DataArray(np.random.randn(5000, 500)); df = da.to_pandas()`: with pandas, `%time df_stacked = df.stack()` takes Wall time: 48.3 ms and `%time df_unstacked = df_stacked.unstack()` takes Wall time: 368 ms; with xarray, `%time da_stacked = da.stack(stacked_dim=('dim_0', 'dim_1'))` takes Wall time: 1.03 s and `%time da_unstacked = da_stacked.unstack('stacked_dim')` takes Wall time: 78.2 ms (see the runnable sketch after the table). prun points to [...]. The behaviour is also incorrect for empty indexes: `da[:0].stack(dim=['dim_0', 'dim_1']).dim.to_index()` returns `CFTimeIndex([], dtype='object', name='dim')`. | { "url": "https://api.github.com/repos/pydata/xarray/issues/3000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
276688437 | MDU6SXNzdWUyNzY2ODg0Mzc= | 1742 | Performance regression when selecting | 0x0L 3621629 | closed | 0 | | | 1 | 2017-11-24T19:34:29Z | 2019-06-06T19:08:06Z | 2019-06-06T19:08:06Z | CONTRIBUTOR | | | | Hello, I just noticed a performance drop in 0.10. After `import numpy as np; import pandas as pd; import xarray as xr; np.random.seed(1234); ds = xr.Dataset({k: pd.DataFrame(np.random.randn(2500, 2000)) for k in range(20)}); mask = (np.random.randn(2000) > -0.2).astype(bool)`, running `%timeit ds.sel(dim_0=slice(50, 1250), dim_1=mask)` and `%timeit ds[0].sel(dim_0=slice(50, 1250), dim_1=mask)` gives: xarray 0.9.6 -> 120 ± 0.4 ms, 4.2 ± 0.02 ms; xarray 0.10 -> 190 ± 0.4 ms, 6.8 ± 0.03 ms. This was run in a docker image. Strangely I can't reproduce it natively on macOS (performance is the same as in 0.10 in docker for both versions). On a Windows box, with a similar but "real" netCDF dataset, performance is halved. | { "url": "https://api.github.com/repos/pydata/xarray/issues/1742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
274996832 | MDU6SXNzdWUyNzQ5OTY4MzI= | 1726 | Behavior of dataarray with no dimensions | 0x0L 3621629 | closed | 0 | | | 3 | 2017-11-17T21:07:55Z | 2018-01-11T21:24:43Z | 2018-01-11T21:24:43Z | CONTRIBUTOR | | | | Consider: `type(np.array([1.0]).mean()) -> numpy.float64`, `type(pd.Series([1.0]).mean()) -> float`, `type(xr.DataArray([1.0]).mean()) -> xarray.core.dataarray.DataArray`. The issue is that this dimensionless data array won't be cast into float by numpy/pandas when constructing a new ndarray/dataframe; you'll have to do it explicitly. Not a big deal, but it feels weird. I'm sure there's a real technical reason (keeping metadata?) behind this behavior, but I couldn't find any discussion about it. | { "url": "https://api.github.com/repos/pydata/xarray/issues/1726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
275813162 | MDExOlB1bGxSZXF1ZXN0MTUzOTY2MTk0 | 1734 | pandas casting issues | 0x0L 3621629 | closed | 0 | | | 7 | 2017-11-21T18:21:56Z | 2018-01-11T21:24:43Z | 2018-01-11T21:24:43Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/1734 | Added a comment about constructing | { "url": "https://api.github.com/repos/pydata/xarray/issues/1734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
275789502 | MDExOlB1bGxSZXF1ZXN0MTUzOTQ4Njk1 | 1733 | Rank Methods | 0x0L 3621629 | closed | 0 | | | 9 | 2017-11-21T17:03:41Z | 2017-12-18T16:51:05Z | 2017-12-18T16:51:00Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/1733 | | { "url": "https://api.github.com/repos/pydata/xarray/issues/1733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
275461273 | MDU6SXNzdWUyNzU0NjEyNzM= | 1731 | Rank function | 0x0L 3621629 | closed | 0 | | | 4 | 2017-11-20T18:55:33Z | 2017-12-18T16:51:00Z | 2017-12-18T16:51:00Z | CONTRIBUTOR | | | | Hi, I think xarray is missing a rank function. Is there any reason not to expose a wrapper to [...]? See also https://github.com/pydata/xarray/issues/1635 [edit] Although moving rank is mentioned in the whats-new for v0.9.2, I wasn't able to find that functionality nor a trace of it in the code :) | { "url": "https://api.github.com/repos/pydata/xarray/issues/1731/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
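The benchmark reported in issue 3000 above is stored as a single flattened line in the body column; for readability, the same comparison is written out below as a standalone script. This is a minimal sketch: the IPython `%time` magics from the original are replaced with a small `time.perf_counter()` helper so it runs outside a notebook, and the wall times quoted in the issue are the reporter's, not reproduced here.

```python
# Sketch of the pandas vs. xarray stack/unstack comparison from issue 3000.
import time

import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(5000, 500))  # dims: dim_0, dim_1
df = da.to_pandas()                            # equivalent 5000 x 500 DataFrame


def timed(label, fn):
    """Run fn once and print its wall time (stand-in for IPython's %time)."""
    t0 = time.perf_counter()
    out = fn()
    print(f"{label}: {time.perf_counter() - t0:.3f} s")
    return out


# pandas: stack then unstack
df_stacked = timed("pandas stack", lambda: df.stack())
timed("pandas unstack", lambda: df_stacked.unstack())

# xarray: stack then unstack (the issue reports stack as the slow step when cftime is installed)
da_stacked = timed("xarray stack", lambda: da.stack(stacked_dim=("dim_0", "dim_1")))
timed("xarray unstack", lambda: da_stacked.unstack("stacked_dim"))

# The issue also notes that the empty-index case comes back as a CFTimeIndex:
print(type(da[:0].stack(dim=["dim_0", "dim_1"]).dim.to_index()))
```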
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
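The listing at the top of this page ("9 rows where user = 3621629 sorted by updated_at descending") corresponds to a straightforward filtered query over the table defined above. Below is a minimal sketch using Python's built-in sqlite3 module; the database filename `github.db` is an assumption for illustration, not something stated on this page.

```python
# Hypothetical reproduction of the "9 rows where user = 3621629" view.
# Assumes the underlying SQLite database has been downloaded locally as github.db.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, state, comments, created_at, updated_at, closed_at, type
    FROM issues
    WHERE [user] = ?
    ORDER BY updated_at DESC
    """,
    (3621629,),
).fetchall()

for row in rows:
    print(row)

conn.close()
```

The `idx_issues_user` index defined in the schema is what keeps this per-user lookup cheap.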