issues
5 rows where state = "closed" and user = 102827 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
374279704 | MDU6SXNzdWUzNzQyNzk3MDQ= | 2514 | interpolate_na with limit argument changes size of chunks | cchwala 102827 | closed | 0 | 8 | 2018-10-26T08:31:35Z | 2021-03-26T19:50:50Z | 2021-03-26T19:50:50Z | CONTRIBUTOR | Code Sample, a copy-pastable example if possible

```python
import pandas as pd
import xarray as xr
import numpy as np

t = pd.date_range(start='2018-01-01', end='2018-02-01', freq='h')
foo = np.sin(np.arange(len(t)))
bar = np.cos(np.arange(len(t)))
foo[1] = np.nan
bar[2] = np.nan

ds_test = xr.Dataset(data_vars={'foo': ('time', foo), 'bar': ('time', bar)},
                     coords={'time': t}).chunk()
print(ds_test)
```

Output of the above code: note the different chunk sizes, depending on the value of the `limit` argument.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
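The `limit` semantics exercised by the issue above can be seen in plain pandas, whose `Series.interpolate` takes a similar `limit` argument to xarray's `interpolate_na`. This is a minimal sketch of the fill-limit behaviour only, not of the dask chunking bug itself:

```python
import numpy as np
import pandas as pd

# With limit=1, linear interpolation fills only the first NaN
# in each consecutive run of NaNs; the rest stay missing.
s = pd.Series([1.0, np.nan, np.nan, 4.0])
filled = s.interpolate(limit=1)
# filled is [1.0, 2.0, NaN, 4.0]
```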
376162232 | MDExOlB1bGxSZXF1ZXN0MjI3NDQzNTI3 | 2532 | [WIP] Fix problem with wrong chunksizes when using rolling_window on dask.array | cchwala 102827 | closed | 0 | 2 | 2018-10-31T21:12:03Z | 2021-03-26T19:50:50Z | 2021-03-26T19:50:50Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2532 |
Short summary: The two rolling-window functions will be fixed so that they preserve the original chunksizes.

Long summary: The specific initial problem with chunksizes stems from the step which adds a small array with a small chunk to the initial array. There is another related problem where chunksizes are not preserved. For some (historic) reason there are these two separate rolling-window functions. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
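The mechanism described in the PR body above (a small edge array concatenated onto the data, leaving an extra small chunk) can be sketched directly in dask; the array sizes here are illustrative, not taken from the PR:

```python
import dask.array as da

# Padding a rolling window by concatenating a small edge array
# naively leaves an extra small chunk instead of preserving the
# original chunksizes.
x = da.ones(10, chunks=5)       # chunks: ((5, 5),)
pad = da.zeros(2, chunks=2)     # small edge array
y = da.concatenate([pad, x])    # chunks: ((2, 5, 5),) -- uneven
# Rechunking restores uniform chunks, at the cost of extra work:
z = y.rechunk(6)                # chunks: ((6, 6),)
```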
376154741 | MDU6SXNzdWUzNzYxNTQ3NDE= | 2531 | DataArray.rolling() does not preserve chunksizes in some cases | cchwala 102827 | closed | 0 | 2 | 2018-10-31T20:50:33Z | 2021-03-26T19:50:49Z | 2021-03-26T19:50:49Z | CONTRIBUTOR | This issue was found and discussed in the related issue #2514. I open a separate issue for clarity.

Code Sample, a copy-pastable example if possible

```python
import pandas as pd
import numpy as np
import xarray as xr

t = pd.date_range(start='2018-01-01', end='2018-02-01', freq='h')
bar = np.sin(np.arange(len(t)))
baz = np.cos(np.arange(len(t)))

da_test = xr.DataArray(data=np.stack([bar, baz]),
                       coords={'time': t, 'sensor': ['one', 'two']},
                       dims=('sensor', 'time'))

print(da_test.chunk({'time': 100}).rolling(time=60).mean().chunks)
print(da_test.chunk({'time': 100}).rolling(time=60).count().chunks)
```

Problem description: DataArray.rolling() does not preserve the chunksizes, apparently depending on the applied method. The two print statements above show differing chunks for `mean()` and `count()`.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
229807027 | MDExOlB1bGxSZXF1ZXN0MTIxMzc5NjAw | 1414 | Speed up `decode_cf_datetime` | cchwala 102827 | closed | 0 | 12 | 2017-05-18T21:15:40Z | 2017-07-26T07:40:24Z | 2017-07-25T17:42:52Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1414 |
Instead of casting the input numeric dates to float, they are now cast to nanoseconds as int64, which makes the conversion via `pd.to_timedelta` faster. On my machine all existing tests for `decode_cf_datetime` pass. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
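The idea behind the PR above can be sketched in plain pandas: handing `pd.to_timedelta` float offsets is slow, while casting to integer nanoseconds first takes the fast integer path and yields identical timestamps. The unit and values below are illustrative:

```python
import numpy as np
import pandas as pd

# e.g. "hours since a reference date", decoded from a file as floats
raw = np.array([0.0, 1.0, 2.5])

# slow path: float input
slow = pd.to_timedelta(raw, unit="h")

# fast path: cast to int64 nanoseconds first (3.6e12 ns per hour)
ns = (raw * 3_600_000_000_000).astype("int64")
fast = pd.to_timedelta(ns, unit="ns")

assert (slow == fast).all()
```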
226549366 | MDU6SXNzdWUyMjY1NDkzNjY= | 1399 | `decode_cf_datetime()` slow because `pd.to_timedelta()` is slow if floats are passed | cchwala 102827 | closed | 0 | 6 | 2017-05-05T11:48:00Z | 2017-07-25T17:42:52Z | 2017-07-25T17:42:52Z | CONTRIBUTOR | Hi,
Here is a notebook that shows the differences. Working with integers is approx. one order of magnitude faster. Hence, it would be great to automatically do the conversion from raw time value floats to integers in nanoseconds where possible (likely limited to resolutions below days or hours, to avoid coping with the differing numbers of nanoseconds in, e.g., different months). As an alternative, maybe avoid forcing the cast to floats and indicate in the docstring that the raw values should be integers to speed up the conversion. This could possibly also be resolved upstream. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);