issues
32 rows where comments = 4, repo = 13221727 and user = 5635139 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1250939008 | I_kwDOAMm_X85Kj9CA | 6646 | `dim` vs `dims` | max-sixty 5635139 | closed | 0 | 4 | 2022-05-27T16:15:02Z | 2024-04-29T18:24:56Z | 2024-04-29T18:24:56Z | MEMBER | What is your issue? I've recently been hit with this when experimenting; should we standardize on one of these? (See the sketch after this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
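The issue body above lost its inline examples, so here is a small illustration of the kind of inconsistency the title refers to; it is a minimal sketch assuming an xarray version from around the time of the issue, when reductions took `dim` while `xr.dot` still took `dims` (newer releases accept `dim` there too).

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("x", "y"))

# Reductions spell the argument in the singular...
da.mean(dim="x")

# ...while xr.dot (at the time of this issue) spelled it in the plural,
# so the same concept needed two spellings depending on the function.
xr.dot(da, da, dims="x")
```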
2126375172 | I_kwDOAMm_X85-vekE | 8726 | PRs requiring approval & merging main? | max-sixty 5635139 | closed | 0 | 4 | 2024-02-09T02:35:58Z | 2024-02-09T18:23:52Z | 2024-02-09T18:21:59Z | MEMBER | What is your issue? Sorry I haven't been on the calls at all recently (unfortunately the schedule is difficult for me). Maybe this was discussed there? PRs now seem to require a separate approval prior to merging. Is there an upside to this? Is there any difference between those who can approve and those who can merge? Otherwise it just seems like more clicking. PRs also now seem to require merging the latest main prior to merging? I get there's some theoretical value to this, because changes can semantically conflict with each other. But it's extremely rare that this actually happens (can we point to cases?), and it limits the immediacy & throughput of PRs. If the bad outcome does ever happen, we find out quickly when main tests fail and can revert. (FWIW I wrote down a few principles around this a while ago here; those are much stronger than what I'm suggesting in this issue though) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1923431725 | I_kwDOAMm_X85ypT0t | 8264 | Improve error messages | max-sixty 5635139 | open | 0 | 4 | 2023-10-03T06:42:57Z | 2023-10-24T18:40:04Z | MEMBER | Is your feature request related to a problem? Coming back to xarray, and using it based on what I remember from a year ago or so, means I make lots of mistakes. I've also been using it outside of a repl, where error messages are more important, given I can't explore a dataset inline. Some of the error messages could be much more helpful. Take one example:

The second sentence is nice, but the first could give us much more information:
- Which variables conflict? I'm merging four objects, so it would be really helpful to know which are causing the issue.
- What is the conflict? Is one a superset of the other?

Having these be good is really useful: it lets folks stay in the flow while they're working, and it signals that we're a well-built, refined library.

Describe the solution you'd like: I'm not sure of the best way to surface the issues — error messages make for less legible contributions than features or bug fixes, and the primary audience for good error messages is often the opposite of those actively developing the library. They're also more difficult to manage as GH issues — there could be scores of marginal issues which would often be out of date. One thing we do in PRQL is have a file that snapshots error messages (a minimal version of that idea is sketched after this row). Any other ideas?

Describe alternatives you've considered: No response

Additional context: A couple of specific error-message issues:
- https://github.com/pydata/xarray/issues/2078
- https://github.com/pydata/xarray/issues/5290 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8264/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
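As a concrete version of the snapshotting idea mentioned above: a minimal sketch using pytest and a plain committed text file. The `snapshots/merge_conflict.txt` path and the test itself are illustrative assumptions, not xarray's actual test layout.

```python
from pathlib import Path

import pytest
import xarray as xr

# Hypothetical location of the committed snapshot of the expected message.
SNAPSHOT = Path(__file__).parent / "snapshots" / "merge_conflict.txt"


def test_merge_conflict_message():
    a = xr.Dataset({"var": 1})
    b = xr.Dataset({"var": 2})
    with pytest.raises(xr.MergeError) as excinfo:
        xr.merge([a, b])
    # Comparing against a committed file keeps wording changes visible in review,
    # which is the point of snapshotting error messages.
    assert str(excinfo.value) == SNAPSHOT.read_text()
```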
1939241220 | PR_kwDOAMm_X85cmBPP | 8296 | mypy 1.6.0 passing | max-sixty 5635139 | closed | 0 | 4 | 2023-10-12T06:04:46Z | 2023-10-12T22:13:18Z | 2023-10-12T19:06:13Z | MEMBER | 0 | pydata/xarray/pulls/8296 | I did the easy things, but will need help for the final couple. Because we don't pin mypy (should we?), this blocks other PRs if we gate them on mypy passing. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1902155047 | PR_kwDOAMm_X85aonTY | 8208 | Use a bound `TypeVar` for `DataArray` and `Dataset` methods | max-sixty 5635139 | closed | 0 | 4 | 2023-09-19T03:46:33Z | 2023-09-28T16:46:54Z | 2023-09-19T19:20:42Z | MEMBER | 1 | pydata/xarray/pulls/8208 |
Edit: I added a comment outlining a problem with this.

I think we should be using a bound `TypeVar`, so this unifies all of them. This does require some manual casts; I think because when there's a concrete path for both `DataArray` and `Dataset`. One alternative — a minor change — would be to bound the `TypeVar` on something narrower. (A minimal sketch of the bound-`TypeVar` pattern follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
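A minimal sketch of the bound-`TypeVar` pattern the PR title describes, using stand-in classes rather than xarray's real ones; the names here are illustrative only.

```python
from __future__ import annotations

from typing import TypeVar

T_DataWithCoords = TypeVar("T_DataWithCoords", bound="DataWithCoords")


class DataWithCoords:
    def copy(self: T_DataWithCoords) -> T_DataWithCoords:
        # One annotation on the shared base serves both subclasses: a type
        # checker infers DataArray.copy() -> DataArray and Dataset.copy() -> Dataset.
        return self


class DataArray(DataWithCoords): ...


class Dataset(DataWithCoords): ...


da: DataArray = DataArray().copy()  # passes mypy without a cast
```

The manual casts the PR mentions typically show up where a method has separate concrete code paths for `DataArray` and `Dataset` but a single `T_DataWithCoords` return annotation.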
1905824568 | I_kwDOAMm_X85xmJM4 | 8221 | Frequent doc build timeout / OOM | max-sixty 5635139 | open | 0 | 4 | 2023-09-20T23:02:37Z | 2023-09-21T03:50:07Z | MEMBER | What is your issue? I'm frequently seeing the doc build time out or OOM. It's after 1552 seconds, so the fact that it's not a round number suggests it might be the memory? Here's an example: https://readthedocs.org/projects/xray/builds/21983708/ Any thoughts for what might be going on? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1410340285 | PR_kwDOAMm_X85A3emZ | 7166 | Fix doctest warnings, enable errors in CI | max-sixty 5635139 | closed | 0 | 4 | 2022-10-16T01:29:36Z | 2023-09-20T09:31:58Z | 2022-10-16T21:26:46Z | MEMBER | 0 | pydata/xarray/pulls/7166 |
I'm not confident about the CI change; either we can merge it with a "this is a trial" message and see how it goes, or split that into a separate PR and discuss. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1326238990 | I_kwDOAMm_X85PDM0O | 6870 | `rolling_exp` loses coords | max-sixty 5635139 | closed | 0 | 4 | 2022-08-02T18:27:44Z | 2023-09-19T01:13:23Z | 2023-09-19T01:13:23Z | MEMBER | What happened? We lose the time coord here —

```python
ds = xr.tutorial.load_dataset("air_temperature")
ds.rolling_exp(time=5).mean()

<xarray.Dataset>
Dimensions:  (lat: 25, time: 2920, lon: 53)
Coordinates:
  * lat      (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0
  * lon      (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0
Dimensions without coordinates: time
Data variables:
    air      (time, lat, lon) float32 241.2 242.5 243.5 ... 296.4 296.1 295.7
```

(I realize I wrote this; I didn't think this used to happen, but either it always did or I didn't write good enough tests... mea culpa.)

What did you expect to happen? We keep the time coords, like we do for normal `rolling`. (A possible workaround is sketched after this row.)
Minimal Complete Verifiable Example
MVCE confirmation
Relevant log output: No response. Anything else we need to know? No response. Environment:
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.13 (main, May 24 2022, 21:13:51)
[Clang 13.1.6 (clang-1316.0.21.2)]
python-bits: 64
OS: Darwin
OS-release: 21.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2022.6.0
pandas: 1.4.3
numpy: 1.21.6
scipy: 1.8.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.12.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.12.0
distributed: 2021.12.0
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.1
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 62.3.2
pip: 22.1.2
conda: None
pytest: 7.1.2
IPython: 8.4.0
sphinx: 4.3.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
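Until the bug is fixed, one possible workaround (a sketch of my own, not anything the issue prescribes) is to re-attach the dropped coordinate from the original dataset:

```python
import xarray as xr

ds = xr.tutorial.load_dataset("air_temperature")

# rolling_exp drops the "time" coordinate (the bug above); assigning it back
# from the original dataset restores the expected result.
result = ds.rolling_exp(time=5).mean().assign_coords(time=ds["time"])
```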
1878099779 | PR_kwDOAMm_X85ZYBES | 8138 | Allow `apply_ufunc` to ignore missing core dims | max-sixty 5635139 | closed | 0 | 4 | 2023-09-01T22:09:20Z | 2023-09-17T08:20:19Z | 2023-09-17T08:20:13Z | MEMBER | 0 | pydata/xarray/pulls/8138 |
This probably needs a review and another turn, maybe some more tests on multiple objects etc. There are a couple of questions inline. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
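To make the PR's motivation concrete, here is a small illustration (my own construction, not taken from the PR) of the situation it addresses: applying a function over a Dataset where one variable lacks the core dimension.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "a": (("x", "y"), np.ones((3, 2))),
        "b": ("x", np.ones(3)),  # has no "y" dimension
    }
)

# Before this PR, a variable missing the core dim made the whole call fail;
# the PR adds a way to ignore such variables instead of raising.
try:
    xr.apply_ufunc(np.mean, ds, input_core_dims=[["y"]], kwargs={"axis": -1})
except ValueError as err:
    print(err)
```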
1216982208 | PR_kwDOAMm_X8422vsw | 6522 | Update issue template to include a checklist | max-sixty 5635139 | closed | 0 | 4 | 2022-04-27T08:19:49Z | 2022-05-01T22:14:35Z | 2022-05-01T22:14:32Z | MEMBER | 0 | pydata/xarray/pulls/6522 | This replaces https://github.com/pydata/xarray/pull/5787. Please check out the previews in the most recent comment there |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1218094019 | PR_kwDOAMm_X8426dL2 | 6534 | Attempt to improve CI caching | max-sixty 5635139 | closed | 0 | 4 | 2022-04-28T02:16:29Z | 2022-04-28T23:55:16Z | 2022-04-28T06:30:25Z | MEMBER | 0 | pydata/xarray/pulls/6534 | Currently about 40% of the time is taken by installing things; hopefully we can cut that down. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1125026615 | PR_kwDOAMm_X84yH6vN | 6241 | Remove old PR template | max-sixty 5635139 | closed | 0 | 4 | 2022-02-05T20:43:01Z | 2022-02-06T14:25:35Z | 2022-02-05T21:59:06Z | MEMBER | 0 | pydata/xarray/pulls/6241 | { "url": "https://api.github.com/repos/pydata/xarray/issues/6241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
965939431 | MDExOlB1bGxSZXF1ZXN0NzA4MjIwNDIy | 5690 | Change annotations to allow str keys | max-sixty 5635139 | closed | 0 | 4 | 2021-08-11T05:17:02Z | 2021-08-19T17:55:49Z | 2021-08-19T17:27:01Z | MEMBER | 0 | pydata/xarray/pulls/5690 |
IIUC:
- This test below would fail mypy because the key of the dict is typed as `str` rather than `Hashable`. If that's correct, we should change all our annotations to allow `str` keys. (A minimal reproduction of the mypy error follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
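The body above lost its example, so here is a self-contained reconstruction of the kind of mypy failure it describes (my own minimal version, not the PR's test): a `dict` with `str` keys does not satisfy a `Mapping[Hashable, ...]` parameter because the key type is invariant.

```python
from typing import Hashable, Mapping


def set_values(values: Mapping[Hashable, int]) -> None:
    """Stand-in for an xarray method annotated with Hashable keys."""


values = {"x": 1}   # inferred by mypy as dict[str, int]
set_values(values)  # mypy: incompatible type; expected Mapping[Hashable, int]
```

Per its title, the PR's fix was to loosen the annotations so plain `str`-keyed dicts are accepted.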
907715257 | MDU6SXNzdWU5MDc3MTUyNTc= | 5409 | Split up tests? | max-sixty 5635139 | open | 0 | 4 | 2021-05-31T21:07:53Z | 2021-06-16T15:51:19Z | MEMBER | Currently a large share of our tests are in a few very large files. There's a case for splitting these up:
- Many of the tests are somewhat duplicated between the files.

If we do this, we could start on the margin — new tests around some specific functionality — e.g. join / rolling / reindex / stack (just a few from browsing through) — could go into a new respective test file. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5409/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
728903948 | MDExOlB1bGxSZXF1ZXN0NTA5NTEyNjM4 | 4537 | Adjust tests to use updated pandas syntax for offsets | max-sixty 5635139 | closed | 0 | 4 | 2020-10-25T00:05:55Z | 2021-04-19T06:10:33Z | 2021-03-06T23:02:02Z | MEMBER | 0 | pydata/xarray/pulls/4537 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
704586766 | MDExOlB1bGxSZXF1ZXN0NDg5NDg1ODQw | 4435 | Release notes for 0.16.1 | max-sixty 5635139 | closed | 0 | 4 | 2020-09-18T18:54:14Z | 2020-09-20T00:08:13Z | 2020-09-20T00:06:41Z | MEMBER | 0 | pydata/xarray/pulls/4435 | Please make suggestions for any changes! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
326711578 | MDU6SXNzdWUzMjY3MTE1Nzg= | 2188 | Allow all dims-as-kwargs methods to take a dict instead | max-sixty 5635139 | closed | 0 | 4 | 2018-05-26T05:22:55Z | 2020-08-24T10:21:58Z | 2020-08-24T05:24:32Z | MEMBER | Follow up to https://github.com/pydata/xarray/pull/2174. Pasting from https://github.com/pydata/xarray/pull/2174#issuecomment-392111566 (an example of the two call styles follows this row):
...potentially |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2188/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
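As an illustration of the pattern being proposed, using `rename`, which already supports both forms (it is not necessarily one of the methods the issue targets):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"foo": (("x", "y"), np.zeros((2, 3)))})

# The two call styles the issue wants everywhere: keyword arguments...
ds.rename(x="lon")

# ...and an explicit dict, which also works for names that aren't valid identifiers.
ds.rename({"x": "lon"})
```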
575078455 | MDExOlB1bGxSZXF1ZXN0MzgzMjkxOTgz | 3824 | Transpose coords by default | max-sixty 5635139 | closed | 0 | 4 | 2020-03-04T01:45:23Z | 2020-05-06T16:39:39Z | 2020-05-06T16:39:35Z | MEMBER | 0 | pydata/xarray/pulls/3824 |
After doing this, I realize it needs to wait until 0.16, assuming 0.15.1 is our next release. If so, this should hang out until then. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
305663416 | MDU6SXNzdWUzMDU2NjM0MTY= | 1992 | Canonical approach for new vectorized functions | max-sixty 5635139 | closed | 0 | 4 | 2018-03-15T18:09:08Z | 2020-02-29T07:22:01Z | 2020-02-29T07:22:00Z | MEMBER | We are moving some code over from pandas to Xarray, and one of the biggest missing features is exponential functions (e.g. an exponentially weighted moving average). It looks like we can write these as gufuncs without too much trouble in numba (a rough sketch follows this row). But I also notice that numbagg hasn't changed in a while and that we chose bottleneck for many of the functions in Xarray.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
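A rough sketch of the gufunc approach the issue mentions, using numba's `guvectorize` to build an exponentially weighted mean; this is my own illustration, not the numbagg implementation, and it assumes numba is installed.

```python
import numpy as np
from numba import guvectorize


@guvectorize(["void(float64[:], float64, float64[:])"], "(n),()->(n)")
def ewm_mean(values, alpha, out):
    # Exponentially weighted mean along the last axis, written as a gufunc so
    # it broadcasts over any leading dimensions.
    weighted_sum = 0.0
    weight_sum = 0.0
    for i in range(values.shape[0]):
        weight_sum = weight_sum * (1 - alpha) + 1.0
        weighted_sum = weighted_sum * (1 - alpha) + values[i]
        out[i] = weighted_sum / weight_sum


ewm_mean(np.random.rand(4, 100), 0.5)  # applies row-wise over the leading dim
```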
567993968 | MDU6SXNzdWU1Njc5OTM5Njg= | 3782 | Add groupby.pipe? | max-sixty 5635139 | closed | 0 | 4 | 2020-02-20T01:33:31Z | 2020-02-21T14:37:44Z | 2020-02-21T14:37:44Z | MEMBER | MCVE Code Sample

```python
In [1]: import xarray as xr

In [3]: import numpy as np

In [4]: ds = xr.Dataset(
   ...:     {"foo": (("x", "y"), np.random.rand(4, 3))},
   ...:     coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
   ...: )

In [5]: ds.groupby('letters')

In [8]: ds.groupby('letters').sum(...) / ds.groupby('letters').count(...)

In [9]: ds.groupby('letters').pipe(lambda x: x.sum() / x.count())
AttributeError                            Traceback (most recent call last)
<ipython-input-9-c9b142ea051b> in <module>
----> 1 ds.groupby('letters').pipe(lambda x: x.sum() / x.count())

AttributeError: 'DatasetGroupBy' object has no attribute 'pipe'
```

Expected Output: I think we could add `pipe` to groupby objects. Output of `xr.show_versions()`:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
526160032 | MDExOlB1bGxSZXF1ZXN0MzQzNjcxODU0 | 3555 | Tweaks to release instructions | max-sixty 5635139 | closed | 0 | 4 | 2019-11-20T20:14:20Z | 2019-11-21T14:45:26Z | 2019-11-21T14:45:21Z | MEMBER | 0 | pydata/xarray/pulls/3555 | A few small tweaks, including a script for getting all recent contributors (feel free to edit if my bash is bad). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
509660775 | MDExOlB1bGxSZXF1ZXN0MzMwMTY0MDg0 | 3421 | Allow ellipsis (...) in transpose | max-sixty 5635139 | closed | 0 | 4 | 2019-10-20T22:15:12Z | 2019-10-28T23:47:15Z | 2019-10-28T21:12:49Z | MEMBER | 0 | pydata/xarray/pulls/3421 |
HT to @crusaderky for the idea! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
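A quick illustration of the feature this PR added, where `...` stands in for all remaining dimensions:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((2, 3, 4)), dims=("x", "y", "z"))

# Ellipsis means "all the other dims, in their existing order", so this moves
# "x" to the end without spelling out "y" and "z".
da.transpose(..., "x").dims  # ('y', 'z', 'x')
```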
493108860 | MDU6SXNzdWU0OTMxMDg4NjA= | 3308 | NetCDF tests failing | max-sixty 5635139 | closed | 0 | 4 | 2019-09-13T02:29:39Z | 2019-09-13T15:36:27Z | 2019-09-13T15:32:46Z | MEMBER | (edit: original failure was mistaken) Does anyone know off hand why this is failing?
Worst case we could drop it... https://github.com/pydata/xarray/issues/3293 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
404829089 | MDExOlB1bGxSZXF1ZXN0MjQ4OTIyNzA5 | 2727 | silence a couple of warnings in tests | max-sixty 5635139 | closed | 0 | 4 | 2019-01-30T15:37:41Z | 2019-01-30T19:06:37Z | 2019-01-30T19:06:34Z | MEMBER | 0 | pydata/xarray/pulls/2727 | The only other warning is:

```
xarray/tests/test_dataset.py::TestDataset::test_convert_dataframe_with_many_types_and_multiindex
  /Users/maximilian/workspace/xarray/xarray/core/dataset.py:3146: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'. To accept the future behavior, pass 'dtype=object'. To keep the old behavior, pass 'dtype="datetime64[ns]"'.
    data = np.asarray(series).reshape(shape)

  /usr/local/lib/python3.7/site-packages/pandas/core/apply.py:286: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'. To accept the future behavior, pass 'dtype=object'. To keep the old behavior, pass 'dtype="datetime64[ns]"'.
    results[i] = self.f(v)
```

I'm not sure what we want to do here - potentially we should make a choice between:
- the worse behavior of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
399549346 | MDU6SXNzdWUzOTk1NDkzNDY= | 2683 | Travis failing on segfault at print_versions | max-sixty 5635139 | closed | 0 | 4 | 2019-01-15T21:45:30Z | 2019-01-18T21:47:44Z | 2019-01-18T21:47:44Z | MEMBER | master is breaking on both the docs and python3.6
Has anyone seen this before? I can't replicate locally, but I likely don't have the same dependencies |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
304021813 | MDU6SXNzdWUzMDQwMjE4MTM= | 1978 | Efficient rolling 'trick' | max-sixty 5635139 | closed | 0 | 4 | 2018-03-10T00:29:33Z | 2018-03-10T01:23:06Z | 2018-03-10T01:23:06Z | MEMBER | Based off http://www.rigtorp.se/2011/01/01/rolling-statistics-numpy.html, we wrote up a function that 'tricks' numpy into presenting an array that looks rolling, but without the O^2 memory requirements. Would people be interested in this going into xarray? It seems to work really well on a few use-cases, but I imagine it's enough trickery that we might not want to support it in xarray.

And, to be clear, it's strictly worse where we have rolling algos. But where we don't, you get a rolling dimension to work with (a completed sketch follows this row).

```python
def rolling_window_numpy(a, window):
    """Make an array appear to be rolling, but using only a view

    http://www.rigtorp.se/2011/01/01/rolling-statistics-numpy.html
    """
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)


def rolling_window(da, span, dim=None, new_dim='dim_0'):
    """Adds a rolling dimension to a DataArray using only a view"""
    original_dims = da.dims
    da = da.transpose(*tuple(d for d in da.dims if d != dim) + (dim,))
```

tests

```python
import numpy as np
import pandas as pd
import pytest
import xarray as xr


@pytest.fixture
def da(dims):
    return xr.DataArray(
        np.random.rand(5, 10, 15), dims=(list('abc'))).transpose(*dims)


@pytest.fixture(params=[
    list('abc'),
    list('bac'),
    list('cab'),
])
def dims(request):
    return request.param


def test_iterate_imputation_fills_missing(sample_data):
    sample_data.iloc[2, 2] = pd.np.nan
    result = iterate_imputation(sample_data)
    assert result.shape == sample_data.shape
    assert result.notnull().values.all()


def test_rolling_window(da, dims):


def test_rolling_window_values():
```
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
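The `rolling_window` body above is cut off, so here is a completion sketch; the wrapping back into a `DataArray` is my assumption, not the original code. Note that xarray has since gained `DataArray.rolling(...).construct(...)`, which creates this kind of view-based window dimension without the manual trick.

```python
import numpy as np
import xarray as xr


def rolling_window_numpy(a, window):
    """Present a rolling view over the last axis without copying."""
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)


def rolling_window(da, span, dim, new_dim="window"):
    """Add a rolling dimension `new_dim` along `dim`, using only a view."""
    # Move the rolling dim to the end so the numpy helper can act on it.
    da = da.transpose(*[d for d in da.dims if d != dim], dim)
    windowed = rolling_window_numpy(da.values, span)
    # The rolled dim is now shorter by span - 1, so drop coords that depend on it.
    coords = {k: v for k, v in da.coords.items() if dim not in v.dims}
    return xr.DataArray(windowed, dims=da.dims + (new_dim,), coords=coords)


da = xr.DataArray(np.arange(10.0), dims="time")
# Comparable to da.rolling(time=3).mean(), minus the edge handling.
rolling_window(da, span=3, dim="time").mean("window")
```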
298443020 | MDExOlB1bGxSZXF1ZXN0MTcwMDgxNTU5 | 1925 | flake8 passes | max-sixty 5635139 | closed | 0 | 4 | 2018-02-20T01:09:05Z | 2018-02-22T02:20:58Z | 2018-02-20T18:04:59Z | MEMBER | 0 | pydata/xarray/pulls/1925 | I was still getting stickler errors for code I hadn't changed. Flake8 should now pass |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
271991268 | MDExOlB1bGxSZXF1ZXN0MTUxMjMyOTA2 | 1696 | ffill & bfill methods | max-sixty 5635139 | closed | 0 | 4 | 2017-11-07T21:32:19Z | 2017-11-12T00:14:33Z | 2017-11-12T00:14:29Z | MEMBER | 0 | pydata/xarray/pulls/1696 |
No docs / docstrings. No backfill. But otherwise is this a reasonable layout?
Do we prefer |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
149229636 | MDExOlB1bGxSZXF1ZXN0NjY4OTc3MjI= | 832 | WIP for transitioning from Panel docs | max-sixty 5635139 | closed | 0 | 4 | 2016-04-18T18:16:41Z | 2016-08-08T17:08:23Z | 2016-08-08T17:08:23Z | MEMBER | 0 | pydata/xarray/pulls/832 | A start for some docs on transitioning from pandas Panel to xarray. This is some way from the final version - but putting it out there and will iterate. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
153640301 | MDU6SXNzdWUxNTM2NDAzMDE= | 846 | Inconsistent handling of .item with PeriodIndex | max-sixty 5635139 | closed | 0 | 4 | 2016-05-08T06:51:03Z | 2016-05-11T05:05:36Z | 2016-05-11T05:05:36Z | MEMBER | Is this an inconsistency? With a DatetimeIndex:

```python
In [14]: da=xr.DataArray(pd.DataFrame(pd.np.random.rand(10), index=pd.DatetimeIndex(start='2000', periods=10,freq='A')))

In [15]: p=da['dim_0'][0]

In [16]: p.values
Out[16]: numpy.datetime64('2000-12-31T00:00:00.000000000')

In [17]: p.item()
Out[17]: 978220800000000000L
```

But with a PeriodIndex:

```python
In [22]: da=xr.DataArray(pd.DataFrame(pd.np.random.rand(10), index=pd.PeriodIndex(start='2000', periods=10)))

In [23]: p=da['dim_0'][0]

In [24]: p.values
Out[24]: Period('2000', 'A-DEC')

In [25]: p.item()
AttributeError: 'pandas._period.Period' object has no attribute 'item'
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
132237664 | MDExOlB1bGxSZXF1ZXN0NTg2NTY0NTA= | 749 | Retain label type in .to_dataset | max-sixty 5635139 | closed | 0 | 4 | 2016-02-08T19:38:59Z | 2016-02-16T03:51:06Z | 2016-02-14T23:34:56Z | MEMBER | 0 | pydata/xarray/pulls/749 | Not sure if this was a deliberate choice (maybe so integer indexing wasn't disrupted?). This allows objects as dataset keys / labels. Let me know your thoughts and I'll add a what's new. (I know I have a few PRs dangling - apologies - will go back and clean them up soon) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
115364729 | MDExOlB1bGxSZXF1ZXN0NDk4NzQyODc= | 646 | Selection works with PeriodIndex-like indexes | max-sixty 5635139 | closed | 0 | 4 | 2015-11-05T20:13:44Z | 2015-11-18T01:03:06Z | 2015-11-06T05:44:01Z | MEMBER | 0 | pydata/xarray/pulls/646 | Resolves the most pressing issue in #645 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);