issues
45 rows where comments = 1 and user = 14371165 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2241095068 | PR_kwDOAMm_X85sixE5 | 8935 | Use Variable.stack instead of np.ravel | Illviljan 14371165 | open | 0 | 1 | 2024-04-12T23:04:35Z | 2024-04-13T08:27:13Z | MEMBER | 1 | pydata/xarray/pulls/8935 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1410608825 | PR_kwDOAMm_X85A4RjC | 7173 | Add LineCollection plot | Illviljan 14371165 | open | 0 | 1 | 2022-10-16T20:16:28Z | 2024-04-07T20:26:44Z | MEMBER | 1 | pydata/xarray/pulls/7173 | This adds a line plotter based on I wanted to replace xref: 48205622 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7173/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2215603817 | I_kwDOAMm_X86ED25p | 8892 | ffill's tolerance argument can be strings | Illviljan 14371165 | open | 0 | 1 | 2024-03-29T15:49:40Z | 2024-04-02T01:50:34Z | MEMBER | What happened?
But our typing assumes it's floats only: https://github.com/pydata/xarray/blob/2120808bbe45f3d4f0b6a01cd43bac4df4039092/xarray/core/resample.py#L69-L94

What did you expect to happen?

Since our pytests pass, mypy should pass as well.

Minimal Complete Verifiable Example

```python
import numpy as np
import pandas as pd
import xarray as xr

# https://github.com/pydata/xarray/blob/2120808bbe45f3d4f0b6a01cd43bac4df4039092/xarray/tests/test_groupby.py#L2016
# Test tolerance keyword for upsample methods bfill, pad, nearest
times = pd.date_range("2000-01-01", freq="1D", periods=2)
times_upsampled = pd.date_range("2000-01-01", freq="6h", periods=5)
array = xr.DataArray(np.arange(2), [("time", times)])

# Forward fill
actual = array.resample(time="6h").ffill(tolerance="12h")
expected = xr.DataArray([0.0, 0.0, 0.0, np.nan, 1.0], [("time", times_upsampled)])
xr.testing.assert_identical(expected, actual)
```

Environment

master |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
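Issue 8892 above notes that `tolerance` accepts pandas-style frequency strings while the annotation only allows floats. A minimal sketch of the widened annotation the issue implies; the `Tolerance` alias and the toy `ffill` wrapper are hypothetical and not the actual xarray signature:

```python
from __future__ import annotations

import datetime
from typing import Union

import pandas as pd

# Hypothetical alias; the real annotations live in xarray/core/resample.py.
Tolerance = Union[float, str, datetime.timedelta, None]


def ffill(tolerance: Tolerance = None) -> pd.Timedelta | None:
    # pandas parses strings such as "12h" (and timedeltas) into a Timedelta.
    return pd.Timedelta(tolerance) if tolerance is not None else None


print(ffill(tolerance="12h"))  # 0 days 12:00:00
```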
2215324218 | PR_kwDOAMm_X85rKmW7 | 8890 | Add typing to test_groupby.py | Illviljan 14371165 | closed | 0 | 1 | 2024-03-29T13:13:59Z | 2024-03-29T16:38:17Z | 2024-03-29T16:38:16Z | MEMBER | 0 | pydata/xarray/pulls/8890 | Enforce typing on all tests in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8890/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2203493958 | PR_kwDOAMm_X85qiskT | 8868 | Try ruff lint for numpy 2.0 | Illviljan 14371165 | closed | 0 | 1 | 2024-03-22T23:31:04Z | 2024-03-22T23:34:11Z | 2024-03-22T23:33:03Z | MEMBER | 1 | pydata/xarray/pulls/8868 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1418830429 | PR_kwDOAMm_X85BT_a6 | 7194 | Align naming convention with plt.subplots | Illviljan 14371165 | closed | 0 | 1 | 2022-10-21T20:31:34Z | 2024-03-13T21:44:17Z | 2022-10-23T11:58:12Z | MEMBER | 0 | pydata/xarray/pulls/7194 | I noticed that the normal notation for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2034545604 | PR_kwDOAMm_X85hnyZ9 | 8538 | Check that compat is equal to identical only once in dataset concat | Illviljan 14371165 | closed | 0 | 1 | 2023-12-10T21:42:20Z | 2023-12-13T09:27:11Z | 2023-12-13T09:27:11Z | MEMBER | 0 | pydata/xarray/pulls/8538 | Small change to avoid triggering several if-checks unnecessarily. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8538/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1966567753 | PR_kwDOAMm_X85eBydo | 8386 | Use get_args for duckarray checks | Illviljan 14371165 | closed | 0 | 1 | 2023-10-28T11:58:57Z | 2023-10-28T12:46:10Z | 2023-10-28T12:45:32Z | MEMBER | 1 | pydata/xarray/pulls/8386 | xref: #8376 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1944068053 | PR_kwDOAMm_X85c1sC5 | 8314 | Align MemoryCachedArray and PandasIndexingAdapter more | Illviljan 14371165 | closed | 0 | 1 | 2023-10-15T21:42:27Z | 2023-10-16T20:01:21Z | 2023-10-16T20:01:20Z | MEMBER | 0 | pydata/xarray/pulls/8314 | Seen in #8294. The issue is the IndexVariable, ExplicitlyIndexedNDArrayMixin lacks |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1944059054 | PR_kwDOAMm_X85c1qR0 | 8312 | Fix typing issues in tests | Illviljan 14371165 | closed | 0 | 1 | 2023-10-15T21:11:12Z | 2023-10-16T15:09:57Z | 2023-10-16T15:09:57Z | MEMBER | 0 | pydata/xarray/pulls/8312 | Seen in #8294. These tests implicitly made sure the type was correct in a way that type checkers wont understand. Make it explicit instead. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8312/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1944083743 | PR_kwDOAMm_X85c1vGB | 8315 | Handle numpy missing the array api function astype | Illviljan 14371165 | closed | 0 | 1 | 2023-10-15T22:32:17Z | 2023-10-16T06:25:20Z | 2023-10-16T06:25:19Z | MEMBER | 0 | pydata/xarray/pulls/8315 | This is how our get_array_namespace works: https://github.com/pydata/xarray/blob/dafd726c36e24ac77427513a4a149a6933353b66/xarray/core/duck_array_ops.py#L44-L48

Which usually works. But not for astype. Using np.array_api doesn't work because you have to use np.array_api.Array instead of np.ndarray:

```python
import numpy.array_api as nxp

nxp.astype(np.array([1, 2,]), np.dtype(float))

Traceback (most recent call last):
  File "C:\Users\J.W\AppData\Local\Temp\ipykernel_8616\23329947.py", line 1, in <cell line: 1>
    nxp.astype(np.array([1, 2,]), np.dtype(float))
  File "C:\Users\J.W\anaconda3\envs\xarray-tests\lib\site-packages\numpy\array_api\_data_type_functions.py", line 20, in astype
    return Array._new(x._array.astype(dtype=dtype, copy=copy))
AttributeError: 'numpy.ndarray' object has no attribute '_array'
```

I found it simpler to just change astype here. An alternative solution would be to use:

https://github.com/data-apis/array-api-compat
https://github.com/tomwhite/cubed/pull/317

Seen in #8294. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
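PR 8315 above works around NumPy lacking the array-API `astype` function. The sketch below only illustrates the general dispatch pattern (prefer a namespace-level `astype`, fall back to the `.astype` method); it is not the code that was merged:

```python
import numpy as np


def astype(x, dtype, /, *, copy=True):
    # Ask the array for its array-API namespace, if it advertises one.
    xp = getattr(x, "__array_namespace__", lambda: None)()
    if xp is not None and hasattr(xp, "astype"):
        return xp.astype(x, dtype, copy=copy)
    # Plain numpy arrays (pre array-API) expose astype as a method instead.
    return x.astype(dtype, copy=copy)


print(astype(np.array([1, 2]), np.dtype(float)))  # [1. 2.]
```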
1938790253 | PR_kwDOAMm_X85ckfW9 | 8295 | Remove real, imag, astype methods from NamedArray | Illviljan 14371165 | closed | 0 | 1 | 2023-10-11T21:44:26Z | 2023-10-13T15:58:07Z | 2023-10-13T15:58:06Z | MEMBER | 0 | pydata/xarray/pulls/8295 | These methods are not in the Array API. Instead convert the methods to functions in similar fashion as the array api. https://data-apis.org/array-api/latest/API_specification/index.html Not sure how to handle array compliant functions with an axis argument (max for example) but that's for a future PR. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1931329996 | PR_kwDOAMm_X85cK3Or | 8281 | Add high level from_array function in namedarray | Illviljan 14371165 | closed | 0 | 1 | 2023-10-07T12:19:23Z | 2023-10-10T17:10:37Z | 2023-10-10T17:10:37Z | MEMBER | 1 | pydata/xarray/pulls/8281 | The idea is to avoid as much normalization in the NamedArray class as possible. Different types are handled before initializing instead.
References:

https://github.com/tomwhite/cubed/blob/ea885193dd37d27917a24878b51bb086aaef5fb1/cubed/core/ops.py#L34
https://stackoverflow.com/questions/74633074/how-to-type-hint-a-generic-numpy-array
https://numpy.org/doc/stable/reference/arrays.scalars.html#scalars
https://github.com/numpy/numpy/blob/040ed2dc9847265c581a342301dd87d2b518a3c2/numpy/__init__.pyi#L1423
https://github.com/numpy/numpy/blob/040ed2dc9847265c581a342301dd87d2b518a3c2/numpy/_typing/_array_like.py#L32

Mypy issues:

https://github.com/python/typing/issues/548 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
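PR 8281 above argues for normalizing inputs in a `from_array` helper rather than inside the class. A minimal sketch of that pattern with a toy class; none of these names match the actual NamedArray implementation:

```python
from __future__ import annotations

from typing import Any, Hashable, Iterable

import numpy as np


class ToyNamedArray:
    """Assumes `data` is already a duck array; does no coercion itself."""

    def __init__(self, dims: tuple[Hashable, ...], data: Any):
        self.dims = dims
        self.data = data


def from_array(dims: Iterable[Hashable], data: Any) -> ToyNamedArray:
    # All the type juggling happens up front, before the class is constructed.
    if isinstance(data, np.ma.MaskedArray):
        data = data.filled(data.fill_value)  # drop the mask, keep a plain ndarray
    elif not hasattr(data, "__array_function__"):  # scalars, lists, tuples, ...
        data = np.asarray(data)
    return ToyNamedArray(tuple(dims), data)


print(from_array(["x"], [1, 2, 3]).data)  # [1 2 3]
```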
1916331957 | PR_kwDOAMm_X85bYX4z | 8240 | Bind T_DuckArray to NamedArray | Illviljan 14371165 | closed | 0 | 1 | 2023-09-27T21:11:58Z | 2023-09-28T16:18:26Z | 2023-09-28T16:18:26Z | MEMBER | 0 | pydata/xarray/pulls/8240 | Binding allows typing the .data property. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1549889322 | PR_kwDOAMm_X85IKePm | 7460 | Add abstractmethods to backend classes | Illviljan 14371165 | open | 0 | 1 | 2023-01-19T20:19:36Z | 2023-07-29T11:42:33Z | MEMBER | 1 | pydata/xarray/pulls/7460 | It's been unclear to me what methods are necessary to implement or not. I think decorating with
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
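PR 7460 above proposes making the required backend methods explicit. A minimal sketch of the idea using `abc.abstractmethod`; `BackendLike` is a stand-in, not xarray's real `BackendEntrypoint`:

```python
from abc import ABC, abstractmethod


class BackendLike(ABC):
    @abstractmethod
    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        """Required: every backend must implement this."""

    def guess_can_open(self, filename_or_obj) -> bool:
        """Optional: a harmless default is provided."""
        return False


class IncompleteBackend(BackendLike):
    pass


try:
    IncompleteBackend()
except TypeError as err:
    # "Can't instantiate abstract class IncompleteBackend with abstract method open_dataset"
    print(err)
```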
1699099029 | PR_kwDOAMm_X85P8IoD | 7824 | Improve concat performance | Illviljan 14371165 | closed | 0 | 1 | 2023-05-07T14:54:06Z | 2023-06-02T14:36:11Z | 2023-06-02T14:36:11Z | MEMBER | 0 | pydata/xarray/pulls/7824 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1710752209 | PR_kwDOAMm_X85QjIMH | 7844 | Improve to_dask_dataframe performance | Illviljan 14371165 | closed | 0 | 1 | 2023-05-15T20:08:24Z | 2023-05-25T20:08:54Z | 2023-05-25T20:08:54Z | MEMBER | 0 | pydata/xarray/pulls/7844 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1697987899 | PR_kwDOAMm_X85P4g05 | 7820 | Pin pint to 0.20 | Illviljan 14371165 | closed | 0 | 1 | 2023-05-05T17:59:40Z | 2023-05-06T07:27:28Z | 2023-05-06T07:27:28Z | MEMBER | 0 | pydata/xarray/pulls/7820 | Newest pint crashes our tests for some reason, pin it for now.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1638243008 | PR_kwDOAMm_X85MxepP | 7668 | Pull Request Labeler - Use a released version | Illviljan 14371165 | closed | 0 | 1 | 2023-03-23T20:18:49Z | 2023-03-23T20:29:04Z | 2023-03-23T20:29:04Z | MEMBER | 0 | pydata/xarray/pulls/7668 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1554036799 | PR_kwDOAMm_X85IYHUz | 7472 | Avoid in-memory broadcasting when converting to_dask_dataframe | Illviljan 14371165 | closed | 0 | 1 | 2023-01-24T00:15:01Z | 2023-01-26T17:00:24Z | 2023-01-26T17:00:23Z | MEMBER | 0 | pydata/xarray/pulls/7472 | Turns out that there's a call to
Debugging script:
```python
import dask.array as da
import xarray as xr
import numpy as np
chunks = 5000
# I have to restart the pc if running with this:
# dim1_sz = 100_000
# dim2_sz = 100_000
# Does not crash when using the following constants, >5 gig RAM increase though:
dim1_sz = 40_000
dim2_sz = 40_000
x = da.random.random((dim1_sz, dim2_sz), chunks=chunks)
ds = xr.Dataset(
{
"x": xr.DataArray(
data=x,
dims=["dim1", "dim2"],
coords={"dim1": np.arange(0, dim1_sz), "dim2": np.arange(0, dim2_sz)},
)
}
)
# with dask.config.set(**{"array.slicing.split_large_chunks": True}):
df = ds.to_dask_dataframe()
print(df)
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
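PR 7472 above removes an eager broadcast from `to_dask_dataframe`. As a hedged illustration of the general pitfall (not necessarily the exact call that was removed), broadcasting through dask stays lazy while routing through NumPy materializes the full array:

```python
import dask.array as da

x = da.random.random((40_000,), chunks=5_000)

# Lazy: dask only records the operation, nothing is allocated yet.
lazy = da.broadcast_to(x[:, None], (40_000, 40_000)).ravel()
print(lazy)  # still a chunked dask array, nothing has been computed

# Eager (left commented out on purpose): pulling x into NumPy and ravelling the
# broadcast view copies it into a dense 40_000 x 40_000 float64 block, ~12.8 GB.
# import numpy as np
# eager = np.broadcast_to(np.asarray(x)[:, None], (40_000, 40_000)).ravel()
```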
1555497796 | PR_kwDOAMm_X85Ic_wm | 7474 | Add benchmarks for to_dataframe and to_dask_dataframe | Illviljan 14371165 | closed | 0 | 1 | 2023-01-24T18:48:26Z | 2023-01-24T21:00:39Z | 2023-01-24T20:13:30Z | MEMBER | 0 | pydata/xarray/pulls/7474 | Related to #7472. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1523232313 | PR_kwDOAMm_X85G2q5I | 7426 | Add lazy backend ASV test | Illviljan 14371165 | closed | 0 | 1 | 2023-01-06T22:01:26Z | 2023-01-12T16:00:05Z | 2023-01-11T18:56:25Z | MEMBER | 0 | pydata/xarray/pulls/7426 | This tests xr.open_dataset without any slow file reading that can quickly become the majority of the performance time. Related to #7374. Timings for the new ASV-tests: ``` [ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset ok
[ 50.85%] ··· ======== ============
chunks |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1485474624 | PR_kwDOAMm_X85E1fDn | 7370 | absolufy-imports - Only in xarray folder | Illviljan 14371165 | closed | 0 | 1 | 2022-12-08T21:57:58Z | 2022-12-10T11:42:32Z | 2022-12-09T16:55:12Z | MEMBER | 0 | pydata/xarray/pulls/7370 | This reverts some of commit 6e77f5e8942206b3e0ab08c3621ade1499d8235b and #7204. Apparently using it on all folders is not a good idea, follow pandas example. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1445870847 | PR_kwDOAMm_X85CuWCd | 7281 | Use a default value for constant dimensions | Illviljan 14371165 | closed | 0 | 1 | 2022-11-11T18:41:16Z | 2022-11-20T09:59:38Z | 2022-11-20T09:59:38Z | MEMBER | 0 | pydata/xarray/pulls/7281 |
With main we get 18 for constant arrays, but 36 if markersize was undefined. This seems a bit inconsistent to me. This PR adds a default value instead for constant arrays. Follow up to #7272. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1440711212 | PR_kwDOAMm_X85Cc31j | 7272 | Handle division by zero in _Normalize._calc_widths | Illviljan 14371165 | closed | 0 | 1 | 2022-11-08T18:35:55Z | 2022-11-11T06:27:50Z | 2022-11-11T06:27:50Z | MEMBER | 0 | pydata/xarray/pulls/7272 | Found an issue when constant values were used. Now if constant values are found, it'll default to the minimum width value instead. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1422864082 | PR_kwDOAMm_X85BhW94 | 7218 | Rename FacetGrid.axes to FacetGrid.axs in tests | Illviljan 14371165 | closed | 0 | 1 | 2022-10-25T17:59:39Z | 2022-10-27T17:45:20Z | 2022-10-27T17:45:19Z | MEMBER | 0 | pydata/xarray/pulls/7218 | Follow up to #7194. This fixes all the warnings related to the change. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1410526253 | PR_kwDOAMm_X85A4Bki | 7169 | Rework docs about scatter plots | Illviljan 14371165 | closed | 0 | 1 | 2022-10-16T15:37:25Z | 2022-10-17T13:40:01Z | 2022-10-17T13:40:01Z | MEMBER | 0 | pydata/xarray/pulls/7169 | Show off some more possibilities with the scatter plot. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1340745375 | PR_kwDOAMm_X849RXXN | 6923 | Add Self in xarray.core.types | Illviljan 14371165 | closed | 0 | 1 | 2022-08-16T18:48:15Z | 2022-08-22T12:24:05Z | 2022-08-22T12:24:05Z | MEMBER | 0 | pydata/xarray/pulls/6923 | Adds `Self` to `xarray.core.types`. Won't really become useful until https://github.com/python/mypy/issues/11871 is fixed. But it can be used with pyright at least. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6923/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1052952145 | PR_kwDOAMm_X84uf1L8 | 5988 | Check for py version instead of try/except when importing entry_points | Illviljan 14371165 | closed | 0 | 1 | 2021-11-14T14:23:18Z | 2022-08-12T09:08:25Z | 2021-11-14T20:16:57Z | MEMBER | 0 | pydata/xarray/pulls/5988 | This removes the need for the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5988/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1099631638 | PR_kwDOAMm_X84w0pGe | 6159 | Import Literal from typing instead of typing_extensions | Illviljan 14371165 | closed | 0 | 1 | 2022-01-11T21:26:59Z | 2022-08-12T09:06:58Z | 2022-01-11T21:59:16Z | MEMBER | 0 | pydata/xarray/pulls/6159 | Small edit to #6121. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1171424128 | PR_kwDOAMm_X840jZCq | 6371 | Remove test_rasterio_vrt_network | Illviljan 14371165 | closed | 0 | 1 | 2022-03-16T18:49:29Z | 2022-08-12T09:06:02Z | 2022-03-17T06:25:22Z | MEMBER | 0 | pydata/xarray/pulls/6371 | This test has been failing with a 404 error for a while. Remove the test because a lot of the functionality is implemented in rioxarray.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1167394407 | PR_kwDOAMm_X840WQW1 | 6351 | Run pyupgrade on core/groupby | Illviljan 14371165 | closed | 0 | 1 | 2022-03-12T20:46:15Z | 2022-08-12T09:05:37Z | 2022-03-13T04:21:54Z | MEMBER | 0 | pydata/xarray/pulls/6351 | Minor touch up looking through #5950.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1133868003 | PR_kwDOAMm_X84yl7lK | 6270 | Update pyupgrade to py38-plus | Illviljan 14371165 | closed | 0 | 1 | 2022-02-12T10:58:00Z | 2022-08-12T09:05:31Z | 2022-02-12T13:50:31Z | MEMBER | 0 | pydata/xarray/pulls/6270 | xref: #6244 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1034382021 | PR_kwDOAMm_X84tlqIi | 5893 | Only run asv benchmark when labeled | Illviljan 14371165 | closed | 0 | 1 | 2021-10-24T10:44:17Z | 2022-08-12T09:02:27Z | 2021-10-24T11:35:42Z | MEMBER | 0 | pydata/xarray/pulls/5893 | Small fix to #5796. The benchmark was only intended to run when the PR has the label |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1332546810 | PR_kwDOAMm_X8482ZvM | 6897 | Type xr.tutorial | Illviljan 14371165 | closed | 0 | 1 | 2022-08-09T00:20:19Z | 2022-08-12T08:59:30Z | 2022-08-10T07:40:18Z | MEMBER | 0 | pydata/xarray/pulls/6897 | Add some typing to the open_dataset functions. Was doing some debugging and I only got |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1046458609 | I_kwDOAMm_X84-X7Dx | 5945 | Start using `|` instead of `Union` or `Optional` when typing | Illviljan 14371165 | closed | 0 | 1 | 2021-11-06T08:12:57Z | 2022-06-04T04:26:03Z | 2022-06-04T04:26:03Z | MEMBER | Is your feature request related to a problem? Please describe.
To make the typing easier to read, it is now possible to use `|` instead of `Union` or `Optional`. Here's an example of how it looks in pandas: https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/_core.py#L116-L134

Describe the solution you'd like

Replace, for example:

* This would likely require adding

References

https://www.python.org/dev/peps/pep-0604/ |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5945/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
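Issue 5945 above asks for PEP 604 syntax. The illustration below shows the before/after; the assumption that `from __future__ import annotations` is what the truncated bullet refers to is mine:

```python
from __future__ import annotations  # lets `X | Y` annotations run on Python < 3.10

from typing import Optional, Union


# Old style:
def old(x: Union[int, float], label: Optional[str] = None) -> Union[int, None]:
    return int(x)


# Proposed style, same meaning, easier to read:
def new(x: int | float, label: str | None = None) -> int | None:
    return int(x)
```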
1236115720 | PR_kwDOAMm_X84306YV | 6609 | Add setuptools as dependency in ASV benchmark CI | Illviljan 14371165 | closed | 0 | 1 | 2022-05-14T20:33:09Z | 2022-05-15T17:14:24Z | 2022-05-14T23:06:44Z | MEMBER | 0 | pydata/xarray/pulls/6609 | Adding
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1174585854 | I_kwDOAMm_X85GAsH- | 6384 | xr.concat adds an extra array around elements | Illviljan 14371165 | closed | 0 | 1 | 2022-03-20T15:25:49Z | 2022-03-21T04:49:23Z | 2022-03-21T04:49:23Z | MEMBER | What happened?

When concatenating dataarrays with

Minimal Complete Verifiable Example

```Python
import numpy as np
import xarray as xr

shape = (2, 3, 4)
darray = xr.DataArray(np.linspace(0, 1, num=np.prod(shape)).reshape(shape))
bins = [-1, 0, 1, 2]
a = darray.groupby_bins("dim_0", bins).mean(...)
a_nan = np.nan * a.isel(**{"dim_0_bins": -1})

out = xr.concat([a, a_nan], dim="dim_0_bins")
print(out["dim_0_bins"])
```

Relevant log output

Current result:

Should be:

```python
<xarray.DataArray 'dim_0_bins' (dim_0_bins: 4)>
array([Interval(-1, 0, closed='right'), Interval(0, 1, closed='right'),
       Interval(1, 2, closed='right'), Interval(1, 2, closed='right')],
      dtype=object)
Coordinates:
  * dim_0_bins  (dim_0_bins) object (-1, 0] (0, 1] (1, 2] (1, 2]
```

Anything else we need to know?

No response

Environment
xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.6 | packaged by conda-forge | (default, Jul 11 2021, 03:37:25) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: ('Swedish_Sweden', '1252')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.16.3.dev99+gc19467fb
pandas: 1.3.1
numpy: 1.21.5
scipy: 1.7.1
netCDF4: 1.5.6
pydap: installed
h5netcdf: 0.11.0
h5py: 2.10.0
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: 1.3.1
PseudoNetCDF: installed
rasterio: 1.2.6
cfgrib: None
iris: 3.0.4
bottleneck: 1.3.2
dask: 2021.10.0
distributed: 2021.10.0
matplotlib: 3.4.3
cartopy: 0.19.0.post1
seaborn: 0.11.1
numbagg: 0.2.1
fsspec: 2021.11.1
cupy: None
pint: 0.17
sparse: 0.12.0
setuptools: 49.6.0.post20210108
pip: 21.2.4
conda: None
pytest: 6.2.4
IPython: 7.31.0
sphinx: 4.3.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
931796211 | MDU6SXNzdWU5MzE3OTYyMTE= | 5546 | Limit number of displayed dimensions in repr | Illviljan 14371165 | closed | 0 | 1 | 2021-06-28T17:25:18Z | 2022-01-03T17:38:48Z | 2022-01-03T17:38:48Z | MEMBER | What happened: The dimensions don't seem to be limited when there are too many of them. See example below. This slows down the repr significantly and is quite unreadable to me. What you expected to happen: To be limited so that it aligns with whatever the maximum line length is for variables. It's also fine if it continues on a couple of rows below, in a similar fashion to variables. Minimal Complete Verifiable Example:
This is probably a bit of an edge case. My real datasets usually have around 12 "dimensions" and coords, +2000 variables, 50 attrs.
Anything else we need to know?:

Environment:

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 15:50:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
byteorder: little
LC_ALL: None
LANG: en
libhdf5: 1.10.6
libnetcdf: None
xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.3
scipy: 1.6.3
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.2.1
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.05.0
distributed: 2021.05.0
matplotlib: 3.4.2
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 49.6.0.post20210108
pip: 21.1.2
conda: 4.10.1
pytest: 6.2.4
IPython: 7.24.1
sphinx: 4.0.2 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5546/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
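The MCVE for issue 5546 did not survive this export; the following is a hedged reconstruction of the kind of dataset it describes, with hypothetical sizes, where every dimension name still lands on the single `Dimensions:` line of the repr:

```python
import numpy as np
import xarray as xr

# Hypothetical count, only meant to make the repr header very wide.
n = 200
ds = xr.Dataset({f"var{i}": (f"dim_{i}", np.arange(3)) for i in range(n)})

# The variable listing is truncated by the display_max_rows option, but all
# 200 dimension names are still printed on the one "Dimensions:" line.
print(ds)
```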
781168967 | MDExOlB1bGxSZXF1ZXN0NTUwOTM4NDAy | 4776 | Speed up missing._get_interpolator | Illviljan 14371165 | closed | 0 | 1 | 2021-01-07T09:32:40Z | 2021-05-18T18:17:06Z | 2021-01-08T15:55:39Z | MEMBER | 0 | pydata/xarray/pulls/4776 | Importing scipy.interpolate is slow and should only be done when necessary. Test case from 200ms to 6ms.
By default, the upstream dev CI is disabled on pull request and push events. You can override this behavior per commit by adding a |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
797453161 | MDExOlB1bGxSZXF1ZXN0NTY0NDQ4Mzg3 | 4850 | Allow "unit" in label_from_attrs | Illviljan 14371165 | closed | 0 | 1 | 2021-01-30T16:02:34Z | 2021-05-18T18:16:58Z | 2021-01-30T16:26:04Z | MEMBER | 0 | pydata/xarray/pulls/4850 | It is also popular to call units |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
891165324 | MDExOlB1bGxSZXF1ZXN0NjQ0MDc2NDg3 | 5297 | Add whats new for dataset interpolation with non-numerics | Illviljan 14371165 | closed | 0 | 1 | 2021-05-13T16:00:51Z | 2021-05-18T18:15:35Z | 2021-05-13T16:30:21Z | MEMBER | 0 | pydata/xarray/pulls/5297 | Follow up for #5008 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
892748525 | MDExOlB1bGxSZXF1ZXN0NjQ1MzcyMjE3 | 5319 | Convert new_shape from list to tuple in _unstack_once | Illviljan 14371165 | closed | 0 | 1 | 2021-05-16T20:04:05Z | 2021-05-18T18:14:00Z | 2021-05-16T23:50:10Z | MEMBER | 0 | pydata/xarray/pulls/5319 | Having
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
892411608 | MDExOlB1bGxSZXF1ZXN0NjQ1MTIxMzQ0 | 5314 | Add version variable for optional imports in pycompat | Illviljan 14371165 | closed | 0 | 1 | 2021-05-15T10:43:35Z | 2021-05-18T18:13:32Z | 2021-05-16T23:50:31Z | MEMBER | 0 | pydata/xarray/pulls/5314 | It was difficult to do version checks with optional imports so I added variables in pycompat and removed some of the imports I found. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
779938616 | MDU6SXNzdWU3Nzk5Mzg2MTY= | 4770 | Interpolation always returns floats | Illviljan 14371165 | open | 0 | 1 | 2021-01-06T03:16:43Z | 2021-01-12T16:30:54Z | MEMBER | What happened: When interpolating datasets, integer arrays are forced to floats.

What you expected to happen: To retain the same dtype after interpolation.

Minimal Complete Verifiable Example:

```python
import numpy as np
import dask.array as da

a = np.arange(0, 2)
b = np.core.defchararray.add("long_variable_name", a.astype(str))
coords = dict(time=da.array([0, 1]))
data_vars = dict()
for v in b:
    data_vars[v] = xr.DataArray(
        name=v,
        data=da.array([0, 1], dtype=int),
        dims=["time"],
        coords=coords,
    )
ds1 = xr.Dataset(data_vars)
print(ds1)
Out[35]:
<xarray.Dataset>
Dimensions:              (time: 4)
Coordinates:
  * time                 (time) float64 0.0 0.5 1.0 2.0
Data variables:
    long_variable_name0  (time) int32 dask.array<chunksize=(4,), meta=np.ndarray>
    long_variable_name1  (time) int32 dask.array<chunksize=(4,), meta=np.ndarray>

# Interpolate:
ds1 = ds1.interp(
    time=da.array([0, 0.5, 1, 2]),
    assume_sorted=True,
    method="linear",
    kwargs=dict(fill_value="extrapolate"),
)

# dask array thinks it's an integer array:
print(ds1.long_variable_name0)
Out[55]:
<xarray.DataArray 'long_variable_name0' (time: 4)>
dask.array<dask_aware_interpnd, shape=(4,), dtype=int32, chunksize=(4,), chunktype=numpy.ndarray>
Coordinates:
  * time     (time) float64 0.0 0.5 1.0 2.0

# But once computed it turns out is a float:
print(ds1.long_variable_name0.compute())
Out[38]:
<xarray.DataArray 'long_variable_name0' (time: 4)>
array([0. , 0.5, 1. , 2. ])
Coordinates:
  * time     (time) float64 0.0 0.5 1.0 2.0
```

Anything else we need to know?:
An easy first step is to also force

The more difficult way is to somehow be able to change back the dataarrays into the old dtype without affecting performance. I did a test simply adding

I was thinking the conversion to floats in scipy could be avoided altogether by adding a (non-)public option to ignore any dtype checks and just let the user handle the "unsafe" interpolations.

Related: https://github.com/scipy/scipy/issues/11093

Environment:

Output of xr.show_versions()

xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.16.2
pandas: 1.1.5
numpy: 1.17.5
scipy: 1.4.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2020.12.0
distributed: 2020.12.0
matplotlib: 3.3.2
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 51.0.0.post20201207
pip: 20.3.3
conda: 4.9.2
pytest: 6.2.1
IPython: 7.19.0
sphinx: 3.4.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4770/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
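Issue 4770 above mentions changing the dataarrays back to the old dtype as the harder option. A hedged sketch of that manual round-trip (with the caveat the issue itself raises: the cast is unsafe once NaNs appear); it is a workaround, not the proposed fix:

```python
import numpy as np
import xarray as xr

da_int = xr.DataArray(np.array([0, 10], dtype=int), dims="time", coords={"time": [0, 1]})

out = da_int.interp(time=[0, 0.5, 1])  # interpolation promotes to float
back = out.astype(da_int.dtype)        # manual, lossy cast back to the old dtype
print(out.dtype, back.dtype)           # e.g. float64 int64
```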
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);