issues
10 rows where repo = 13221727 and "updated_at" is on date 2023-09-19 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1902482417 | I_kwDOAMm_X85xZZPx | 8209 | Protect `main` from mistaken pushes | max-sixty 5635139 | closed | 0 | 5 | 2023-09-19T08:36:45Z | 2023-09-19T22:00:42Z | 2023-09-19T22:00:41Z | MEMBER | What is your issue? Hi team — apologies, but I mistakenly pushed to main. Less than a minute later I pushed another commit reverting it. I'll check my git shortcuts tomorrow to ensure this can't happen by default — I thought I had the push remote set correctly, but possibly something doesn't use that. Would we consider adding protection from pushes to (Will close this issue) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8209/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1901520970 | PR_kwDOAMm_X85amcNE | 8203 | Add T_DuckArray type hint to Variable.data | Illviljan 14371165 | closed | 0 | 3 | 2023-09-18T18:33:50Z | 2023-09-19T17:41:28Z | 2023-09-19T15:23:25Z | MEMBER | 0 | pydata/xarray/pulls/8203 | The typing of Variable.data has been very wide. This has led to confusion in downstream functions that use it. This PR is a start at cleaning this up by defining a T_DuckArray typevar that represents an Array API-compliant array. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1673579421 | I_kwDOAMm_X85jwMud | 7765 | Revisiting Xarray's Minimum dependency versions policy | jhamman 2443309 | open | 0 | 9 | 2023-04-18T17:46:03Z | 2023-09-19T15:54:09Z | MEMBER | What is your issue? We have recently had a few reports expressing frustration with our minimum dependency version policy. This issue aims to discuss whether changes to our policy are needed. Background
Diagnosis
Discussion questions
Action items
xref: https://github.com/pydata/xarray/issues/4179, https://github.com/pydata/xarray/pull/7461 Moderator's note: I suspect a number of folks will want to comment on this issue with "Please support Python 3.8 for longer...". If that is the nature of your comment, please just give this a ❤️ reaction rather than filling up the discussion. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
reopened | xarray 13221727 | issue | |||||||
1902086612 | PR_kwDOAMm_X85aoYuf | 8206 | flox: Set fill_value=np.nan always. | dcherian 2448579 | open | 0 | 0 | 2023-09-19T02:19:49Z | 2023-09-19T02:23:26Z | MEMBER | 1 | pydata/xarray/pulls/8206 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1869879398 | PR_kwDOAMm_X85Y8P4c | 8118 | Add Coordinates `set_xindex()` and `drop_indexes()` methods | benbovy 4160723 | open | 0 | 0 | 2023-08-28T14:28:24Z | 2023-09-19T01:53:18Z | MEMBER | 0 | pydata/xarray/pulls/8118 |
I don't think that we need to copy most API from Dataset / DataArray to

```python
import dask.array as da
import numpy as np
import xarray as xr

coords = (
    xr.Coordinates(
        coords={"x": da.arange(100_000_000), "y": np.arange(100)},
        indexes={},
    )
    .set_xindex("x", DaskIndex)
    .set_xindex("y", xr.indexes.PandasIndex)
)
ds = xr.Dataset(coords=coords)

<xarray.Dataset>
Dimensions:  (x: 100000000, y: 100)
Coordinates:
  * x        (x) int64 dask.array<chunksize=(16777216,), meta=np.ndarray>
  * y        (y) int64 0 1 2 3 4 5 6 7 8 9 10 ... 90 91 92 93 94 95 96 97 98 99
Data variables:
    *empty*
Indexes:
    x        DaskIndex
```
 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1801393806 | PR_kwDOAMm_X85VVV4q | 7981 | Document that Coarsen accepts coord func as callable | TomNicholas 35968931 | open | 0 | 0 | 2023-07-12T17:01:31Z | 2023-09-19T01:18:49Z | MEMBER | 0 | pydata/xarray/pulls/7981 | Documents a hidden feature I noticed yesterday, corrects incorrect docstrings, and tidies up some of the typing internally.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
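To illustrate what the PR above documents: `coord_func` decides how each window of coordinate values is reduced when coarsening. A plain-Python sketch of the idea; the helper `coarsen_1d` is hypothetical and not xarray API:

```python
from typing import Callable, Sequence


def coarsen_1d(values: Sequence[float], window: int,
               coord_func: Callable[[Sequence[float]], float]) -> list[float]:
    """Reduce consecutive non-overlapping windows of `values` with `coord_func`.

    Hypothetical helper mimicking how a coarsened coordinate is built:
    one output value per window, produced by the supplied callable.
    """
    return [coord_func(values[i:i + window]) for i in range(0, len(values), window)]


times = [0, 1, 2, 3, 4, 5]
# Mean of each window of 2 (the usual default behaviour for coordinates):
means = coarsen_1d(times, 2, lambda w: sum(w) / len(w))
# A callable such as `max` keeps the last timestamp of each window instead:
maxes = coarsen_1d(times, 2, max)
```

The point of the documented feature is that any such callable can be passed, not just the string names of built-in reductions.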
1865945636 | PR_kwDOAMm_X85YvIJ4 | 8114 | Move `.rolling_exp` functions from `reduce` to `apply_ufunc` | max-sixty 5635139 | closed | 0 | 10 | 2023-08-24T21:57:19Z | 2023-09-19T01:13:27Z | 2023-09-19T01:13:22Z | MEMBER | 0 | pydata/xarray/pulls/8114 |
A similar change should solve #6528, but let's get one finished first... ~Posting for discussion, will comment inline~ Ready for merge |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1326238990 | I_kwDOAMm_X85PDM0O | 6870 | `rolling_exp` loses coords | max-sixty 5635139 | closed | 0 | 4 | 2022-08-02T18:27:44Z | 2023-09-19T01:13:23Z | 2023-09-19T01:13:23Z | MEMBER | What happened? We lose the time coord here —

```python
ds = xr.tutorial.load_dataset("air_temperature")
ds.rolling_exp(time=5).mean()

<xarray.Dataset>
Dimensions:  (lat: 25, time: 2920, lon: 53)
Coordinates:
  * lat      (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0
  * lon      (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0
Dimensions without coordinates: time
Data variables:
    air      (time, lat, lon) float32 241.2 242.5 243.5 ... 296.4 296.1 295.7
```

(I realize I wrote this; I didn't think this used to happen, but either it always did or I didn't write good enough tests... mea culpa)

What did you expect to happen? We keep the time coords, like we do for normal
Minimal Complete Verifiable Example
MVCE confirmation
Relevant log output: No response. Anything else we need to know? No response. Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.13 (main, May 24 2022, 21:13:51)
[Clang 13.1.6 (clang-1316.0.21.2)]
python-bits: 64
OS: Darwin
OS-release: 21.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2022.6.0
pandas: 1.4.3
numpy: 1.21.6
scipy: 1.8.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.12.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.12.0
distributed: 2021.12.0
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.1
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 62.3.2
pip: 22.1.2
conda: None
pytest: 7.1.2
IPython: 8.4.0
sphinx: 4.3.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
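For context on what `rolling_exp(time=5).mean()` computes over the data: an exponentially weighted moving mean. A pure-Python sketch of the recurrence, assuming the pandas-style convention `alpha = 2 / (span + 1)` with `adjust=True` normalization; this is an illustration, not xarray's or numbagg's implementation:

```python
def ewm_mean(values, span):
    """Exponentially weighted mean (adjust=True style): each output is a
    weighted average of all points so far, with weights (1 - alpha) ** k
    for the point k steps in the past."""
    alpha = 2.0 / (span + 1.0)
    out = []
    num = 0.0  # running weighted sum of values
    den = 0.0  # running sum of weights
    for v in values:
        num = (1.0 - alpha) * num + v
        den = (1.0 - alpha) * den + 1.0
        out.append(num / den)
    return out


ewm_mean([1.0, 2.0, 3.0], span=5)  # first output is exactly 1.0
```

The bug in this issue is orthogonal to the arithmetic: the values came out right, but the `time` coordinate was dropped from the result.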
1225191984 | I_kwDOAMm_X85JBvIw | 6570 | h5netcdf-engine now reads attributes with array length 1 as scalar | erik-mansson 16100116 | closed | 0 | 1 | 2022-05-04T10:34:06Z | 2023-09-19T01:02:24Z | 2023-09-19T01:02:24Z | NONE | What is your issue? The h5netcdf engine for reading NetCDF4 files was recently changed (https://github.com/h5netcdf/h5netcdf/pull/151) so that when reading attributes, any 1D array/list of length 1 gets turned into a scalar element/item. The change happened with version 0.14.0. The issue is that the xarray documentation still describes the old h5netcdf behaviour on https://docs.xarray.dev/en/stable/user-guide/io.html?highlight=attributes%20h5netcdf#netcdf Could we mention this also on https://docs.xarray.dev/en/stable/generated/xarray.open_dataset.html#xarray.open_dataset under the engine argument, or just make sure it links to the above page? I initially looked under https://docs.xarray.dev/en/stable/user-guide/io.html?highlight=string#string-encoding because my issue was for a string array/list, but maybe it is too much to mention there if this is a general change that affects attributes of all types. As explained on the h5netcdf issue tracker, the reason for dropping/squeezing length-1 array attributes to scalars is compatibility with the other NetCDF4 engine, or NetCDF in general (and there might be some varying opinions about how good that is, vs. fully using features available in HDF5). (Interesting to note: when writing, an attribute with a Python list of length 1 does give an array of length 1 in the HDF5/NetCDF4 file; the dropping of the array dimension happens only when reading.) Adding the invalid_netcdf=True argument when loading does not change the behaviour. Maybe it could be interesting to use it to generally allow length-1 attribute arrays? Now, I think every usage of array-attributes will need conversions like
Minimal example: This serves to clarify what happens. The issue is not about reverting to the old behaviour (although I liked it), just updating the xarray documentation.

```python
import xarray as xr
import numpy as np

ds = xr.Dataset()
ds['stuff'] = xr.DataArray(np.random.randn(2), dims='x')
ds['stuff'].attrs['strings_0D_one'] = 'abc'
ds['stuff'].attrs['strings_1D_two'] = ['abc', 'def']
ds['stuff'].attrs['strings_1D_one'] = ['abc']
path = 'demo.nc'
ds.to_netcdf(path, engine='h5netcdf', format='netCDF4')

ds2 = xr.load_dataset(path, engine='h5netcdf')
print(type(ds2['stuff'].attrs['strings_0D_one']).__name__, repr(ds2['stuff'].attrs['strings_0D_one']))
print(type(ds2['stuff'].attrs['strings_1D_two']).__name__, repr(ds2['stuff'].attrs['strings_1D_two']))
print(type(ds2['stuff'].attrs['strings_1D_one']).__name__, repr(ds2['stuff'].attrs['strings_1D_one']))
```

With h5netcdf 0.12.0 (python: 3.7.9, OS: Windows, OS-release: 10, libhdf5: 1.10.4, xarray: 0.20.1, pandas: 1.3.4, numpy: 1.21.5, netCDF4: None, h5netcdf: 0.12.0, h5py: 2.10.0) the printouts are:
With h5netcdf: 1.0.0 (python: 3.8.11, OS: Linux, OS-release: 3.10.0-1160.49.1.el7.x86_64, libhdf5: 1.10.4, xarray: 0.20.1, pandas: 1.4.2, numpy: 1.21.2, netCDF4: None, h5netcdf: 1.0.0, h5py: 2.10.0) the printouts are:
I have tested that direct reading by h5py.File gives str, ndarray, ndarray so the change is not in the writing or h5py. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
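The truncated final sentence of the report above suggests that downstream code now has to cope with attributes arriving as either a scalar or a length-1 array, depending on the h5netcdf version. One possible normalization helper; the name `attr_as_list` is hypothetical and not part of xarray or h5netcdf:

```python
def attr_as_list(value):
    """Return an attribute value as a list, whichever form the reading
    engine delivered: a plain scalar, a length-1 sequence, or a longer
    sequence. Strings and bytes count as scalars, not character sequences."""
    if isinstance(value, (str, bytes)):
        return [value]
    try:
        return list(value)   # ndarray, list, tuple, ...
    except TypeError:
        return [value]       # non-iterable scalar (int, float, np.float64, ...)
```

Code written this way reads attributes identically against both the pre-0.14.0 and post-0.14.0 h5netcdf behaviour.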
1899413232 | PR_kwDOAMm_X85affPA | 8194 | remove invalid statement from doc/user-guide/io.rst | kmuehlbauer 5821660 | closed | 0 | 0 | 2023-09-16T12:01:42Z | 2023-09-19T01:02:23Z | 2023-09-19T01:02:23Z | MEMBER | 0 | pydata/xarray/pulls/8194 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8194/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
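Given this schema, the query behind the page ("10 rows where repo = 13221727 and updated_at is on date 2023-09-19, sorted by updated_at descending") can be reproduced with the stdlib sqlite3 module. The abridged table and the substr-based date filter below are assumptions for illustration, not necessarily the SQL Datasette generates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abridged version of the issues table, keeping only the columns the query touches.
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        updated_at TEXT, repo INTEGER
    )"""
)
# A few rows copied from the listing above.
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?)",
    [
        (1902482417, 8209, "Protect `main` from mistaken pushes",
         "2023-09-19T22:00:42Z", 13221727),
        (1901520970, 8203, "Add T_DuckArray type hint to Variable.data",
         "2023-09-19T17:41:28Z", 13221727),
        (1673579421, 7765, "Revisiting Xarray's Minimum dependency versions policy",
         "2023-09-19T15:54:09Z", 13221727),
    ],
)

# ISO-8601 timestamps sort lexicographically, so ORDER BY on the TEXT column
# gives chronological order; substr(..., 1, 10) extracts the date part.
rows = conn.execute(
    """SELECT number, title FROM issues
       WHERE repo = ? AND substr(updated_at, 1, 10) = '2023-09-19'
       ORDER BY updated_at DESC
       LIMIT 10""",
    (13221727,),
).fetchall()
```

With these three rows inserted, the most recently updated issue (8209) comes first, matching the page's sort order.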