issues
11 rows where user = 4666753 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
668256331 | MDU6SXNzdWU2NjgyNTYzMzE= | 4288 | hue argument for xarray.plot.step() for plotting multiple histograms over shared bins | jaicher 4666753 | open | 0 | 2 | 2020-07-30T00:30:37Z | 2022-04-17T19:27:28Z | CONTRIBUTOR | **Is your feature request related to a problem? Please describe.** I love how efficiently we can plot line data for different observations using the `hue` argument. **Describe the solution you'd like** I think we should have a `hue` kwarg for `xarray.plot.step()` as well. **Describe alternatives you've considered**
**Additional context** I didn't evaluate the other plotting functions, but I suspect that others could also appropriately take a `hue` argument but do not yet support one. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
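The feature above amounts to drawing one step line per value of a grouping dimension over shared bin edges. A minimal stdlib sketch of that idea (hypothetical helper name, not xarray API): expand shared edges plus per-group bin values into step-plot coordinates, once per "hue" group.

```python
def step_coords(edges, values):
    """Expand bin edges and per-bin values into step-plot (x, y) pairs.

    Each value is held constant across its bin, producing the flat
    segments a step plot draws: (e0, v0), (e1, v0), (e1, v1), ...
    """
    xs, ys = [], []
    for i, v in enumerate(values):
        xs += [edges[i], edges[i + 1]]
        ys += [v, v]
    return xs, ys


# one histogram per "hue" group, all over the same shared bins
shared_edges = [0.0, 1.0, 2.0, 3.0]
by_group = {"a": [5, 2, 7], "b": [1, 4, 3]}
lines = {g: step_coords(shared_edges, counts) for g, counts in by_group.items()}
```

Each entry of `lines` could then be handed to any line-drawing routine, which is roughly what a `hue`-aware `plot.step()` would do internally.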
1177669703 | PR_kwDOAMm_X84029qU | 6402 | No chunk warning if empty | jaicher 4666753 | closed | 0 | 6 | 2022-03-23T06:43:54Z | 2022-04-09T20:27:46Z | 2022-04-09T20:27:40Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6402 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1177665302 | I_kwDOAMm_X85GMb8W | 6401 | Unnecessary warning when specifying `chunks` opening dataset with empty dimension | jaicher 4666753 | closed | 0 | 0 | 2022-03-23T06:38:25Z | 2022-04-09T20:27:40Z | 2022-04-09T20:27:40Z | CONTRIBUTOR | **What happened?** I receive unnecessary warnings when opening Zarr datasets with empty dimensions/arrays using the `chunks` argument. If an array has zero size (due to an empty dimension), it is saved as a single chunk regardless of Dask chunking on other dimensions (#5742). If the `chunks` argument does not match that single stored chunk, the chunk-mismatch warning is raised anyway. **What did you expect to happen?** I expect no warning to be raised when there is no data:
**Minimal Complete Verifiable Example**
```python
import xarray as xr
import numpy as np
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
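The report above implies a simple guard, sketched here with a hypothetical helper (not xarray's actual code path): suppress the chunk-mismatch warning whenever the variable has zero size, since an empty array is stored as a single chunk no matter what chunking was requested.

```python
from math import prod


def should_warn_about_chunks(shape, stored_chunks, requested_chunks):
    """Decide whether to warn that requested chunks differ from stored ones.

    Hypothetical sketch: an array with any zero-length dimension is
    stored as one chunk regardless of the request, so a mismatch there
    is meaningless and should never warn.
    """
    if prod(shape) == 0:  # no data -> nothing to re-chunk, never warn
        return False
    return stored_chunks != requested_chunks


# empty dimension: no warning even though the chunk specs differ
assert should_warn_about_chunks((10, 0), (10, 1), (1, 1)) is False
```

The real fix lives in xarray's Zarr backend; this only illustrates the zero-size short-circuit the issue asks for.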
552987067 | MDU6SXNzdWU1NTI5ODcwNjc= | 3712 | [Documentation/API?] {DataArray,Dataset}.sortby is stable sort? | jaicher 4666753 | open | 0 | 0 | 2020-01-21T16:27:37Z | 2022-04-09T02:26:34Z | CONTRIBUTOR | I noticed that `sortby` appears to perform a stable sort. It is not explicitly stated in the docs that the sorting will be stable. If this function is meant to always be stable, I think the documentation should explicitly state this. If not, I think it would be helpful to have an optional argument to ensure that the sort is kept stable in case the implementation changes in the future. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3712/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 } |
xarray 13221727 | issue | ||||||||
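Stability means records with equal keys keep their original relative order. Python's built-in `sorted` guarantees this, which is easy to check with duplicate keys; the same property is what the issue asks xarray to document for `sortby`.

```python
# records sharing the same key: a stable sort must preserve their input order
records = [("b", 1), ("a", 2), ("b", 0), ("a", 1)]
stably_sorted = sorted(records, key=lambda r: r[0])

# all "a" records come first, each group still in original relative order
assert stably_sorted == [("a", 2), ("a", 1), ("b", 1), ("b", 0)]
```

An unstable sort would be free to emit `("a", 1)` before `("a", 2)`, which is exactly the behavior change the proposed documentation (or flag) would rule out.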
1076377174 | I_kwDOAMm_X85AKDZW | 6062 | Import hangs when matplotlib installed but no display available | jaicher 4666753 | closed | 0 | 3 | 2021-12-10T03:12:55Z | 2021-12-29T07:56:59Z | 2021-12-29T07:56:59Z | CONTRIBUTOR | **What happened:** On a device with no display available, importing xarray without setting the matplotlib backend hangs on import of matplotlib.pyplot since #5794 was merged. **What you expected to happen:** I expect to be able to run `import xarray` without hanging, even when no display is available. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
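A common workaround for this class of hang is to force a non-interactive matplotlib backend before anything imports `pyplot`, for example via the `MPLBACKEND` environment variable. This sketch only sets the variable (it deliberately does not import matplotlib, so it is safe on any machine):

```python
import os

# Force the non-interactive Agg backend so importing matplotlib.pyplot
# (directly or indirectly via another library) cannot block waiting for
# a display. setdefault respects a backend the user already chose.
# This must run before the first pyplot import to take effect.
os.environ.setdefault("MPLBACKEND", "Agg")
```

Placing this at the very top of a headless script or service entry point sidesteps the hang regardless of which library triggers the pyplot import.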
1076384122 | PR_kwDOAMm_X84vp8Ru | 6064 | Revert "Single matplotlib import" | jaicher 4666753 | closed | 0 | 3 | 2021-12-10T03:24:54Z | 2021-12-29T07:56:59Z | 2021-12-29T07:56:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6064 | Revert pydata/xarray#5794, which causes failure to import when used without display (issue #6062).
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1033142897 | I_kwDOAMm_X849lIJx | 5883 | Failing parallel writes to_zarr with regions parameter? | jaicher 4666753 | closed | 0 | 1 | 2021-10-22T03:33:02Z | 2021-10-22T18:37:06Z | 2021-10-22T18:37:06Z | CONTRIBUTOR | **What happened:** Following guidance on how to use the `region` keyword in `to_zarr`, I write non-overlapping regions from multiple threads, but the roundtripped data does not match what was written. **What you expected to happen:** I expect all the writes to take place safely so long as the regions I write to do not overlap (they do not). **Minimal Complete Verifiable Example:**
```python
path = "tmp.zarr"
NTHREADS = 4  # when 1, things work as expected

import multiprocessing.dummy as mp  # threads, instead of processes
import numpy as np
import dask.array as da
import xarray as xr

# dummy values for metadata
xr.Dataset(
    {"x": (("a", "b"), -da.ones((10, 7), chunks=(None, 1)))},
    {"apple": ("a", -da.ones(10, dtype=int, chunks=(1,)))},
).to_zarr(path, mode="w", compute=False)

# actual values to save
ds = xr.Dataset(
    {"x": (("a", "b"), np.random.uniform(size=(10, 7)))},
    {"apple": ("a", np.arange(10))},
)

# save them using NTHREADS
with mp.Pool(NTHREADS) as p:
    p.map(
        lambda idx: ds.isel(a=slice(idx, 1 + idx)).to_zarr(
            path, mode="r+", region=dict(a=slice(idx, 1 + idx))
        ),
        range(10),
    )

ds_roundtrip = xr.open_zarr(path).load()  # open what we just saved over multiple threads

# perfect match for x on some slices of a, but when NTHREADS > 1,
# x has very different values or NaN on other slices of a
xr.testing.assert_allclose(ds, ds_roundtrip)  # fails when NTHREADS > 1
```
**Anything else we need to know?:**
Environment: Output of <tt>xr.show_versions()</tt>``` INSTALLED VERSIONS ------------------ commit: None python: 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) [GCC 9.3.0] python-bits: 64 OS: Linux OS-release: 5.4.72-microsoft-standard-WSL2 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: C.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.10.6 libnetcdf: 4.8.0 xarray: 0.19.0 pandas: 1.3.3 numpy: 1.21.2 scipy: 1.7.1 netCDF4: 1.5.7 pydap: None h5netcdf: None h5py: 2.10.0 Nio: None zarr: 2.10.1 cftime: 1.5.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2021.08.1 distributed: 2021.08.1 matplotlib: 3.4.1 cartopy: None seaborn: 0.11.2 numbagg: None pint: None setuptools: 58.2.0 pip: 21.3 conda: None pytest: None IPython: 7.28.0 sphinx: None ``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
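The failure mode above is consistent with region writes that split a Zarr chunk: two workers writing different regions of the same chunk race on its read-modify-write. A stdlib sketch (hypothetical helper, not xarray API) of choosing region slices aligned to chunk boundaries so that no two writers ever touch the same chunk:

```python
def chunk_aligned_regions(dim_size, chunk_size):
    """Yield slices covering [0, dim_size) that start and end on chunk
    boundaries, so parallel region writes never share a Zarr chunk.
    The final slice may be shorter when chunk_size doesn't divide dim_size."""
    for start in range(0, dim_size, chunk_size):
        yield slice(start, min(start + chunk_size, dim_size))


regions = list(chunk_aligned_regions(10, 4))
assert regions == [slice(0, 4), slice(4, 8), slice(8, 10)]
```

In the MCVE, `x` has a single chunk along `a` (`chunks=(None, 1)`), so every per-index region write along `a` shares that chunk, matching this explanation.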
980605063 | MDExOlB1bGxSZXF1ZXN0NzIwODExNjQ4 | 5742 | Fix saving chunked datasets with zero length dimensions | jaicher 4666753 | closed | 0 | 2 | 2021-08-26T20:12:08Z | 2021-10-10T00:12:34Z | 2021-10-10T00:02:42Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5742 | This fixes #5741 by loading into memory all zero-length variables before saving with `to_zarr`.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
980549418 | MDU6SXNzdWU5ODA1NDk0MTg= | 5741 | Dataset.to_zarr fails on dask array with zero-length dimension (ZeroDivisionError) | jaicher 4666753 | closed | 0 | 0 | 2021-08-26T18:57:00Z | 2021-10-10T00:02:42Z | 2021-10-10T00:02:42Z | CONTRIBUTOR | **What happened:** I have a dask-backed Dataset with a zero-length dimension; saving it with `to_zarr` raises `ZeroDivisionError`. **What you expected to happen:** I expect it to save without any errors. **Minimal Complete Verifiable Example:** the following commands fail.
```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "x": (("a", "b", "c"), np.empty((75, 0, 30))),
        "y": (("a", "c"), np.random.normal(size=(75, 30))),
    },
    {"a": np.arange(75), "b": [], "c": np.arange(30)},
).chunk({})

ds.to_zarr("fails.zarr")  # RAISES ZeroDivisionError
```
**Anything else we need to know?:** If we load all the empty arrays to numpy, it is able to save correctly. That is:
I'll make a PR using this solution, but not sure if this is a deeper bug that should be fixed in zarr or in a nicer way. Environment: Output of <tt>xr.show_versions()</tt>``` INSTALLED VERSIONS ------------------ commit: None python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27) [GCC 9.3.0] python-bits: 64 OS: Linux OS-release: 5.4.72-microsoft-standard-WSL2 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: C.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.10.6 libnetcdf: 4.8.0 xarray: 0.19.0 pandas: 1.2.4 numpy: 1.20.2 scipy: 1.7.1 netCDF4: 1.5.6 pydap: None h5netcdf: None h5py: 2.10.0 Nio: None zarr: 2.9.3 cftime: 1.5.0 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2021.08.1 distributed: 2021.08.1 matplotlib: 3.4.1 cartopy: None seaborn: None numbagg: None pint: None setuptools: 49.6.0.post20210108 pip: 21.0.1 conda: None pytest: None IPython: 7.22.0 sphinx: None ``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
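The workaround described in the report (load zero-size arrays into memory before saving) needs a reliable zero-size test. A stdlib sketch with a plain dict of shapes standing in for the dataset's variables (helper names are hypothetical):

```python
from math import prod


def is_zero_size(shape):
    # any dimension of length 0 makes the whole array empty
    return prod(shape) == 0


def names_to_load(variable_shapes):
    """Given {name: shape}, return the variables that should be loaded
    into memory before to_zarr, per the workaround in the report."""
    return [name for name, shape in variable_shapes.items() if is_zero_size(shape)]


# shapes from the MCVE: "x" is empty along dimension "b", "y" is not
shapes = {"x": (75, 0, 30), "y": (75, 30)}
assert names_to_load(shapes) == ["x"]
```

In the actual fix (#5742), the equivalent of loading each name in `names_to_load` happens inside the save path, so callers don't need the workaround themselves.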
559958918 | MDExOlB1bGxSZXF1ZXN0MzcxMDM0MDY2 | 3752 | Fix swap_dims() index names (issue #3748) | jaicher 4666753 | closed | 0 | 5 | 2020-02-04T20:25:18Z | 2020-02-24T23:33:05Z | 2020-02-24T22:34:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3752 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
559841620 | MDU6SXNzdWU1NTk4NDE2MjA= | 3748 | `swap_dims()` incorrectly changes underlying index name | jaicher 4666753 | closed | 0 | 1 | 2020-02-04T16:41:25Z | 2020-02-24T22:34:58Z | 2020-02-24T22:34:58Z | CONTRIBUTOR | **MCVE Code Sample**
```python
import xarray as xr

# create data array with named dimension and named coordinate
x = xr.DataArray([1], {"idx": [2], "y": ("idx", [3])}, ["idx"], name="x")

# what's our current index? (idx, this is fine)
x.indexes  # prints "idx: Int64Index([2], dtype='int64', name='idx')"

# swap dim so that y is our dimension, what's index now?
x.swap_dims({"idx": "y"}).indexes  # prints "y: Int64Index([3], dtype='int64', name='idx')"
```
The dimension name is appropriately swapped but the pandas index name is incorrect. **Expected Output**
```python
# swap dim so that y is our dimension, what's index now?
x.swap_dims({"idx": "y"}).indexes  # prints "y: Int64Index([3], dtype='int64', name='y')"
```
**Problem Description** This is a problem because running **Output of `xr.show_versions()`**
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
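The bug reduces to renaming the mapping key without renaming the index object it stores. A plain-dict sketch of the corrected behavior (hypothetical structures, not xarray internals):

```python
def swap_index(indexes, old_dim, new_dim):
    """Swap a dimension key and also rename the stored index, mirroring
    the fix in #3752: both the mapping key and the index's own name must
    change, or .indexes keeps reporting the stale name."""
    swapped = {k: v for k, v in indexes.items() if k != old_dim}
    idx = dict(indexes[old_dim])  # copy so the input mapping is untouched
    idx["name"] = new_dim  # the step the buggy code omitted
    swapped[new_dim] = idx
    return swapped


before = {"idx": {"values": [2], "name": "idx"}}
after = swap_index(before, "idx", "y")
assert after == {"y": {"values": [2], "name": "y"}}
```

This simplification only models the name bookkeeping; the real `swap_dims()` also substitutes the new coordinate's values into the index.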