issues
9 rows where user = 5637662 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
818944970 | MDU6SXNzdWU4MTg5NDQ5NzA= | 4975 | scatter plot with row or col gets hue wrong | dschwoerer 5637662 | closed | 0 | 3 | 2021-03-01T14:54:57Z | 2023-03-13T19:47:51Z | 2023-03-13T19:47:51Z | CONTRIBUTOR | What happened: The colorbar/hue is only correct for the last subplot; the colorbars for the other subplots are ignored.

What you expected to happen: The hue/colorbar is correct: the total min/max values are calculated and used instead.

Minimal Complete Verifiable Example:

```python
import xarray as xr
import numpy as np

ds = xr.Dataset()
ds["a"] = ("x", "y"), np.arange(4).reshape(2, 2)
ds.plot.scatter("a", "a", row="x", hue="a")

import matplotlib.pyplot as plt
plt.show()
```

Anything else we need to know?: Replacing row with col yields the same wrong result. I verified this is in master (5735e163bea43ec9bc3c2e640fbf25a1d4a9d0c0) and 0.16.2.

Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.9.1 (default, Jan 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] python-bits: 64 OS: Linux OS-release: 4.12.14-122.57-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.3 xarray: 0.16.2 pandas: 1.0.5 numpy: 1.19.4 scipy: 1.5.2 netCDF4: 1.5.3 pydap: None h5netcdf: None h5py: 3.1.0 Nio: None zarr: None cftime: 1.1.3 nc_time_axis: 1.2.0 PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.2.1 dask: 2021.01.1 distributed: None matplotlib: 3.3.4 cartopy: None seaborn: None numbagg: None pint: None setuptools: 49.1.3 pip: 20.2.2 conda: None pytest: None IPython: 7.18.1 sphinx: 3.2.1 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
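The fix proposed for this issue in PR 4978 (further down in this table) works by specifying vmin and vmax. The same idea can serve as a user-side workaround in the meantime; a minimal sketch, assuming `ds.plot.scatter` forwards `vmin`/`vmax` to the colormap handling of a continuous hue:

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch

import numpy as np
import xarray as xr

ds = xr.Dataset()
ds["a"] = ("x", "y"), np.arange(4).reshape(2, 2)

# Workaround sketch (not the merged fix): compute the global data range
# once and pass vmin/vmax explicitly, so every faceted subplot maps hue
# onto the same colorbar scale instead of its own per-subplot range.
vmin = float(ds["a"].min())
vmax = float(ds["a"].max())
ds.plot.scatter(x="a", y="a", row="x", hue="a", vmin=vmin, vmax=vmax)
```

Because each facet only sees its own slice of the data, fixing the limits from the full array is what makes the shared colorbar honest for all subplots.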
987551524 | MDU6SXNzdWU5ODc1NTE1MjQ= | 5762 | Plotting of labelled data fails | dschwoerer 5637662 | closed | 0 | 3 | 2021-09-03T08:45:34Z | 2023-03-10T20:01:19Z | 2023-03-10T20:01:18Z | CONTRIBUTOR | What happened: Xarray makes assumptions about what is or is not plottable. Xarray should not do that; it should just ask the plotting library whether it actually can plot the data.

What you expected to happen: No additional checking, just plot it. If something cannot be plotted, matplotlib (or whatever backend is used) will check anyway, and knows better.

Minimal Complete Verifiable Example:

```python
import xarray as xr
import matplotlib.pyplot as plt

da = xr.DataArray(data=[1, 2], coords={"x": ["abc", "cde"]}, dims="x")
print(da)
try:
    da.plot()
except TypeError:
    plt.plot(da.x, da)
    print("But it is possible")
plt.show()
```

Anything else we need to know?:
I can submit a PR to remove Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: fcebe5e5f3bcd2d93df614966431c845384a3b2f python: 3.9.7 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] python-bits: 64 OS: Linux OS-release: 5.13.12-200.fc34.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.3 xarray: 0.18.2 pandas: 1.2.5 numpy: 1.20.1 scipy: 1.6.2 netCDF4: 1.5.5.1 pydap: None h5netcdf: None h5py: 3.1.0 Nio: None zarr: None cftime: 1.4.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.2.1 dask: 2021.08.1 distributed: None matplotlib: 3.4.3 cartopy: None seaborn: None numbagg: None pint: 0.16.1 setuptools: 54.2.0 pip: 21.0.1 conda: None pytest: 6.2.2 IPython: 7.20.0 sphinx: 3.4.3 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
not_planned | xarray 13221727 | issue | ||||||
1412895383 | I_kwDOAMm_X85UNxKX | 7181 | xarray 2022.10.0 much slower then 2022.6.0 | dschwoerer 5637662 | closed | 0 | 17 | 2022-10-18T09:38:52Z | 2022-11-30T23:36:56Z | 2022-11-30T23:36:56Z | CONTRIBUTOR | What is your issue? xbout's test suite finishes with 2022.6.0 in less than an hour; with 2022.10.0 it gets aborted after 6 hours. I haven't managed to debug what the issue is. Git bisect will not work, as 2022.9.0 is broken due to #7111 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
799335646 | MDExOlB1bGxSZXF1ZXN0NTY1OTk1OTI3 | 4857 | Add support for errorbars in scatter plots | dschwoerer 5637662 | open | 0 | 0 | 2021-02-02T14:33:23Z | 2022-06-09T14:50:17Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4857 |
I have added the possibility to add errorbars to scatter plots. I know that this needs tests and so on, but I wanted to know whether this is of general interest. I have found https://github.com/pydata/xarray/pull/2264 for dataarrays, which wasn't merged; one of the issues was that the return type changed, as is the case here. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
987559143 | MDExOlB1bGxSZXF1ZXN0NzI2NjI3NTQ5 | 5763 | remove _ensure_plottable | dschwoerer 5637662 | open | 0 | 6 | 2021-09-03T08:55:19Z | 2022-06-09T14:50:16Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5763 | The plotting backend does more reliable checking, so removing `_ensure_plottable` avoids false negatives that cause easily avoidable plot failures
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
819030802 | MDExOlB1bGxSZXF1ZXN0NTgyMTk4MjUx | 4978 | ensure all plots share the same hue | dschwoerer 5637662 | closed | 0 | 3 | 2021-03-01T16:23:45Z | 2021-05-14T17:24:18Z | 2021-05-14T17:24:18Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4978 | By specifying vmin and vmax, the colorbar is correct for all subplots
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
805389572 | MDU6SXNzdWU4MDUzODk1NzI= | 4885 | Dataset.mean changes variables without specified dimension | dschwoerer 5637662 | closed | 0 | 2 | 2021-02-10T10:37:07Z | 2021-04-24T20:00:45Z | 2021-04-24T20:00:45Z | CONTRIBUTOR | What happened:
If I apply `mean` over a dimension, variables without that dimension are also changed.

What you expected to happen: Variables without the dimension are not changed.

Minimal Complete Verifiable Example:

```python
import xarray as xr

ds = xr.Dataset()
ds["pos"] = [1, 2, 3]
ds["data"] = ("pos", "time"), [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
ds["var"] = "pos", [2, 3, 4]
print(ds.mean(dim="time"))
```

Anything else we need to know?:
That also makes it unnecessarily slow, as variables without that dimension wouldn't need to be read from disk.
It is easy enough to work around:
Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.9.1 (default, Jan 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] python-bits: 64 OS: Linux OS-release: 5.10.13-200.fc33.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.3 xarray: 0.16.2 pandas: 1.0.5 numpy: 1.19.4 scipy: 1.5.2 netCDF4: 1.5.3 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.1.3 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.2.1 dask: None distributed: None matplotlib: 3.3.4 cartopy: None seaborn: None numbagg: None pint: 0.13 setuptools: 49.1.3 pip: 20.2.2 conda: None pytest: 6.0.2 IPython: 7.18.1 sphinx: 3.2.1 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
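The workaround mentioned above can be sketched as follows: reduce only the variables that actually carry the dimension, then merge the untouched rest back so their dtype is preserved. The selection logic here is illustrative, not the author's actual code or the fix merged in PR 5207:

```python
import xarray as xr

# Same dataset as the MCVE: "data" has a "time" dimension, "var" does not.
ds = xr.Dataset()
ds["pos"] = [1, 2, 3]
ds["data"] = ("pos", "time"), [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
ds["var"] = "pos", [2, 3, 4]

# Split the data variables by whether they have the reduced dimension,
# average only the first group, and merge the second group back unchanged.
with_time = [name for name, v in ds.data_vars.items() if "time" in v.dims]
without_time = [name for name in ds.data_vars if name not in with_time]
reduced = xr.merge([ds[with_time].mean(dim="time"), ds[without_time]])

print(reduced["var"].dtype)  # integer dtype preserved
```

With lazily loaded data this also avoids reading the unaffected variables at all, which addresses the performance point above.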
865002281 | MDExOlB1bGxSZXF1ZXN0NjIxMTM2NzUw | 5207 | Skip mean over empty axis | dschwoerer 5637662 | closed | 0 | 3 | 2021-04-22T14:13:33Z | 2021-04-24T20:00:45Z | 2021-04-24T20:00:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5207 | Avoids changing the datatype if the data does not have the requested axis.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
700907310 | MDU6SXNzdWU3MDA5MDczMTA= | 4420 | Multi-mesh support | dschwoerer 5637662 | open | 0 | 0 | 2020-09-14T08:52:46Z | 2020-09-14T08:52:46Z | CONTRIBUTOR | Is your feature request related to a problem? Please describe. I am not sure what the best way to handle data from different meshes is. My simulations use logically Cartesian meshes, but they can be of different sizes (number of points). They describe 2D surfaces in 3D and aren't necessarily connected.

Describe the solution you'd like I would like to store the different meshes in the same dataset with a nice interface.

Describe alternatives you've considered I have considered several options:

1) Use a list of datasets. That would be easy to implement, but it feels wrong, as this is really one dataset.
2) Add another dimension with an index for the mesh number. That would require expanding all meshes to the largest one (in each spatial dimension). The coordinates (x, y, z) would then also need to be 3D, with a 2D slice for each index. That has some overhead, as all structures are expanded, and it is not easy to see what shape each slice has. It might also cause issues with plotting, as the data appears to be 3D rather than 2D.
3) Merge all data, e.g. in the x direction, with an index offset in x so that different meshes have different indices. Then only the y dimension would need to be aligned, so it would involve less storage cost. Plotting would work somewhat; only ensuring that non-connected meshes are not plotted as connected might be a bit tricky.
4) Suffix all data and coordinates with an index. That would allow e.g. plotting by iterating over the index - a variation of 1), but it allows storing everything as one file.
5) Use unstructured grids. That would avoid the additional storage cost, as the full grid info is stored anyway, but then plotting or searching in the data would be (much) more expensive.

Additional context
The data is not point-centered, but area/volume based (see #1475) - thus recovering whether data needs to be plotted together or not in 3) would be doable, but transforming from the xarray format to the
format for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4420/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
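Option 4 from the issue above can be sketched as follows. The variable names ("R", "Z") and the suffixing convention ("x_0", "R_1", ...) are hypothetical choices for illustration, not something xarray prescribes:

```python
import numpy as np
import xarray as xr

# Sketch of option 4: keep every logically Cartesian mesh in one Dataset
# by suffixing variable and dimension names with a mesh index.
meshes = [
    {"R": np.zeros((4, 3)), "Z": np.ones((4, 3))},
    {"R": np.zeros((6, 5)), "Z": np.ones((6, 5))},
]

ds = xr.Dataset()
for i, mesh in enumerate(meshes):
    for name, values in mesh.items():
        # Per-mesh dimensions keep the differently sized meshes independent,
        # so no padding to a common shape is needed.
        ds[f"{name}_{i}"] = (f"x_{i}", f"y_{i}"), values

# "Plot by iterating over the index": each surface is recovered by suffix.
for i in range(len(meshes)):
    surface = ds[[f"R_{i}", f"Z_{i}"]]
```

The cost of this scheme is that generic operations (selection, reduction) no longer apply across meshes in one call, which is the trade-off against options 2) and 3).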