issues
7 rows where repo = 13221727, type = "issue", and user = 32069530, sorted by updated_at descending
Each record below is shown as: id | number | title | user | state | comments | created_at | updated_at (with closed_at and state_reason where present), followed by the issue body.
1115166039 | 6196 | Wrong list of coordinates when a singleton coordinate exists | lanougue (32069530) | open | 5 comments | created 2022-01-26T15:41:37Z | updated 2023-03-01T19:55:13Z

What happened? Here is some simple code:

What did you expect to happen? I expect a singleton coordinate of a dataset not to appear as a coordinate of the other coordinates present in the dataset. Minimal Complete Verifiable Example

Relevant log output: No response. Anything else we need to know?: No response.

Environment (INSTALLED VERSIONS): commit: None; python 3.8.12 (conda-forge, GCC 9.4.0), 64-bit; OS: Linux 3.12.53-60.30-default, x86_64, little-endian; LANG: en_US.UTF-8, LOCALE: ('en_US', 'UTF-8'); libhdf5 1.12.1; libnetcdf 4.8.1; xarray 0.19.0; pandas 1.3.5; numpy 1.20.3; scipy 1.6.3; netCDF4 1.5.8; cftime 1.5.1.1; dask 2021.10.0; distributed 2021.10.0; matplotlib 3.2.2; setuptools 60.5.0; pip 21.3.1; IPython 7.31.0; not installed: pydap, h5netcdf, h5py, Nio, zarr, nc_time_axis, PseudoNetCDF, rasterio, cfgrib, iris, bottleneck, cartopy, seaborn, numbagg, pint, conda, pytest, sphinx.
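The original snippet was not captured in this export, but the reported behaviour can be sketched with a hypothetical minimal example (the names "x" and "a" are assumptions): a scalar coordinate is attached to every variable in the dataset, so it also appears among the coordinates of the other coordinates themselves.

```python
import xarray as xr

# Hypothetical reconstruction of the report: a dataset with a dimension
# coordinate "x" and a singleton (scalar) coordinate "a".
ds = xr.Dataset(coords={"x": [10, 20, 30], "a": 1})

# The scalar coordinate "a" is broadcast onto every variable, so it also
# shows up in the coordinates of the coordinate "x" itself:
print(list(ds["x"].coords))
```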
911513701 | 5436 | bug or unclear definition of combine_attrs with xr.merge() | lanougue (32069530) | open | 13 comments | created 2021-06-04T13:43:39Z | updated 2022-09-22T17:27:13Z

Hi all, I use the latest version of xarray (0.18) and I have some problems. Here are very simple examples:

2) When using combine_attrs='drop_conflicts', elevation and velocity keep their own units and ds has no attrs. 3) Then, if we set elevation's units to be the same as velocity's units and use combine_attrs='drop_conflicts', ds gets a new attribute, which is this common units value. In conclusion, the definition of the combine_attrs flag is really not clear: it seems to control the attrs of the final merged dataset but, in reality, it can also affect the attributes of the merged variables. From my point of view, the behaviour of combine_attrs is not consistent across the available options. I would expect merged variables to be left (as much as possible) untouched by the merge, and combine_attrs to control only the attributes of the merged dataset. Thanks
reactions: +1 (1)
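The numbered examples were lost in the export; the scenario can be sketched with a hypothetical reconstruction (the names elevation/velocity and the units values are assumptions based on the body above). With conflicting units each variable keeps its own attrs; the report was that matching units then surfaced as a dataset-level attribute in xarray 0.18.

```python
import xarray as xr

# Hypothetical reconstruction: two DataArrays whose "units" attributes
# first conflict, then match.
elevation = xr.DataArray([1.0, 2.0], dims="x", name="elevation",
                         attrs={"units": "m"})
velocity = xr.DataArray([3.0, 4.0], dims="x", name="velocity",
                        attrs={"units": "m/s"})

# Conflicting units: each variable keeps its own attrs.
ds = xr.merge([elevation, velocity], combine_attrs="drop_conflicts")
print(ds.attrs, ds["elevation"].attrs, ds["velocity"].attrs)

# Matching units: the report (xarray 0.18) was that the common value
# then appeared as a dataset-level attribute.
elevation_same = elevation.copy()
elevation_same.attrs["units"] = "m/s"
ds2 = xr.merge([elevation_same, velocity], combine_attrs="drop_conflicts")
print(ds2.attrs)
```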
1183777627 | 6423 | interpolation of xarray does not preserve attributes | lanougue (32069530) | open | 1 comment | created 2022-03-28T17:47:33Z | updated 2022-05-21T20:31:40Z

What is your issue? Hi all, interpolation over an xarray variable does not preserve its attributes. Here is a minimal example:
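The minimal example itself was not captured; a hypothetical stand-in (array shape and the "units" attribute are assumptions) shows the pattern, along with the global keep_attrs option that forces attributes to survive. Whether attrs are kept by default is version-dependent.

```python
import numpy as np
import xarray as xr

# Hypothetical minimal example: interpolate a DataArray that carries a
# "units" attribute.
da = xr.DataArray(np.arange(4.0), dims="x",
                  coords={"x": np.arange(4.0)}, attrs={"units": "m"})

out = da.interp(x=[0.5, 1.5])
print(out.attrs)  # reported to come back empty at the time of the issue

# The global keep_attrs option forces attributes to be preserved:
with xr.set_options(keep_attrs=True):
    kept = da.interp(x=[0.5, 1.5])
print(kept.attrs)
```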
595813283 | 3946 | removing unnecessary dimension | lanougue (32069530) | open | 8 comments | created 2020-04-07T11:57:36Z | updated 2022-05-02T23:07:33Z

Hi everyone, sometimes I generate DataArrays that are invariant along a dimension. I was not able to find a function or a simple workaround to get rid of these dimensions (unless I already know their names). I would like something like a combination of "reduce" and "np.allclose". For the moment I use the equivalent code below, which works but is not efficient. Could it be a candidate for a native, efficient xarray function? This could drastically reduce memory usage where relevant. Thanks

```python
import numpy as np
import xarray as xr

ds = xr.DataArray([[1., 2.], [1., 2.]], dims=('x', 'y'))
dims_to_remove = list()
for d in ds.dims:
    # a dimension is redundant if every slice along it equals the first one
    if np.all(ds[{d: 0}] == ds):
        dims_to_remove.append(d)
ds = ds[dict.fromkeys(dims_to_remove, 0)]
ds = ds.squeeze()
```
1110623911 | 6183 | [FEATURE]: dimension attributes are lost when stacking an xarray | lanougue (32069530) | closed | 2 comments | created 2022-01-21T15:49:47Z | updated 2022-03-17T17:11:44Z | closed 2022-03-17T17:11:44Z

Is your feature request related to a problem? No. Describe the solution you'd like: Hi all, when stacking an array

Describe alternatives you've considered: No response. Additional context: No response.
state_reason: completed
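The body of this report is truncated in the export, but the behaviour named in the title can be sketched with a hypothetical example (coordinate names and the "units" attribute are assumptions): attrs on a dimension coordinate were reported to disappear after stack(); the issue was later closed as completed.

```python
import numpy as np
import xarray as xr

# Hypothetical illustration: a dimension coordinate carrying attrs
# (the (dims, data, attrs) coordinate form), then a stack().
da = xr.DataArray(np.zeros((2, 2)), dims=("x", "y"),
                  coords={"x": ("x", [0, 1], {"units": "m"}),
                          "y": [0, 1]})
print(da["x"].attrs)  # {'units': 'm'}

stacked = da.stack(z=("x", "y"))
print(stacked["x"].attrs)  # reported to come back empty at the time
```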
817885693 | 4968 | swap_coords() function? | lanougue (32069530) | open | 0 comments | created 2021-02-27T10:00:53Z | updated 2021-02-27T10:01:52Z

Hi all, I have a DataArray with a coordinate 'time'. I want to swap the time coordinate with another coordinate that is not yet in the array. swap_dims() allows giving a non-existing dimension, but I cannot define the coords at the same time, so I usually use a workaround as follows.

I think it would be interesting to be able to do the last two lines in one call: basically, a swap_coords() function.
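The workaround snippet was lost in the export; a hypothetical version of the two-step pattern it describes (the "distance" coordinate and its values are assumptions) is to assign the new coordinate along "time" and then swap the dimension over to it.

```python
import numpy as np
import xarray as xr

# Hypothetical sketch of the described workaround.
da = xr.DataArray(np.arange(3.0), dims="time",
                  coords={"time": [0.0, 1.0, 2.0]})
distance = np.array([0.0, 10.0, 20.0])  # assumed replacement coordinate

# Step 1: attach the new coordinate along the existing dimension.
da = da.assign_coords(distance=("time", distance))
# Step 2: make it the dimension coordinate.
da = da.swap_dims({"time": "distance"})
print(da.dims)  # ("distance",)
```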
371906566 | 2494 | Concurrent access with multiple processes using open_mfdataset | lanougue (32069530) | closed | 4 comments | created 2018-10-19T10:52:46Z | updated 2018-10-26T12:37:30Z | closed 2018-10-26T12:37:30Z

Hi everyone, first: thanks to the developers for this amazing xarray library! Great piece of work! Here come my troubles: I run several (about 500) independent processes (dask distributed) that need simultaneous read-only access to the same (group of) netCDF files. I only pass the file-path strings to the processes, to avoid pickling a netCDF python object (a known issue). In each process, I run

but it leads to errors typical of many concurrent accesses failing: Invalid id, or Exception: CancelledError("('mul-484a58bf5830233021e08456b45eb60d', 0, 0)",), ... I was using the netCDF4 module with the parallel option set to True when playing with a single netCDF file, and it was running fine:

Output of
state_reason: completed
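The per-process call was not captured in the export; a minimal, self-contained sketch of the pattern described (file names, variable name, and data are all assumptions) is for each worker to receive only path strings and open the shared files itself with open_mfdataset, rather than unpickling a netCDF object.

```python
import numpy as np
import xarray as xr

# Create two small netCDF parts standing in for the shared file group
# (hypothetical names and data).
for i in range(2):
    part = xr.Dataset({"v": ("x", np.arange(2.0) + 2 * i)},
                      coords={"x": [2 * i, 2 * i + 1]})
    part.to_netcdf(f"part{i}.nc")

def read_mean(paths):
    # each worker/process runs this with the same list of path strings
    with xr.open_mfdataset(paths, combine="by_coords") as ds:
        return float(ds["v"].mean())

result = read_mean(["part0.nc", "part1.nc"])
print(result)
```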
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```