issues


5 rows where state = "open", type = "issue" and user = 32069530 sorted by updated_at descending


id 1115166039 (I_kwDOAMm_X85CeBVX) · issue #6196 · "Wrong list of coordinate when a singleton coordinate exists" · opened by lanougue (32069530) · open · 5 comments · created 2022-01-26T15:41:37Z · updated 2023-03-01T19:55:13Z

What happened?

Here is some simple code:

```python
a = xr.DataArray(np.arange(5), dims='x', coords={'x': np.arange(5)})
a = a.assign_coords({'y': 1})
```

Now calling `a['x']` or `a['x'].coords` shows `y` as a coordinate of `x`, which is unexpected to me.

What did you expect to happen?

I expect a singleton coordinate of a dataset not to become a coordinate of the other coordinates present in the dataset.

Minimal Complete Verifiable Example

```python
import xarray as xr
import numpy as np

a = xr.DataArray(np.arange(5), dims='x', coords={'x': np.arange(5)})
a = a.assign_coords({'y': 1})
print(a['x'].coords)
```

returns

```
Coordinates:
  * x        (x) int64 0 1 2 3 4
    y        int64 1
```
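As a workaround (a sketch, not an endorsed fix), the stray scalar coordinate can be dropped explicitly from the extracted coordinate with `DataArray.drop_vars`:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(5), dims='x', coords={'x': np.arange(5)})
a = a.assign_coords({'y': 1})

# a['x'] carries the scalar coordinate 'y'; drop it explicitly
x_only = a['x'].drop_vars('y')
print(x_only.coords)  # only 'x' remains
```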

Relevant log output

No response

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS

commit: None
python: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:57:06) [GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 3.12.53-60.30-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 0.19.0
pandas: 1.3.5
numpy: 1.20.3
scipy: 1.6.3
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.10.0
distributed: 2021.10.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 60.5.0
pip: 21.3.1
conda: None
pytest: None
IPython: 7.31.0
sphinx: None

Reactions: none
id 911513701 (MDU6SXNzdWU5MTE1MTM3MDE=) · issue #5436 · "bug or unclear definition of combine_attrs with xr.merge()" · opened by lanougue (32069530) · open · 13 comments · created 2021-06-04T13:43:39Z · updated 2022-09-22T17:27:13Z

Hi all, I use the latest version of xarray (0.18) and I have some problems. Here are very simple examples:

```python
velocity = xr.DataArray(7.3, name='velocity', attrs={'units': 'm/s'})
elevation = xr.DataArray(3.5, name='elevation', attrs={'units': 'm'})
```

1) When using `ds = xr.merge([velocity, elevation], combine_attrs='drop')`, I expect `ds` not to have any attrs, but I would expect the variables `velocity` and `elevation` to keep their own attributes. This is not the case: all variables lose their attributes.

2) When using `combine_attrs='drop_conflicts'`, `elevation` and `velocity` keep their own units and `ds` has no attrs.

3) Then, if we set the elevation units to be the same as the velocity units and use `combine_attrs='drop_conflicts'`, `ds` gets a new attribute, which is this common unit.

In conclusion, the definition of the `combine_attrs` flag is really not clear, because it seems to control the attrs of the final merged dataset but can, in reality, also affect the attributes of the merged variables. From my point of view, the behaviour of `combine_attrs` is not consistent across the available options.

I would expect merged variables to be (as much as possible) untouched when merged and the combine_attrs to only control the attributes of the merged dataset.
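In the meantime, one workaround (a sketch, simply copying the attributes back) is to reattach each variable's own attrs after the merge, so that `combine_attrs` only ends up governing the dataset-level attrs:

```python
import xarray as xr

velocity = xr.DataArray(7.3, name='velocity', attrs={'units': 'm/s'})
elevation = xr.DataArray(3.5, name='elevation', attrs={'units': 'm'})

# merge with combine_attrs='drop', then restore per-variable attrs
ds = xr.merge([velocity, elevation], combine_attrs='drop')
for da in (velocity, elevation):
    ds[da.name].attrs.update(da.attrs)
```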

Thanks

Reactions: 👍 1
id 1183777627 (I_kwDOAMm_X85GjwNb) · issue #6423 · "interpolation of xarray do not preserve attributes" · opened by lanougue (32069530) · open · 1 comment · created 2022-03-28T17:47:33Z · updated 2022-05-21T20:31:40Z

What is your issue?

Hi all, interpolation over an xarray variable does not preserve its attributes. Here is a minimal example:

```python
x = np.arange(100)
x = xr.DataArray(x, dims='x', coords={'x': x})
t = np.arange(40, 60, 0.5)
t = xr.DataArray(t, dims='t', coords={'t': t}, attrs={'units': 's'})
result = x.interp(x=t)
```

The `t` coordinate in `result` no longer has the `t` units.
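A workaround (a sketch; it simply copies the attributes back onto the interpolated coordinate after the fact) looks like this:

```python
import numpy as np
import xarray as xr

x = xr.DataArray(np.arange(100), dims='x', coords={'x': np.arange(100)})
t_vals = np.arange(40, 60, 0.5)
t = xr.DataArray(t_vals, dims='t', coords={'t': t_vals}, attrs={'units': 's'})

result = x.interp(x=t)
# restore the attrs that interp dropped from the 't' coordinate
result['t'].attrs.update(t.attrs)
```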

Reactions: none
id 595813283 (MDU6SXNzdWU1OTU4MTMyODM=) · issue #3946 · "removing uneccessary dimension" · opened by lanougue (32069530) · open · 8 comments · created 2020-04-07T11:57:36Z · updated 2022-05-02T23:07:33Z

Hi Everyone,

Sometimes I generate DataArrays that are invariant along a dimension. I was not able to find a function or a simple workaround to get rid of these dimensions (unless I already know their names). I would like to use something like a combination of "reduce" and `np.allclose`. For the moment I use the equivalent code below, which works but is not efficient. Could it be a candidate for a native, efficient xarray function? This could drastically reduce memory usage where relevant. Thanks

```python
ds = xr.DataArray([[1., 2.], [1., 2.]], dims=('x', 'y'))

dims_to_remove = list()
for d in ds.dims:
    if np.all(ds[{d: 0}] == ds):
        dims_to_remove.append(d)
ds = ds[dict.fromkeys(dims_to_remove, 0)]
ds = ds.squeeze()
```
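The loop above could be wrapped into a small helper. This is only a sketch (the name `drop_invariant_dims` is hypothetical, not an xarray API), and like the original it uses exact equality against the first slice rather than a tolerance:

```python
import numpy as np
import xarray as xr

def drop_invariant_dims(da):
    """Sketch: drop every dimension along which the values do not vary."""
    for d in list(da.dims):
        # compare the first slice against the whole array; if equal
        # everywhere, the dimension carries no information
        if bool((da.isel({d: 0}) == da).all()):
            da = da.isel({d: 0}, drop=True)
    return da

ds = xr.DataArray([[1., 2.], [1., 2.]], dims=('x', 'y'))
out = drop_invariant_dims(ds)
print(out.dims)  # ('y',) — the invariant 'x' dimension is gone
```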

Reactions: none
id 817885693 (MDU6SXNzdWU4MTc4ODU2OTM=) · issue #4968 · "swap_coords() function ?" · opened by lanougue (32069530) · open · 0 comments · created 2021-02-27T10:00:53Z · updated 2021-02-27T10:01:52Z

Hi all,

I have a DataArray with coordinate 'time'. I want to swap the time coordinate with another coordinate which is not yet in the array. `swap_dims()` allows giving a non-existing dimension, but I cannot define the coords at the same time. I usually use a workaround as follows:

```python
a = xr.DataArray(np.random.rand(10), dims='time', coords={'time': 1.e-4 * np.arange(10)})
celerity = 3e8
space = celerity * a['time']
a = a.swap_dims({'time': 'space'})
a = a.assign_coords({'space': space.data})
```

I think it would be interesting to have a `swap_coords()` function that does the last two lines in a single step.
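A hedged sketch of what such a helper could look like (the name `swap_coords` and its signature are hypothetical, not an xarray API):

```python
import numpy as np
import xarray as xr

def swap_coords(da, old_dim, new_name, values):
    """Sketch: attach a new coordinate along old_dim, then make it the indexing dim."""
    da = da.assign_coords({new_name: (old_dim, np.asarray(values))})
    return da.swap_dims({old_dim: new_name})

a = xr.DataArray(np.random.rand(10), dims='time', coords={'time': 1.e-4 * np.arange(10)})
b = swap_coords(a, 'time', 'space', 3e8 * a['time'])
```

After the swap, 'space' is the indexing dimension and 'time' remains as a non-dimension coordinate.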

Reactions: none

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · About: xarray-datasette