
issues


6 rows where user = 25624127 sorted by updated_at descending


2099243058 · I_kwDOAMm_X859H-gy · #8663 · Typo for `variables` arg API docstring in `xarray.core.Dataset.sortby`
user: tomvothecoder (25624127) · state: closed · locked: 0 · comments: 2
created_at: 2024-01-24T22:47:35Z · updated_at: 2024-01-26T01:11:30Z · closed_at: 2024-01-26T01:11:30Z · author_association: CONTRIBUTOR

What is your issue?

Just something I caught while looking at the docs. I think this should be `variables`?

https://github.com/pydata/xarray/blob/d639d6e151bdeba070127aa7e286c5bfa6048194/xarray/core/dataset.py#L7925 https://github.com/pydata/xarray/blob/d639d6e151bdeba070127aa7e286c5bfa6048194/xarray/core/dataset.py#L7948-L7953
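For reference, `Dataset.sortby`'s `variables` argument names the 1-D variable(s) whose values determine the new order along their dimension. A rough pure-Python analogue of that behavior (an illustrative sketch, not xarray's implementation):

```python
def sortby(values, key, ascending=True):
    # Order the positions by the key variable's values, mirroring how
    # Dataset.sortby reorders a dimension by a 1-D sorting variable.
    order = sorted(range(len(key)), key=key.__getitem__, reverse=not ascending)
    return [values[i] for i in order]

temps = [14.2, 11.8, 16.5]   # a data variable along some dimension
months = [2, 3, 1]           # the 1-D variable passed as `variables`

print(sortby(temps, months))  # -> [16.5, 14.2, 11.8]
```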

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8663/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
2100878614 · PR_kwDOAMm_X85lF1q6 · #8670 · Fix `variables` arg typo in `Dataset.sortby()` docstring
user: tomvothecoder (25624127) · state: closed · locked: 0 · comments: 0 · draft: 0
created_at: 2024-01-25T17:41:25Z · updated_at: 2024-01-26T01:11:29Z · closed_at: 2024-01-26T01:11:29Z · author_association: CONTRIBUTOR · pull_request: pydata/xarray/pulls/8670
  • [x] Closes #8663
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8670/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
1098241812 · I_kwDOAMm_X85BddcU · #6149 · [Bug]: `numpy` `DeprecationWarning` with `DType` and `xr.testing.assert_all_close()` + Dask
user: tomvothecoder (25624127) · state: closed · locked: 0 · comments: 4
created_at: 2022-01-10T18:34:27Z · updated_at: 2023-09-13T20:06:59Z · closed_at: 2023-09-13T20:06:58Z · author_association: CONTRIBUTOR

What happened?

A numpy `DeprecationWarning` regarding `DType` is emitted when using `xr.testing.assert_allclose()` to compare two chunked Datasets. The warning does not appear with two non-chunked Datasets.

What did you expect to happen?

The warning should not appear.

Minimal Complete Verifiable Example

```python
class TestTemporalAvg:
    class TestTimeseries:
        @pytest.fixture(autouse=True)
        def setup(self):
            self.ds: xr.Dataset = generate_dataset(cf_compliant=True, has_bounds=True)

        # No warning with this test
        def test_weighted_annual_avg(self):
            ds = self.ds.copy()

            result = ds.temporal.temporal_avg("timeseries", "year", data_var="ts")
            expected = ds.copy()
            expected["ts"] = xr.DataArray(
                name="ts",
                data=np.ones((2, 4, 4)),
                coords={
                    "lat": self.ds.lat,
                    "lon": self.ds.lon,
                    "year": pd.MultiIndex.from_tuples(
                        [(2000,), (2001,)],
                    ),
                },
                dims=["year", "lat", "lon"],
                attrs={
                    "operation": "temporal_avg",
                    "mode": "timeseries",
                    "freq": "year",
                    "groupby": "year",
                    "weighted": "True",
                    "centered_time": "True",
                },
            )

            # For some reason, there is a floating point difference between
            # both for ts, so we have to use floating point comparison
            xr.testing.assert_allclose(result, expected)
            assert result.ts.attrs == expected.ts.attrs

        # Warning with this test
        @requires_dask
        def test_weighted_annual_avg_with_chunking(self):
            ds = self.ds.copy().chunk({"time": 2})

            result = ds.temporal.temporal_avg("timeseries", "year", data_var="ts")
            expected = ds.copy()
            expected["ts"] = xr.DataArray(
                name="ts",
                data=np.ones((2, 4, 4)),
                coords={
                    "lat": ds.lat,
                    "lon": ds.lon,
                    "year": pd.MultiIndex.from_tuples(
                        [(2000,), (2001,)],
                    ),
                },
                dims=["year", "lat", "lon"],
                attrs={
                    "operation": "temporal_avg",
                    "mode": "timeseries",
                    "freq": "year",
                    "groupby": "year",
                    "weighted": "True",
                    "centered_time": "True",
                },
            )

            # For some reason, there is a floating point difference between
            # both for ts, so we have to use floating point comparison
            xr.testing.assert_allclose(result, expected)
            assert result.ts.attrs == expected.ts.attrs
```

Relevant log output

```
DeprecationWarning: The `dtype` and `signature` arguments to ufuncs only select the
general DType and not details such as the byte order or time unit (with rare
exceptions see release notes). To avoid this warning please use the scalar types
`np.float64`, or string notation. In rare cases where the time unit was preserved,
either cast the inputs or provide an output array. In the future NumPy may
transition to allow providing `dtype=` to denote the outputs `dtype` as well.
(Deprecated NumPy 1.21)
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
```
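Until a fix lands, one way to keep test output clean is to filter this specific warning while leaving other deprecations visible. A minimal standard-library sketch (the message text is copied from the log output above; the warning calls stand in for the `ufunc.reduce` that fires inside `assert_allclose`):

```python
import warnings

# Regex matched against the start of the warning message; copied from the log.
UFUNC_DTYPE_MSG = r"The `dtype` and `signature` arguments to ufuncs"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Ignore only the specific numpy ufunc DType deprecation message.
    warnings.filterwarnings("ignore", message=UFUNC_DTYPE_MSG, category=DeprecationWarning)

    # Stand-in for the ufunc.reduce() call that fires inside assert_allclose:
    warnings.warn(
        "The `dtype` and `signature` arguments to ufuncs only select the "
        "general DType and not details such as the byte order or time unit",
        DeprecationWarning,
    )
    # An unrelated deprecation should still be reported.
    warnings.warn("some other deprecation", DeprecationWarning)

messages = [str(w.message) for w in caught]
# Only the unrelated warning is recorded; the ufunc DType one is suppressed.
```

In a pytest suite, the same pattern could instead go in the `filterwarnings` ini option.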

Anything else we need to know?

No response

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:46) [GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.45.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 0.20.1
pandas: 1.3.4
numpy: 1.21.4
scipy: None
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.11.2
distributed: 2021.11.2
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
setuptools: 59.6.0
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: 7.30.1
sphinx: 4.3.1
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6149/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: not_planned · repo: xarray (13221727) · type: issue
1607677974 · PR_kwDOAMm_X85LLO2H · #7579 · Add xCDAT to list of Xarray related projects
user: tomvothecoder (25624127) · state: closed · locked: 0 · comments: 1 · draft: 0
created_at: 2023-03-02T23:17:40Z · updated_at: 2023-03-03T17:10:56Z · closed_at: 2023-03-03T07:51:26Z · author_association: CONTRIBUTOR · pull_request: pydata/xarray/pulls/7579
  • [x] Closes #7577
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7579/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
1607416298 · I_kwDOAMm_X85fzznq · #7577 · Consider adding xCDAT to list of Xarray related projects
user: tomvothecoder (25624127) · state: closed · locked: 0 · comments: 1
created_at: 2023-03-02T20:07:12Z · updated_at: 2023-03-03T07:51:27Z · closed_at: 2023-03-03T07:51:27Z · author_association: CONTRIBUTOR

What is your issue?

Hello, my name is Tom and I'm a core developer for xCDAT (Xarray Climate Data Analysis Tools). xCDAT is an extension of xarray for climate data analysis on structured grids. It serves as a modern successor to the Community Data Analysis Tools (CDAT) library.

I've had a GH issue open to get xCDAT added to the "Xarray related projects" list. It would be awesome if xCDAT can be included there while the project and community continue to grow!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7577/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
1060646604 · I_kwDOAMm_X84_OC7M · #6015 · TypeError: '_ElementwiseFunctionArray' object does not support item assignment
user: tomvothecoder (25624127) · state: open · locked: 0 · comments: 0
created_at: 2021-11-22T22:19:24Z · updated_at: 2021-11-22T23:38:17Z · author_association: CONTRIBUTOR

What happened: I am attempting to mask specific `time_bnds` coordinate points using `.loc`, but am receiving `TypeError: '_ElementwiseFunctionArray' object does not support item assignment`.

This happens when calling `ds = xr.open_dataset("path/to/file", decode_times=False)`, followed by `ds = xr.decode_cf(ds, decode_times=True)`.

What you expected to happen:

The `time_bnds` coordinate points selected using `.loc` should be masked.

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

file_path = "./input/ts_Amon_ACCESS1-0_historical_r1i1p1_185001-200512.nc"

# This works fine
ds = xr.open_dataset(file_path, decode_times=True)
ds["time_bnds"].loc[dict(time="1850-01")] = np.nan

# This breaks
ds2 = xr.open_dataset(file_path, decode_times=False)
ds2 = xr.decode_cf(ds2, decode_times=True)
ds2["time_bnds"].loc[dict(time="1850-01")] = np.nan
```

```
TypeError                                 Traceback (most recent call last)
~/Documents/Repositories/XCDAT/xcdat/qa/PR47 temporal avg/bugs/qa_seasonal_bug.py in <module>
     13 ds2 = xr.open_dataset(file_path, decode_times=False)
     14 ds2 = xr.decode_cf(ds2, decode_times=True)
---> 15 ds2["time_bnds"].loc[dict(time="1850-01")] = np.nan

/opt/miniconda3/envs/xcdat_dev/lib/python3.9/site-packages/xarray/core/dataarray.py in __setitem__(self, key, value)
    212
    213         pos_indexers, _ = remap_label_indexers(self.data_array, key)
--> 214         self.data_array[pos_indexers] = value
    215
    216

/opt/miniconda3/envs/xcdat_dev/lib/python3.9/site-packages/xarray/core/dataarray.py in __setitem__(self, key, value)
    765                 for k, v in self._item_key_to_dict(key).items()
    766             }
--> 767             self.variable[key] = value
    768
    769     def __delitem__(self, key: Any) -> None:

/opt/miniconda3/envs/xcdat_dev/lib/python3.9/site-packages/xarray/core/variable.py in __setitem__(self, key, value)
    852
    853         indexable = as_indexable(self._data)
--> 854         indexable[index_tuple] = value
    855
    856     @property

/opt/miniconda3/envs/xcdat_dev/lib/python3.9/site-packages/xarray/core/indexing.py in __setitem__(self, key, value)
    435         )
    436         full_key = self._updated_key(key)
--> 437         self.array[full_key] = value
    438
    439     def __repr__(self):

TypeError: '_ElementwiseFunctionArray' object does not support item assignment
```

Anything else we need to know?:

The workaround is to perform `.load()` after `xr.decode_cf()`:

```python
# This works
ds3 = xr.open_dataset(file_path, decode_times=False)
ds3 = xr.decode_cf(ds3, decode_times=True)
ds3.load()
ds3["time_bnds"].loc[dict(time="1850-01")] = np.nan
```
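The error is consistent with how lazy CF decoding works: `xr.decode_cf` can wrap the raw data in a compute-on-access array that defines no `__setitem__`, while `.load()` materializes everything into an in-memory array that does. A toy sketch of that distinction (hypothetical class, not xarray's actual `_ElementwiseFunctionArray`):

```python
class LazyDecodedArray:
    """Toy stand-in for a lazy decoding wrapper: values are computed on
    access, and the wrapper itself is read-only (no __setitem__)."""

    def __init__(self, raw, decode):
        self._raw = raw
        self._decode = decode

    def __getitem__(self, i):
        # Decode lazily, one access at a time.
        return self._decode(self._raw[i])

    def load(self):
        # Materialize into a plain mutable list, loosely analogous to
        # Dataset.load() replacing lazy wrappers with in-memory data.
        return [self._decode(v) for v in self._raw]


lazy = LazyDecodedArray([0, 1, 2], decode=lambda v: v * 30.0)

try:
    lazy[0] = float("nan")  # no __setitem__ -> TypeError, as in the issue
    assignment_failed = False
except TypeError:
    assignment_failed = True

loaded = lazy.load()
loaded[0] = float("nan")    # works once the data is materialized
```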

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 20:33:18) [Clang 11.1.0 ]
python-bits: 64
OS: Darwin
OS-release: 19.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: (None, 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 0.19.0
pandas: 1.3.3
numpy: 1.21.2
scipy: None
netCDF4: 1.5.7
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.09.1
distributed: 2021.09.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 58.2.0
pip: 21.2.4
conda: None
pytest: 6.2.5
IPython: 7.28.0
sphinx: 4.2.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6015/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
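The listing at the top of the page ("rows where user = 25624127 sorted by updated_at descending") is a plain query against this schema. A sketch using Python's built-in `sqlite3` with a trimmed-down copy of the table and two sample rows drawn from the listing above (the real data lives in the xarray-datasette database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed-down version of the [issues] schema above.
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        state TEXT,
        updated_at TEXT,
        type TEXT
    )"""
)
conn.execute("CREATE INDEX idx_issues_user ON issues (user)")

# Two sample rows taken from the listing above.
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (2099243058, 8663, "Typo for `variables` arg API docstring",
         25624127, "closed", "2024-01-26T01:11:30Z", "issue"),
        (1607677974, 7579, "Add xCDAT to list of Xarray related projects",
         25624127, "closed", "2023-03-03T17:10:56Z", "pull"),
    ],
)

# The page's query: rows where user = 25624127, sorted by updated_at descending.
numbers = [
    row[0]
    for row in conn.execute(
        "SELECT number FROM issues WHERE user = ? ORDER BY updated_at DESC",
        (25624127,),
    )
]
```

ISO-8601 timestamps stored as TEXT sort correctly with plain string comparison, which is why `ORDER BY updated_at DESC` works here without any date parsing.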
Powered by Datasette · Queries took 28.629ms · About: xarray-datasette