
issues


7 rows where type = "issue" and user = 22454970 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
464315727 MDU6SXNzdWU0NjQzMTU3Mjc= 3080 Error in to_netcdf() tlogan2000 22454970 closed 0     2 2019-07-04T15:14:33Z 2023-09-16T08:28:12Z 2023-09-16T08:28:12Z NONE      

MCVE Code Sample

In order for the maintainers to efficiently understand and prioritize issues, we ask you post a "Minimal, Complete and Verifiable Example" (MCVE): http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

```python
import xarray as xr

print(xr.__version__)

url = 'https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/cccs_portal/indices/Final/BCCAQv2/tg_mean/YS/rcp26/simulations/BCCAQv2+ANUSPLIN300_bcc-csm1-1_historical+rcp26_r1i1p1_1950-2100_tg_mean_YS.nc'
ds = xr.open_dataset(url, chunks={'time': 50})
dsSel = ds.sel(lon=-80, lat=50, method='nearest')
outnc = 'tmp.nc'
dsSel.to_netcdf(outnc)
```

Problem Description

This may be related to #2850, but for certain .nc files I am unable to save to netCDF after performing a simple `sel` (an example file is available via the OPeNDAP link in the code). It gives the following error:

```
Traceback (most recent call last):
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2961, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-3-492c82b08e81>", line 8, in <module>
    dsSel.to_netcdf(outnc)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/core/dataset.py", line 1365, in to_netcdf
    compute=compute)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/backends/api.py", line 886, in to_netcdf
    unlimited_dims=unlimited_dims)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/backends/api.py", line 929, in dump_to_store
    unlimited_dims=unlimited_dims)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/backends/common.py", line 269, in store
    self.set_attributes(attributes)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/backends/common.py", line 285, in set_attributes
    self.set_attribute(k, v)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/backends/netCDF4_.py", line 426, in set_attribute
    _set_nc_attribute(self.ds, key, value)
  File "/home/travis/.conda/envs/Xarray/lib/python3.6/site-packages/xarray/backends/netCDF4_.py", line 302, in _set_nc_attribute
    obj.setncattr(key, value)
  File "netCDF4/_netCDF4.pyx", line 2607, in netCDF4._netCDF4.Dataset.setncattr
  File "netCDF4/_netCDF4.pyx", line 1467, in netCDF4._netCDF4._set_att
  File "netCDF4/_netCDF4.pyx", line 1733, in netCDF4._netCDF4._ensure_nc_success
AttributeError: NetCDF: String match to name in use
```
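A hedged workaround sketch, not from the thread: attribute clashes on write sometimes come from encoding and attributes inherited from the remote source, so one thing to try before calling `to_netcdf()` is stripping them. The dataset below is a small in-memory stand-in for the OPeNDAP dataset, and the `_FillValue` choice is an assumption for illustration.

```python
import numpy as np
import xarray as xr

# small in-memory stand-in for the dataset opened from the OPeNDAP url
ds = xr.Dataset({"tg_mean": ("time", np.arange(3.0))})
ds["tg_mean"].attrs["_FillValue"] = np.nan  # simulate an inherited attribute

# strip source-file encoding and a commonly conflicting attribute
for name in ds.variables:
    ds[name].encoding.clear()
    ds[name].attrs.pop("_FillValue", None)

# ds.to_netcdf('tmp.nc')  # the write would now start from a clean slate
```

Whether this sidesteps the netCDF4 `setncattr` failure depends on which attribute is colliding in the source file.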

Expected Output

The selection is saved to a .nc file.

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-54-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8
libhdf5: 1.10.2
libnetcdf: 4.6.1

xarray: 0.12.2
pandas: 0.23.3
numpy: 1.15.4
scipy: 1.2.0
netCDF4: 1.4.0
pydap: None
h5netcdf: 0.6.1
h5py: 2.8.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.0.0
distributed: 1.25.2
matplotlib: 3.0.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 39.2.0
pip: 19.1.1
conda: None
pytest: 3.8.1
IPython: 6.5.0
sphinx: 2.1.2
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3080/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
546303413 MDU6SXNzdWU1NDYzMDM0MTM= 3666 Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex tlogan2000 22454970 open 0     9 2020-01-07T14:08:03Z 2021-07-08T17:43:58Z   NONE      

MCVE Code Sample

```python
import glob
import subprocess
import sys

import wget


def install(package):
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

try:
    from xclim import ensembles
except ImportError:
    install('xclim')
    from xclim import ensembles

outdir = 'tmp'
url = []
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_ACCESS1-0_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_BNU-ESM_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r2i1p1_1950-2100_tg_mean_YS.nc')

for u in url:
    wget.download(u, out=outdir)

datasets = glob.glob(f'{outdir}/*1950*.nc')
ens1 = ensembles.create_ensemble(datasets)
print(ens1)
```

Expected Output

Following advice of @dcherian (https://github.com/Ouranosinc/xclim/issues/281#issue-508073942) we have started testing builds of xclim against the master branch as well as the current release:

Using xarray 0.14.1 via pip, the above code generates a concatenated dataset with a newly added dimension 'realization'.

Problem Description

Using xarray@master, the `xclim.ensembles.create_ensemble` call gives the following error:

```
Traceback (most recent call last):
  File "/home/travis/.PyCharmCE2019.3/config/scratches/scratch_26.py", line 23, in <module>
    ens1 = ensembles.create_ensemble(datasets)
  File "/home/travis/github_xclim/xclim/xclim/ensembles.py", line 83, in create_ensemble
    data = xr.concat(list1, dim=dim)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/xarray/core/concat.py", line 135, in concat
    return f(objs, dim, data_vars, coords, compat, positions, fill_value, join)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/xarray/core/concat.py", line 439, in _dataarray_concat
    join=join,
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/xarray/core/concat.py", line 303, in _dataset_concat
    *datasets, join=join, copy=False, exclude=[dim], fill_value=fill_value
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/xarray/core/alignment.py", line 298, in align
    index = joiner(matching_indexes)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2385, in __or__
    return self.union(other)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2517, in union
    return self._union_incompatible_dtypes(other, sort=sort)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2436, in _union_incompatible_dtypes
    return Index.union(this, other, sort=sort).astype(object, copy=False)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2517, in union
    return self._union_incompatible_dtypes(other, sort=sort)
  ....
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 498, in __new__
    return DatetimeIndex(subarr, copy=copy, name=name, **kwargs)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py", line 334, in __new__
    int_as_wall_time=True,
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 446, in _from_sequence
    int_as_wall_time=int_as_wall_time,
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1854, in sequence_to_dt64ns
    data, copy = maybe_convert_dtype(data, copy)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 2060, in maybe_convert_dtype
    elif is_extension_type(data) and not is_datetime64tz_dtype(data):
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 1734, in is_extension_type
    if is_categorical(arr):
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 387, in is_categorical
    return isinstance(arr, ABCCategorical) or is_categorical_dtype(arr)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 708, in is_categorical_dtype
    return CategoricalDtype.is_dtype(arr_or_dtype)
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/dtypes/base.py", line 256, in is_dtype
    if isinstance(dtype, (ABCSeries, ABCIndexClass, ABCDataFrame, np.dtype)):
  File "/home/travis/.conda/envs/xclim_dev/lib/python3.7/site-packages/pandas/core/dtypes/generic.py", line 9, in _check
    return getattr(inst, attr, "_typ") in comp
RecursionError: maximum recursion depth exceeded while calling a Python object
```
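The nice error the title asks for amounts to a pre-flight type check before alignment. A minimal sketch with plain pandas indexes; `check_time_index_compat` is a hypothetical helper, not xarray API:

```python
import pandas as pd

def check_time_index_compat(indexes):
    # refuse to combine indexes of different types up front, instead of
    # letting pandas recurse through _union_incompatible_dtypes
    kinds = {type(idx).__name__ for idx in indexes}
    if len(kinds) > 1:
        raise TypeError(
            "cannot concatenate objects with mixed index types: %s" % sorted(kinds)
        )

# homogeneous DatetimeIndexes pass the check
check_time_index_compat([pd.date_range("2000", periods=2),
                         pd.date_range("2001", periods=2)])

# a CFTimeIndex would show up as a different type and raise early
try:
    check_time_index_compat([pd.date_range("2000", periods=2),
                             pd.Index(["a", "b"])])
except TypeError as err:
    print(err)
```

The real fix would live in xarray's alignment code, where all indexes being joined are visible in one place.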

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.5 (default, Oct 25 2019, 15:51:11) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-74-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.3

xarray: 0.14.1+37.gdb36c5c0
pandas: 0.25.3
numpy: 1.17.4
scipy: 1.3.1
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.7.4
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.1.1
cfgrib: None
iris: None
bottleneck: 1.3.1
dask: 2.6.0
distributed: 2.6.0
matplotlib: 3.1.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 41.6.0.post20191030
pip: 19.3.1
conda: None
pytest: 5.2.2
IPython: None
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3666/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
402413097 MDU6SXNzdWU0MDI0MTMwOTc= 2699 bfill behavior dask arrays with small chunk size tlogan2000 22454970 closed 0     4 2019-01-23T20:19:21Z 2021-04-26T13:06:46Z 2021-04-26T13:06:46Z NONE      

```python
import numpy as np
import xarray as xr

data = np.random.rand(100)
data[25] = np.nan
da = xr.DataArray(data)

# unchunked
print('output : orig', da[25].values, ' backfill : ', da.bfill('dim_0')[25].values)
# output : orig nan  backfill :  0.024710724099643477

# small chunk
da1 = da.chunk({'dim_0': 1})
print('output chunks==1 : orig', da1[25].values, ' backfill : ', da1.bfill('dim_0')[25].values)
# output chunks==1 : orig nan  backfill :  nan

# medium chunk
da1 = da.chunk({'dim_0': 10})
print('output chunks==10 : orig', da1[25].values, ' backfill : ', da1.bfill('dim_0')[25].values)
# output chunks==10 : orig nan  backfill :  0.024710724099643477
```

Problem description

The bfill method seems to miss NaNs when the dask array chunk size is small: the resulting array still contains NaN (see the 'small chunk' section of the code above).
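For reference, a chunk-independent 1-D backfill can be sketched in plain NumPy (`bfill_1d` is a hypothetical helper, not the dask-backed implementation); the correct result should not depend on how the array is chunked:

```python
import numpy as np

def bfill_1d(a):
    # each position takes the nearest valid value at or after it;
    # trailing NaNs stay NaN
    idx = np.where(~np.isnan(a), np.arange(a.size), a.size - 1)
    idx = np.minimum.accumulate(idx[::-1])[::-1]
    return a[idx]

demo = np.array([1.0, np.nan, np.nan, 4.0, np.nan])
print(bfill_1d(demo))  # [ 1.  4.  4.  4. nan]
```

A chunked implementation has to pass each chunk's first valid value to the chunks before it, which is where a too-small chunk size can go wrong.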

Expected Output

The backfilled array should contain no NaNs (apart from trailing ones), regardless of chunk size.

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-43-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8

xarray: 0.11.0
pandas: 0.23.4
numpy: 1.15.4
scipy: None
netCDF4: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
PseudonetCDF: None
rasterio: None
iris: None
bottleneck: 1.2.1
cyordereddict: None
dask: 1.0.0
distributed: 1.25.2
matplotlib: None
cartopy: None
seaborn: None
setuptools: 40.6.3
pip: 18.1
conda: None
pytest: None
IPython: None
sphinx: None
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2699/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
507471865 MDU6SXNzdWU1MDc0NzE4NjU= 3402 reduce() by multiple dims on groupby object tlogan2000 22454970 closed 0     2 2019-10-15T20:42:21Z 2019-10-25T21:01:11Z 2019-10-25T21:01:11Z NONE      

MCVE Code Sample

```python
import numpy as np
import xarray as xr

url = 'https://data.nodc.noaa.gov/thredds/dodsC/GCOS/monthly_five_degree/19810101-NODC-L3_GHRSST-SSTblend-GLOB_HadSST2-Monthly_FiveDeg_DayNitAvg_19810101_20071231-v01.7-fv01.0.nc'
ds = xr.open_dataset(url, chunks=dict(time=12))

# reduce() directly on a DataArray - THIS IS OK
ds.analysed_sst.reduce(np.percentile, dim=('lat', 'lon'), q=0.5)  # ok

# groupby example
rr = ds.analysed_sst.rolling(min_periods=1, center=True, time=5).construct("window")
g = rr.groupby("time.dayofyear")
print(g.dims)
test1d = g.reduce(np.percentile, dim=('time'), q=0.5)  # ok
testall = g.reduce(np.percentile, dim=xr.ALL_DIMS, q=0.5)  # ok

# reduce() w/ 2 dims on groupby obj not working
test2d = g.reduce(np.nanpercentile, dim=('time', 'window'), q=0.5)
```

Expected Output

reduced output performed over multiple dimensions (but not xr.ALL_DIMS) on a groupby object

Problem Description

Using `.reduce()` on a groupby object only succeeds when given a single dimension or `xr.ALL_DIMS`. I wish to apply a reduce over a subset of dims (last line of the code above), but it gives the following error:

```
Traceback (most recent call last):
  File "/home/travis/.PyCharmCE2019.2/config/scratches/scratch_20.py", line 13, in <module>
    test = g.reduce(np.percentile, dim=('time','window'), q=0.5)
  File "/home/travis/.conda/envs/Xarray/lib/python3.7/site-packages/xarray/core/groupby.py", line 800, in reduce
    % (dim, self.dims)
ValueError: cannot reduce over dimension ('time', 'window'). expected either xarray.ALL_DIMS to reduce over all dimensions or one or more of ('time', 'lat', 'lon', 'window').
```

Note: using `reduce()` on a subset of dims directly on an `xr.DataArray` seems fine (line 7).
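As a point of comparison, NumPy itself reduces over several axes in a single call, which is the behaviour the report expects from the groupby `reduce`. The array below is a synthetic stand-in, not the SST data:

```python
import numpy as np

# stand-in array with three axes, e.g. (time, space, window)
arr = np.arange(24.0).reshape(2, 3, 4)

# reduce over the first and last axes at once
out = np.nanpercentile(arr, 50, axis=(0, 2))
print(out.shape)  # (3,)
```

So the limitation reported here is in the groupby dispatch, not in the underlying reduction function.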

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-65-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8
libhdf5: 1.10.2
libnetcdf: 4.6.3

xarray: 0.14.0
pandas: 0.25.1
numpy: 1.17.2
scipy: 1.3.1
netCDF4: 1.5.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 2.4.0
distributed: 2.4.0
matplotlib: 3.1.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 41.0.1
pip: 19.2.2
conda: None
pytest: 5.0.1
IPython: None
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3402/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
397916846 MDU6SXNzdWUzOTc5MTY4NDY= 2663 Implement quarterly frequencies for cftime tlogan2000 22454970 closed 0     1 2019-01-10T16:40:54Z 2019-03-02T01:41:44Z 2019-03-02T01:41:44Z NONE      

Currently, cftime offsets have no options for quarterly frequencies (QS-DEC, etc.) as in xarray/pandas; see: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases

@jwenfai has recently worked extensively to implement xarray/cftime resampling (https://github.com/pydata/xarray/pull/2593), but this will remain limited as long as quarterly frequencies are not supported in xarray/coding/cftime_offsets.py.
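For reference, the pandas side already understands these aliases; a minimal illustration (dates chosen arbitrarily):

```python
import pandas as pd

# 'QS-DEC' = quarter-start frequency anchored so that quarters begin
# in December, March, June and September
idx = pd.date_range("2000-12-01", periods=4, freq="QS-DEC")
print(idx.tolist())
```

The request here is that `xr.cftime_range` accept the same anchored quarterly aliases for non-standard calendars.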

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2663/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
408924678 MDU6SXNzdWU0MDg5MjQ2Nzg= 2762 PyPi (v11.3) build behind master branch. cftime resampling not working tlogan2000 22454970 closed 0     3 2019-02-11T18:51:31Z 2019-02-11T19:43:15Z 2019-02-11T19:10:44Z NONE      

A PyPI xarray install (v11.3) gives an error when resampling .nc files with cftime calendars; an install from github/master, however, works fine.

The 11.3 release notes seem to indicate that the cftime resampling PR should be included.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2762/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
375627070 MDU6SXNzdWUzNzU2MjcwNzA= 2527 Undesired automatic conversion of 'dtype' based on variable units 'days'? tlogan2000 22454970 closed 0     2 2018-10-30T18:14:38Z 2018-10-31T16:04:08Z 2018-10-31T16:04:08Z NONE      

```python
import os
import urllib.request

import numpy as np
import xarray as xr

xr.set_options(enable_cftimeindex=True)

url = 'https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/NRCANdaily/nrcan_canada_daily_tasmax_1990.nc'
infile = r'~/nrcan_canada_daily_tasmax_1990.nc'
urllib.request.urlretrieve(url, infile)

freqs = ['MS']  # , 'QS-DEC', 'YS']
ds = xr.open_dataset(infile)
su = (ds.tasmax > 25.0 + 273.15) * 1.0
for f in freqs:
    output = su.resample(time=f).sum(dim='time')
    output.attrs['units'] = 'days'
    output.attrs['standard_name'] = 'summer_days'
    output.attrs['long_name'] = 'summer days'
    output.attrs['description'] = 'Number of days where daily maximum temperature exceeds 25℃'

    # use original file as template
    ds1 = xr.open_dataset(infile, drop_variables=['tasmax', 'time', 'time_vectors', 'ts'])
    ds1.coords['time'] = output.time.values
    ds1['su'] = output

    comp = dict(zlib=True, complevel=5)
    encoding = {var: comp for var in ds1.data_vars}
    print(os.path.basename(infile).replace('.nc', '_SummerDays-' + f) + ' : writing ' + f + ' to netcdf')
    # with dask.config.set(pool=ThreadPool(4)):
    ds1.to_netcdf('~/testNRCANsummerdays.nc', format='NETCDF4', encoding=encoding)

    ds2 = xr.open_dataset('~/testNRCANsummerdays.nc')
    print(ds1)
    print(ds2)
    print(ds1.su.max())
    print(ds2.su.max())
```

Problem description

I am calculating the climate index 'summer days' ('su') from daily maximum temperatures. Everything goes fine, but when I re-read the newly created output .nc file, the 'su' dtype has changed from float to timedelta64. This seems to be due to the units ('days') that I assign to the newly created variable, which must trigger an automatic conversion when xarray reads the file back in. If I alter the units to 'days>25C', everything is ok.

Is there a way to avoid this behavior and still keep the units as 'days', which is the CF standard for this climate-index calculation? (Note: there are a large number of cases like this, e.g. wet days, dry days, etc., all of which have 'days' as the expected unit.)
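The decoding step that bites here can be illustrated with a stdlib-only sketch: xarray treats a numeric variable whose `units` attribute is a pure time unit as a timedelta. The exact unit set below is an assumption for illustration, not xarray's actual table:

```python
# time-unit strings assumed to trigger timedelta decoding (illustrative set)
TIMEDELTA_UNITS = {"days", "hours", "minutes", "seconds",
                   "milliseconds", "microseconds", "nanoseconds"}

def decoded_as_timedelta(units):
    # a units string that is exactly a time unit is decoded as timedelta64
    return units.strip() in TIMEDELTA_UNITS

print(decoded_as_timedelta("days"))      # True  -> 'su' comes back as timedelta64
print(decoded_as_timedelta("days>25C"))  # False -> 'su' stays float
```

More recent xarray releases also expose a `decode_timedelta` flag on `open_dataset`, which may offer a cleaner opt-out than changing the units string.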

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2527/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · About: xarray-datasette