issues
7 rows where repo = 13221727 and user = 22454970 sorted by updated_at descending
Each record below lists, on one line: id | node_id | number | title | user | state | locked | comments | created_at | updated_at | closed_at | author_association | state_reason | repo | type. Empty cells are left blank. The issue body and its reactions follow each line. The columns assignee, milestone, active_lock_reason, draft, pull_request, and performed_via_github_app are empty for all seven rows.
464315727 | MDU6SXNzdWU0NjQzMTU3Mjc= | 3080 | Error in to_netcdf() | tlogan2000 22454970 | closed | 0 | 2 | 2019-07-04T15:14:33Z | 2023-09-16T08:28:12Z | 2023-09-16T08:28:12Z | NONE | completed | xarray 13221727 | issue

**MCVE Code Sample**

In order for the maintainers to efficiently understand and prioritize issues, we ask you post a "Minimal, Complete and Verifiable Example" (MCVE): http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

```python
import xarray as xr

print(xr.__version__)
url = 'https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/cccs_portal/indices/Final/BCCAQv2/tg_mean/YS/rcp26/simulations/BCCAQv2+ANUSPLIN300_bcc-csm1-1_historical+rcp26_r1i1p1_1950-2100_tg_mean_YS.nc'
ds = xr.open_dataset(url, chunks={'time': 50})
dsSel = ds.sel(lon=-80, lat=50, method='nearest')
outnc = 'tmp.nc'
dsSel.to_netcdf(outnc)
```

**Problem Description**

This may be related to #2850, but I am unable to save to netCDF after performing a simple `sel()` on the chunked dataset.

**Expected Output**

Saved selection to .nc

**Output of `xr.show_versions()`**

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
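A workaround sketch, not taken from the issue thread: since the selected point is tiny, loading it into memory before writing may sidestep the lazy dask-to-netCDF write path where the error occurs. `dsSel` and `outnc` are the names from the example above.

```python
# Hedged workaround (an assumption, not the reported fix): materialize the
# selection in memory first, then write the in-memory dataset.
dsSel.load().to_netcdf(outnc)
```
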
546303413 | MDU6SXNzdWU1NDYzMDM0MTM= | 3666 | Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex | tlogan2000 22454970 | open | 0 | 9 | 2020-01-07T14:08:03Z | 2021-07-08T17:43:58Z | | NONE | | xarray 13221727 | issue

**MCVE Code Sample**

```python
import glob
import subprocess
import sys

import wget


def install(package):
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])


try:
    from xclim import ensembles
except ImportError:
    install('xclim')
    from xclim import ensembles

outdir = 'tmp'
url = []
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_ACCESS1-0_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_BNU-ESM_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r2i1p1_1950-2100_tg_mean_YS.nc')
for u in url:
    wget.download(u, out=outdir)
datasets = glob.glob(f'{outdir}/*1950*.nc')
ens1 = ensembles.create_ensemble(datasets)
print(ens1)
```

**Expected Output**

Following the advice of @dcherian (https://github.com/Ouranosinc/xclim/issues/281#issue-508073942) we have started testing builds of xclim against xarray@master. Using xarray 0.14.1 via pip, the above code generates a concatenated dataset with a new added dimension 'realization'.

**Problem Description**

Using xarray@master, the same code fails with an unhelpful error when it attempts to concatenate a CFTimeIndex with a DatetimeIndex (hence the title's request for a nicer one).

**Output of `xr.show_versions()`**

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
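A self-contained sketch of the underlying clash, assuming the ensemble files decode to different index types (requires the cftime package; variable names here are illustrative, not from the issue):

```python
import numpy as np
import pandas as pd
import xarray as xr

# One array indexed by a pandas DatetimeIndex (standard calendar) ...
a = xr.DataArray(np.arange(3), dims='time',
                 coords={'time': pd.date_range('2000-01-01', periods=3)})
# ... and one indexed by a CFTimeIndex (non-standard calendar).
b = xr.DataArray(np.arange(3), dims='time',
                 coords={'time': xr.cftime_range('2000-01-04', periods=3,
                                                 calendar='noleap')})
# Concatenating the two mixes index types; the issue asks that this raise
# a clear, descriptive error rather than a confusing one.
xr.concat([a, b], dim='time')
```
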
402413097 | MDU6SXNzdWU0MDI0MTMwOTc= | 2699 | bfill behavior dask arrays with small chunk size | tlogan2000 22454970 | closed | 0 | 4 | 2019-01-23T20:19:21Z | 2021-04-26T13:06:46Z | 2021-04-26T13:06:46Z | NONE | completed | xarray 13221727 | issue

```python
import numpy as np
import xarray as xr

data = np.random.rand(100)
data[25] = np.nan
da = xr.DataArray(data)

# unchunked
print('output : orig', da[25].values, ' backfill : ', da.bfill('dim_0')[25].values)
# output : orig nan backfill : 0.024710724099643477

# small chunk
da1 = da.chunk({'dim_0': 1})
print('output chunks==1 : orig', da1[25].values, ' backfill : ', da1.bfill('dim_0')[25].values)
# output chunks==1 : orig nan backfill : nan

# medium chunk
da1 = da.chunk({'dim_0': 10})
print('output chunks==10 : orig', da1[25].values, ' backfill : ', da1.bfill('dim_0')[25].values)
# output chunks==10 : orig nan backfill : 0.024710724099643477
```

**Problem Description**

The bfill method seems to miss NaNs when the dask array chunk size is small. The resulting array still has a NaN present (see the 'small chunk' section of the code).

**Expected Output**

Absence of NaNs.

**Output of `xr.show_versions()`**

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/2699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
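A workaround sketch, not from the issue thread: rechunking to a single chunk along the fill dimension gives `bfill` the whole axis at once, avoiding the per-chunk blind spot. Names reuse the example above.

```python
# Hedged workaround: -1 means one chunk spanning the entire dimension,
# so the backfill can see across what were previously chunk boundaries.
da1 = da.chunk({'dim_0': 1})
filled = da1.chunk({'dim_0': -1}).bfill('dim_0')
print(filled[25].values)  # backfilled value instead of nan
```
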
507471865 | MDU6SXNzdWU1MDc0NzE4NjU= | 3402 | reduce() by multiple dims on groupby object | tlogan2000 22454970 | closed | 0 | 2 | 2019-10-15T20:42:21Z | 2019-10-25T21:01:11Z | 2019-10-25T21:01:11Z | NONE | completed | xarray 13221727 | issue

**MCVE Code Sample**

```python
import numpy as np
import xarray as xr

url = 'https://data.nodc.noaa.gov/thredds/dodsC/GCOS/monthly_five_degree/19810101-NODC-L3_GHRSST-SSTblend-GLOB_HadSST2-Monthly_FiveDeg_DayNitAvg_19810101_20071231-v01.7-fv01.0.nc'
ds = xr.open_dataset(url, chunks=dict(time=12))

# reduce() directly on DataArray - THIS IS OK
ds.analysed_sst.reduce(np.percentile, dim=('lat', 'lon'), q=0.5)  # ok

# groupby example
rr = ds.analysed_sst.rolling(min_periods=1, center=True, time=5).construct("window")
g = rr.groupby("time.dayofyear")
print(g.dims)
test1d = g.reduce(np.percentile, dim='time', q=0.5)  # ok
testall = g.reduce(np.percentile, dim=xr.ALL_DIMS, q=0.5)  # ok

# .reduce() w/ 2 dims on groupby obj not working
test2d = g.reduce(np.nanpercentile, dim=('time', 'window'), q=0.5)
```

**Expected Output**

Reduced output performed over multiple dimensions (but not xr.ALL_DIMS) on a groupby object.

**Problem Description**

Using .reduce() on a groupby object is only successful when given a single dimension or xr.ALL_DIMS. I wish to apply a reduce over a subset of dims (last line of the code above), but it gives the following error:

**Output of `xr.show_versions()`**

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
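Since the issue was closed as completed, here is a hedged, offline variant of the failing call with synthetic data in place of the remote THREDDS dataset; it assumes a recent xarray where multi-dimension reduce on a groupby object works.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-in for the remote SST data (names are illustrative).
time = pd.date_range('2000-01-01', periods=30)
da = xr.DataArray(np.random.rand(30, 4, 5), dims=('time', 'lat', 'lon'),
                  coords={'time': time})

rr = da.rolling(min_periods=1, center=True, time=5).construct('window')
g = rr.groupby('time.dayofyear')
# Reduce over a subset of dims: the call that originally failed.
out = g.reduce(np.nanpercentile, dim=['time', 'window'], q=0.5)
print(out.dims)
```
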
397916846 | MDU6SXNzdWUzOTc5MTY4NDY= | 2663 | Implement quarterly frequencies for cftime | tlogan2000 22454970 | closed | 0 | 1 | 2019-01-10T16:40:54Z | 2019-03-02T01:41:44Z | 2019-03-02T01:41:44Z | NONE | completed | xarray 13221727 | issue

Currently cftime offsets have no options for quarterly frequencies (QS-DEC, etc.) as in xarray/pandas; see http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases. @jwenfai has recently worked extensively to implement xarray/cftime resampling (https://github.com/pydata/xarray/pull/2593), but this will remain limited as quarterly frequencies are currently not supported in xarray/coding/cftime_offsets.py.

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/2663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
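For reference, a minimal sketch assuming a recent xarray, where quarterly cftime offsets have since been implemented (this issue was closed in March 2019):

```python
import xarray as xr

# Quarterly frequency anchored to December, on a non-standard calendar;
# this is the kind of alias the issue requested for cftime.
idx = xr.cftime_range(start='2000-12-01', periods=4, freq='QS-DEC',
                      calendar='noleap')
print(idx)
```
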
408924678 | MDU6SXNzdWU0MDg5MjQ2Nzg= | 2762 | PyPi (v11.3) build behind master branch. cftime resampling not working | tlogan2000 22454970 | closed | 0 | 3 | 2019-02-11T18:51:31Z | 2019-02-11T19:43:15Z | 2019-02-11T19:10:44Z | NONE | completed | xarray 13221727 | issue

A PyPI xarray install (v0.11.3) gives an error when resampling .nc files with cftime calendars. An install from github/master, however, works fine. The 0.11.3 release notes seem to indicate that it should include the cftime resampling PR.

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/2762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
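A hedged sketch of the usual remedy when a merged PR has not yet reached a PyPI release, mirroring the `install()` helper used in issue 3666 above; the URL targets the project's master branch:

```python
import subprocess
import sys

# Install xarray straight from the master branch instead of PyPI.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "git+https://github.com/pydata/xarray.git@master",
])
```
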
375627070 | MDU6SXNzdWUzNzU2MjcwNzA= | 2527 | Undesired automatic conversion of 'dtype' based on variable units 'days'? | tlogan2000 22454970 | closed | 0 | 2 | 2018-10-30T18:14:38Z | 2018-10-31T16:04:08Z | 2018-10-31T16:04:08Z | NONE | completed | xarray 13221727 | issue

```python
import glob
import os
import urllib.request

import numpy as np
import xarray as xr

xr.set_options(enable_cftimeindex=True)
url = 'https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/NRCANdaily/nrcan_canada_daily_tasmax_1990.nc'
infile = r'~/nrcan_canada_daily_tasmax_1990.nc'
urllib.request.urlretrieve(url, infile)
freqs = ['MS']  # , 'QS-DEC', 'YS']
ds = xr.open_dataset(infile)
su = (ds.tasmax > 25.0 + 273.15) * 1.0
for f in freqs:
    ...  # loop body truncated in the export: resample 'su' to freq f, set units='days', write and re-read a .nc file
```

**Problem Description**

I am calculating a climate index, 'summer days' ('su'), from daily maximum temperatures. Everything goes fine, but when I reread a newly created output .nc file, my 'su' dtype has changed from float to timedelta64. This seems to be due to the units ('days') that I assign to my newly created variable, which must trigger an automatic conversion when read by xarray. If I alter the units to 'days>25C' everything is OK. Is there a way to avoid this behavior and still keep my units as 'days', which is the CF standard for this climate index calculation? (Note there are a large number of cases such as this: wet days, dry days, etc., all of which have 'days' as the expected unit.)

Reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/2527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
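A sketch of one way to opt out, assuming a recent xarray: `open_dataset` accepts a `decode_timedelta` flag that disables the automatic units='days' to timedelta64 decoding. The filename here is hypothetical.

```python
import xarray as xr

# 'su_MS.nc' is a hypothetical output file from the loop above;
# decode_timedelta=False keeps a variable with units='days' as plain
# floats instead of decoding it to timedelta64.
ds2 = xr.open_dataset('su_MS.nc', decode_timedelta=False)
print(ds2.su.dtype)
```
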
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
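For reference, a sketch of the query behind this page, using the schema above and the filter quoted in the row count at the top (repo 13221727 = pydata/xarray, user 22454970 = tlogan2000):

```sql
select *
from issues
where [repo] = 13221727
  and [user] = 22454970
order by [updated_at] desc;
```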