issue_comments
9 rows where user = 22454970 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
902670118 | https://github.com/pydata/xarray/pull/4540#issuecomment-902670118 | https://api.github.com/repos/pydata/xarray/issues/4540 | IC_kwDOAMm_X841zacm | tlogan2000 22454970 | 2021-08-20T12:54:07Z | 2021-08-20T12:54:07Z | NONE | FYI @aulemahal @Zeitsperre @huard ... re: the xclim discussion yesterday. If we have spare moments in the coming weeks we could try a few tests on our end to benchmark and provide bug reports. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
numpy_groupies 729208432 | |
902148007 | https://github.com/pydata/xarray/pull/4540#issuecomment-902148007 | https://api.github.com/repos/pydata/xarray/issues/4540 | IC_kwDOAMm_X841xa-n | tlogan2000 22454970 | 2021-08-19T18:35:48Z | 2021-08-19T18:35:48Z | NONE | Hello all, my fellow xclim (https://github.com/Ouranosinc/xclim) devs and I would be interested to know whether this PR is still moving forward? Cheers |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
numpy_groupies 729208432 | |
571689678 | https://github.com/pydata/xarray/issues/3666#issuecomment-571689678 | https://api.github.com/repos/pydata/xarray/issues/3666 | MDEyOklzc3VlQ29tbWVudDU3MTY4OTY3OA== | tlogan2000 22454970 | 2020-01-07T17:31:53Z | 2020-01-07T17:31:53Z | NONE | FYI @dcherian @keewis ... thanks for the suggestions and help. This final small change (dropping the time values before replacing them with the common calendar) has gotten all cases in the xclim test suite running against xarray@master. Feel free to close.

```python
dslist = []
for d in datasets:
    ds = xr.open_dataset(d, chunks=dict(time=10), decode_times=False)
    cal1 = xr.decode_cf(ds).time
    ds = ds.drop_vars('time')
    ds["time"] = pd.to_datetime(
        {
            "year": cal1.time.dt.year,
            "month": cal1.time.dt.month,
            "day": cal1.time.dt.day,
        }
    ).values
    dslist.append(ds)
ens1 = xr.concat(dslist, dim='realization')
ens1 = ensembles.create_ensemble(datasets)
print(ens1)
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex 546303413 | |
571669562 | https://github.com/pydata/xarray/issues/3666#issuecomment-571669562 | https://api.github.com/repos/pydata/xarray/issues/3666 | MDEyOklzc3VlQ29tbWVudDU3MTY2OTU2Mg== | tlogan2000 22454970 | 2020-01-07T16:42:44Z | 2020-01-07T16:42:44Z | NONE | I think I need to leave ds.time undecoded, create a calendar variable each time, and then overwrite the ds values:

```python
import subprocess
import sys
import wget
import glob
import xarray as xr
import pandas as pd

def install(package):
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

try:
    from xclim import ensembles
except ImportError:
    install('xclim')
    from xclim import ensembles

outdir = '/home/travis/Downloads'
url = []
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_ACCESS1-0_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_BNU-ESM_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r2i1p1_1950-2100_tg_mean_YS.nc')
for u in url:
    wget.download(u, out=outdir)

datasets = glob.glob(f'{outdir}/*1950*.nc')  # '*' wildcards restored; they appear to have been stripped by markdown rendering
dslist = []
for d in datasets:
    ds = xr.open_dataset(d, chunks=dict(time=10), decode_times=False)
    cal1 = xr.decode_cf(ds).time
    ds["time"].values = pd.to_datetime(
        {
            "year": cal1.time.dt.year,
            "month": cal1.time.dt.month,
            "day": cal1.time.dt.day,
        }
    )
    dslist.append(ds)
ens1 = xr.concat(dslist, dim='realization')
ens1 = ensembles.create_ensemble(datasets)
print(ens1)
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex 546303413 | |
571668871 | https://github.com/pydata/xarray/issues/3666#issuecomment-571668871 | https://api.github.com/repos/pydata/xarray/issues/3666 | MDEyOklzc3VlQ29tbWVudDU3MTY2ODg3MQ== | tlogan2000 22454970 | 2020-01-07T16:41:07Z | 2020-01-07T16:41:07Z | NONE | This is more or less what I was doing, but I think the problem may be that I am trying to overwrite the cftime datetimes with pd.datetime values in a loop; see code below:

```python
import subprocess
import sys
import wget
import glob
import xarray as xr
import pandas as pd

def install(package):
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

try:
    from xclim import ensembles
except ImportError:
    install('xclim')
    from xclim import ensembles

outdir = '/home/travis/Downloads'
url = []
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_ACCESS1-0_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_BNU-ESM_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r2i1p1_1950-2100_tg_mean_YS.nc')
for u in url:
    wget.download(u, out=outdir)

datasets = glob.glob(f'{outdir}/*1950*.nc')  # '*' wildcards restored; they appear to have been stripped by markdown rendering
dslist = []
for d in datasets:
    ds = xr.open_dataset(d, chunks=dict(time=10), decode_times=False)
    ds['time'] = xr.decode_cf(ds).time
    ds["time"].values = pd.to_datetime(
        {
            "year": ds.time.dt.year,
            "month": ds.time.dt.month,
            "day": ds.time.dt.day,
        }
    )
    dslist.append(ds)
ens1 = xr.concat(dslist, dim='realization')
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex 546303413 | |
571641613 | https://github.com/pydata/xarray/issues/3666#issuecomment-571641613 | https://api.github.com/repos/pydata/xarray/issues/3666 | MDEyOklzc3VlQ29tbWVudDU3MTY0MTYxMw== | tlogan2000 22454970 | 2020-01-07T15:42:37Z | 2020-01-07T15:42:37Z | NONE | Ok thanks. Basically, the xclim ensembles code should overwrite these monthly datasets (various calendar types) with a common pd.datetime calendar before concatenation (as mixing calendars is not possible). I will try to write it as an xarray-only example. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex 546303413 | |
542667961 | https://github.com/pydata/xarray/issues/3402#issuecomment-542667961 | https://api.github.com/repos/pydata/xarray/issues/3402 | MDEyOklzc3VlQ29tbWVudDU0MjY2Nzk2MQ== | tlogan2000 22454970 | 2019-10-16T12:06:32Z | 2019-10-16T12:06:32Z | NONE | Thx for the quick reply. For info, this was working fine with v0.13.0... it has only been occurring since the 0.14.0 upgrade. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
reduce() by multiple dims on groupby object 507471865 | |
462456767 | https://github.com/pydata/xarray/issues/2762#issuecomment-462456767 | https://api.github.com/repos/pydata/xarray/issues/2762 | MDEyOklzc3VlQ29tbWVudDQ2MjQ1Njc2Nw== | tlogan2000 22454970 | 2019-02-11T19:16:26Z | 2019-02-11T19:16:26Z | NONE | Ok thanks, sorry for the confusion; I was looking directly at the commits for release 11.3 on GitHub: https://github.com/pydata/xarray/releases |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
PyPi (v11.3) build behind master branch. cftime resampling not working 408924678 | |
434745982 | https://github.com/pydata/xarray/issues/2527#issuecomment-434745982 | https://api.github.com/repos/pydata/xarray/issues/2527 | MDEyOklzc3VlQ29tbWVudDQzNDc0NTk4Mg== | tlogan2000 22454970 | 2018-10-31T16:04:08Z | 2018-10-31T16:04:08Z | NONE | Great thx |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Undesired automatic conversion of 'dtype' based on variable units 'days'? 375627070 |
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
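For reference, the query behind this page (rows where user = 22454970, sorted by updated_at descending) can be reproduced against the schema above with Python's built-in sqlite3. The two inserted rows are placeholders echoing comments from this page, not a full copy of the data.

```python
import sqlite3

# In-memory database using the schema shown above (users/issues tables added
# minimally so the foreign-key references resolve).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
    [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two sample rows (ids and timestamps taken from the table above; bodies truncated).
conn.executemany(
    "INSERT INTO issue_comments (id, user, created_at, updated_at, body) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (902670118, 22454970, "2021-08-20T12:54:07Z", "2021-08-20T12:54:07Z", "FYI ..."),
        (902148007, 22454970, "2021-08-19T18:35:48Z", "2021-08-19T18:35:48Z", "Hello all ..."),
    ],
)

# The filter shown on this page: rows where user = 22454970,
# sorted by updated_at descending.
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE user = 22454970 ORDER BY updated_at DESC"
).fetchall()
print([r[0] for r in rows])  # → [902670118, 902148007]
```

ISO-8601 timestamps stored as TEXT sort correctly under plain string comparison, which is why `ORDER BY updated_at DESC` works without any date parsing.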