issue_comments
15 rows where user = 40218891 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
822968365 | https://github.com/pydata/xarray/issues/5106#issuecomment-822968365 | https://api.github.com/repos/pydata/xarray/issues/5106 | MDEyOklzc3VlQ29tbWVudDgyMjk2ODM2NQ== | yt87 40218891 | 2021-04-20T04:41:07Z | 2021-04-20T04:41:07Z | NONE | I am closing this issue. It is impossible to guess the proper time unit when dealing with missing data. Setting the attribute explicitly is a better solution. A minor quibble: the statement
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr() fails on time coordinate in append mode 849751721 | |
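The explicit-attribute fix the comment endorses can be sketched as follows; the store path, variable name, and unit string are illustrative assumptions, not taken from the thread:
```
import pandas as pd
import xarray as xr

# Build a dataset with a single timestamp -- exactly the case where
# unit inference goes wrong in append mode.
ds = xr.Dataset(
    {"t2m": ("time", [280.0])},
    coords={"time": pd.to_datetime(["2021-04-19"])},
)
# Pin the time encoding before the first write so later appends
# reuse the stored units instead of re-inferring them.
ds.time.encoding["units"] = "seconds since 1970-01-01"
ds.to_zarr("/tmp/store.zarr", mode="w")

# Appending now encodes against the same fixed unit.
ds2 = ds.assign_coords(time=pd.to_datetime(["2021-04-19T06:00"]))
ds2.to_zarr("/tmp/store.zarr", append_dim="time")
```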
822108756 | https://github.com/pydata/xarray/issues/5106#issuecomment-822108756 | https://api.github.com/repos/pydata/xarray/issues/5106 | MDEyOklzc3VlQ29tbWVudDgyMjEwODc1Ng== | yt87 40218891 | 2021-04-19T01:27:02Z | 2021-04-19T01:28:38Z | NONE | When the time dimension of the dataset being appended to is 1, the inferred unit is "days". This happens on line 318 in file coding/times.py: in this case the array of unique time deltas is empty. Since the fallback return value is set to "seconds", I would argue that the case of empty |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr() fails on time coordinate in append mode 849751721 | |
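For context, a simplified sketch of the inference logic referred to above (the function name and unit table approximate xarray's coding/times.py rather than quoting it): a length-1 time coordinate yields no pairwise differences, so the divisibility check passes vacuously for the first unit tried, "days", and the "seconds" fallback is never reached.
```
import numpy as np

NS_PER_UNIT = {
    "days": 86_400_000_000_000,
    "hours": 3_600_000_000_000,
    "minutes": 60_000_000_000,
    "seconds": 1_000_000_000,
}

def infer_time_units(times):
    times = np.asarray(times, dtype="datetime64[ns]")
    # For a length-1 coordinate, np.diff yields an empty array.
    deltas = np.unique(np.diff(times)).astype(np.int64)
    for unit, length in NS_PER_UNIT.items():
        # np.all(...) over an empty array is True, so "days" wins.
        if np.all(deltas % length == 0):
            return unit
    return "seconds"  # the fallback mentioned in the comment

print(infer_time_units(["2021-04-19"]))  # -> days
```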
767170277 | https://github.com/pydata/xarray/issues/4830#issuecomment-767170277 | https://api.github.com/repos/pydata/xarray/issues/4830 | MDEyOklzc3VlQ29tbWVudDc2NzE3MDI3Nw== | yt87 40218891 | 2021-01-25T23:06:00Z | 2021-01-25T23:06:00Z | NONE | One could always set source to fileset.path:
```
s3 = s3fs.S3FileSystem(anon=True)
s3path = 's3://wrf-se-ak-ar5/gfdl/hist/daily/1980/WRFDS_1980-01-02.nc'
fileset = s3.open(s3path)
fileset.path
'wrf-se-ak-ar5/gfdl/hist/daily/1980/WRFDS_1980-01-02.nc'
```
If the fix is only for s3fs, getting |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
GH2550 revisited 789653499 | |
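As the comment shows, the .path of an s3fs file object drops the "s3://" scheme. A hypothetical helper (full_url is not an s3fs API, just an illustration) that re-attaches the protocol, assuming the fsspec convention that an open file exposes its filesystem as .fs:
```
import s3fs

def full_url(open_file) -> str:
    # fsspec filesystems may advertise one protocol or several.
    protocol = open_file.fs.protocol
    if isinstance(protocol, (list, tuple)):
        protocol = protocol[0]
    return f"{protocol}://{open_file.path}"

s3 = s3fs.S3FileSystem(anon=True)
fileset = s3.open('s3://wrf-se-ak-ar5/gfdl/hist/daily/1980/WRFDS_1980-01-02.nc')
print(full_url(fileset))  # -> s3://wrf-se-ak-ar5/gfdl/hist/daily/1980/WRFDS_1980-01-02.nc
```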
762438483 | https://github.com/pydata/xarray/issues/4822#issuecomment-762438483 | https://api.github.com/repos/pydata/xarray/issues/4822 | MDEyOklzc3VlQ29tbWVudDc2MjQzODQ4Mw== | yt87 40218891 | 2021-01-18T19:39:34Z | 2021-01-18T19:39:34Z | NONE | You might be right. Adding … However, after changing my AWS script to
```
import s3fs
import xarray as xr

s3 = s3fs.S3FileSystem(anon=True)
s3path = 's3://wrf-se-ak-ar5/gfdl/hist/daily/1988/WRFDS_1988-04-23.nc'
ds = xr.open_dataset(s3.open(s3path), engine='scipy')
print(ds)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
h5netcdf fails to decode attribute coordinates. 787947436 | |
762423707 | https://github.com/pydata/xarray/issues/4822#issuecomment-762423707 | https://api.github.com/repos/pydata/xarray/issues/4822 | MDEyOklzc3VlQ29tbWVudDc2MjQyMzcwNw== | yt87 40218891 | 2021-01-18T19:03:19Z | 2021-01-18T19:03:19Z | NONE | This is how I did it:
```
$ ncdump /tmp/x.nc
netcdf x {
dimensions:
	x = 1 ;
	y = 1 ;
variables:
	int foo(y, x) ;
		foo:coordinates = "x y" ;
data:

 foo = 0 ;
}
$ rm x.nc
$ ncgen -o x.nc < x.cdl
$ python -c "import xarray as xr; ds = xr.open_dataset('/tmp/x.nc', engine='h5netcdf'); print(ds)"
```
Engine netcdf4 works fine, with the string or without. My original code retrieving data from AWS:
```
import s3fs
import xarray as xr

s3 = s3fs.S3FileSystem(anon=True)
s3path = 's3://wrf-se-ak-ar5/gfdl/hist/daily/1988/WRFDS_1988-04-23.nc'
ds = xr.open_dataset(s3.open(s3path))
print(ds)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
h5netcdf fails to decode attribute coordinates. 787947436 | |
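The ncgen round-trip above can also be reproduced directly from Python; a sketch assuming the netCDF4 package, writing the same structure: a foo variable whose coordinates attribute names dimensions that have no coordinate variables.
```
import netCDF4
import xarray as xr

with netCDF4.Dataset("/tmp/x.nc", "w") as nc:
    nc.createDimension("x", 1)
    nc.createDimension("y", 1)
    foo = nc.createVariable("foo", "i4", ("y", "x"))
    # The attribute h5netcdf trips over: coordinate names without
    # matching coordinate variables.
    foo.setncattr("coordinates", "x y")
    foo[:] = 0

# Failed with engine='h5netcdf' at the time of the report.
ds = xr.open_dataset("/tmp/x.nc", engine="h5netcdf")
print(ds)
```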
762376418 | https://github.com/pydata/xarray/issues/4822#issuecomment-762376418 | https://api.github.com/repos/pydata/xarray/issues/4822 | MDEyOklzc3VlQ29tbWVudDc2MjM3NjQxOA== | yt87 40218891 | 2021-01-18T17:12:53Z | 2021-01-18T17:19:53Z | NONE | Dropping the string changes the error to |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
h5netcdf fails to decode attribute coordinates. 787947436 | |
481033093 | https://github.com/pydata/xarray/issues/2871#issuecomment-481033093 | https://api.github.com/repos/pydata/xarray/issues/2871 | MDEyOklzc3VlQ29tbWVudDQ4MTAzMzA5Mw== | yt87 40218891 | 2019-04-08T22:35:10Z | 2019-04-08T22:35:10Z | NONE | After rethinking the issue, I would drop it: one can simply pass |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(f1).to_netcdf(file2) is not idempotent 429914958 | |
480475645 | https://github.com/pydata/xarray/issues/2871#issuecomment-480475645 | https://api.github.com/repos/pydata/xarray/issues/2871 | MDEyOklzc3VlQ29tbWVudDQ4MDQ3NTY0NQ== | yt87 40218891 | 2019-04-06T05:24:52Z | 2019-04-06T05:24:52Z | NONE | Indeed it works. Thanks. My quick fix:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(f1).to_netcdf(file2) is not idempotent 429914958 | |
455351725 | https://github.com/pydata/xarray/issues/2554#issuecomment-455351725 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQ1NTM1MTcyNQ== | yt87 40218891 | 2019-01-17T22:13:52Z | 2019-01-17T22:13:52Z | NONE | After upgrading to Anaconda Python 3.7, the code works without crashes. I think this issue can be closed. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 | |
439281383 | https://github.com/pydata/xarray/issues/2554#issuecomment-439281383 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQzOTI4MTM4Mw== | yt87 40218891 | 2018-11-16T04:50:43Z | 2018-11-16T04:50:43Z | NONE | The error
The segv crashes occur with other datasets as well. Example test set I used:
A simple fix is to change the scheduler as I did in my original post. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 | |
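The original post with the scheduler change is not reproduced on this page, so the exact setting below is an assumption; the usual form of such a workaround is a dask configuration switch:
```
import dask
import xarray as xr

# Avoid the threaded scheduler that triggered the segfaults.
dask.config.set(scheduler="single-threaded")

ds = xr.open_mfdataset("/tmp/nam/bufr.701940/*.nc")
print(ds)
```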
437647881 | https://github.com/pydata/xarray/issues/2554#issuecomment-437647881 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQzNzY0Nzg4MQ== | yt87 40218891 | 2018-11-11T06:50:22Z | 2018-11-11T06:50:22Z | NONE | I meant at random points during execution. The script crashed every time. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 | |
437647777 | https://github.com/pydata/xarray/issues/2554#issuecomment-437647777 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQzNzY0Nzc3Nw== | yt87 40218891 | 2018-11-11T06:47:47Z | 2018-11-11T06:47:47Z | NONE | I did some further tests; the crash occurs somewhat randomly. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 | |
437646885 | https://github.com/pydata/xarray/issues/2554#issuecomment-437646885 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQzNzY0Njg4NQ== | yt87 40218891 | 2018-11-11T06:22:27Z | 2018-11-11T06:22:27Z | NONE | About 600k for 2 files. I could spend some time trying to size that down, but if there is a way to upload the whole set it would be easier for me. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 | |
437633544 | https://github.com/pydata/xarray/issues/2554#issuecomment-437633544 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQzNzYzMzU0NA== | yt87 40218891 | 2018-11-11T00:38:03Z | 2018-11-11T00:38:03Z | NONE | Another puzzle; I don't know if it is related to the crashes. Trying to localize the issue, I added a line after
This prints:
```
======= hlcy (1, 85)
======= cdbp (1, 85)
======= hovi (1, 85)
======= itim (1024,)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-5-aeb92962e874> in <module>()
      1 ds0 = xr.open_dataset('/tmp/nam/bufr.701940/bufr.701940.2010123112.nc')
----> 2 ds0.to_netcdf('/tmp/d0.nc')

/usr/local/Python-3.6.5/lib/python3.6/site-packages/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute)
   1220                          engine=engine, encoding=encoding,
   1221                          unlimited_dims=unlimited_dims,
-> 1222                          compute=compute)
   1223
   1224     def to_zarr(self, store=None, mode='w-', synchronizer=None, group=None,

/usr/local/Python-3.6.5/lib/python3.6/site-packages/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile)
    718     # to be parallelized with dask
    719     dump_to_store(dataset, store, writer, encoding=encoding,
--> 720                   unlimited_dims=unlimited_dims)
    721     if autoclose:
    722         store.close()

/usr/local/Python-3.6.5/lib/python3.6/site-packages/xarray/backends/api.py in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
    761
    762     store.store(variables, attrs, check_encoding, writer,
--> 763                 unlimited_dims=unlimited_dims)
    764
    765

/usr/local/Python-3.6.5/lib/python3.6/site-packages/xarray/backends/common.py in store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
    264         self.set_dimensions(variables, unlimited_dims=unlimited_dims)
    265         self.set_variables(variables, check_encoding_set, writer,
--> 266                            unlimited_dims=unlimited_dims)
    267
    268     def set_attributes(self, attributes):

/usr/local/Python-3.6.5/lib/python3.6/site-packages/xarray/backends/common.py in set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
    302             check = vn in check_encoding_set
    303             target, source = self.prepare_variable(
--> 304                 name, v, check, unlimited_dims=unlimited_dims)
    305
    306             writer.add(source, target)

/usr/local/Python-3.6.5/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in prepare_variable(self, name, variable, check_encoding, unlimited_dims)
    466             least_significant_digit=encoding.get(
    467                 'least_significant_digit'),
--> 468             fill_value=fill_value)
    469         _disable_auto_decode_variable(nc4_var)
    470

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.createVariable()

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.__init__()

netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()

RuntimeError: NetCDF: Bad chunk sizes.
```
The dataset is:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 | |
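One common workaround for "NetCDF: Bad chunk sizes" (an editorial assumption, not something proposed in the thread) is to drop chunk-related encodings inherited from the source file before writing:
```
import xarray as xr

ds0 = xr.open_dataset("/tmp/nam/bufr.701940/bufr.701940.2010123112.nc")
# Stale chunk encodings from the source file can conflict with the
# shapes being written; clear them and let the backend choose.
for var in ds0.variables.values():
    var.encoding.pop("chunksizes", None)
    var.encoding.pop("original_shape", None)
ds0.to_netcdf("/tmp/d0.nc")
```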
437631073 | https://github.com/pydata/xarray/issues/2554#issuecomment-437631073 | https://api.github.com/repos/pydata/xarray/issues/2554 | MDEyOklzc3VlQ29tbWVudDQzNzYzMTA3Mw== | yt87 40218891 | 2018-11-10T23:49:22Z | 2018-11-10T23:49:22Z | NONE | No, it works fine. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset crashes with segfault 379472634 |
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```