issue_comments
4 rows where issue = 94328498 and user = 380927 sorted by updated_at descending
issue: open_mfdataset too many files (94328498), 4 comments
id: 143373357
html_url: https://github.com/pydata/xarray/issues/463#issuecomment-143373357
user: cpaulik (380927)
created_at: 2015-09-25T23:11:39Z
updated_at: 2015-09-25T23:11:39Z
author_association: NONE
body:

OK, I'll try. Thanks. But I originally tested if netCDF4 can work with a closed/reopened variable like this:

``` python
In [1]: import netCDF4

In [2]: a = netCDF4.Dataset("temp.nc", mode="w")

In [3]: a.createDimension("lon")
Out[3]: <class 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'lon', size = 0

In [4]: a.createVariable("lon", "f8", dimensions=("lon"))
Out[4]: <class 'netCDF4._netCDF4.Variable'>
float64 lon(lon)
unlimited dimensions: lon
current shape = (0,)
filling on, default _FillValue of 9.969209968386869e+36 used

In [5]: v = a.variables['lon']

In [6]: v
Out[6]: <class 'netCDF4._netCDF4.Variable'>
float64 lon(lon)
unlimited dimensions: lon
current shape = (0,)
filling on, default _FillValue of 9.969209968386869e+36 used

In [7]: a.close()

In [8]: v
Out[8]: ---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/home/cp/.pyenv/versions/miniconda3-3.16.0/envs/xray-3.5.0/lib/python3.5/site-packages/IPython/core/formatters.py in __call__(self, obj)
    695                 type_pprinters=self.type_printers,
    696                 deferred_pprinters=self.deferred_printers)
--> 697             printer.pretty(obj)
    698             printer.flush()
    699             return stream.getvalue()

/home/cp/.pyenv/versions/miniconda3-3.16.0/envs/xray-3.5.0/lib/python3.5/site-packages/IPython/lib/pretty.py in pretty(self, obj)
    381                 if callable(meth):
    382                     return meth(obj, self, cycle)
--> 383             return _default_pprint(obj, self, cycle)
    384         finally:
    385             self.end_group()

/home/cp/.pyenv/versions/miniconda3-3.16.0/envs/xray-3.5.0/lib/python3.5/site-packages/IPython/lib/pretty.py in _default_pprint(obj, p, cycle)
    501     if _safe_getattr(klass, '__repr__', None) not in _baseclass_reprs:
    502         # A user-provided repr. Find newlines and replace them with p.break_()
--> 503         _repr_pprint(obj, p, cycle)
    504         return
    505     p.begin_group(1, '<')

/home/cp/.pyenv/versions/miniconda3-3.16.0/envs/xray-3.5.0/lib/python3.5/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
    683     """A pprint that just redirects to the normal repr function."""
    684     # Find newlines and replace them with p.break_()
--> 685     output = repr(obj)
    686     for idx,output_line in enumerate(output.splitlines()):
    687         if idx:

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.__repr__ (netCDF4/_netCDF4.c:25045)()

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.__unicode__ (netCDF4/_netCDF4.c:25243)()

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.dimensions.__get__ (netCDF4/_netCDF4.c:27486)()

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable._getdims (netCDF4/_netCDF4.c:26297)()

RuntimeError: NetCDF: Not a valid ID

In [9]: a = netCDF4.Dataset("temp.nc")

In [10]: v
Out[10]: <class 'netCDF4._netCDF4.Variable'>
float64 lon(lon)
unlimited dimensions: lon
current shape = (0,)
filling on, default _FillValue of 9.969209968386869e+36 used
```
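The session above boils down to a lifetime rule: a netCDF4.Variable handle is only valid while its parent Dataset is open, and relying on a stale handle resolving again after a reopen (as Out[10] happens to) is fragile. A stdlib-only sketch of the safe pattern, using stand-in classes (FakeVariable and FakeDataset are hypothetical, not the netCDF4 API):

``` python
class FakeVariable:
    """Stand-in for netCDF4.Variable; valid only while its parent is open."""
    def __init__(self, dataset):
        self._dataset = dataset

    def __repr__(self):
        if self._dataset.closed:
            # mirrors the RuntimeError in Out[8] above
            raise RuntimeError("NetCDF: Not a valid ID")
        return "<FakeVariable float64 lon(lon)>"


class FakeDataset:
    """Stand-in for netCDF4.Dataset (hypothetical, for illustration only)."""
    def __init__(self, path):
        self.path = path
        self.closed = False
        self.variables = {"lon": FakeVariable(self)}

    def close(self):
        self.closed = True


a = FakeDataset("temp.nc")
v = a.variables["lon"]
a.close()
try:
    repr(v)                  # stale handle: parent dataset is closed
except RuntimeError as e:
    print(e)                 # NetCDF: Not a valid ID

a = FakeDataset("temp.nc")   # "reopen" the file...
v = a.variables["lon"]       # ...and re-fetch the variable from the new Dataset
print(repr(v))               # <FakeVariable float64 lon(lon)>
```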
id: 143338384
html_url: https://github.com/pydata/xarray/issues/463#issuecomment-143338384
user: cpaulik (380927)
created_at: 2015-09-25T20:02:42Z
updated_at: 2015-09-25T20:02:42Z
author_association: NONE
body:

I've only put the try/except there to conditionally set the breakpoint. How does it make a difference whether self.store.close is called? If it is not called, then the dataset remains open, which should not cause the weird behaviour reported above? Nevertheless, I have updated my branch to use a contextmanager because it is a better solution, but I still see the strange behaviour where merely printing the variable alters the test outcome.
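For reference, the contextmanager pattern mentioned above can be sketched like this. FakeDataset is a hypothetical stand-in for whatever the store opens (a real backend would construct a netCDF4.Dataset here); all the pattern needs is a close() method:

``` python
import contextlib


class FakeDataset:
    """Hypothetical stand-in for an opened file handle."""
    def __init__(self, path):
        self.path = path
        self.closed = False

    def close(self):
        self.closed = True


@contextlib.contextmanager
def managed_dataset(path):
    ds = FakeDataset(path)
    try:
        yield ds
    finally:
        ds.close()  # runs even if the body raises, unlike a bare try/except


with managed_dataset("temp.nc") as ds:
    assert not ds.closed
assert ds.closed  # closed on exit, no matter how the block ended
```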
id: 143222580
html_url: https://github.com/pydata/xarray/issues/463#issuecomment-143222580
user: cpaulik (380927)
created_at: 2015-09-25T13:27:59Z
updated_at: 2015-09-25T13:27:59Z
author_association: NONE
body:

I've pushed a few commits trying this out to https://github.com/cpaulik/xray/tree/closing_netcdf_backend . I can open a WIP PR if this would be easier to discuss there.

There are, however, a few tests that keep failing and I cannot figure out why. E.g.: if I set a breakpoint at line 941 of dataset.py and just continue, the test fails. If I however evaluate

The error I get when running the test without interference is:

``` shell
test_backends.py::NetCDF4ViaDaskDataTest::test_compression_encoding FAILED

====================================================== FAILURES =======================================================
_________________________________ NetCDF4ViaDaskDataTest.test_compression_encoding ____________________________________

self = <xray.test.test_backends.NetCDF4ViaDaskDataTest testMethod=test_compression_encoding>
/usr/lib/python2.7/contextlib.py:17: in __enter__
    return self.gen.next()
test_backends.py:596: in roundtrip
    yield ds.chunk()
../core/dataset.py:942: in chunk
    for k, v in self.variables.items()])
../core/dataset.py:935: in maybe_chunk
    token2 = tokenize(name, token if token else var._data)
/home/cpa/.virtualenvs/xray/local/lib/python2.7/site-packages/dask/base.py:152: in tokenize
    return md5(str(tuple(map(normalize_token, args))).encode()).hexdigest()
../core/indexing.py:301: in __repr__
    (type(self).__name__, self.array, self.key))
../core/utils.py:377: in __repr__
    return '%s(array=%r)' % (type(self).__name__, self.array)
../core/indexing.py:301: in __repr__
    (type(self).__name__, self.array, self.key))
../core/utils.py:377: in __repr__
    return '%s(array=%r)' % (type(self).__name__, self.array)
netCDF4/_netCDF4.pyx:2931: in netCDF4._netCDF4.Variable.__repr__ (netCDF4/_netCDF4.c:25068)
    ???
netCDF4/_netCDF4.pyx:2938: in netCDF4._netCDF4.Variable.__unicode__ (netCDF4/_netCDF4.c:25243)
    ???
netCDF4/_netCDF4.pyx:3059: in netCDF4._netCDF4.Variable.dimensions.__get__ (netCDF4/_netCDF4.c:27486)
    ???
netCDF4/_netCDF4.pyx:2994: RuntimeError
============================================== 1 failed in 0.50 seconds ===============================================
```
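The traceback hints at why a breakpoint or a print changes the test outcome: dask's tokenize() stringifies its arguments, and the lazy wrapper's __repr__ embeds repr(self.array), which reaches through to the closed netCDF variable. A stripped-down sketch of that chain (class names here are illustrative, not xray's actual ones):

``` python
class ClosedVariable:
    """Stand-in for a netCDF4.Variable whose file has already been closed."""
    def __repr__(self):
        raise RuntimeError("NetCDF: Not a valid ID")


class LazyArrayWrapper:
    """Sketch of a lazy indexing wrapper: its __repr__ embeds
    repr(self.array), like the frames from core/indexing.py above."""
    def __init__(self, array):
        self.array = array

    def __repr__(self):
        return '%s(array=%r)' % (type(self).__name__, self.array)


wrapped = LazyArrayWrapper(ClosedVariable())

# Anything that stringifies the wrapper -- a debugger display, a print, or
# str(tuple(...)) inside tokenize() -- ends up touching the closed file:
try:
    repr(wrapped)
except RuntimeError as e:
    print(e)  # NetCDF: Not a valid ID
```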
id: 142637232
html_url: https://github.com/pydata/xarray/issues/463#issuecomment-142637232
user: cpaulik (380927)
created_at: 2015-09-23T15:19:36Z
updated_at: 2015-09-23T15:19:36Z
author_association: NONE
body:

I've run into the same problem and have been looking at the netCDF backend. A solution does not seem to be so easy as to open and close the file in the

Short of decorating all the functions of the netCDF4 package, I cannot think of a workable solution to this. But maybe I'm overlooking something fundamental.
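One workable direction for the "too many files" limit, short of decorating every netCDF4 function, is to route all file access through a small cache that keeps at most N handles open and transparently closes the least recently used one. This is only a sketch of the idea, not what the thread settled on; FileHandleCache and its opener protocol are hypothetical, not xray API:

``` python
from collections import OrderedDict


class FileHandleCache:
    """Keep at most `maxsize` file handles open; close the least recently
    used handle when room is needed. `opener` is any callable returning an
    object with a close() method (e.g. netCDF4.Dataset in a real backend)."""
    def __init__(self, opener, maxsize=128):
        self.opener = opener
        self.maxsize = maxsize
        self._open = OrderedDict()

    def get(self, path):
        if path in self._open:
            self._open.move_to_end(path)       # mark as recently used
        else:
            if len(self._open) >= self.maxsize:
                _, oldest = self._open.popitem(last=False)
                oldest.close()                 # evict the coldest handle
            self._open[path] = self.opener(path)
        return self._open[path]


# Usage with a trivial stand-in opener:
class Handle:
    def __init__(self, path):
        self.path, self.closed = path, False

    def close(self):
        self.closed = True


cache = FileHandleCache(Handle, maxsize=2)
a, b = cache.get("a.nc"), cache.get("b.nc")
c = cache.get("c.nc")   # evicts a.nc, the least recently used handle
print(a.closed)         # True
```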