issue_comments
27 rows where author_association = "CONTRIBUTOR" and user = 1794715 (ebrevdo), sorted by updated_at descending
Each record below gives the comment id and created_at timestamp, the comment's html_url, the issue title, and the comment body. Constant columns are omitted: user is ebrevdo (1794715), author_association is CONTRIBUTOR, performed_via_github_app is empty, and every reactions cell is all zeros.

73815254 · 2015-02-11T00:39:55Z
https://github.com/pydata/xarray/issues/319#issuecomment-73815254
Issue: Add head(), tail() and thin() methods?

Seems like for, e.g., head, you can pass either a single dimension or multiple ones (as **kwargs or a dictionary) and use those as the starting dimensions. That said, on naming conventions: for tensors the most common convention is definitely slice() (which is implemented here as isel). head/tail can be implemented in terms of slice(), e.g.:

ds.slice(dim1=3, dim2=(1, 4), dim3=(1, None, 5))
ds.slice({'dim1': 3, 'dim2': (1, 4), 'dim3': (1, None, 5)})

head/tail/whatever are then easy calls to this, and you can say so in the documentation. As a result, people won't get confused, because they understand slice.

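The call forms above can be sketched in plain Python: the `normalize_slices` helper below is hypothetical (not part of xarray), and shows how slice()-style specs (an int n for "the first n elements", or a (start, stop[, step]) tuple) could be turned into the slice objects that an isel-style method consumes.

```python
def normalize_slices(**specs):
    """Turn slice()-style specs into Python slice objects (sketch).

    An int n means "the first n elements"; a tuple is (start, stop[, step]).
    """
    out = {}
    for dim, spec in specs.items():
        if isinstance(spec, slice):
            out[dim] = spec
        elif isinstance(spec, int):
            out[dim] = slice(spec)
        else:
            out[dim] = slice(*spec)
    return out


def head(**dims):
    # head(dim1=3) is just slice() with an integer spec per dimension
    return normalize_slices(**dims)
```

Under this sketch, `head(x=3)` reduces to `{'x': slice(3)}`, so head/tail really are thin wrappers over the one primitive.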
73810816 · 2015-02-11T00:00:14Z
https://github.com/pydata/xarray/issues/319#issuecomment-73810816
Issue: Add head(), tail() and thin() methods?

Clojure conventions: .take and .take-last get the first n and last n elements.
pandas/ndarray conventions: .take([3,4,5]) selects rows 3, 4, and 5.
You probably want to be consistent with one of these.

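A minimal side-by-side sketch of the two conventions, in plain Python (the function names here just mirror the comment; they are not pandas or Clojure APIs):

```python
def take(seq, indices):
    # pandas/ndarray-style take: select elements by integer position
    return [seq[i] for i in indices]


def take_last(seq, n):
    # Clojure-style take-last: the final n elements
    return list(seq[-n:]) if n > 0 else []
```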
43760243 · 2014-05-21T14:21:22Z
https://github.com/pydata/xarray/issues/138#issuecomment-43760243
Issue: keep attrs when reducing xray objects

Why not options like 'first', 'common', etc.?

37432906 · 2014-03-12T16:50:18Z
https://github.com/pydata/xarray/pull/62#issuecomment-37432906
Issue: Modified Dataset.replace to replace a dictionary of variables

I definitely like the inplace idea. We could also use the function name update in this case.

37376646 · 2014-03-12T05:24:06Z
https://github.com/pydata/xarray/pull/62#issuecomment-37376646
Issue: Modified Dataset.replace to replace a dictionary of variables

True. Maybe stick with replace, and we can put filter on the to-do list? I may work on it tomorrow.

37375935 · 2014-03-12T05:05:00Z
https://github.com/pydata/xarray/pull/62#issuecomment-37375935
Issue: Modified Dataset.replace to replace a dictionary of variables

Don't dicts have an update function that works this way?

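For reference, this is exactly how dict.update behaves: values for existing keys are replaced in place, and new keys are inserted.

```python
# Hypothetical attribute dict, just to illustrate dict.update semantics
variables = {'temp': 'K', 'precip': 'mm'}
variables.update({'precip': 'cm', 'wind': 'm/s'})
# 'precip' is replaced, 'wind' is added, 'temp' is untouched
```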
37358215 · 2014-03-11T23:13:12Z
https://github.com/pydata/xarray/pull/62#issuecomment-37358215
Issue: Modified Dataset.replace to replace a dictionary of variables

This is in response to your first bullet ("Create a new dataset based on some (but not all) variables from an existing dataset"): pandas has a filter() for that, and it would be useful to have here as well.

37354993 · 2014-03-11T22:36:32Z
https://github.com/pydata/xarray/pull/62#issuecomment-37354993
Issue: Modified Dataset.replace to replace a dictionary of variables

Is part 1 similar to the pandas .filter operator? That one has nice keywords: 'like', 'regex', etc.

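Those pandas-style keywords could be mimicked for dataset variable names with a small sketch (`filter_names` is hypothetical, not an xarray or pandas function):

```python
import re


def filter_names(names, like=None, regex=None):
    # DataFrame.filter-style selection: substring match ('like') or
    # regex search ('regex'); with neither, return everything
    if like is not None:
        return [name for name in names if like in name]
    if regex is not None:
        pattern = re.compile(regex)
        return [name for name in names if pattern.search(name)]
    return list(names)
```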
36484363 · 2014-03-03T06:21:33Z
https://github.com/pydata/xarray/issues/39#issuecomment-36484363
Issue: OpenDAP loaded Dataset has lon/lats with type 'object'.

Indices also have an .inferred_type getter. Unfortunately it doesn't seem to return true type names:

```
In [13]: pandas.Index([1,2,3]).inferred_type
Out[13]: 'integer'

In [14]: pandas.Index([1,2,3.5]).inferred_type
Out[14]: 'mixed-integer-float'

In [15]: pandas.Index(["ab","cd"]).inferred_type
Out[15]: 'string'

In [16]: pandas.Index(["ab","cd",3]).inferred_type
Out[16]: 'mixed-integer'
```

36279285 · 2014-02-27T19:14:56Z
https://github.com/pydata/xarray/issues/26#issuecomment-36279285
Issue: Allow the ability to add/persist details of how a dataset is stored.

Some of these are specific to the datastore: nc3/nc4 may care about integer packing and masking, but the grib format may not. Maybe that's where these things should really reside, as aspects of the datastore object. Not sure about units, though. Either way, ideally these would be transparent to the user of the xarray/dataset objects, except as parameters when reading/writing.

36196139 · 2014-02-27T00:28:13Z
https://github.com/pydata/xarray/issues/25#issuecomment-36196139
Issue: Consistent rules for handling merges between variables with different attributes

Agreed. I would avoid that kind of thing too. Maybe issue a stern warning for all conflicting attributes, saying that they will be dropped from the new variable. For units specifically, Python has a variety of unit libraries that wrap numpy arrays and can probably do some magic; not sure we really want to do that, though.

36193148 · 2014-02-26T23:45:57Z
https://github.com/pydata/xarray/issues/25#issuecomment-36193148
Issue: Consistent rules for handling merges between variables with different attributes

Err, which attributes conflict.

36193126 · 2014-02-26T23:45:42Z
https://github.com/pydata/xarray/issues/25#issuecomment-36193126
Issue: Consistent rules for handling merges between variables with different attributes

I don't think that example has your intended effect; I don't know why anyone would add something with units of kelvin to something with units of celsius. I understand what you're saying, so maybe we should just throw a stern warning, every single time, listing which units conflict and how.

36190935 · 2014-02-26T23:23:39Z
https://github.com/pydata/xarray/issues/25#issuecomment-36190935
Issue: Consistent rules for handling merges between variables with different attributes

Also, there are plenty of other bits where you don't want conflicts. Imagine that you have variables indexed on different basemap projections. Creating exceptions to the rule seems like a bit of a rabbit hole.

36190079 · 2014-02-26T23:13:42Z
https://github.com/pydata/xarray/issues/25#issuecomment-36190079
Issue: Consistent rules for handling merges between variables with different attributes

This is an option, but these lists will break if we try to express other data formats using these conventions. For example, grib likely has other conventions. We would have to overload attribute or variable handling depending on what the underlying datastore is.

36188397 · 2014-02-26T22:55:10Z
https://github.com/pydata/xarray/issues/25#issuecomment-36188397
Issue: Consistent rules for handling merges between variables with different attributes

It depends on whether x + y does attribute checking before performing the merge. Again, if units don't match, then maybe you shouldn't add. I always favor the strictest approach, so you don't get strange surprises.

36186341 · 2014-02-26T22:33:31Z (updated 2014-02-26T22:39:49Z)
https://github.com/pydata/xarray/issues/24#issuecomment-36186341
Issue: TODO: xray objects support the pickle protocol

```
In [73]: h = cPickle.dumps(f)
TypeError                                 Traceback (most recent call last)
<ipython-input-73-a875f963f7d4> in <module>()
----> 1 h = cPickle.dumps(f)

/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.pyc in _reduce_ex(self, proto)
     68     else:
     69         if base is self.__class__:
---> 70             raise TypeError, "can't pickle %s objects" % base.__name__
     71         state = base(self)
     72     args = (self.__class__, base, state)

TypeError: can't pickle Variable objects
```

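The failure above comes from Python 2's copy_reg fallback for classes without explicit pickle support. One common fix is to give the class __getstate__/__setstate__; a minimal sketch (this Variable is a toy stand-in, not xray's actual class):

```python
import pickle


class Variable(object):
    """Toy stand-in with explicit pickle support."""

    def __init__(self, dims, data):
        self.dims = dims
        self.data = data

    def __getstate__(self):
        # Return a plain dict of state; pickle stores this instead of
        # trying to reduce the instance through its base class
        return {'dims': self.dims, 'data': self.data}

    def __setstate__(self, state):
        self.__dict__.update(state)


v = Variable(('x',), [1, 2, 3])
roundtripped = pickle.loads(pickle.dumps(v))
```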
36186918 · 2014-02-26T22:39:30Z
https://github.com/pydata/xarray/issues/25#issuecomment-36186918
Issue: Consistent rules for handling merges between variables with different attributes

I would default to 3, and in the exception suggest using a different merge option. Imagine merging two datasets with different _FillValue, unit, or compression attributes.

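The strict-by-default policy argued for here can be sketched as an attribute merge with pluggable strategies (`merge_attrs` and its strategy names are hypothetical, not xarray API):

```python
def merge_attrs(a, b, strategy='error'):
    """Merge two attribute dicts (sketch).

    'error' raises on any conflicting key (the strict default),
    'drop' silently drops conflicting keys, 'first' keeps a's value.
    """
    merged = dict(a)
    for key, value in b.items():
        if key in merged and merged[key] != value:
            if strategy == 'error':
                raise ValueError('conflicting attribute %r: %r != %r'
                                 % (key, merged[key], value))
            elif strategy == 'drop':
                del merged[key]
            # 'first': keep the existing value
        else:
            merged[key] = value
    return merged
```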
36186641 · 2014-02-26T22:36:42Z
https://github.com/pydata/xarray/issues/24#issuecomment-36186641
Issue: TODO: xray objects support the pickle protocol

36186205 · 2014-02-26T22:32:06Z
https://github.com/pydata/xarray/issues/23#issuecomment-36186205
Issue: Cross-platform in-memory serialization of netcdf4 (like the current scipy-based dumps)

Looks like this may be the only option. Based on my tests, netCDF4 is strongly antithetical to any kind of streams/piped buffers. If we go the hdf5 route, we'd have to reimplement the CDM/netcdf4 model on top of hdf5, no?

36183864 · 2014-02-26T22:10:01Z
https://github.com/pydata/xarray/pull/21#issuecomment-36183864
Issue: Cf time units persist

I'd preserve as many of the original variables as possible, similarly to preserving _FillValue; you'll need them if you want to preserve the file structure.

35839091 · 2014-02-23T18:43:08Z
https://github.com/pydata/xarray/pull/15#issuecomment-35839091
Issue: Version now contains git commit ID

Fair enough.

35838795 · 2014-02-23T18:32:44Z
https://github.com/pydata/xarray/pull/15#issuecomment-35838795
Issue: Version now contains git commit ID

Hmm. Would it be cleaner to use the git-python library, rather than running a subprocess with possibly unknown versions of git? In addition, you could make it a requirement only for the build, not for installation.

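For comparison, the subprocess approach being questioned looks roughly like this sketch (with GitPython, the equivalent is along the lines of `Repo('.').head.commit.hexsha`):

```python
import subprocess


def git_commit_id(path='.'):
    # Shell out to git; returns None if git is missing or path is not a repo
    try:
        out = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=path,
                                      stderr=subprocess.STDOUT)
    except (OSError, subprocess.CalledProcessError):
        return None
    return out.decode('ascii', 'replace').strip()
```

The trade-off the comment raises: this depends on whatever git binary is on PATH, while a library dependency pins the behavior at build time.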
35145419 · 2014-02-15T02:39:21Z
https://github.com/pydata/xarray/pull/12#issuecomment-35145419
Issue: Stephan's sprintbattical

Thanks for looking at that. I'll do a more thorough evaluation over the weekend!

35133287 · 2014-02-14T22:55:58Z
https://github.com/pydata/xarray/pull/12#issuecomment-35133287
Issue: Stephan's sprintbattical

Looks like a great change! I'm seeing some failing tests:

```
~/dev/scidata (DataView)$ python setup.py test
running test
running egg_info
writing requirements to src/xray.egg-info/requires.txt
writing src/xray.egg-info/PKG-INFO
writing top-level names to src/xray.egg-info/top_level.txt
writing dependency_links to src/xray.egg-info/dependency_links.txt
reading manifest file 'src/xray.egg-info/SOURCES.txt'
writing manifest file 'src/xray.egg-info/SOURCES.txt'
running build_ext
test_1d_math (test.test_array.TestArray) ... ok
test_aggregate (test.test_array.TestArray) ... ERROR
test_array_interface (test.test_array.TestArray) ... ok
test_broadcasting_failures (test.test_array.TestArray) ... ok
test_broadcasting_math (test.test_array.TestArray) ... ok
test_collapse (test.test_array.TestArray) ... ok
test_data (test.test_array.TestArray) ... ok
test_from_stack (test.test_array.TestArray) ... ok
test_groupby (test.test_array.TestArray) ... ERROR
test_indexed_by (test.test_array.TestArray) ... ok
test_inplace_math (test.test_array.TestArray) ... ok
test_items (test.test_array.TestArray) ... ok
test_properties (test.test_array.TestArray) ... ok
test_repr (test.test_array.TestArray) ... ok
test_transpose (test.test_array.TestArray) ... ok
test_attributes (test.test_dataset.DataTest) ... SKIP: attribute checks are not yet backend specific
test_coordinate (test.test_dataset.DataTest) ... ok
test_copy (test.test_dataset.DataTest) ... ok
test_dimension (test.test_dataset.DataTest) ... ok
test_getitem (test.test_dataset.DataTest) ... ERROR
test_indexed_by (test.test_dataset.DataTest) ... ok
test_init (test.test_dataset.DataTest) ... ok
test_iterator (test.test_dataset.DataTest) ... ok
test_labeled_by (test.test_dataset.DataTest) ... ERROR
test_merge (test.test_dataset.DataTest) ... ok
test_rename (test.test_dataset.DataTest) ... ok
test_repr (test.test_dataset.DataTest) ... ok
test_select (test.test_dataset.DataTest) ... ok
test_setitem (test.test_dataset.DataTest) ... ok
test_to_dataframe (test.test_dataset.DataTest) ... ok
test_unselect (test.test_dataset.DataTest) ... SKIP: need to write this test
test_variable (test.test_dataset.DataTest) ... ok
test_variable_indexing (test.test_dataset.DataTest) ... ok
test_write_store (test.test_dataset.DataTest) ... ok
test_attributes (test.test_dataset.NetCDF4DataTest) ... SKIP: attribute checks are not yet backend specific
test_coordinate (test.test_dataset.NetCDF4DataTest) ... ok
test_copy (test.test_dataset.NetCDF4DataTest) ... ok
test_dimension (test.test_dataset.NetCDF4DataTest) ... ok
test_dump_and_open_dataset (test.test_dataset.NetCDF4DataTest) ... ok
test_getitem (test.test_dataset.NetCDF4DataTest) ... ERROR
test_indexed_by (test.test_dataset.NetCDF4DataTest) ... ok
test_init (test.test_dataset.NetCDF4DataTest) ... ok
test_iterator (test.test_dataset.NetCDF4DataTest) ... ok
test_labeled_by (test.test_dataset.NetCDF4DataTest) ... ERROR
test_merge (test.test_dataset.NetCDF4DataTest) ... ok
test_rename (test.test_dataset.NetCDF4DataTest) ... ok
test_repr (test.test_dataset.NetCDF4DataTest) ... ok
test_select (test.test_dataset.NetCDF4DataTest) ... ok
test_setitem (test.test_dataset.NetCDF4DataTest) ... ok
test_to_dataframe (test.test_dataset.NetCDF4DataTest) ... ok
test_unselect (test.test_dataset.NetCDF4DataTest) ... SKIP: need to write this test
test_variable (test.test_dataset.NetCDF4DataTest) ... ok
test_variable_indexing (test.test_dataset.NetCDF4DataTest) ... ok
test_write_store (test.test_dataset.NetCDF4DataTest) ... ok
test_attributes (test.test_dataset.ScipyDataTest) ... SKIP: attribute checks are not yet backend specific
test_coordinate (test.test_dataset.ScipyDataTest) ... ok
test_copy (test.test_dataset.ScipyDataTest) ... ok
test_dimension (test.test_dataset.ScipyDataTest) ... ok
test_dump_and_open_dataset (test.test_dataset.ScipyDataTest) ... FAIL
test_getitem (test.test_dataset.ScipyDataTest) ... ERROR
test_indexed_by (test.test_dataset.ScipyDataTest) ... ok
test_init (test.test_dataset.ScipyDataTest) ... ok
test_iterator (test.test_dataset.ScipyDataTest) ... ok
test_labeled_by (test.test_dataset.ScipyDataTest) ... ERROR
test_merge (test.test_dataset.ScipyDataTest) ... ok
test_rename (test.test_dataset.ScipyDataTest) ... ok
test_repr (test.test_dataset.ScipyDataTest) ... ok
test_select (test.test_dataset.ScipyDataTest) ... ok
test_setitem (test.test_dataset.ScipyDataTest) ... ok
test_to_dataframe (test.test_dataset.ScipyDataTest) ... ok
test_unselect (test.test_dataset.ScipyDataTest) ... SKIP: need to write this test
test_variable (test.test_dataset.ScipyDataTest) ... ok
test_variable_indexing (test.test_dataset.ScipyDataTest) ... ok
test_write_store (test.test_dataset.ScipyDataTest) ... ok
test.test_dataset.create_test_data ... ok
test_aggregate (test.test_dataset_array.TestDatasetArray) ... FAIL
test_array_interface (test.test_dataset_array.TestDatasetArray) ... ok
test_collapse (test.test_dataset_array.TestDatasetArray) ... ok
test_dataset_getitem (test.test_dataset_array.TestDatasetArray) ... ok
test_from_stack (test.test_dataset_array.TestDatasetArray) ... ok
test_groupby (test.test_dataset_array.TestDatasetArray) ... FAIL
test_indexed_by (test.test_dataset_array.TestDatasetArray) ... ok
test_inplace_math (test.test_dataset_array.TestDatasetArray) ... ok
test_intersection (test.test_dataset_array.TestDatasetArray) ... FAIL
test_item_math (test.test_dataset_array.TestDatasetArray) ... ok
test_items (test.test_dataset_array.TestDatasetArray) ... ok
test_iteration (test.test_dataset_array.TestDatasetArray) ... ok
test_labeled_by (test.test_dataset_array.TestDatasetArray) ... FAIL
test_loc (test.test_dataset_array.TestDatasetArray) ... FAIL
test_math (test.test_dataset_array.TestDatasetArray) ... ok
test_properties (test.test_dataset_array.TestDatasetArray) ... ok
test_refocus (test.test_dataset_array.TestDatasetArray) ... ok
test_renamed (test.test_dataset_array.TestDatasetArray) ... ok
test_frozen (test.test_utils.TestDictionaries) ... ok
test_ordered_dict_intersection (test.test_utils.TestDictionaries) ... ok
test_safe (test.test_utils.TestDictionaries) ... ok
test_unsafe (test.test_utils.TestDictionaries) ... ok
test_expanded_indexer (test.test_utils.TestIndexers) ... ok
test_orthogonal_indexer (test.test_utils.TestIndexers) ... ok
test (test.test_utils.TestNum2DatetimeIndex) ... ERROR

ERROR: test_aggregate (test.test_array.TestArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_array.py", line 240, in test_aggregate
    self.assertVarEqual(expected_unique, actual_unique)
  File "/Users/ebrevdo/dev/scidata/test/__init__.py", line 10, in assertVarEqual
    self.assertTrue(utils.variable_equal(v1, v2))
  File "/Users/ebrevdo/dev/scidata/src/xray/utils.py", line 132, in variable_equal
    return np.array_equal(data1, data2)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/numeric.py", line 1977, in array_equal
    return bool(logical_and.reduce(equal(a1,a2).ravel()))
AttributeError: 'NotImplementedType' object has no attribute 'ravel'

ERROR: test_groupby (test.test_array.TestArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_array.py", line 220, in test_groupby
    self.assertVarEqual(expected_unique, grouped.unique_coord)
  File "/Users/ebrevdo/dev/scidata/test/__init__.py", line 10, in assertVarEqual
    self.assertTrue(utils.variable_equal(v1, v2))
  File "/Users/ebrevdo/dev/scidata/src/xray/utils.py", line 132, in variable_equal
    return np.array_equal(data1, data2)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/numeric.py", line 1977, in array_equal
    return bool(logical_and.reduce(equal(a1,a2).ravel()))
AttributeError: 'NotImplementedType' object has no attribute 'ravel'

ERROR: test_getitem (test.test_dataset.DataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 325, in test_getitem
    {'units': 'days since 2000-01-01'})
  File "/Users/ebrevdo/dev/scidata/src/xray/dataset.py", line 464, in create_variable
    return self.add_variable(name, v)
  File "/Users/ebrevdo/dev/scidata/src/xray/dataset.py", line 534, in add_variable
    return self.set_variable(name, var)
  File "/Users/ebrevdo/dev/scidata/src/xray/dataset.py", line 595, in set_variable
    self.indices.build_index(name)
  File "/Users/ebrevdo/dev/scidata/src/xray/dataset.py", line 109, in build_index
    self.cache[key] = self.dataset._create_index(key)
  File "/Users/ebrevdo/dev/scidata/src/xray/dataset.py", line 224, in _create_index
    attr.get('calendar'))
  File "/Users/ebrevdo/dev/scidata/src/xray/utils.py", line 106, in num2datetimeindex
    dates = first_time_delta * num_delta + np.datetime64(first_dates[0])
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

ERROR: test_labeled_by (test.test_dataset.DataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 234, in test_labeled_by
    {'units': 'days since 2000-01-01'})
  [same create_variable -> num2datetimeindex chain as above]
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

ERROR: test_getitem (test.test_dataset.NetCDF4DataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 325, in test_getitem
    {'units': 'days since 2000-01-01'})
  [same create_variable -> num2datetimeindex chain as above]
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

ERROR: test_labeled_by (test.test_dataset.NetCDF4DataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 234, in test_labeled_by
    {'units': 'days since 2000-01-01'})
  [same create_variable -> num2datetimeindex chain as above]
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

ERROR: test_getitem (test.test_dataset.ScipyDataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 325, in test_getitem
    {'units': 'days since 2000-01-01'})
  [same create_variable -> num2datetimeindex chain as above]
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

ERROR: test_labeled_by (test.test_dataset.ScipyDataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 234, in test_labeled_by
    {'units': 'days since 2000-01-01'})
  [same create_variable -> num2datetimeindex chain as above]
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

ERROR: test (test.test_utils.TestNum2DatetimeIndex)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_utils.py", line 68, in test
    actual = utils.num2datetimeindex(num_dates, units, calendar)
  File "/Users/ebrevdo/dev/scidata/src/xray/utils.py", line 106, in num2datetimeindex
    dates = first_time_delta * num_delta + np.datetime64(first_dates[0])
TypeError: ufunc 'multiply' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

FAIL: test_dump_and_open_dataset (test.test_dataset.ScipyDataTest)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset.py", line 404, in test_dump_and_open_dataset
    self.assertEquals(expected, actual)
AssertionError: <xray.Dataset (time: 1000, @dim1: 100, @dim2: 50, @dim3: 10): var1 var2 var3> != <xray.Dataset (@dim2: 50, @dim3: 10, @dim1: 100, time: 1000): var1 var3 var2>

FAIL: test_aggregate (test.test_dataset_array.TestDatasetArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 213, in test_aggregate
    self.assertViewEqual(expected, actual)
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 9, in assertViewEqual
    self.assertEqual(dv1.dataset, dv2.dataset)
AssertionError: <xray.Dataset (@x: 10, @abc: 3): foo> != <xray.Dataset (@x: 10, @abc: 3): foo>

FAIL: test_groupby (test.test_dataset_array.TestDatasetArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 189, in test_groupby
    grouped.collapse(np.sum, dimension=None))
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 9, in assertViewEqual
    self.assertEqual(dv1.dataset, dv2.dataset)
AssertionError: <xray.Dataset (@abc: 3): foo> != <xray.Dataset (@abc: 3): foo>

FAIL: test_intersection (test.test_dataset_array.TestDatasetArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 240, in test_intersection
    self.assertViewEqual(dv1, self.dv[:5])
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 9, in assertViewEqual
    self.assertEqual(dv1.dataset, dv2.dataset)
AssertionError: <xray.Dataset (@x: 5, @y: 20): foo> != <xray.Dataset (@x: 5, @y: 20): foo>

FAIL: test_labeled_by (test.test_dataset_array.TestDatasetArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 75, in test_labeled_by
    self.assertViewEqual(self.dv, self.dv.labeled_by(x=slice(None)))
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 9, in assertViewEqual
    self.assertEqual(dv1.dataset, dv2.dataset)
AssertionError: <xray.Dataset (@x: 10, @y: 20): foo> != <xray.Dataset (@x: 10, @y: 20): foo>

FAIL: test_loc (test.test_dataset_array.TestDatasetArray)
Traceback (most recent call last):
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 81, in test_loc
    self.assertViewEqual(self.dv[:3], self.dv.loc[:'c'])
  File "/Users/ebrevdo/dev/scidata/test/test_dataset_array.py", line 9, in assertViewEqual
    self.assertEqual(dv1.dataset, dv2.dataset)
AssertionError: <xray.Dataset (@x: 3, @y: 20): foo> != <xray.Dataset (@x: 3, @y: 20): foo>

Ran 100 tests in 1.248s

FAILED (failures=6, errors=9, skipped=6)
```
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Stephan's sprintbattical 27625970 | |
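The repeated TypeError in the test output above comes from `num2datetimeindex`, at `dates = first_time_delta * num_delta + np.datetime64(first_dates[0])`. A minimal sketch of the datetime arithmetic involved (illustrative values, not the original xray code): scaling a `timedelta64` step by an integer offset array and anchoring it at a `datetime64` is supported, whereas a `datetime64` left operand gives `np.multiply` no usable loop and raises a TypeError of this kind (the exact message varies across numpy versions).

```python
import numpy as np

# Works: a timedelta64 step scaled by integer offsets, anchored at a datetime64.
step = np.timedelta64(1, 'D')
offsets = np.array([0, 1, 2])
dates = step * offsets + np.datetime64('2000-01-01')
# dates is a datetime64[D] array starting at 2000-01-01.

# Fails: multiplying a datetime64 (rather than a timedelta64) is undefined,
# so np.multiply rejects the operand types with a TypeError.
bad = None
try:
    np.datetime64('2000-01-01') * offsets
except TypeError as exc:
    bad = exc
print(dates, bad)
```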
29719840 | https://github.com/pydata/xarray/pull/2#issuecomment-29719840 | https://api.github.com/repos/pydata/xarray/issues/2 | MDEyOklzc3VlQ29tbWVudDI5NzE5ODQw | ebrevdo 1794715 | 2013-12-03T15:37:56Z | 2013-12-03T15:37:56Z | CONTRIBUTOR | Nvm. Will review over the next day or two. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Data objects now have a swappable backend store. 23272260 | |
29671962 | https://github.com/pydata/xarray/pull/2#issuecomment-29671962 | https://api.github.com/repos/pydata/xarray/issues/2 | MDEyOklzc3VlQ29tbWVudDI5NjcxOTYy | ebrevdo 1794715 | 2013-12-03T00:13:01Z | 2013-12-03T00:13:01Z | CONTRIBUTOR | In that case, planning to close this one? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Data objects now have a swappable backend store. 23272260 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
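The schema above can be exercised directly with Python's built-in sqlite3 module. This sketch builds the table in memory, inserts one illustrative row (the id, user, and timestamp are taken from a comment listed above; the body is elided), and runs the query this page represents: comments where author_association = 'CONTRIBUTOR' and user = 1794715, sorted by updated_at descending.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One sample row; the body text is elided.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, updated_at, body)"
    " VALUES (29671962, 1794715, 'CONTRIBUTOR', '2013-12-03T00:13:01Z', '...')"
)

# The filter and sort behind this page.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments"
    " WHERE author_association = 'CONTRIBUTOR' AND user = 1794715"
    " ORDER BY updated_at DESC"
).fetchall()
print(rows)
```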