html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/245#issuecomment-58452925,https://api.github.com/repos/pydata/xarray/issues/245,58452925,MDEyOklzc3VlQ29tbWVudDU4NDUyOTI1,514053,2014-10-09T01:37:40Z,2014-10-09T01:37:40Z,CONTRIBUTOR,"re: using accessors such as get_variables. To be fair, this pull request didn't really switch the behavior, it was already very similar but with different names (store_variables, open_store_variable) that were now changed to get_variables etc. I chose to continue with getters and setters because it makes it fairly clear what needs to be implemented to make a new DataStore, and allows for some post processing such as _decode_variable_name and encoding/decoding. Its not entirely clear to me how that would fit into a properties based approach so sounds like a good follow up pull request to me. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,44718119
https://github.com/pydata/xarray/issues/212#issuecomment-54560323,https://api.github.com/repos/pydata/xarray/issues/212,54560323,MDEyOklzc3VlQ29tbWVudDU0NTYwMzIz,514053,2014-09-04T23:35:26Z,2014-09-04T23:35:26Z,CONTRIBUTOR,"I personally am not familiar with `pandas.Block` objects, so I find the name uniformative. That combined with the fact that renaming `Variable` to `Block` would break alignment with netCDF naming conventions (so confuse users coming from that background) makes me hesitant about the change. I think I'd be more excited about finding a better name for `noncoordinates` (fields? ) instead of renaming `variables` (which would also cause non-backwards compatibility) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,40225000
https://github.com/pydata/xarray/issues/167#issuecomment-46749127,https://api.github.com/repos/pydata/xarray/issues/167,46749127,MDEyOklzc3VlQ29tbWVudDQ2NzQ5MTI3,514053,2014-06-21T09:40:59Z,2014-06-21T09:40:59Z,CONTRIBUTOR,"At least temporarily you might consider this: ``` myds = xray.open_dataset(ds.dumps()) ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,36211623
https://github.com/pydata/xarray/pull/163#issuecomment-46275601,https://api.github.com/repos/pydata/xarray/issues/163,46275601,MDEyOklzc3VlQ29tbWVudDQ2Mjc1NjAx,514053,2014-06-17T07:28:45Z,2014-06-17T07:28:45Z,CONTRIBUTOR,"One possibility could be to have the encoding filtering only happen once if the variable was loaded from NetCDF4. Ie, if a variable with chunksizes encoding were loaded from file they it would be removed after the first attempt to index, afterwards all encodings persist. I've been experimenting with something along those lines but don't have it working perfectly yet. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,35762823
https://github.com/pydata/xarray/pull/163#issuecomment-46146675,https://api.github.com/repos/pydata/xarray/issues/163,46146675,MDEyOklzc3VlQ29tbWVudDQ2MTQ2Njc1,514053,2014-06-16T07:02:26Z,2014-06-16T07:02:26Z,CONTRIBUTOR,"Is there a reason why we don't just have it remove problematic encodings? Some encodings are certainly nice to persist (fill value, scale, offset, time units etc ...) 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,35762823 https://github.com/pydata/xarray/pull/153#issuecomment-45952834,https://api.github.com/repos/pydata/xarray/issues/153,45952834,MDEyOklzc3VlQ29tbWVudDQ1OTUyODM0,514053,2014-06-12T21:51:47Z,2014-06-12T21:51:47Z,CONTRIBUTOR,"So ... do you want me to remove the tests and only include the .data -> .values fix? Re: api. I'm working on a better storage scheme for reflectivity data, which involves CF decoding plus remapping some values. I could build my own data store which implements the netcdf store and adds the extra layer there .. but just using decode_cf_variable is far easier. In general, I can imagine a situation where we would want to allow users to provide their own decoding functions, but thats a larger project. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,35564268 https://github.com/pydata/xarray/pull/153#issuecomment-45943780,https://api.github.com/repos/pydata/xarray/issues/153,45943780,MDEyOklzc3VlQ29tbWVudDQ1OTQzNzgw,514053,2014-06-12T20:29:28Z,2014-06-12T20:29:28Z,CONTRIBUTOR,"I'm confused, are you saying we shouldn't add tests for internal functions? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,35564268 https://github.com/pydata/xarray/issues/138#issuecomment-43718217,https://api.github.com/repos/pydata/xarray/issues/138,43718217,MDEyOklzc3VlQ29tbWVudDQzNzE4MjE3,514053,2014-05-21T06:52:25Z,2014-05-21T06:52:25Z,CONTRIBUTOR,"Yeah I agree, seems like a great option to have. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,33942756 https://github.com/pydata/xarray/pull/134#issuecomment-43545468,https://api.github.com/repos/pydata/xarray/issues/134,43545468,MDEyOklzc3VlQ29tbWVudDQzNTQ1NDY4,514053,2014-05-19T19:27:26Z,2014-05-19T19:27:26Z,CONTRIBUTOR,"Also worth considering: how should datetime64[us] datetimes be handled? Currently they get cast to [ns] which, since datetimes do not, could get confusing. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,33772168 https://github.com/pydata/xarray/pull/134#issuecomment-43535578,https://api.github.com/repos/pydata/xarray/issues/134,43535578,MDEyOklzc3VlQ29tbWVudDQzNTM1NTc4,514053,2014-05-19T17:55:00Z,2014-05-19T17:55:00Z,CONTRIBUTOR,"Yeah all fixed. In #125 I went the route of forcing datetimes to be datetime64[ns]. This is probably part of a broader conversation, but doing so might save some future headaches. Of course ... it would also restrict us to nanosecond precision. Basically I feel like we should either force datetimes to be datetime64[ns] or make sure that operations on datetime objects preserve their type. Probably worth getting this in and picking that conversation back up if needed. In which case could you add tests which make sure variables with datetime objects are still datetime objects after concatenation? If those start getting cast to datetime[ns] it'll start get confusing for users. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,33772168 https://github.com/pydata/xarray/pull/134#issuecomment-43473367,https://api.github.com/repos/pydata/xarray/issues/134,43473367,MDEyOklzc3VlQ29tbWVudDQzNDczMzY3,514053,2014-05-19T07:35:53Z,2014-05-19T07:35:53Z,CONTRIBUTOR,"The reorganization does make things cleaner, but the behavior changed relative to #125. In particular, while this patch fixes concatenation with datetime64 times it doesn't work with datetimes: ``` In [2]: dates = [datetime.datetime(2011, 1, i + 1) for i in range(10)] In [3]: ds = xray.Dataset({'time': ('time', dates)}) In [4]: xray.Dataset.concat([ds.indexed(time=slice(0, 4)), ds.indexed(time=slice(4, 8))], 'time')['time'].values Out[4]: array([1293840000000000000L, 1293926400000000000L, 1294012800000000000L, 1294099200000000000L, 1294185600000000000L, 1294272000000000000L, 1294358400000000000L, 1294444800000000000L], dtype=object) ``` and not sure if this was broken before or not ... ``` In [5]: xray.Dataset.concat([x for _, x in ds.groupby('time')], 'time')['time'].values ValueError: cannot slice a 0-d array ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,33772168 https://github.com/pydata/xarray/pull/125#issuecomment-42951350,https://api.github.com/repos/pydata/xarray/issues/125,42951350,MDEyOklzc3VlQ29tbWVudDQyOTUxMzUw,514053,2014-05-13T13:02:22Z,2014-05-13T13:02:22Z,CONTRIBUTOR,"Yeah this gets tricky. Fixed part of the problem by reverting to using np.asarray instead of as_array_or_item in NumpyArrayWrapper. But I'm not sure thats the full solution, like you mentioned the problem is much deeper, though I don't think pushing the datetime nastiness into higher level functions (such as concat) is a great option. Also, I've been hoping to get the way dates are handled to be slightly more consistent, since as it currently stands its hard to know which data type dates are being stored as. ``` > d64 = np.datetime64(d) > print xray.Variable(['time'], [d]).dtype dtype('O') > print xray.Variable(['time'], [d64]).dtype dtype(' print d64.dtype dtype(' datetime.datetime(2014, 01, 01) This is definitely not high priority though. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,29067976 https://github.com/pydata/xarray/pull/62#issuecomment-37435670,https://api.github.com/repos/pydata/xarray/issues/62,37435670,MDEyOklzc3VlQ29tbWVudDM3NDM1Njcw,514053,2014-03-12T17:10:58Z,2014-03-12T17:10:58Z,CONTRIBUTOR,"Renaming set_variables to update seems reasonable to me. Its behavior seems similar enough to dict.update. Then a separate function replace could be a lightweight wrapper around update which makes a copy: obj = type(self)(variables, self.attributes) obj.update(_args, *_kwdargs) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,29220463 https://github.com/pydata/xarray/pull/59#issuecomment-37433623,https://api.github.com/repos/pydata/xarray/issues/59,37433623,MDEyOklzc3VlQ29tbWVudDM3NDMzNjIz,514053,2014-03-12T16:55:52Z,2014-03-12T16:55:52Z,CONTRIBUTOR,"I'll go ahead and merge this in to fix the bug ... but I wonder if there is any way we can avoid using np.datetime64 objects. They seem under-developed / broken. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,29067976 https://github.com/pydata/xarray/pull/44#issuecomment-36684248,https://api.github.com/repos/pydata/xarray/issues/44,36684248,MDEyOklzc3VlQ29tbWVudDM2Njg0MjQ4,514053,2014-03-04T22:04:07Z,2014-03-04T22:04:07Z,CONTRIBUTOR,"Yes much clearer, thanks. As soon as build passes I'll merge it in. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,28672803 https://github.com/pydata/xarray/issues/39#issuecomment-36484609,https://api.github.com/repos/pydata/xarray/issues/39,36484609,MDEyOklzc3VlQ29tbWVudDM2NDg0NjA5,514053,2014-03-03T06:28:32Z,2014-03-03T06:28:32Z,CONTRIBUTOR,"@shoyer You're right I can serialize the latitude object directly from that opendap url ... but after some manipulation I run into this: ``` ipdb> print fcst dimensions: latitude = 31 longitude = 46 time = 7 variables: object latitude(latitude) units:degrees_north _CoordinateAxisType:Lat object longitude(longitude) units:degrees_east _CoordinateAxisType:Lon datet... time(time) standard_name:time _CoordinateAxisType:Time units:hours since 2014-03-03 00:0... ipdb> fcst.dump('./test.nc') *** TypeError: illegal primitive data type, must be one of ['i8', 'f4', 'u8', 'i1', 'U1', 'S1', 'i2', 'u1', 'i4', 'u2', 'f8', 'u4'], got object ``` Currently tracking down exactly whats going on here. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,28600785 https://github.com/pydata/xarray/issues/26#issuecomment-36279915,https://api.github.com/repos/pydata/xarray/issues/26,36279915,MDEyOklzc3VlQ29tbWVudDM2Mjc5OTE1,514053,2014-02-27T19:20:25Z,2014-02-27T19:20:25Z,CONTRIBUTOR,"Yeah I think keeping them transparent to the user except when reading/writing is the way to go. Two datasets with the same data but different encodings should still be equal when compared, and operations beyond slicing should probably destroy encodings. Not sure how to handle the various file formats, like you said it could be all part of the store, or we could just throw warnings/fail if encodings aren't feasible. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,28445412 https://github.com/pydata/xarray/issues/23#issuecomment-36185024,https://api.github.com/repos/pydata/xarray/issues/23,36185024,MDEyOklzc3VlQ29tbWVudDM2MTg1MDI0,514053,2014-02-26T22:21:01Z,2014-02-26T22:21:01Z,CONTRIBUTOR,"Another similar option would be to use in-memory HDF5 objects for which Todd Small found an option: Writing to a string: ``` h5_file = tables.open_file(""in-memory"", title=my_title, mode=""w"", 12 driver=""H5FD_CORE"", driver_core_backing_store=0) ... [add variables] ... 
image = h5_file.get_file_image() ``` Reading from a string ``` h5_file = tables.open_file(""in-memory"", mode=""r"", driver=""H5FD_CORE"", driver_core_image=image, driver_core_backing_store=0) ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,28375178
https://github.com/pydata/xarray/pull/21#issuecomment-36100595,https://api.github.com/repos/pydata/xarray/issues/21,36100595,MDEyOklzc3VlQ29tbWVudDM2MTAwNTk1,514053,2014-02-26T08:07:20Z,2014-02-26T08:07:20Z,CONTRIBUTOR,"Currently this renames CF time units to 'cf_units' but perhaps '_units' or even just keeping 'units' would be better. Thoughts? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,28315331
https://github.com/pydata/xarray/issues/18#issuecomment-36039528,https://api.github.com/repos/pydata/xarray/issues/18,36039528,MDEyOklzc3VlQ29tbWVudDM2MDM5NTI4,514053,2014-02-25T18:16:38Z,2014-02-25T18:16:38Z,CONTRIBUTOR,"@jreback I'll spend some time getting a better feel for how/if we could push some of the backend into pandas' HDFStore. Certainly, we'd like to leverage other more powerful packages (pandas, numpy) as much as possible. Thanks for the suggestion. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,28262599
https://github.com/pydata/xarray/pull/12#issuecomment-35687068,https://api.github.com/repos/pydata/xarray/issues/12,35687068,MDEyOklzc3VlQ29tbWVudDM1Njg3MDY4,514053,2014-02-21T00:36:39Z,2014-02-21T00:36:39Z,CONTRIBUTOR,"This is all great. I've been experimenting with this branch and the majority of it is running fine. Given that this project is still under heavy development and rather than bloating this pull request, lets go ahead and merge it into master and iterate on top of it. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,27625970
https://github.com/pydata/xarray/pull/2#issuecomment-29788723,https://api.github.com/repos/pydata/xarray/issues/2,29788723,MDEyOklzc3VlQ29tbWVudDI5Nzg4NzIz,514053,2013-12-04T08:57:14Z,2013-12-04T08:57:14Z,CONTRIBUTOR,"Sounds good. I haven't had time to make much progress over the last few days, but do have some ideas about next steps and what this project has to offer that iris doesn't. I'll get both formalized next time I have a chance. In the meantime, feedback would be great! ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,23272260