
issue_comments


27 rows where author_association = "CONTRIBUTOR" and user = 514053 sorted by updated_at descending




issue 22

  • Fix concatenating Variables with dtype=datetime64 3
  • Ensure decoding as datetime64[ns] 2
  • Fix decode_cf_variable. 2
  • BUG: fix encoding issues (array indexing now resets encoding) 2
  • Data objects now have a swappable backend store. 1
  • Stephan's sprintbattical 1
  • ENH: NETCDF4 in pandas 1
  • Cf time units persist 1
  • Cross-platform in-memory serialization of netcdf4 (like the current scipy-based dumps) 1
  • Allow the ability to add/persist details of how a dataset is stored. 1
  • OpenDAP loaded Dataset has lon/lats with type 'object'. 1
  • Consistent handling of 0-dimensional XArrays for dtype=object 1
  • Modified Dataset.replace to replace a dictionary of variables 1
  • HDF5 backend for xray 1
  • Rename `DatasetArray` to `DataArray`? 1
  • ENH: Better Dataset repr 1
  • Dataset.concat() can now automatically concat over non-equal variables. 1
  • Only copy datetime64 data if it is using non-nanosecond precision. 1
  • keep attrs when reducing xray objects 1
  • Unable to load pickle Dataset that was picked with cPickle 1
  • Get ride of "noncoordinates" as a name? 1
  • Modular encodings (rebased) 1

user 1

  • akleeman · 27

author_association 1

  • CONTRIBUTOR · 27
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
58452925 https://github.com/pydata/xarray/pull/245#issuecomment-58452925 https://api.github.com/repos/pydata/xarray/issues/245 MDEyOklzc3VlQ29tbWVudDU4NDUyOTI1 akleeman 514053 2014-10-09T01:37:40Z 2014-10-09T01:37:40Z CONTRIBUTOR

re: using accessors such as get_variables.

To be fair, this pull request didn't really switch the behavior; it was already very similar but with different names (store_variables, open_store_variable), which were now changed to get_variables etc. I chose to continue with getters and setters because it makes it fairly clear what needs to be implemented to make a new DataStore, and allows for some post-processing such as _decode_variable_name and encoding/decoding. It's not entirely clear to me how that would fit into a properties-based approach, so it sounds like a good follow-up pull request to me.
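As a rough illustration of the getter-based interface being described (class and method names here are hypothetical, not xray's actual classes):

```python
# Hypothetical sketch of a minimal DataStore built around explicit getters
# and setters; names are illustrative, not xray's real API.
class InMemoryDataStore(object):
    def __init__(self):
        self._variables = {}

    def _decode_variable_name(self, name):
        # post-processing hook: a concrete store could rename reserved or
        # file-format-specific variable names here
        return name

    def get_variables(self):
        # subclasses implement this single accessor, which makes the
        # DataStore contract explicit and gives decoding hooks a home
        return {self._decode_variable_name(name): var
                for name, var in self._variables.items()}

    def set_variable(self, name, variable):
        self._variables[name] = variable
```

The appeal of this shape is that a new backend only has to fill in the getters and setters, and any per-store decoding slots naturally into them.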

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Modular encodings (rebased) 44718119
54560323 https://github.com/pydata/xarray/issues/212#issuecomment-54560323 https://api.github.com/repos/pydata/xarray/issues/212 MDEyOklzc3VlQ29tbWVudDU0NTYwMzIz akleeman 514053 2014-09-04T23:35:26Z 2014-09-04T23:35:26Z CONTRIBUTOR

I personally am not familiar with pandas.Block objects, so I find the name uninformative. That, combined with the fact that renaming Variable to Block would break alignment with netCDF naming conventions (and so confuse users coming from that background), makes me hesitant about the change. I think I'd be more excited about finding a better name for noncoordinates (fields?) than about renaming variables (which would also break backwards compatibility).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Get ride of "noncoordinates" as a name? 40225000
46749127 https://github.com/pydata/xarray/issues/167#issuecomment-46749127 https://api.github.com/repos/pydata/xarray/issues/167 MDEyOklzc3VlQ29tbWVudDQ2NzQ5MTI3 akleeman 514053 2014-06-21T09:40:59Z 2014-06-21T09:40:59Z CONTRIBUTOR

At least temporarily you might consider this:

myds = xray.open_dataset(ds.dumps())

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unable to load pickle Dataset that was picked with cPickle 36211623
46275601 https://github.com/pydata/xarray/pull/163#issuecomment-46275601 https://api.github.com/repos/pydata/xarray/issues/163 MDEyOklzc3VlQ29tbWVudDQ2Mjc1NjAx akleeman 514053 2014-06-17T07:28:45Z 2014-06-17T07:28:45Z CONTRIBUTOR

One possibility could be to have the encoding filtering happen only once if the variable was loaded from NetCDF4. I.e., if a variable with a chunksizes encoding were loaded from file, that encoding would be removed after the first attempt to index; afterwards, all encodings would persist. I've been experimenting with something along those lines but don't have it working perfectly yet.
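A minimal sketch of that idea (hypothetical names; not the implementation in this PR): file-specific encodings are dropped on the first indexing of a variable freshly loaded from NetCDF4, and everything that survives persists through later indexing.

```python
# Sketch only: a variable freshly loaded from a NetCDF4 file drops
# file-specific encodings ('chunksizes' here) on its first indexing;
# after that, all remaining encodings persist.
FILE_SPECIFIC_ENCODINGS = {'chunksizes', 'contiguous'}

class SketchVariable(object):
    def __init__(self, data, encoding, fresh_from_file=False):
        self.data = data
        self.encoding = dict(encoding)
        self._fresh_from_file = fresh_from_file

    def __getitem__(self, key):
        encoding = dict(self.encoding)
        if self._fresh_from_file:
            # one-time filtering: only the first index drops these
            for name in FILE_SPECIFIC_ENCODINGS:
                encoding.pop(name, None)
        return SketchVariable(self.data[key], encoding)
```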

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: fix encoding issues (array indexing now resets encoding) 35762823
46146675 https://github.com/pydata/xarray/pull/163#issuecomment-46146675 https://api.github.com/repos/pydata/xarray/issues/163 MDEyOklzc3VlQ29tbWVudDQ2MTQ2Njc1 akleeman 514053 2014-06-16T07:02:26Z 2014-06-16T07:02:26Z CONTRIBUTOR

Is there a reason why we don't just have it remove the problematic encodings? Some encodings are certainly nice to persist (fill value, scale, offset, time units, etc.).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: fix encoding issues (array indexing now resets encoding) 35762823
45952834 https://github.com/pydata/xarray/pull/153#issuecomment-45952834 https://api.github.com/repos/pydata/xarray/issues/153 MDEyOklzc3VlQ29tbWVudDQ1OTUyODM0 akleeman 514053 2014-06-12T21:51:47Z 2014-06-12T21:51:47Z CONTRIBUTOR

So ... do you want me to remove the tests and only include the .data -> .values fix?

Re: API. I'm working on a better storage scheme for reflectivity data, which involves CF decoding plus remapping some values. I could build my own data store which implements the netCDF store and adds the extra layer there ... but just using decode_cf_variable is far easier. In general, I can imagine a situation where we would want to allow users to provide their own decoding functions, but that's a larger project.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix decode_cf_variable. 35564268
45943780 https://github.com/pydata/xarray/pull/153#issuecomment-45943780 https://api.github.com/repos/pydata/xarray/issues/153 MDEyOklzc3VlQ29tbWVudDQ1OTQzNzgw akleeman 514053 2014-06-12T20:29:28Z 2014-06-12T20:29:28Z CONTRIBUTOR

I'm confused, are you saying we shouldn't add tests for internal functions?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix decode_cf_variable. 35564268
43718217 https://github.com/pydata/xarray/issues/138#issuecomment-43718217 https://api.github.com/repos/pydata/xarray/issues/138 MDEyOklzc3VlQ29tbWVudDQzNzE4MjE3 akleeman 514053 2014-05-21T06:52:25Z 2014-05-21T06:52:25Z CONTRIBUTOR

Yeah I agree, seems like a great option to have.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep attrs when reducing xray objects 33942756
43545468 https://github.com/pydata/xarray/pull/134#issuecomment-43545468 https://api.github.com/repos/pydata/xarray/issues/134 MDEyOklzc3VlQ29tbWVudDQzNTQ1NDY4 akleeman 514053 2014-05-19T19:27:26Z 2014-05-19T19:27:26Z CONTRIBUTOR

Also worth considering: how should datetime64[us] datetimes be handled? Currently they get cast to [ns], which could get confusing, since plain datetime objects do not get cast.
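For context, a small NumPy demonstration of the behavior under discussion: plain datetime.datetime objects land in object arrays untouched, while datetime64 values carry a time unit and can be cast between precisions.

```python
import datetime
import numpy as np

# datetime64 values keep their unit, and can be explicitly cast to [ns]
d64 = np.datetime64('2011-01-01T00:00:00', 'us')
arr = np.array([d64])
print(arr.dtype)                            # datetime64[us]
print(arr.astype('datetime64[ns]').dtype)   # datetime64[ns]

# ...whereas plain datetime.datetime objects stay as dtype=object
obj = np.array([datetime.datetime(2011, 1, 1)])
print(obj.dtype)                            # object
```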

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix concatenating Variables with dtype=datetime64 33772168
43535578 https://github.com/pydata/xarray/pull/134#issuecomment-43535578 https://api.github.com/repos/pydata/xarray/issues/134 MDEyOklzc3VlQ29tbWVudDQzNTM1NTc4 akleeman 514053 2014-05-19T17:55:00Z 2014-05-19T17:55:00Z CONTRIBUTOR

Yeah all fixed.

In #125 I went the route of forcing datetimes to be datetime64[ns]. This is probably part of a broader conversation, but doing so might save some future headaches. Of course ... it would also restrict us to nanosecond precision. Basically I feel like we should either force datetimes to be datetime64[ns] or make sure that operations on datetime objects preserve their type.

Probably worth getting this in and picking that conversation back up if needed. In which case, could you add tests which make sure variables with datetime objects are still datetime objects after concatenation? If those start getting cast to datetime64[ns] it'll start getting confusing for users.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix concatenating Variables with dtype=datetime64 33772168
43473367 https://github.com/pydata/xarray/pull/134#issuecomment-43473367 https://api.github.com/repos/pydata/xarray/issues/134 MDEyOklzc3VlQ29tbWVudDQzNDczMzY3 akleeman 514053 2014-05-19T07:35:53Z 2014-05-19T07:35:53Z CONTRIBUTOR

The reorganization does make things cleaner, but the behavior changed relative to #125. In particular, while this patch fixes concatenation with datetime64 times it doesn't work with datetimes:

```
In [2]: dates = [datetime.datetime(2011, 1, i + 1) for i in range(10)]

In [3]: ds = xray.Dataset({'time': ('time', dates)})

In [4]: xray.Dataset.concat([ds.indexed(time=slice(0, 4)), ds.indexed(time=slice(4, 8))], 'time')['time'].values
Out[4]:
array([1293840000000000000L, 1293926400000000000L, 1294012800000000000L,
       1294099200000000000L, 1294185600000000000L, 1294272000000000000L,
       1294358400000000000L, 1294444800000000000L], dtype=object)
```

And I'm not sure whether this was broken before or not ...

```
In [5]: xray.Dataset.concat([x for _, x in ds.groupby('time')], 'time')['time'].values
ValueError: cannot slice a 0-d array
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix concatenating Variables with dtype=datetime64 33772168
42951350 https://github.com/pydata/xarray/pull/125#issuecomment-42951350 https://api.github.com/repos/pydata/xarray/issues/125 MDEyOklzc3VlQ29tbWVudDQyOTUxMzUw akleeman 514053 2014-05-13T13:02:22Z 2014-05-13T13:02:22Z CONTRIBUTOR

Yeah, this gets tricky. I fixed part of the problem by reverting to using np.asarray instead of as_array_or_item in NumpyArrayWrapper. But I'm not sure that's the full solution; like you mentioned, the problem is much deeper, though I don't think pushing the datetime nastiness into higher-level functions (such as concat) is a great option.

Also, I've been hoping to get the way dates are handled to be slightly more consistent, since as it currently stands it's hard to know which data type dates are being stored as.

```
d64 = np.datetime64(d)

print xray.Variable(['time'], [d]).dtype
dtype('O')

print xray.Variable(['time'], [d64]).dtype
dtype('<M8[ns]')

print d64.dtype
dtype('<M8[us]')
```

I'm going to attempt getting utils.as_safe_array to convert from datetime objects to datetime64 objects, which should make things a little clearer, but still doesn't solve the whole problem.
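A rough sketch of what such a conversion could look like (illustrative only; this is not the actual utils.as_safe_array):

```python
import datetime
import numpy as np

def coerce_datetimes(values):
    # illustrative helper: turn object arrays of datetime.datetime into
    # datetime64[ns] so the resulting dtype is predictable downstream
    arr = np.asarray(values)
    if (arr.dtype == object and arr.size
            and isinstance(arr.flat[0], datetime.datetime)):
        arr = arr.astype('datetime64[ns]')
    return arr
```

Non-datetime inputs pass through unchanged, so the helper can sit in the generic array-coercion path.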

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Only copy datetime64 data if it is using non-nanosecond precision. 33307883
42835510 https://github.com/pydata/xarray/issues/66#issuecomment-42835510 https://api.github.com/repos/pydata/xarray/issues/66 MDEyOklzc3VlQ29tbWVudDQyODM1NTEw akleeman 514053 2014-05-12T14:04:11Z 2014-05-12T14:04:11Z CONTRIBUTOR

@alimanfoo

Glad you're enjoying xray!

From your description it sounds like it should be relatively simple for you to get xray working with your dataset. The NetCDF4 data model is essentially a subset of HDF5, and simply adding dimension scales should get you most of the way there.

Re: groups, each xray.Dataset corresponds to one HDF5 group. So while xray doesn't currently support groups, you could split your HDF5 dataset into separate files for each group and load those files using xray. Alternatively (if you feel ambitious) it shouldn't be too hard to get xray's NetCDF4DataStore (backends.netCDF4_.py) to work with groups, allowing you to do something like:

dataset = xray.open_dataset('multiple_groups.h5', group='/one_group')

This gives some good examples of how groups work within netCDF4.

Also, as @shoyer mentioned, it might make sense to modify xray so that NetCDF4 support is obtained by wrapping h5py instead of netCDF4 which might make your life even easier.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  HDF5 backend for xray 29453809
40425993 https://github.com/pydata/xarray/pull/102#issuecomment-40425993 https://api.github.com/repos/pydata/xarray/issues/102 MDEyOklzc3VlQ29tbWVudDQwNDI1OTkz akleeman 514053 2014-04-14T22:28:31Z 2014-04-14T22:28:31Z CONTRIBUTOR

Sounds good. I'd also like to consider renaming the behavior/arguments in concat, or maybe adding an additional argument with options along the lines of 'shared-dimension', 'different', 'all', which would allow the user to choose which variables are concatenated: i.e., variables whose dimensions include the concat dimension; variables that differ across datasets (even if they don't use the concat dimension); and all variables.
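A toy sketch of how those three modes could select variables (hypothetical names, and a toy "dataset" of name -> (dims, values) pairs; not the actual concat signature):

```python
def select_vars_to_concat(datasets, concat_dim, mode='different'):
    # datasets: list of dicts mapping name -> (dims, values); a toy
    # stand-in for xray Datasets, for illustration only
    first = datasets[0]
    if mode == 'all':
        # every variable gets concatenated
        return set(first)
    if mode == 'shared-dimension':
        # only variables whose dimensions include the concat dimension
        return {name for name, (dims, _) in first.items()
                if concat_dim in dims}
    if mode == 'different':
        # only variables whose values differ across the datasets
        return {name for name in first
                if any(ds[name] != first[name] for ds in datasets[1:])}
    raise ValueError('unknown mode: %r' % mode)
```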

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset.concat() can now automatically concat over non-equal variables. 31510183
39386922 https://github.com/pydata/xarray/pull/88#issuecomment-39386922 https://api.github.com/repos/pydata/xarray/issues/88 MDEyOklzc3VlQ29tbWVudDM5Mzg2OTIy akleeman 514053 2014-04-02T21:35:26Z 2014-04-02T21:35:26Z CONTRIBUTOR

I like the new look, more intuitive.

One thought: what happens when a variable has the same coordinate twice, for example a covariance matrix?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: Better Dataset repr 30656827
38858084 https://github.com/pydata/xarray/issues/85#issuecomment-38858084 https://api.github.com/repos/pydata/xarray/issues/85 MDEyOklzc3VlQ29tbWVudDM4ODU4MDg0 akleeman 514053 2014-03-27T20:43:24Z 2014-03-27T20:43:24Z CONTRIBUTOR

+1 I like the name DataArray much more than DatasetArray

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Rename `DatasetArray` to `DataArray`? 30339447
37503963 https://github.com/pydata/xarray/pull/59#issuecomment-37503963 https://api.github.com/repos/pydata/xarray/issues/59 MDEyOklzc3VlQ29tbWVudDM3NTAzOTYz akleeman 514053 2014-03-13T06:32:59Z 2014-03-13T06:32:59Z CONTRIBUTOR

Yeah, it does seem that way. Unfortunately, that makes comparison with datetime objects a bit awkward. Perhaps we should include a to_datetime() the way pandas does? Certainly, it would be convenient to be able to do comparisons such as:

ds['time'].data > datetime.datetime(2014, 01, 01)

This is definitely not high priority though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Ensure decoding as datetime64[ns] 29067976
37435670 https://github.com/pydata/xarray/pull/62#issuecomment-37435670 https://api.github.com/repos/pydata/xarray/issues/62 MDEyOklzc3VlQ29tbWVudDM3NDM1Njcw akleeman 514053 2014-03-12T17:10:58Z 2014-03-12T17:10:58Z CONTRIBUTOR

Renaming set_variables to update seems reasonable to me. Its behavior seems similar enough to dict.update. Then a separate function replace could be a lightweight wrapper around update which makes a copy:

```
obj = type(self)(variables, self.attributes)
obj.update(*args, **kwdargs)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Modified Dataset.replace to replace a dictionary of variables 29220463
37433623 https://github.com/pydata/xarray/pull/59#issuecomment-37433623 https://api.github.com/repos/pydata/xarray/issues/59 MDEyOklzc3VlQ29tbWVudDM3NDMzNjIz akleeman 514053 2014-03-12T16:55:52Z 2014-03-12T16:55:52Z CONTRIBUTOR

I'll go ahead and merge this in to fix the bug ... but I wonder if there is any way we can avoid using np.datetime64 objects. They seem under-developed / broken.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Ensure decoding as datetime64[ns] 29067976
36684248 https://github.com/pydata/xarray/pull/44#issuecomment-36684248 https://api.github.com/repos/pydata/xarray/issues/44 MDEyOklzc3VlQ29tbWVudDM2Njg0MjQ4 akleeman 514053 2014-03-04T22:04:07Z 2014-03-04T22:04:07Z CONTRIBUTOR

Yes much clearer, thanks. As soon as build passes I'll merge it in.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Consistent handling of 0-dimensional XArrays for dtype=object 28672803
36484609 https://github.com/pydata/xarray/issues/39#issuecomment-36484609 https://api.github.com/repos/pydata/xarray/issues/39 MDEyOklzc3VlQ29tbWVudDM2NDg0NjA5 akleeman 514053 2014-03-03T06:28:32Z 2014-03-03T06:28:32Z CONTRIBUTOR

@shoyer You're right I can serialize the latitude object directly from that opendap url ... but after some manipulation I run into this:

```
ipdb> print fcst
dimensions:
    latitude = 31
    longitude = 46
    time = 7
variables:
    object latitude(latitude)
        units:degrees_north
        _CoordinateAxisType:Lat
    object longitude(longitude)
        units:degrees_east
        _CoordinateAxisType:Lon
    datet... time(time)
        standard_name:time
        _CoordinateAxisType:Time
        units:hours since 2014-03-03 00:0...
ipdb> fcst.dump('./test.nc')
*** TypeError: illegal primitive data type, must be one of ['i8', 'f4', 'u8', 'i1', 'U1', 'S1', 'i2', 'u1', 'i4', 'u2', 'f8', 'u4'], got object
```

Currently tracking down exactly what's going on here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  OpenDAP loaded Dataset has lon/lats with type 'object'. 28600785
36279915 https://github.com/pydata/xarray/issues/26#issuecomment-36279915 https://api.github.com/repos/pydata/xarray/issues/26 MDEyOklzc3VlQ29tbWVudDM2Mjc5OTE1 akleeman 514053 2014-02-27T19:20:25Z 2014-02-27T19:20:25Z CONTRIBUTOR

Yeah, I think keeping them transparent to the user except when reading/writing is the way to go. Two datasets with the same data but different encodings should still be equal when compared, and operations beyond slicing should probably destroy encodings. Not sure how to handle the various file formats; like you said, it could all be part of the store, or we could just throw warnings/fail if encodings aren't feasible.
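A toy sketch of those semantics (hypothetical names; not xray's actual classes): equality compares data only and deliberately ignores encoding, slicing preserves encoding, and other operations drop it.

```python
# Sketch of encoding-transparent semantics: encodings are invisible to
# equality, survive slicing, and are destroyed by other operations.
class EncodedVar(object):
    def __init__(self, data, encoding=None):
        self.data = list(data)
        self.encoding = dict(encoding or {})

    def __eq__(self, other):
        return self.data == other.data  # encoding deliberately ignored

    def __getitem__(self, key):
        return EncodedVar(self.data[key], self.encoding)  # slicing keeps it

    def __add__(self, value):
        return EncodedVar([x + value for x in self.data])  # ops destroy it
```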

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow the ability to add/persist details of how a dataset is stored. 28445412
36185024 https://github.com/pydata/xarray/issues/23#issuecomment-36185024 https://api.github.com/repos/pydata/xarray/issues/23 MDEyOklzc3VlQ29tbWVudDM2MTg1MDI0 akleeman 514053 2014-02-26T22:21:01Z 2014-02-26T22:21:01Z CONTRIBUTOR

Another similar option would be to use in-memory HDF5 objects, for which Todd Small found an option:

Writing to a string:

```
h5_file = tables.open_file("in-memory", title=my_title, mode="w",
                           driver="H5FD_CORE", driver_core_backing_store=0)
... [add variables] ...
image = h5_file.get_file_image()
```

Reading from a string:

```
h5_file = tables.open_file("in-memory", mode="r", driver="H5FD_CORE",
                           driver_core_image=image, driver_core_backing_store=0)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cross-platform in-memory serialization of netcdf4 (like the current scipy-based dumps) 28375178
36100595 https://github.com/pydata/xarray/pull/21#issuecomment-36100595 https://api.github.com/repos/pydata/xarray/issues/21 MDEyOklzc3VlQ29tbWVudDM2MTAwNTk1 akleeman 514053 2014-02-26T08:07:20Z 2014-02-26T08:07:20Z CONTRIBUTOR

Currently this renames CF time units to 'cf_units' but perhaps '_units' or even just keeping 'units' would be better. Thoughts?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cf time units persist 28315331
36039528 https://github.com/pydata/xarray/issues/18#issuecomment-36039528 https://api.github.com/repos/pydata/xarray/issues/18 MDEyOklzc3VlQ29tbWVudDM2MDM5NTI4 akleeman 514053 2014-02-25T18:16:38Z 2014-02-25T18:16:38Z CONTRIBUTOR

@jreback I'll spend some time getting a better feel for how/if we could push some of the backend into pandas' HDFStore. Certainly, we'd like to leverage other more powerful packages (pandas, numpy) as much as possible. Thanks for the suggestion.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: NETCDF4 in pandas 28262599
35687068 https://github.com/pydata/xarray/pull/12#issuecomment-35687068 https://api.github.com/repos/pydata/xarray/issues/12 MDEyOklzc3VlQ29tbWVudDM1Njg3MDY4 akleeman 514053 2014-02-21T00:36:39Z 2014-02-21T00:36:39Z CONTRIBUTOR

This is all great. I've been experimenting with this branch and the majority of it is running fine. Given that this project is still under heavy development, and rather than bloating this pull request, let's go ahead and merge it into master and iterate on top of it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Stephan's sprintbattical 27625970
29788723 https://github.com/pydata/xarray/pull/2#issuecomment-29788723 https://api.github.com/repos/pydata/xarray/issues/2 MDEyOklzc3VlQ29tbWVudDI5Nzg4NzIz akleeman 514053 2013-12-04T08:57:14Z 2013-12-04T08:57:14Z CONTRIBUTOR

Sounds good. I haven't had time to make much progress over the last few days, but do have some ideas about next steps and what this project has to offer that iris doesn't. I'll get both formalized next time I have a chance. In the meantime, feedback would be great!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Data objects now have a swappable backend store. 23272260


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette