pull_requests
4,034 rows
id ▼ | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10239839 | MDExOlB1bGxSZXF1ZXN0MTAyMzk4Mzk= | 1 | closed | 0 | Added setup.py which runs unit tests as necessary. | ebrevdo 1794715 | 2013-11-24T04:10:48Z | 2016-01-04T23:11:54Z | 2013-11-25T20:23:46Z | 2013-11-25T20:23:46Z | 69b70e459fa2ae97fcbb57afb6bc8f26e5694433 | 0 | 752d2795977dfab3853555f0a04fe76f90793def | 01591bfdbb55ef2de9e97188ce8f73fe6f2f5237 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1 | |||||
10275318 | MDExOlB1bGxSZXF1ZXN0MTAyNzUzMTg= | 2 | closed | 0 | Data objects now have a swappable backend store. | akleeman 514053 | - Allows conversion to and from: NetCDF4, scipy.io.netcdf and in memory storage. - Added general test cases, and cases for specific backend stores. | 2013-11-25T20:48:40Z | 2016-12-29T02:39:48Z | 2014-01-29T19:20:58Z | 5d8e6998d42efa29b62346b0b41b8a6eac27fb47 | 0 | 073f52281d55e4ed8c1999fcdcff7d4dba54cd76 | eb971ee40161350e79e034cad5d1d9933b78f78d | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/2 | |||||
11964757 | MDExOlB1bGxSZXF1ZXN0MTE5NjQ3NTc= | 3 | closed | 0 | Fixed setup.py so "pip install -e" works | shoyer 1217238 | 2014-01-28T20:53:49Z | 2014-06-14T07:06:41Z | 2014-01-28T20:55:15Z | 2014-01-28T20:55:15Z | df66332c95453e7ba5e4d5b6d02c390e55e96d15 | 0 | 4411b3afca466f311bdb18d21860ddf7f8a2bbf1 | eb971ee40161350e79e034cad5d1d9933b78f78d | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/3 | |||||
11966375 | MDExOlB1bGxSZXF1ZXN0MTE5NjYzNzU= | 4 | closed | 0 | Removed data copies from Dataset | shoyer 1217238 | data.copy() is not implemented if data is a netCDF4 variable, and in any case it seems that we should always use views unless a copy is explicitly requested. | 2014-01-28T21:30:20Z | 2016-01-04T23:11:54Z | 2014-01-28T21:32:08Z | 2014-01-28T21:32:08Z | 1b0bb952329558e5a6a83b96c79b736c5511dee9 | 0 | f3b6e224029471f90c8e8be129a8091becd2ba96 | 124d2df197ab25e9df56bd5c3f22d2d5c764d581 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/4 | ||||
11966707 | MDExOlB1bGxSZXF1ZXN0MTE5NjY3MDc= | 5 | closed | 0 | Switch dataset.coordinates values to variables | shoyer 1217238 | This makes it consistent with dataset.noncoordinates. If you want the old behavior (values as dimension lengths), then you can just use dataset.dimensions instead. | 2014-01-28T21:37:18Z | 2014-09-18T19:43:18Z | 2014-01-28T21:37:29Z | 2014-01-28T21:37:29Z | b8e29f4af5a90ce9cace64ce6dd7297be1c3c8e6 | 0 | d25f9421744d62ee378511fb1902439fef6d3a34 | 403feef820dca7ce8578b3e54d0030e20dc5e15f | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/5 | ||||
11973330 | MDExOlB1bGxSZXF1ZXN0MTE5NzMzMzA= | 6 | closed | 0 | Rationalized copy methods and pylint cleanup | shoyer 1217238 | 2014-01-29T00:33:35Z | 2014-06-15T00:00:27Z | 2014-01-29T00:33:42Z | 2014-01-29T00:33:42Z | 8d4c81b70262284d6112723c0f1e55496433e2d7 | 0 | 67e4b60e371bbd0110aff07600bbd34918af28a3 | e595def3d5b1132db5e41636b8412002113e2aaf | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/6 | |||||
11976954 | MDExOlB1bGxSZXF1ZXN0MTE5NzY5NTQ= | 7 | closed | 0 | Mutable variables | shoyer 1217238 | It is now possible to replace the data in a polyglot.Variable object if it has the same shape. This makes it a little more straightforward to allow for mutating data with my new "Cube" objects. | 2014-01-29T03:08:45Z | 2014-01-29T23:14:17Z | 2014-01-29T23:14:13Z | 413b5c44b097ea1debdd30f1d6ff05b1aae95113 | 0 | ae80f42c792c45c0ec44cd1a481a36dd64cfd22a | 6b77d820851d9d9f6d4196c222d8ea75cdf26193 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/7 | |||||
12005789 | MDExOlB1bGxSZXF1ZXN0MTIwMDU3ODk= | 8 | closed | 0 | Datasets now use data stores to allow swap-able backends | akleeman 514053 | ``` Data objects now have a swap-able backend store. - Allows conversion to and from: NetCDF4, scipy.io.netcdf and in memory storage. - Added general test cases, and cases for specific backend stores. - Dataset.translate() can now optionally copy the object. - Fixed most unit tests, test_translate_consistency still fails. ``` | 2014-01-29T19:25:42Z | 2014-06-17T00:35:01Z | 2014-01-29T19:30:09Z | 2014-01-29T19:30:09Z | 1f7bf07ce664cd4d1915956a459312bce9ef8505 | 0 | 58551773afcefb0cb32d24ced95602e6fc35b360 | 6b77d820851d9d9f6d4196c222d8ea75cdf26193 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/8 | ||||
12015420 | MDExOlB1bGxSZXF1ZXN0MTIwMTU0MjA= | 9 | closed | 0 | Mutable variables! | shoyer 1217238 | With this patch, the Variable object has been refactored and is now mutable. Some of its behavior may have changed in other subtle ways. For example, getting an item from a variable now returns another variable instead of an ndarray. | 2014-01-29T23:11:46Z | 2014-06-14T20:09:27Z | 2014-01-29T23:16:50Z | 2014-01-29T23:16:50Z | 8c64c640e3d612f332d46ffbd30923aa178dc55b | 0 | 893c7fa65a8e467cbaf224235511bd6710c331a1 | 4f4745f6aa1327eeac2f628bd4dd5b89ce27431f | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/9 | ||||
12019059 | MDExOlB1bGxSZXF1ZXN0MTIwMTkwNTk= | 10 | closed | 0 | Dataset.views() works for non-slices | shoyer 1217238 | 2014-01-30T01:08:22Z | 2014-06-12T17:29:28Z | 2014-01-31T19:01:29Z | 2014-01-31T19:01:29Z | 11b779e80b94bd8117f5f258173f42e4278370e3 | 0 | 0044ae5b6aea6ccd65b99627514b2ba5306ce45c | cab6ad9cf2b207611727dce90a63a1525030b696 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/10 | |||||
12067542 | MDExOlB1bGxSZXF1ZXN0MTIwNjc1NDI= | 11 | closed | 0 | Math with Variable objects | shoyer 1217238 | They even broadcast automatically based on dimension names. | 2014-01-31T05:20:01Z | 2014-06-12T17:29:21Z | 2014-01-31T19:01:28Z | 2014-01-31T19:01:28Z | 9704c55198d8b3ea924420352c3131442280e653 | 0 | e314a07844e6d5a85cf1383a4c5014dcfde0e13f | cab6ad9cf2b207611727dce90a63a1525030b696 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/11 | ||||
12572991 | MDExOlB1bGxSZXF1ZXN0MTI1NzI5OTE= | 12 | closed | 0 | Stephan's sprintbattical | shoyer 1217238 | 2014-02-14T21:23:09Z | 2014-08-04T00:03:21Z | 2014-02-21T00:36:53Z | 2014-02-21T00:36:53Z | 4bd400a60b14d97fbff23b1d38e737f65c7f9d47 | 0 | 9488463c3388fbda04419208a794ef2f6ff49959 | 303b89004fd3fe7c2a24248eb86304cac94092b0 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/12 | |||||
12817796 | MDExOlB1bGxSZXF1ZXN0MTI4MTc3OTY= | 14 | closed | 0 | Fix to_dataframe method for DatetimeIndex indices | shoyer 1217238 | Also this removes one of our dependencies on pandas==0.13.1. | 2014-02-22T01:03:00Z | 2014-06-12T17:29:27Z | 2014-02-25T00:17:38Z | 2014-02-25T00:17:38Z | ada5e420940297c353a72be694c526d105ce3538 | 0 | efac0a639bf1c3bec73630de13630efbb4fb5e64 | de28cd67f7b9a12912d2b772b065e9252d2d9b6e | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/14 | ||||
12834005 | MDExOlB1bGxSZXF1ZXN0MTI4MzQwMDU= | 15 | closed | 0 | Version now contains git commit ID | shoyer 1217238 | Thanks to some code borrowed from pandas, setup.py now reports the development version of xray as something like "0.1.0.dev-de28cd6". I also took this opportunity to add xray.__version__. | 2014-02-23T18:17:04Z | 2014-06-12T17:29:51Z | 2014-02-23T20:22:49Z | 2014-02-23T20:22:49Z | ec21953125191c413e57aab86c6a48f8994124f8 | 0 | 8008f31395d56bb71fd97d888b1d35ddff748923 | de28cd67f7b9a12912d2b772b065e9252d2d9b6e | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/15 | ||||
12844149 | MDExOlB1bGxSZXF1ZXN0MTI4NDQxNDk= | 16 | closed | 0 | Expanded docs | shoyer 1217238 | We still need better overview and getting started pages (with examples!). I also updated a number of docstrings so they can appear in the API reference. Note: I spent a little bit of time trying to get doc building set up on readthedocs.org, but I have not yet been able to get it to work with the necessary dependencies (notably, numpydoc). | 2014-02-24T08:18:55Z | 2014-06-12T17:29:58Z | 2014-02-26T05:34:36Z | 2014-02-26T05:34:36Z | 9b910127e45da72fc990006320fb9f67f27b514e | 0 | f0a31119a55a59109fc1071cbc20bde36bc0a781 | 2cd8133468a2933b41565bebec7a221454d60ca3 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/16 | ||||
12881612 | MDExOlB1bGxSZXF1ZXN0MTI4ODE2MTI= | 17 | closed | 0 | Arrays with object dtype can now be dumped to netCDF | shoyer 1217238 | Object arrays arise when using pandas.Index objects that aren't integers. | 2014-02-24T23:56:36Z | 2014-03-07T22:42:35Z | 2014-02-25T17:58:55Z | 2014-02-25T17:58:55Z | 2d228efc6b143c4fa41b726c9f56d456063d84db | 0 | 642d4742b7a37f3332ca011ed0c0f11582d09bae | 2cd8133468a2933b41565bebec7a221454d60ca3 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/17 | ||||
12927151 | MDExOlB1bGxSZXF1ZXN0MTI5MjcxNTE= | 19 | closed | 0 | BUG: fix loading all variable data during slicing | shoyer 1217238 | Accessing the data attribute loads all data into memory as a numpy array, which is obviously problematic! This fix replaces `self.data.ndim` with `self.ndim`, which means the data doesn't all need to be loaded. | 2014-02-25T22:17:07Z | 2014-06-12T17:29:53Z | 2014-02-25T23:52:18Z | 2014-02-25T23:52:18Z | c82ccf9b4b83c06e2a1e9b4d1bd9dad551fbbd19 | 0 | 8c5deca23e8ff3d318c72a96e5d90d1a1f52fa9a | d5aca723700217f1325c9a7e5fca3345c1b27716 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/19 | ||||
12932105 | MDExOlB1bGxSZXF1ZXN0MTI5MzIxMDU= | 20 | closed | 0 | Handle mask_and_scale ourselves instead of using netCDF4 | shoyer 1217238 | This lets us use NaNs instead of masked arrays to indicate missing values. | 2014-02-26T00:19:15Z | 2014-06-12T17:29:32Z | 2014-02-28T22:33:16Z | 2014-02-28T22:33:16Z | a87566007c7271618b0f7e17b1c209ed92185c0b | 0 | c647ed6e0ab1eea408e0264074d2a3efc091ee2f | 6f99cfeb51e829f28f3a09a3fef81cb7dad7db11 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/20 | ||||
12941602 | MDExOlB1bGxSZXF1ZXN0MTI5NDE2MDI= | 21 | closed | 0 | Cf time units persist | akleeman 514053 | Internally Datasets convert time coordinates to pandas.DatetimeIndex. The backend function convert_to_cf_variable will convert these datetimes back to CF style times, but the original units were not being preserved. | 2014-02-26T08:05:41Z | 2014-06-12T17:29:24Z | 2014-02-28T01:45:21Z | 9b89321f4c39477abb64d09f7c3b238c6ff1c1ee | 0 | 9b403acf84e38418d820b4dd658c865503e3076f | 6167e0f3f8617534be0fcf43b9618bd82d431ef4 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/21 | |||||
12970802 | MDExOlB1bGxSZXF1ZXN0MTI5NzA4MDI= | 22 | closed | 0 | Added Scipy netcdf test file for new consistency test. | ebrevdo 1794715 | - Led to finding a small bug in xarray_equals - New dict_equal() function works when dictionary values are np arrays | 2014-02-26T20:23:01Z | 2014-02-27T04:56:07Z | 2014-02-27T04:56:01Z | 2014-02-27T04:56:01Z | e0ef97c09680e38da8b1ff023967c899d3f54b36 | 0 | 9f46afd3106999cc736e05b77b0cbf99012b4929 | 6167e0f3f8617534be0fcf43b9618bd82d431ef4 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/22 | ||||
13035326 | MDExOlB1bGxSZXF1ZXN0MTMwMzUzMjY= | 27 | closed | 0 | Read the docs | shoyer 1217238 | Take a look: http://xray.readthedocs.org | 2014-02-28T02:28:20Z | 2014-06-12T17:29:24Z | 2014-02-28T21:46:58Z | 2014-02-28T21:46:58Z | 027049b85dc0b2c4a54c94539729dfc77a0fb9ed | 0 | 61d92a7925b5f9813966df24fd5d273d302fb9ed | 6f99cfeb51e829f28f3a09a3fef81cb7dad7db11 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/27 | ||||
13036516 | MDExOlB1bGxSZXF1ZXN0MTMwMzY1MTY= | 28 | closed | 0 | Added .travis.yml file for Travis-CI | shoyer 1217238 | I adapted it from the one for pystan: https://github.com/stan-dev/pystan/blob/develop/.travis.yml @akleeman Could you please set up the GitHub hook for Travis? I can't do it since I don't have admin rights to this repo. See the guide at: http://docs.travis-ci.com/user/getting-started/ | 2014-02-28T03:27:25Z | 2014-06-12T17:29:55Z | 2014-02-28T05:55:54Z | 2014-02-28T05:55:54Z | 34436fb18ffefbaee6a62c06a14de8b9aa5ec51c | 0 | 07deeff414f2a4cd613dab4e3934c116909a5d7a | 6f99cfeb51e829f28f3a09a3fef81cb7dad7db11 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/28 | ||||
13071671 | MDExOlB1bGxSZXF1ZXN0MTMwNzE2NzE= | 29 | closed | 0 | Alternative Travis-CI config | shoyer 1217238 | This version should be faster since it uses Anaconda binaries instead of building from scratch. | 2014-02-28T21:51:15Z | 2014-06-12T17:30:25Z | 2014-02-28T22:06:19Z | 2014-02-28T22:06:19Z | 96cf13376cafba0ceedfac1cb380bd60b7b61bd7 | 0 | bc289505cc1ba7f42a21ef5f1d854350c08f83ed | f8159e829e33108be993d7c1e05e4309236f00c5 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/29 | ||||
13072827 | MDExOlB1bGxSZXF1ZXN0MTMwNzI4Mjc= | 30 | closed | 0 | Tweaks to Travis-CI config | shoyer 1217238 | Please don't merge this until Travis says everything passes. | 2014-02-28T22:19:00Z | 2014-06-12T17:29:40Z | 2014-02-28T22:25:54Z | 2014-02-28T22:25:54Z | b17accb035e59433db6236f41c7b97bc0ba22251 | 0 | 4a47350017a88c4f9d9628851d5965bfcb497396 | 9948d383bc43d276b5204ecd306b75bef9455ee4 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/30 | ||||
13075275 | MDExOlB1bGxSZXF1ZXN0MTMwNzUyNzU= | 31 | closed | 0 | Remove Travis emails & add status to README | shoyer 1217238 | 2014-02-28T23:29:30Z | 2014-06-12T17:29:52Z | 2014-02-28T23:38:18Z | 2014-02-28T23:38:18Z | 096bfd049e1e34c71517ca6f2ae38fe7ca9dddfd | 0 | cfbe968395cce9d54d7ef45402da5fdfd6ddf85d | 65d62c6a4a332dbc43cfe9454963f2f9ee5fcb79 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/31 | |||||
13095339 | MDExOlB1bGxSZXF1ZXN0MTMwOTUzMzk= | 33 | closed | 0 | Dataset.__getitem__ returns a DatasetArray linked to the same dataset | shoyer 1217238 | Originally, `Dataset.__getitem__` would "select" out the given variable to use as the dataset for the new DatasetArray. The rationale was that you don't really want to keep track of extra dataset variables that are no longer relevant. The problem is that this means that modifying an item from a dataset would not modify the original dataset. An example might make this clearer: ``` >>> ds = xray.Dataset({'x': ('x', np.arange(10))}) >>> ds['x'].attributes['units'] = 'meters' # this was actually a no-op ``` This is clearly a pretty blatant violation of the norms for a Python container, and it certainly surprised @akleeman. So this PR simplifies this behavior so that `ds['x']` gives a DatasetArray linked to the dataset `ds`, and does some related clean-up of `DatasetArray.from_stack`. The new method `DatasetArray.select` lets you reproduce the old behavior if desired, by using `ds['x'].select()` instead of `ds['x']`. A bonus is that the new behavior is actually faster, because it doesn't need to create a new Dataset object. | 2014-03-02T20:30:32Z | 2014-06-12T17:29:47Z | 2014-03-03T05:44:50Z | 2014-03-03T05:44:50Z | d8d5abc608768d8a329a9d52590c109dd041b2f2 | 0 | 46efe3e01e20a3c422ec2160b3a56caed3768208 | 2f26e859a791dc6eef39414b812580ee8d7f8277 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/33 | ||||
13096302 | MDExOlB1bGxSZXF1ZXN0MTMwOTYzMDI= | 34 | closed | 0 | Return squeeze method and fix Dataset.__delitem__ | shoyer 1217238 | This should fix issue #32. | 2014-03-02T22:07:26Z | 2014-06-12T17:29:42Z | 2014-03-03T18:21:06Z | 2014-03-03T18:21:06Z | 13e9f8112b06a29f89e7a34fa80d13531b18e8ed | 0 | 8fa3e43f114d05d87a95ab7ecaff291c342daec6 | 2f26e859a791dc6eef39414b812580ee8d7f8277 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/34 | ||||
13098894 | MDExOlB1bGxSZXF1ZXN0MTMwOTg4OTQ= | 35 | closed | 0 | Fix 0-dimensional arrays accessed via netCDF4-python | shoyer 1217238 | netCDF4-python has a bug (for which I've submitted a fix) that means that the data from a 0-dimensional array is always returned as a 1-dimensional array: https://github.com/Unidata/netcdf4-python/pull/220 | 2014-03-03T02:24:48Z | 2014-06-12T17:30:06Z | 2014-03-03T17:57:39Z | 2014-03-03T17:57:39Z | 3f6035ccaa94830d120e88ab73f670c5063f945a | 0 | c7e928f70f05c07e61ca09f449c7beab8235f569 | 2f26e859a791dc6eef39414b812580ee8d7f8277 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/35 | ||||
13099385 | MDExOlB1bGxSZXF1ZXN0MTMwOTkzODU= | 38 | closed | 0 | Updated minimum scipy version to 0.13 | shoyer 1217238 | Our scipy.io.netcdf related tests appear to fail on scipy==0.11. I'm not sure if scipy==0.12 works, but scipy==0.13 certainly works. So for now (to avoid future install issues), I've updated the dependencies in our setup.py file. | 2014-03-03T03:03:05Z | 2014-03-03T18:37:30Z | 2014-03-03T18:21:17Z | 2014-03-03T18:21:17Z | 5de64a486bc48bdf8a2ba30aff4a31564c5aca8f | 0 | 6609e931f18b46ce7f6fe67e6e78b32f324eb5d5 | 2f26e859a791dc6eef39414b812580ee8d7f8277 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/38 | ||||
13103084 | MDExOlB1bGxSZXF1ZXN0MTMxMDMwODQ= | 40 | closed | 0 | Encodings for object data types are not saved. | akleeman 514053 | decode_cf_variable will not save encoding for any 'object' dtypes. When encoding cf variables check if dtype is np.datetime64 as well as DatetimeIndex. fixes akleeman/xray/issues/39 | 2014-03-03T07:22:37Z | 2014-04-09T04:10:56Z | 2014-03-07T02:21:16Z | 7daf9d244f727247dd49a11171d3902ebbd5ef43 | 0 | 34b65e1af60b1740dd825b47ff80a0e50d0ade64 | 08a03b3c3a864ae0743623c67c66f72da8422d79 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/40 | |||||
13126742 | MDExOlB1bGxSZXF1ZXN0MTMxMjY3NDI= | 41 | closed | 0 | Use _data instead of data for decode_cf_variable | shoyer 1217238 | This was causing data to be loaded (e.g., from remote servers) when decoding CF variables. @akleeman Hopefully this fixes your immediate problem? I wonder if we could write some sort of test to verify that the data is being loaded lazily... | 2014-03-03T18:46:02Z | 2014-03-03T23:45:04Z | 2014-03-03T23:43:53Z | 2014-03-03T23:43:53Z | 1c9fe1550085eda4fe293784aa47dad55bd5a864 | 0 | b2e61aa5ebd13193a08de148d5180ce9bb9e2f24 | f415380843e2a0943e023d4609a8831d36c90fd5 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/41 | ||||
13130290 | MDExOlB1bGxSZXF1ZXN0MTMxMzAyOTA= | 42 | closed | 0 | Fix MaskedAndScaledArray for 0-dimensional input | shoyer 1217238 | It's not possible to index a 0-dimensional array, so the expression `values[values == self.fill_value]` raised an error. | 2014-03-03T20:03:47Z | 2014-03-04T23:24:59Z | 2014-03-04T18:24:59Z | 2014-03-04T18:24:59Z | 6bfb53bdcb69d2799a5f0e45195a2a94a93c0661 | 0 | 26b4f1734b75b3264eae733d17f25d966c295769 | f415380843e2a0943e023d4609a8831d36c90fd5 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/42 | ||||
13141879 | MDExOlB1bGxSZXF1ZXN0MTMxNDE4Nzk= | 43 | closed | 0 | Generalize CF encoding/decoding of datetime arrays to n-dimensions | shoyer 1217238 | Note: This will conflict with PR #40 because they both deal with handling of datetimeindex objects. Whichever goes in last will need to be rebased. | 2014-03-04T00:45:45Z | 2014-03-04T23:24:56Z | 2014-03-04T22:02:55Z | 2014-03-04T22:02:55Z | c210cb15bf09e847b07e37d2bebe20d62f55d605 | 0 | 46835c5e2a22e21478b9012f0a2745b358e6c380 | 3ae636d7eb5251f42ea1e16cc2e7af41bc2cbc8d | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/43 | ||||
13143014 | MDExOlB1bGxSZXF1ZXN0MTMxNDMwMTQ= | 44 | closed | 0 | Consistent handling of 0-dimensional XArrays for dtype=object | shoyer 1217238 | Numpy unpacks 0-dimensional arrays when indexed. We should do the same. | 2014-03-04T01:26:07Z | 2014-03-04T22:45:13Z | 2014-03-04T22:04:27Z | 2014-03-04T22:04:27Z | 58a2cfb1df96234ca27fae5ac550547bd7c8ac7b | 0 | aef78f12340d30dfaaa5ebcf9991d1d550cc9c0b | 3ae636d7eb5251f42ea1e16cc2e7af41bc2cbc8d | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/44 | ||||
13174344 | MDExOlB1bGxSZXF1ZXN0MTMxNzQzNDQ= | 45 | closed | 0 | Dataset.concat | shoyer 1217238 | This class method allows for concatenating multiple datasets into one along existing or new dimensions. | 2014-03-04T18:20:14Z | 2014-03-19T01:13:16Z | 2014-03-07T02:22:46Z | 2014-03-07T02:22:46Z | a42f856e2c57285f2d70a082eff3878c612b8d61 | 0 | f59a0a73ecffc2c1a6225d8d944889450e6224a2 | a679d04ae4cca4a5b75082ba16e6d9e3c7e7e9bd | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/45 | ||||
13175676 | MDExOlB1bGxSZXF1ZXN0MTMxNzU2NzY= | 46 | closed | 0 | Test lazy loading from stores using mock XArray classes. | akleeman 514053 | 2014-03-04T18:50:40Z | 2014-03-04T23:24:52Z | 2014-03-04T23:10:28Z | 2014-03-04T23:10:28Z | 744cc1dfd2eb641e1677b93991de2fa15fa12b87 | 0 | c002324efb2d1966ad33c21d960f3bfd6dabff90 | 63ea8c5f7a1792a086e85604b4f267684f299dd4 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/46 | |||||
13193312 | MDExOlB1bGxSZXF1ZXN0MTMxOTMzMTI= | 47 | closed | 0 | Allow label based indexing by XArrays | shoyer 1217238 | 2014-03-05T02:09:33Z | 2014-03-19T01:13:15Z | 2014-03-07T02:20:35Z | 2014-03-07T02:20:35Z | 854abdd80352ca68e60a177aa899bb444ccff8d9 | 0 | ef9157fb1d84d9908885bbe6cc88e6ba32cd2eb6 | 5a1585b7a2184aab1aad54c66c4bedd25353d8c8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/47 | |||||
13227047 | MDExOlB1bGxSZXF1ZXN0MTMyMjcwNDc= | 48 | closed | 0 | More varied test data for test_dataset | shoyer 1217238 | The new test data includes integer, float, datetime and string indices, and thus should much more robustly test our serialization code. This flushed out a bug I recently introduced in conventions.py. | 2014-03-05T19:22:12Z | 2014-03-19T01:13:13Z | 2014-03-07T02:17:08Z | 2014-03-07T02:17:08Z | 4ad7000513e652b66b3d96cd719362be62c5f645 | 0 | f1391798bac1411d4f1f297e4cb37bb655b916ca | 5a1585b7a2184aab1aad54c66c4bedd25353d8c8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/48 | ||||
13229574 | MDExOlB1bGxSZXF1ZXN0MTMyMjk1NzQ= | 49 | closed | 0 | Another test and a fix for decode_cf_datetime | shoyer 1217238 | 2014-03-05T20:11:45Z | 2014-03-19T01:13:12Z | 2014-03-07T02:16:27Z | 2014-03-07T02:16:27Z | dc9e57dd9b5a9721016b16c681a8aa8e52fac718 | 0 | 8ba71f22741d05628e457533860e2b09e6a9102a | 5a1585b7a2184aab1aad54c66c4bedd25353d8c8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/49 | |||||
13240785 | MDExOlB1bGxSZXF1ZXN0MTMyNDA3ODU= | 50 | closed | 0 | Added encoding options for netCDF4 variables | shoyer 1217238 | This now allows for variable specific compression. | 2014-03-06T00:21:48Z | 2014-03-19T01:13:10Z | 2014-03-07T02:13:45Z | 2014-03-07T02:13:45Z | 45a90184e84e1ab695b41a4f3f902006f20f1fab | 0 | fd56f8f198fc3d7274b23e4ef558467a961748f9 | 5a1585b7a2184aab1aad54c66c4bedd25353d8c8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/50 | ||||
13245137 | MDExOlB1bGxSZXF1ZXN0MTMyNDUxMzc= | 51 | closed | 0 | Allow Dataset __setitem__ to override an existing variable | shoyer 1217238 | 2014-03-06T03:18:00Z | 2014-03-19T01:13:07Z | 2014-03-07T02:08:37Z | 2014-03-07T02:08:37Z | 7ce2f0d7cc3141518cd39912d64cabbc2cd2fcda | 0 | 830991de69c96fee599060881c98998254460bd4 | fee35b1f7af53d36af790de8bda30bc5c34eec52 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/51 | |||||
13245339 | MDExOlB1bGxSZXF1ZXN0MTMyNDUzMzk= | 52 | closed | 0 | Preserve indexing mode in decode_cf_variable | shoyer 1217238 | 2014-03-06T03:29:35Z | 2014-03-07T02:06:56Z | 2014-03-07T02:05:52Z | 2014-03-07T02:05:52Z | 103ff3151888ec8be98bb4dcbd4ba7f94d17d2f9 | 0 | 6933847f258a59b9ffac1960dd545d8fe5f1022a | fee35b1f7af53d36af790de8bda30bc5c34eec52 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/52 | |||||
13333082 | MDExOlB1bGxSZXF1ZXN0MTMzMzMwODI= | 54 | closed | 0 | Internal refactor of XArray, with a new CoordXArray subtype | shoyer 1217238 | This allows us to simplify our internal model for XArray (it is always cached internally as a base ndarray) and supports some previously tricky aspects involving pandas.Index objects. Notably: 1. The dtype of arrays stored as pandas.Index objects can now be faithfully saved and restored. Doing math with XArray objects always yields objects with the right dtype, so `ds['latitude'] + 1` has dtype=float, not dtype=object. 2. It's no longer necessary to load index data into memory upon creating a new Dataset. Instead, the index data can be loaded on demand. 3. `var.data` is always an ndarray. `var.index` is always a pandas.Index. Related issues: #17, #39, #40. | 2014-03-07T22:42:35Z | 2014-03-24T07:21:02Z | 2014-03-11T01:01:40Z | 2014-03-11T01:01:40Z | 5a2db298c6203246ab647e8a1bd2d8fc62b56a3e | 0 | 8ea13703314ff1dcfb97526393ad92a1083fd54a | fdbfb7c2a5126221d404047190caa04a6229fb52 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/54 | ||||
13336129 | MDExOlB1bGxSZXF1ZXN0MTMzMzYxMjk= | 56 | closed | 0 | ENH: More descriptive error message for invalid indexing | shoyer 1217238 | The error message "orthogonal array indexing only supports 1d arrays" was encountered by Holly when attempting to use a string index for integer-based indexing (this is because `asarray` converts strings to 0-d arrays). Now such invalid indexing arguments will be caught. | 2014-03-08T00:22:49Z | 2014-03-19T01:11:21Z | 2014-03-19T00:35:47Z | 2014-03-19T00:35:47Z | fe0241df486cfdcc0eef93cd30d667dd2ab8e8fd | 0 | dbd3c520536c1ffdb3f61d5e70c79f45bce8826a | fdbfb7c2a5126221d404047190caa04a6229fb52 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/56 | ||||
13356616 | MDExOlB1bGxSZXF1ZXN0MTMzNTY2MTY= | 59 | closed | 0 | Ensure decoding as datetime64[ns] | shoyer 1217238 | Pandas seems to have trouble constructing multi-indices when it's given datetime64 arrays which don't have ns precision. The current version of decode_cf_datetime will give datetime arrays with the default precision, which is us. Hence, when coupled with the dtype restoring wrapper from PR #54, the `to_series()` and `to_dataframe()` methods were broken when using decoded datetimes. | 2014-03-10T01:26:54Z | 2014-03-13T06:58:16Z | 2014-03-12T16:55:57Z | 2014-03-12T16:55:57Z | b1cb962620454febeef888e934debab3fe84818b | 0 | 931db2433594b34396beac945854d655306edc13 | 74d43ffde7f7c285715315f26de39d41c3b931bb | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/59 | ||||
13400487 | MDExOlB1bGxSZXF1ZXN0MTM0MDA0ODc= | 61 | closed | 0 | Fix performance regression for decode_cf_variable | shoyer 1217238 | We were passing the netCDF4 Variable directly to decode_cf_datetime, which does a very expensive np.asarray() call (fixed in the master branch of netCDF4-python). By passing the data as a numpy array, this call is _much_ faster. | 2014-03-10T23:59:38Z | 2014-06-12T23:44:31Z | 2014-03-11T08:29:51Z | 2014-03-11T08:29:51Z | 99d7ce93a1a256ce7e02bf96206cb7038add2e09 | 0 | 2eb423863d1309fedcd3a6a7a21b0e590f1e8424 | fdbfb7c2a5126221d404047190caa04a6229fb52 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/61 | ||||
13446222 | MDExOlB1bGxSZXF1ZXN0MTM0NDYyMjI= | 62 | closed | 0 | Modified Dataset.replace to replace a dictionary of variables | shoyer 1217238 | The resulting function is more flexible and provides a canonical solution for a common pattern I found myself writing with xray: - Create a new dataset based on some (but not all) variables from an existing dataset. - Add in new variables, often of the same name as the variables I removed from the original dataset. | 2014-03-11T22:28:17Z | 2014-06-12T23:44:33Z | 2014-03-31T06:57:38Z | 960d9b16e88e45a899b47a02cb3d32259f06fbc0 | 0 | 99926e7bd781ba6d6f293b2503b8f92af2d4d6c2 | 900f6e49b6419480cc76f615fbfe6df6c876b80c | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/62 | ||||
13447918 | MDExOlB1bGxSZXF1ZXN0MTM0NDc5MTg= | 63 | closed | 0 | Revamped Dataset.rename and DatasetArray.rename | shoyer 1217238 | For consistency, I renamed Dataset.renamed to Dataset.rename (most of our functions to modify datasets and return new updates are not using the past tense) and modified DatasetArray.rename so it can take a name dictionary, just like Dataset.rename. | 2014-03-11T23:03:09Z | 2014-03-12T16:59:14Z | 2014-03-12T16:57:08Z | 2014-03-12T16:57:08Z | 4a629e36f5f7f27e77b8981dacfad8b070c159dd | 0 | ef430d308c09dbc8d255a43af84f7dfd257180b1 | 900f6e49b6419480cc76f615fbfe6df6c876b80c | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/63 | ||||
13507700 | MDExOlB1bGxSZXF1ZXN0MTM1MDc3MDA= | 64 | closed | 0 | ENH: Allow for providing dimensions as xarrays to Dataset.concat | shoyer 1217238 | I also took the opportunity to consolidate the dimension argument handling logic with DatasetArray.concat. | 2014-03-13T05:46:16Z | 2014-03-19T01:11:29Z | 2014-03-19T00:43:03Z | 2014-03-19T00:43:03Z | 8e491992384275575517a4def87f895b29c05740 | 0 | 15d66bd6b2ff6cb3ad0d15df266ab0ad0e462857 | b4fcd7c84ec6e004dcd7e628d08ea0833cce64d7 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/64 | ||||
13508336 | MDExOlB1bGxSZXF1ZXN0MTM1MDgzMzY= | 65 | closed | 0 | ENH: Improvements to as_xarray | shoyer 1217238 | 1. Moved tuple unpacking logic from Dataset._as_variable to as_xarray. 2. Added unit tests. My intention is to add an as_xarray cast to the top of most functions in xray which expect arguments as XArray or DatasetArray objects, mathematical operations excluded. | 2014-03-13T06:24:23Z | 2014-03-19T01:11:33Z | 2014-03-19T00:55:17Z | 2014-03-19T00:55:17Z | 82eb6e433d83f39241d80f60f5119a41643cb885 | 0 | f183aea5a4cb70e2fcb39f527a770437c12958f0 | b4fcd7c84ec6e004dcd7e628d08ea0833cce64d7 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/65 | ||||
13607527 | MDExOlB1bGxSZXF1ZXN0MTM2MDc1Mjc= | 68 | closed | 0 | ENH: More flexible math with variables from different datasets | shoyer 1217238 | PR #33 was definitely a useful change -- item access via [] should return items still in the context of the dataset they were pulled from. However, it doesn't make sense to always keep track of all dataset variables. A particular example is when doing math between variables from different datasets. To be more concrete, suppose I have two datasets ("obs" and "sim"), each with two measurement variables ("tmin" and "tmax"). It should be possible to calculate `obs['tmin'] - sim['tmin']` without a merge conflict due to conflicting values of "tmax". Unfortunately, this is exactly what the current version of xray reports. This PR fixes this behavior, by automatically including only coordinates necessary to describe the arrays involved (via `DatasetArray.select`) when merging datasets resulting from mathematical operations. A possible downside is that occasionally auxiliary coordinates worth keeping around will be lost (e.g., `(2 * obs['tmin']).dataset` no longer contains a variable "tmax"). But on the whole I think this behavior is much more in line with reasonable expectations. This change also removes the DatasetArray methods `refocus` and `unselected` from the public API. I think this is the right call, since these functions were highly specific and really only useful for the prior version of the internal API. | 2014-03-15T22:04:09Z | 2014-06-12T23:44:33Z | 2014-03-24T20:07:46Z | 2014-03-24T20:07:46Z | 8c57be46a4f394c84657c64d926879f7a6915cd8 | 0 | 50c421df2ccfecbf2d1f2f822c879b667f52c992 | bb6885d8cc7f7dacdfd4646f6527599076230604 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/68 | ||||
13610071 | MDExOlB1bGxSZXF1ZXN0MTM2MTAwNzE= | 69 | closed | 0 | BUG: Fix check for virtual variables | shoyer 1217238 | It's only possible to check variable.index for 1d variables. | 2014-03-16T05:59:20Z | 2014-03-19T01:11:38Z | 2014-03-19T00:33:37Z | 2014-03-19T00:33:37Z | 9e13452e076727df28509b2db1ccbf20f9af09a0 | 0 | 0da2faa5f70267b6f4f7a2fddc0dd2c9694655fe | b4fcd7c84ec6e004dcd7e628d08ea0833cce64d7 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/69 | ||||
13610079 | MDExOlB1bGxSZXF1ZXN0MTM2MTAwNzk= | 70 | closed | 0 | ENH: Improved __repr__/__str__ for xray objects | shoyer 1217238 | The new `__repr__` for xray.Dataset is inspired by the representation for iris.Cube: ``` <xray.Dataset> Coordinates: (time: 20, dim1: 100, dim2: 50, dim3: 10) Non-coordinates: var1 - X X - var2 - X X - var3 - X - X Attributes: Empty ``` The new `__repr__` for xray.XArray and xray.DatasetArray shows the actual data as summarized by `repr(array.data)`, as long as the data is an ndarray or has fewer than 10^5 elements (~400 KB): ``` <xray.DatasetArray 'my_variable' (time: 2, x: 3)> array([[1, 2, 3], [4, 5, 6]]) Attributes: foo: bar ``` `__repr__` not showing the data was a complaint I heard about the prior representation. I removed the separate `__str__` implementation so we can have one canonical string representation (both implementations showed equivalent information). I am definitely open to suggestions for improving either of these! Note that unlike the old `__str__` implementation, I'm not doing any truncation of long lines here. We could add that back in (perhaps for attributes) if it seems helpful. | 2014-03-16T06:00:58Z | 2014-03-27T09:05:05Z | 2014-03-24T20:46:31Z | 2014-03-24T20:46:31Z | 0f1d9864ebbdaf2206a1bdadef517ea1c763e138 | 0 | 0e1837cd13811e075c5fb33f2a4e4dd9b354f725 | 9bf3708be4574f85d6c664d9bb80742d5a37a2c0 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/70 | ||||
13616693 | MDExOlB1bGxSZXF1ZXN0MTM2MTY2OTM= | 71 | closed | 0 | ENH: Improved copy methods | shoyer 1217238 | The new copy methods have a uniform API (which also matches pandas): they take a keyword argument `deep`. `deep=False` no longer always loads data into a numpy array. This makes it possible to write functions using the public API to rename variable dimensions without always loading variables into memory. | 2014-03-16T21:57:49Z | 2014-03-27T09:05:15Z | 2014-03-26T23:54:05Z | 2014-03-26T23:54:05Z | 872a050bb4e0e681d101761059e39e1146d5f5f7 | 0 | 2494cf2247f1a2178c7c2f83e55cabc4ccb70e1a | b4fcd7c84ec6e004dcd7e628d08ea0833cce64d7 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/71 | ||||
13616769 | MDExOlB1bGxSZXF1ZXN0MTM2MTY3Njk= | 72 | closed | 0 | ENH: Allow Dataset.concat to take a str for concat_over | shoyer 1217238 | This is a simple fix for an issue that Holly encountered. | 2014-03-16T22:03:56Z | 2014-03-27T20:26:59Z | 2014-03-26T23:54:29Z | 2014-03-26T23:54:29Z | 98a546adbfa65add139fca648aab3607b3526a61 | 0 | 668fd556af9b34c20e1c7c4be3f148d982ab98d6 | b4fcd7c84ec6e004dcd7e628d08ea0833cce64d7 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/72 | ||||
13617730 | MDExOlB1bGxSZXF1ZXN0MTM2MTc3MzA= | 73 | closed | 0 | ENH: Rename DatasetArray.focus to DatasetArray.name | shoyer 1217238 | This is not mere bikeshedding -- "name" much more clearly implies the meaning of this attribute and the fact that it should be a string. I think it makes complete sense that array.name is the name with which the variable is associated in the attached dataset. "Focus" is ambiguous, and is a left-over from when I called DatasetArray "DataView". For an additional level of security (to protect users from themselves), I have also hidden DatasetArray.dataset and DatasetArray.name behind properties so they cannot be modified in-place (I can't think of any case in which this would make sense instead of creating a new DatasetArray). Note: For obvious reasons, this change will conflict with most of the current pull-request. I will rebase whichever change is done last. | 2014-03-16T23:45:38Z | 2014-08-14T07:44:49Z | 2014-03-31T06:36:51Z | 2014-03-31T06:36:51Z | 9b8315468d4db8e3763066edf5477e7a4c7c394f | 0 | 5874bee78f7721bc0fe40323d721d7b264a52d46 | ee86f1f4ad69fe3ea90a029902f03cda24fd9ead | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/73 | ||||
13622293 | MDExOlB1bGxSZXF1ZXN0MTM2MjIyOTM= | 74 | closed | 0 | Simple getting-started guide and imports for readthedocs | shoyer 1217238 | Read the docs can now build documentation on the fly using IPython: http://xray.readthedocs.org/en/latest/getting-started.html The only part that doesn't work directly (for unclear reasons) are the plot directives -- so I've included the one example plot in the PR (created by running `cd doc && make html`). | 2014-03-17T06:11:28Z | 2014-03-27T09:05:10Z | 2014-03-26T23:55:36Z | 2014-03-26T23:55:36Z | 5a5f758ebd8efd565767fc67b0993d1cdaab8795 | 0 | c21243a92c3d5df56e7a5c78f4f3ab1781652aa3 | b4fcd7c84ec6e004dcd7e628d08ea0833cce64d7 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/74 | ||||
13717396 | MDExOlB1bGxSZXF1ZXN0MTM3MTczOTY= | 76 | closed | 0 | Dataset.merge ignores conflicting variable attributes | shoyer 1217238 | This is meant as a temporary fix until we figure out the right logic -- note the comments in the docs to this effect. I also took this as an opportunity to clean up `xarray_equal` and the related functions. It should still do the same thing, just in a more modular way. | 2014-03-19T01:32:22Z | 2014-03-27T09:05:08Z | 2014-03-27T00:02:32Z | 2014-03-27T00:02:32Z | ec50affa8e140a0f0ce17a3c45228c055eed3d3d | 0 | 6cb9b9b9e6df5b203e363ca499af3200037e72f5 | bb6885d8cc7f7dacdfd4646f6527599076230604 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/76 | ||||
13823772 | MDExOlB1bGxSZXF1ZXN0MTM4MjM3NzI= | 77 | closed | 0 | ENH: Dataset.reindex_like and DatasetArray.reindex_like | shoyer 1217238 | This provides an interface for re-indexing a dataset or dataset array using the coordinates from another object. Missing values along any coordinate are replaced by `NaN`. This method is directly based on the pandas method `DataFrame.reindex_like` (and the related series and panel variants). Eventually, I would like to build upon this functionality to add a `join` method to `xray.align` with the possible values `{'outer', 'inner', 'left', 'right'}`, just like `DataFrame.align`. This PR depends on PR #71, since I use its improved `copy` method for datasets. | 2014-03-21T05:12:53Z | 2014-06-12T17:30:21Z | 2014-04-09T03:05:43Z | 2014-04-09T03:05:43Z | 3d194bb5b8f2fecfe2a2102e16d21fca87d7c227 | 0 | c00727f8d8737e378911a7babcd4c83842710256 | 4eef8ec92d0d04978d1e35ec762f6bf195bbe3cf | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/77 | ||||
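The reindexing semantics PR #77 describes — aligning onto another object's labels and replacing missing positions with `NaN` — can be illustrated with a minimal numpy sketch. The helper name `reindex` is hypothetical, not the xray method itself:

```python
import numpy as np

def reindex(values, old_index, new_index):
    # Place each value at the position of its label in new_index;
    # labels absent from old_index become NaN.
    lookup = {label: i for i, label in enumerate(old_index)}
    out = np.full(len(new_index), np.nan)
    for j, label in enumerate(new_index):
        if label in lookup:
            out[j] = values[lookup[label]]
    return out

# 'a' is dropped (not in the new index); 'd' has no source value, so NaN.
print(reindex([10.0, 20.0, 30.0], ['a', 'b', 'c'], ['b', 'c', 'd']))
```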
13917384 | MDExOlB1bGxSZXF1ZXN0MTM5MTczODQ= | 80 | closed | 0 | ENH: Dataset.from_dataframe and DatasetArray.from_series | shoyer 1217238 | Added new methods for creating Datasets and DatasetArrays from pandas objects. | 2014-03-24T18:56:58Z | 2014-06-12T17:29:57Z | 2014-04-09T03:06:03Z | 2014-04-09T03:06:03Z | 9e7db9d403a46e46e148d0b8954d6a7eef349b73 | 0 | a8e31a5855c0aa553d90881db7c3b596c0b0a1ba | 4eef8ec92d0d04978d1e35ec762f6bf195bbe3cf | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/80 | ||||
14047813 | MDExOlB1bGxSZXF1ZXN0MTQwNDc4MTM= | 82 | closed | 0 | Reworked README again to improve presentation | shoyer 1217238 | I'm trying to clean this up in preparation for submitting a talk about xray to [PyData Silicon Valley 2014](http://pydata.org/sv2014/). | 2014-03-27T09:04:36Z | 2014-04-09T04:10:51Z | 2014-03-27T17:04:57Z | 2014-03-27T17:04:57Z | fc89b93a106adbc441475b06d56a3bb175b0631e | 0 | 33730d027c250289073215fe9b7af7778e391d8c | 25c497887caf2c7656d79cf631d06f39a6990b8c | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/82 | ||||
14047897 | MDExOlB1bGxSZXF1ZXN0MTQwNDc4OTc= | 83 | closed | 0 | ENH: Dataset dimensions always appear in sorted order | shoyer 1217238 | Prior to this patch, dimensions appeared in the order in which they were added. This was generally fine, but it means that some output which depends on dimension order (e.g., dataframe output) does not always appear the same, depending on the order in which dataset variables were added. This patch now stores dimensions internally in an unordered dict, but sorts the dimensions into alphabetical order every time they are accessed. | 2014-03-27T09:06:35Z | 2014-06-12T17:29:49Z | 2014-04-09T03:10:25Z | 2014-04-09T03:10:25Z | aad59f3212434207d1d2cb48899af9dce1af7e58 | 0 | 8cf763d54c349e83797e47aadbd7217109416122 | 190fe958c492e60707723b4ed9bee8c9efa8a8c5 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/83 | ||||
14074398 | MDExOlB1bGxSZXF1ZXN0MTQwNzQzOTg= | 84 | closed | 0 | Fix: dataset_repr was failing on empty datasets. | akleeman 514053 | BUG: dataset_repr was failing on empty datasets. | 2014-03-27T18:29:18Z | 2014-03-27T20:09:45Z | 2014-03-27T20:05:49Z | 2014-03-27T20:05:49Z | 93e318a319e9ab6f5e1a8fa1e118131647709df6 | 0 | 68d5e7a0c7b35b9add4ecb6717036f7204118a93 | 648ce64176410ff0fb397ea7b0c13b41ae588183 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/84 | ||||
14081129 | MDExOlB1bGxSZXF1ZXN0MTQwODExMjk= | 86 | closed | 0 | BUG: Zero dimensional variables couldn't be written to file or serialized. | akleeman 514053 | Fixed a bug in which writes would fail if Datasets contained 0d variables. Also added the ability to open Datasets directly from NetCDF3 bytestrings. | 2014-03-27T20:42:06Z | 2014-06-12T17:29:11Z | 2014-03-28T03:58:43Z | 2014-03-28T03:58:43Z | 6e5ba34ac1e034a6c1aea276231548850994e21e | 0 | 59acec9e9ee1def7df6bd570c110759a3760e7cb | f41f7f0d2937239e695bcdadc697ca688c62bf67 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/86 | ||||
14162561 | MDExOlB1bGxSZXF1ZXN0MTQxNjI1NjE= | 87 | closed | 0 | ENH: Revamped Dataset update methods | shoyer 1217238 | New in this patch: - Added an `update` method to override variables and attributes. - Unified `__init__`, `__setitem__`, `update` and `merge` to call the same private methods/functions for processing dictionaries of variables. - If a variable is provided as a `DataArray`, it is automatically unpacked into variables including all its coordinates. - It is now possible to alter the size of existing dimensions via `__setitem__` or `update`. - Removed "decode_cf" as a parameter to `Dataset.__init__`. Now this flag can only be used in `Dataset.load_store`. - Generally cleaned up dataset.py. Replaces PR #62 | 2014-03-31T06:57:14Z | 2014-06-12T17:29:37Z | 2014-04-09T03:21:49Z | 2014-04-09T03:21:49Z | 2f6616ac31337412070300c0d4e252e983306f25 | 0 | bff679355fbe791e6abc1aa6275444e8a1944ebf | 3c0a3d0d2e4ff73adeeda02bf4ebc0cf890e7932 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/87 | ||||
14261147 | MDExOlB1bGxSZXF1ZXN0MTQyNjExNDc= | 88 | closed | 0 | ENH: Better Dataset repr | shoyer 1217238 | @toddsmall and @hdail both reported surprise that the order of dimensions as printed in a Dataset's string representation is not necessarily the same as the order of any variable dimensions. This patch alters `Dataset.__repr__` so the order of the dimensions for each variable is now printed. Hopefully this should make things clearer. Example of the new look: ``` <xray.Dataset> Dimensions: (dim1: 100, dim2: 50, dim3: 10, time: 20) Coordinates: dim1 X dim2 X dim3 X time X Noncoordinates: var1 0 1 var2 0 1 var3 1 0 Attributes: Empty ``` Vs. the old look: ``` <xray.Dataset> Coordinates: (dim1: 100, dim2: 50, dim3: 10, time: 20) Non-coordinates: var1 X X - - var2 X X - - var3 X - X - Attributes: Empty ``` I added Coordinates to make it clear that these are items in the Dataset. I removed the dashes `-` because they added visual noise that made it harder to read the axis numbers. Note: the commits for this PR are applied on top of those for #83, so they can be merged sequentially without the need to rebase. | 2014-04-02T03:24:50Z | 2014-06-12T17:29:15Z | 2014-04-09T03:15:49Z | 2014-04-09T03:15:49Z | a31aa6429a8151300f34ccde2ad41837352da662 | 0 | 1c314febd0f2839d0f555928cf96033513113aa6 | 68e785b90fe43131c9ad23086cffacc9abcfa41c | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/88 | ||||
14290994 | MDExOlB1bGxSZXF1ZXN0MTQyOTA5OTQ= | 89 | closed | 0 | Adjusted README to note that we require Python 2.7 | shoyer 1217238 | Also adjusted setup.py slightly. | 2014-04-02T17:23:36Z | 2014-06-12T17:30:12Z | 2014-04-02T19:52:57Z | 2014-04-02T19:52:57Z | dd2bf6af43f6d15dcfd2e185271f0793d86f2396 | 0 | 1e78dcb4a2b9ca25d9da1027875f60aa94e8a2d7 | 4eef8ec92d0d04978d1e35ec762f6bf195bbe3cf | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/89 | ||||
14313283 | MDExOlB1bGxSZXF1ZXN0MTQzMTMyODM= | 90 | closed | 0 | BUG FIX: two fixes related to DataArray math | shoyer 1217238 | 2014-04-03T03:16:46Z | 2014-06-12T17:29:22Z | 2014-04-03T23:41:53Z | 2014-04-03T23:41:53Z | 3bb11e6efcca4e692a571324f03653634e13502d | 0 | 74db6caefd2478554a66cb3d69a93693704fa47d | 652904df43c665bddca223b086f50bda7f95f912 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/90 | |||||
14318355 | MDExOlB1bGxSZXF1ZXN0MTQzMTgzNTU= | 91 | closed | 0 | ENH: Lazily decode CF datetimes and handle missing datetime values | shoyer 1217238 | 2014-04-03T07:43:36Z | 2014-06-12T17:29:21Z | 2014-04-09T03:23:30Z | 2014-04-09T03:23:30Z | bcb089ea96b89d44891160774616effb0ffb8f6e | 0 | 937197fd113619f68d99a586ca67a8abf2b26f94 | 652904df43c665bddca223b086f50bda7f95f912 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/91 | |||||
14437608 | MDExOlB1bGxSZXF1ZXN0MTQ0Mzc2MDg= | 92 | closed | 0 | Major reorganization of backends | shoyer 1217238 | Changes: - Backends are now in independent files (which should make it easier to keep track of them) - netCDF4 and scipy are now optional dependencies - added a new pydap backend (another optional dependency) - cleaned up and sped up dataset encoding/decoding and XArray equality checks in the process of getting tests to pass P.S. This appears to be the world's most pathological netCDF file (also available at the same URL as an OpenDAP dataset). I eventually gave up on trying to get it to deserialize consistently (pydap doesn't decode the strings properly) but we might want to add it to our test suite anyways: http://test.opendap.org/opendap/hyrax/data/nc/testfile.nc Fun fact of the day: `np.allclose(np.int8(-128), np.int8(-128)) == False`. | 2014-04-07T08:48:25Z | 2014-06-12T17:30:16Z | 2014-04-09T03:50:25Z | 2014-04-09T03:50:25Z | be7576cfa61a9ff3cfe4e4bf6e10145463f85ab8 | 0 | 19926c27705c3155f47a4bf82c8d9ce1b3f59608 | 8daf1d7f58c40d408959e508463f9abf3d2b8264 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/92 | ||||
14534536 | MDExOlB1bGxSZXF1ZXN0MTQ1MzQ1MzY= | 93 | closed | 0 | Reorganized file layout to be more standard | shoyer 1217238 | Most python projects include `project` as a subdirectory of the main project folder, instead of putting it in a separate `src` directory. | 2014-04-09T04:06:02Z | 2014-06-12T17:29:42Z | 2014-04-09T04:06:29Z | 2014-04-09T04:06:29Z | e404344f78b8a8263b5b9f6993bd28aa86a1d12c | 0 | 85408db60312b00b7e4f7db54a1734b8e03431e3 | 571dcc6e92fc9b54305aa825151a4bace9a16a86 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/93 | ||||
14539247 | MDExOlB1bGxSZXF1ZXN0MTQ1MzkyNDc= | 94 | closed | 0 | ENH: Array.get_axis_num method | shoyer 1217238 | This provides a standard way to get axis numbers for an xray array, i.e., `axis = array.get_axis_num(dim)` instead of `axis = array.dimensions.index(dim)`. The main advantage is that it gives a sensible error message, instead of the mystifying "ValueError: tuple.index(x): x not in tuple". | 2014-04-09T08:04:55Z | 2014-06-12T17:29:27Z | 2014-04-09T17:09:42Z | 2014-04-09T17:09:42Z | 8df3a950f9d4508f349324b2e7b0b7cd8a9db631 | 0 | 1949285d8e381e4f773ef6c5e5951831b2eb9eda | 1fabb9c83492e9b9be1a64dc4bb9817594254acf | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/94 | ||||
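The idea behind `get_axis_num` in PR #94 — the same lookup as `dimensions.index(dim)`, but with a readable error — can be sketched in a few lines (this is an illustrative sketch, not the xray implementation):

```python
def get_axis_num(dimensions, dim):
    """Map a dimension name to its axis number, with a clearer error
    than tuple.index's "ValueError: tuple.index(x): x not in tuple"."""
    try:
        return dimensions.index(dim)
    except ValueError:
        raise ValueError(
            f"{dim!r} not found in array dimensions {dimensions!r}")

print(get_axis_num(('time', 'x', 'y'), 'x'))  # 1
```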
14540545 | MDExOlB1bGxSZXF1ZXN0MTQ1NDA1NDU= | 95 | closed | 0 | Fixed DataArray.reduce to use the axis argument like numpy | shoyer 1217238 | Previously, the axis argument (if given as a list) was used to apply the reduction repeatedly one dimension at a time. That wasn't like numpy, and could potentially lead to hard to recognize errors if using an aggregator where order matters. | 2014-04-09T08:46:09Z | 2014-06-12T17:29:12Z | 2014-04-09T17:10:05Z | 2014-04-09T17:10:05Z | e9b0c8bbc8c9d3b358b22294df893f1f0700980e | 0 | 8a8a59b2c41fd3fc7a1dc83abcf6ee2cfee62068 | 1fabb9c83492e9b9be1a64dc4bb9817594254acf | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/95 | ||||
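The numpy behavior PR #95 matches — a tuple of axes is reduced in a single operation, rather than one dimension at a time — looks like this:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Reduce over axes 0 and 2 in one call, as numpy aggregators do,
# leaving only the middle dimension (length 3).
print(a.sum(axis=(0, 2)))
```

For `sum` the result is the same either way, but for aggregators where order matters, reducing all axes at once is the only behavior consistent with numpy.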
14578424 | MDExOlB1bGxSZXF1ZXN0MTQ1Nzg0MjQ= | 96 | closed | 0 | Allow for reading variable length strings from NetCDF4 | shoyer 1217238 | Creating the NetCDF4ArrayWrapper object also let me clean up some other internal code for XArray objects. I also created utils.NDArrayMixin to consolidate all my ndarray-like subclasses. Partial fix for #57 -- we still could use support for _writing_ variable length strings, but that is much less urgent. | 2014-04-09T22:27:11Z | 2014-06-12T17:30:04Z | 2014-04-09T23:38:31Z | 2014-04-09T23:38:31Z | bf9b759b067ba689cbfeaa305c032f280ce2fddc | 0 | ad5144101245c93e2bf475b5ce7835e7aee4d050 | a5296ca7ff95440f24a5d2175ad3fa5bea0bf522 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/96 | ||||
14639094 | MDExOlB1bGxSZXF1ZXN0MTQ2MzkwOTQ= | 98 | closed | 0 | Add anticipated API changes to README | shoyer 1217238 | 2014-04-11T05:25:33Z | 2014-06-12T17:29:09Z | 2014-04-11T05:29:58Z | 2014-04-11T05:29:58Z | b6d3f1351414183b14681ed3195187ce10c30d25 | 0 | cbf96d7ef5221a50aa8d4713ca539bfea70d48a8 | 3c2c7fe8274f08cb95adf2c5e3160e9e8bfc31ce | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/98 | |||||
14642816 | MDExOlB1bGxSZXF1ZXN0MTQ2NDI4MTY= | 99 | closed | 0 | Cleaned up Dataset indexing and groupby | shoyer 1217238 | Changes of note: - Indexing a Dataset with an integer no longer drops 0-dimensional variables. They are kept around as scalar values (without dimensions). This let me remove at least one nasty hack. - Restructured the internals of groupby and DataArray.concat (fixes #81). - Removed support for grouping XArray objects since they will no longer be in the public API. - Added an apply method to DatasetGroupBy (implements #78). | 2014-04-11T08:14:13Z | 2014-06-12T17:29:26Z | 2014-04-11T17:15:49Z | 2014-04-11T17:15:49Z | 12c50c04b050f54dca352d52210cb9f2c5011d35 | 0 | 75d0b1b4dfe73dd91615a465f922281372e44ce8 | 570746cd565930443405c088665478ed69e8d929 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/99 | ||||
14697321 | MDExOlB1bGxSZXF1ZXN0MTQ2OTczMjE= | 100 | closed | 0 | API reorganization | shoyer 1217238 | Renamed "XArray" back to "Variable" and a bunch of associated names. Also renamed the "data" attribute to "values" to match pandas (closes #97). Using any of the old names should still work (for now) but raise a warning. | 2014-04-13T20:00:03Z | 2014-06-12T17:33:49Z | 2014-04-15T02:41:52Z | 2014-04-15T02:41:52Z | 73c768171c316c04638fbf8d46642b79a2a938b5 | 0 | 6c7db497d25a97c082beaf653633eeacf0f13750 | 4713be2beef8c02818089da7c4d343669b59ff1b | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/100 | ||||
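The back-compat scheme PR #100 describes ("the old names should still work but raise a warning") is commonly implemented with a deprecated property; a minimal sketch of that pattern, under the assumption that this is roughly how it was done (this toy `Variable` class is illustrative, not xray's):

```python
import warnings

class Variable:
    def __init__(self, values):
        self.values = values

    @property
    def data(self):
        # Old attribute name keeps working, but warns on every access.
        warnings.warn("'data' has been renamed to 'values'", FutureWarning)
        return self.values

v = Variable([1, 2, 3])
print(v.data)  # still works, emitting a FutureWarning
```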
14744392 | MDExOlB1bGxSZXF1ZXN0MTQ3NDQzOTI= | 102 | closed | 0 | Dataset.concat() can now automatically concat over non-equal variables. | akleeman 514053 | concat_over=True indicates that concat should concat over all variables that are not the same in the set of datasets that are to be concatenated. | 2014-04-14T22:19:02Z | 2014-06-12T17:33:49Z | 2014-04-23T03:24:45Z | 2014-04-23T03:24:45Z | 881122397cf3728b58856cca2986078bfa49c038 | 0 | b9635a53136126980080f4ff80e213c936a3c1e0 | 4713be2beef8c02818089da7c4d343669b59ff1b | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/102 | ||||
15216817 | MDExOlB1bGxSZXF1ZXN0MTUyMTY4MTc= | 107 | closed | 0 | Deprecated 'attributes' in favor of 'attrs' | shoyer 1217238 | Also: 1. Don't try to preserve attributes under mathematical operations. 2. Finish up some cleanup related to "equals" and "identical" for testing. 3. Options for how strictly to compare variables when merging or concatenating (see #25). Fixes #103 and #104. | 2014-04-27T23:00:18Z | 2014-04-28T07:01:06Z | 2014-04-28T07:01:03Z | 2014-04-28T07:01:03Z | bc328b99e5309e401ca3fb1fa8402afacaf6cbed | 0 | 53ac3b83414432a1c35e361467c48b26db32b0f8 | 9744aaf01abe13c8f8d4e7781a5f48c4dc906433 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/107 | ||||
15273574 | MDExOlB1bGxSZXF1ZXN0MTUyNzM1NzQ= | 108 | closed | 0 | Change default of ArrayGroupBy.reduce to dimension=None | shoyer 1217238 | This makes xray consistent with pandas: `obj.groupby('year').sum()` should return an object with 'year' as a dimension, not an object where the 'year' dimension is summed out. As a bonus, the implementation is simpler (less code). | 2014-04-29T05:48:28Z | 2014-04-29T06:06:02Z | 2014-04-29T06:05:59Z | 2014-04-29T06:05:59Z | 436d6ad2dcfdfd01511cd4e6c7f823a050fc1504 | 0 | 0198346f84e4e429b85f80d3a64ff09ba2dec220 | 436538a205012b138225fdcab287dc128355fc2a | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/108 | ||||
15405609 | MDExOlB1bGxSZXF1ZXN0MTU0MDU2MDk= | 109 | closed | 0 | Lazy indexing for loading remote/disk datasets | shoyer 1217238 | @akleeman @ToddSmall I'd love if you could take a quick look, even if it's just at my unit tests. | 2014-05-01T20:28:43Z | 2014-05-02T20:17:55Z | 2014-05-02T20:17:38Z | 2014-05-02T20:17:38Z | 75b99d0f293415f82767df342752b5ba0a7509fb | 0 | 088006c59e926301a320b34db60a3426c919ac4c | 8cd667db011af74f33dee05824b6762010378943 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/109 | ||||
15449080 | MDExOlB1bGxSZXF1ZXN0MTU0NDkwODA= | 110 | closed | 0 | Pre 0.1 release cleanup | shoyer 1217238 | This includes my intended updates to the codebase prior to the 0.1 release, including all changes in PR #109. I still intend to update the docs prior to tagging the release. | 2014-05-02T20:17:10Z | 2014-05-02T20:17:40Z | 2014-05-02T20:17:37Z | 2014-05-02T20:17:37Z | e6426832d903e92ce0e0ffe30a16ab050a03d1ae | 0 | 03cfc056c78d48b2d789736b7adbe44b381cd17e | 8cd667db011af74f33dee05824b6762010378943 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/110 | ||||
15458207 | MDExOlB1bGxSZXF1ZXN0MTU0NTgyMDc= | 111 | closed | 0 | Prepare v0.1 | shoyer 1217238 | 2014-05-03T01:28:19Z | 2014-05-03T01:28:31Z | 2014-05-03T01:28:29Z | 2014-05-03T01:28:29Z | ed1f2caada43b213c2947f80284b80c999e606f7 | 0 | 9f15916fb4ffaed9cc7aec656ad168b318bb8074 | 9d09b43148fa4a6682a68f9f2eadf814cdc3ec76 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/111 | |||||
15556956 | MDExOlB1bGxSZXF1ZXN0MTU1NTY5NTY= | 113 | closed | 0 | Most of Python 3 support | takluyver 327925 | This isn't entirely finished, but I need to stop working on it for a bit, and I think enough of it is ready to be reviewed. The core code is passing its tests; the remaining failures are all in talking to the Scipy and netCDF4 backends. I also have PRs open against Scipy (scipy/scipy#3617) and netCDF4 (Unidata/netcdf4-python#252) to fix bugs I've encountered there. Particular issues that came up: - There were quite a few circular imports. For now, I've fudged these to work rather than trying to reorganise the code. - `isinstance(x, int)` doesn't reliably catch numpy integer types - see e.g. numpy/numpy#2951. I changed several such cases to `isinstance(x, (int, np.integer))`. | 2014-05-06T18:31:56Z | 2014-07-15T20:36:05Z | 2014-05-09T01:39:01Z | 2014-05-09T01:39:01Z | 184fd39c0fa1574a03439998138297bdb193674c | 0.1.1 664063 | 0 | 6dbd8910080e9210700501c0ea671cf0dc44d90f | 8d6fbd7f4469ce73ed94cf09602efa0498f9dab6 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/113 | |||
15688674 | MDExOlB1bGxSZXF1ZXN0MTU2ODg2NzQ= | 119 | closed | 0 | Fix non-standard calendars | shoyer 1217238 | These calendars now result in arrays with object dtype. Should fix #118. | 2014-05-09T06:52:47Z | 2014-05-09T06:53:06Z | 2014-05-09T06:53:02Z | 2014-05-09T06:53:02Z | e012ef660a75432b5b51b9ff6221fd4e2b4694a1 | 0 | 4c5d7075358c27d89f0fa3961419dc8860c20360 | b48f1ef6391ff3f8a09f22a569fc51f48e62156d | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/119 | ||||
15751098 | MDExOlB1bGxSZXF1ZXN0MTU3NTEwOTg= | 124 | closed | 0 | Complete Python 3 support | shoyer 1217238 | Resolves #53. Thanks @takluyver for doing most of the hard work! Also resolves #57 (writing variable length unicode strings in NetCDF4), since at some point I thought it would be convenient for Python 3. That turned out to be a tangent, but I'm happy I wrote it anyways. | 2014-05-12T05:03:34Z | 2014-07-29T21:27:56Z | 2014-05-12T05:56:28Z | 2014-05-12T05:56:28Z | 263d140747e1004f1bfa7b1e480d57f39e480d70 | 0 | ce84a8a6da961245affdcaea2321fe3d63f019a6 | cc5e1b22e015e320a5ffc9194e6e6fb869d96279 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/124 | ||||
15767015 | MDExOlB1bGxSZXF1ZXN0MTU3NjcwMTU= | 125 | closed | 0 | Only copy datetime64 data if it is using non-nanosecond precision. | akleeman 514053 | In an attempt to coerce all datetime arrays to nanosecond resolution, utils.as_safe_array() was creating copies of any datetime64 array (via the astype method). This was causing unexpected behavior (bugs) for things such as concatenation over times (see below). ``` import xray import pandas as pd ds = xray.Dataset() ds['time'] = ('time', pd.date_range('2011-09-01', '2011-09-11')) times = [ds.indexed(time=[i]) for i in range(10)] ret = xray.Dataset.concat(times, 'time') print ret['time'] <xray.DataArray 'time' (time: 10)> array(['1970-01-02T07:04:40.718526408-0800', '1969-12-31T16:00:00.099966608-0800', '1969-12-31T16:00:00.041748384-0800', '1969-12-31T16:00:00.041748360-0800', '1969-12-31T16:00:00.041748336-0800', '1969-12-31T16:00:00.041748312-0800', '1969-12-31T16:00:00.041748288-0800', '1969-12-31T16:00:00.041748264-0800', '1969-12-31T16:00:00.041748240-0800', '1969-12-31T16:00:00.041748216-0800'], dtype='datetime64[ns]') Attributes: Empty ``` | 2014-05-12T13:36:22Z | 2014-05-20T19:09:40Z | 2014-05-20T19:09:40Z | e255f9e632bd646190ba6433599ccea7e122cc7f | 0 | d09708a119d8ca90298673ecd982414017ef53de | 8f667bef6e190764cdd801fc857f94f23c8a36c2 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/125 | |||||
15798892 | MDExOlB1bGxSZXF1ZXN0MTU3OTg4OTI= | 126 | closed | 0 | Return numpy.datetime64 arrays for non-standard calendars | jhamman 2443309 | Fixes issues in #118 and #121 | 2014-05-13T00:22:51Z | 2015-07-27T05:38:06Z | 2014-05-16T00:21:08Z | 2014-05-16T00:21:08Z | e80836b9736fcfba1af500c08aab22bcda4e8912 | 0.1.1 664063 | 0 | e07bc93589bbd23fe3bfa1ae1e1daf15eebf83f2 | ed3143e3082ba339d35dc4678ddabc7e175dd6b8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/126 | |||
15820652 | MDExOlB1bGxSZXF1ZXN0MTU4MjA2NTI= | 127 | closed | 0 | initial implementation of support for NetCDF groups | alimanfoo 703554 | Just to start getting familiar with xray, I've had a go at implementing support for opening a dataset from a specific group within a NetCDF file. I haven't tested on real data but there are a couple of unit tests covering simple cases. Let me know if you'd like to take this forward, happy to work on it further. | 2014-05-13T13:12:53Z | 2014-06-27T17:23:33Z | 2014-05-16T01:46:09Z | 2014-05-16T01:46:09Z | efece21b5fce99465a52c866b890e34f19d5bd37 | 0.1.1 664063 | 0 | 28b0ba59b33f63dcd6f6cb05666b3cd98211f4b4 | ed3143e3082ba339d35dc4678ddabc7e175dd6b8 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/127 | |||
15862044 | MDExOlB1bGxSZXF1ZXN0MTU4NjIwNDQ= | 128 | closed | 0 | Expose more information in DataArray.__repr__ | shoyer 1217238 | This PR changes the `DataArray` representation so that it displays more of the information associated with a data array: - "Coordinates" are indicated by their name and the `repr` of the corresponding pandas.Index object (to indicate how they are used as indices). - "Linked" dataset variables are also listed. - These are other variables in the dataset associated with a DataArray which are also indexed along with the DataArray. - They are accessible from the `dataset` attribute or by indexing the data array with a string. - Perhaps their most convenient aspect is that they enable [`groupby` operations by name](http://xray.readthedocs.org/en/latest/tutorial.html#apply) for DataArray objects. - This is an admittedly somewhat confusing (though convenient) notion that I am considering [removing](https://github.com/xray-pydata/xray/issues/117), but if we don't remove them we should certainly expose their existence more clearly, given the potential benefits in expressiveness and costs in performance. Questions to resolve: - Is "Linked dataset variables" the best name for these? - Perhaps it would be useful to show more information about these linked variables, such as their dimensions and/or shape? Examples of the new repr are on nbviewer: http://nbviewer.ipython.org/gist/shoyer/94936e5b71613683d95a | 2014-05-14T06:05:53Z | 2014-08-01T05:54:50Z | 2014-05-29T04:19:46Z | 2014-05-29T04:19:46Z | 166ba9652e44423de902351d65e94216f5d8125a | 0.2 650893 | 0 | 238cb2a3d360e4dc0977c0e37758faf62e262fab | ed3143e3082ba339d35dc4678ddabc7e175dd6b8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/128 | |||
15862812 | MDExOlB1bGxSZXF1ZXN0MTU4NjI4MTI= | 129 | closed | 0 | Require only numpy 1.7 for the benefit of readthedocs | shoyer 1217238 | ReadTheDocs comes with pre-built packages for the basic scientific python stack, but some of these packages are old (e.g., numpy is 1.7.1). The only way to upgrade packages on readthedocs is to use a virtual environment and a requirements.txt. Unfortunately, this means we can't upgrade both numpy and pandas simultaneously, because pandas may get built first and link against the wrong version of numpy. We inadvertently stumbled upon a workaround to build the "latest" docs by first installing numpy in the (cached) virtual environment, and then later (in another commit), adding pandas to the requirements.txt file. However, this is a real hack and makes it impossible to maintain different versions of the docs, such as for tagged releases. Accordingly, this commit relaxes the numpy version requirement so we can use a version that readthedocs already has installed. (We actually don't really need a newer version of numpy for any current functionality in xray, although it's nice to have for support for missing value functions like nanmean.) | 2014-05-14T06:41:30Z | 2014-06-25T23:40:31Z | 2014-05-15T07:21:22Z | 2014-05-15T07:21:22Z | b020100a03b394cc08b5cb504a08a64af1253ba7 | 0.1.1 664063 | 0 | 0b33e2ab862f27b688d8ababa954265942720164 | ed3143e3082ba339d35dc4678ddabc7e175dd6b8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/129 | |||
16037950 | MDExOlB1bGxSZXF1ZXN0MTYwMzc5NTA= | 134 | closed | 0 | Fix concatenating Variables with dtype=datetime64 | shoyer 1217238 | This is an alternative to #125 which I think is a little cleaner. Basically, there was a bug where `Variable.values` for datetime64 arrays always made a copy of values. This made it impossible to edit variable values in-place. @akleeman would appreciate your thoughts. | 2014-05-19T05:39:46Z | 2014-06-28T01:08:03Z | 2014-05-20T19:09:28Z | 2014-05-20T19:09:28Z | 6e9268f01681c37a9603ef67a46aa96d29955fb8 | 0.1.1 664063 | 0 | e9e1866dfdf13b9656c923c1d8f077e9bad225d8 | c425967c5f23f46ec1100ccdf472a3fbc0a51ade | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/134 | |||
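The numpy behavior underlying the bug fixed in PR #134 — `astype` allocates a new array even when the dtype already matches, unless `copy=False` is passed — can be seen directly:

```python
import numpy as np

times = np.array(['2011-09-01', '2011-09-02'], dtype='datetime64[ns]')

copied = times.astype('datetime64[ns]')            # always a new array
same = times.astype('datetime64[ns]', copy=False)  # reuses the input array

print(copied is times, same is times)  # False True
```

Because the copy is a distinct array, writes to it never reach the original, which is why always calling `astype` made in-place edits of datetime64 variable values impossible.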
16037967 | MDExOlB1bGxSZXF1ZXN0MTYwMzc5Njc= | 135 | closed | 0 | Tweak specification of dependencies for readthedocs | shoyer 1217238 | 2014-05-19T05:41:12Z | 2014-06-26T23:33:24Z | 2014-05-19T06:03:57Z | 2014-05-19T06:03:57Z | e90765bb169259060278cffe239983fee433b8d2 | 0 | 05eb9d3b651a06be78b9557551bd2cf83adc30d1 | c425967c5f23f46ec1100ccdf472a3fbc0a51ade | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/135 | |||||
16085838 | MDExOlB1bGxSZXF1ZXN0MTYwODU4Mzg= | 137 | closed | 0 | Dataset.reduce methods | jhamman 2443309 | A first attempt at implementing Dataset reduction methods. #131 | 2014-05-20T01:53:30Z | 2014-07-25T06:37:31Z | 2014-05-21T20:23:36Z | 2014-05-21T20:23:36Z | f6a6e7317c78e108176b74f1f67e12f5880e14fa | 0.2 650893 | 0 | b5d82a0887f7156ddd4ab1c1aab89345bd642162 | 7732816216bbb5d0c98946149c9f3b8dc54eb28f | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/137 | |||
16140780 | MDExOlB1bGxSZXF1ZXN0MTYxNDA3ODA= | 139 | closed | 0 | Enable keep attrs | jhamman 2443309 | Fixes #138 | 2014-05-21T00:48:47Z | 2015-07-27T05:38:13Z | 2014-05-21T21:43:21Z | cfc9de74d9dccfd61798e6f0db6fdd8cf47f4e7f | 0 | 1c08e190d2b3d05b7107d3d7a988c2afac37b911 | fd5268f7bbf932767b589169112efc2ee5a8a012 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/139 | |||||
16190479 | MDExOlB1bGxSZXF1ZXN0MTYxOTA0Nzk= | 141 | closed | 0 | Add keep_attrs to reduction methods | jhamman 2443309 | fixes #138 This is a much cleaner version of #139. | 2014-05-21T21:48:19Z | 2014-05-22T00:35:21Z | 2014-05-22T00:29:22Z | 2014-05-22T00:29:22Z | 70a6f9b29743e2b5480bdb25ced7c184c99df268 | 0 | 555def48f18e75246a91decd4a3b3c951e247ff1 | fd5268f7bbf932767b589169112efc2ee5a8a012 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/141 | ||||
16535481 | MDExOlB1bGxSZXF1ZXN0MTY1MzU0ODE= | 143 | closed | 0 | Fix decoded_cf_variable was not working. | akleeman 514053 | Small bug fix, and a test. | 2014-05-30T14:27:13Z | 2014-06-12T09:39:20Z | 2014-06-12T09:39:20Z | b77a8173175acc504ccf1203576b7be4b111da6e | 0 | 1ebd3a5df08605410d716a002de4e72072dbd7e8 | 71137d1e50116e5cca63d9b1c169844b5737cec2 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/143 | |||||
16622100 | MDExOlB1bGxSZXF1ZXN0MTY2MjIxMDA= | 144 | closed | 0 | Use "equivalence" for all dictionary equality checks | shoyer 1217238 | This should fix a bug @mgarvert encountered with concatenating variables with different array attributes. In the process of fixing this issue, I encountered and fixed another bug with utils.remove_incompatible_items. | 2014-06-02T21:01:35Z | 2014-06-25T23:40:36Z | 2014-06-02T21:20:15Z | 2014-06-02T21:20:15Z | 955027efe5822cdb1d3f48ee1260318e1af8c0a8 | 0.2 650893 | 0 | eff435deecabd1ff9488ec640c126dde2fe4fca0 | 71137d1e50116e5cca63d9b1c169844b5737cec2 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/144 | |||
16687849 | MDExOlB1bGxSZXF1ZXN0MTY2ODc4NDk= | 145 | closed | 0 | Fix doc builds on ReadTheDocs | shoyer 1217238 | 2014-06-04T01:40:12Z | 2014-06-04T01:40:33Z | 2014-06-04T01:40:30Z | 2014-06-04T01:40:30Z | 4274574b07804a15818638344a4aa74efe1ca377 | 0 | f3abd1333df8d65d75ac904d8e7d409540febe44 | 131aee9516795925e15e4745add4b44b1578c1ee | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/145 | |||||
16802020 | MDExOlB1bGxSZXF1ZXN0MTY4MDIwMjA= | 147 | closed | 0 | Support "None" as a variable name and use it as a default | shoyer 1217238 | This makes the xray API a little more similar to pandas, which makes heavy use of `name = None` for objects that can but don't always have names like Series and Index. It will be a particularly useful option to have around when we add a direct constructor for DataArray objects (#115). For now, arrays will probably only end up being named `None` if they are the result of some mathematical operation where the name could be ambiguous. | 2014-06-06T02:26:57Z | 2014-08-14T07:44:27Z | 2014-06-09T06:17:55Z | 2014-06-09T06:17:55Z | 0674f9350b26eb604d7cb729d34abbf52fde2e20 | 0.2 650893 | 0 | f448318ff7efc8e6c4e98140ecda0db7304fbfce | 77dd0c38a4065ea815368f3ca9490157b530a9c4 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/147
CREATE TABLE [pull_requests] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [state] TEXT,
    [locked] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [body] TEXT,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [merged_at] TEXT,
    [merge_commit_sha] TEXT,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [draft] INTEGER,
    [head] TEXT,
    [base] TEXT,
    [author_association] TEXT,
    [auto_merge] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [url] TEXT,
    [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
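As a minimal sketch of how this schema can be queried locally, the snippet below loads two rows from the table above into an in-memory SQLite database and selects the merged pull requests (those with a non-NULL `merged_at`). The trimmed column set and the in-memory database are illustrative assumptions; a real export would be a `.db` file with the full schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative; a real export is a file on disk
conn.executescript("""
CREATE TABLE [pull_requests] (
    [id] INTEGER PRIMARY KEY,
    [number] INTEGER,
    [state] TEXT,
    [title] TEXT,
    [user] INTEGER,
    [merged_at] TEXT
);
""")

# Two rows taken from the table above (columns trimmed for brevity).
rows = [
    (16037950, 134, "closed",
     "Fix concatenating Variables with dtype=datetime64",
     1217238, "2014-05-20T19:09:28Z"),
    (16140780, 139, "closed", "Enable keep attrs",
     2443309, None),  # closed without merging: merged_at is NULL
]
conn.executemany("INSERT INTO pull_requests VALUES (?, ?, ?, ?, ?, ?)", rows)

# Merged PRs are those whose merged_at timestamp is non-NULL.
merged = conn.execute(
    "SELECT number, title FROM pull_requests "
    "WHERE merged_at IS NOT NULL ORDER BY number"
).fetchall()
print(merged)  # [(134, 'Fix concatenating Variables with dtype=datetime64')]
```

Note that a closed PR is merged only if `merged_at` is set; PR #139, for example, was closed in favour of #141 and so has no merge timestamp.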