issues
38 rows where milestone = 650893 and state = "closed" sorted by updated_at descending
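The listing below was produced by a straightforward filter-and-sort. A minimal SQL sketch of that query, assuming the SQLite `issues` table defined in the CREATE TABLE statement at the end of this page:

```sql
-- Closed issues and pull requests in the 0.2 milestone (id 650893),
-- most recently updated first (matching the listing below).
select id, number, title, user, state, comments, created_at, updated_at, closed_at
from issues
where milestone = 650893
  and state = 'closed'
order by updated_at desc;
```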
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
33637243 | MDU6SXNzdWUzMzYzNzI0Mw== | 131 | Dataset summary methods | jhamman 2443309 | closed | 0 | 0.2 650893 | 10 | 2014-05-16T00:17:56Z | 2023-09-28T12:42:34Z | 2014-05-21T21:47:29Z | MEMBER | Add summary methods to Dataset object. For example, it would be great if you could summarize an entire dataset in a single line. (1) Mean of all variables in dataset.
(2) Mean of all variables in dataset along a dimension:
In the case where a dimension is specified and there are variables that don't use that dimension, I'd imagine you would just pass that variable through unchanged. Related to #122. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
37841310 | MDU6SXNzdWUzNzg0MTMxMA== | 183 | Checklist for v0.2 release | shoyer 1217238 | closed | 0 | shoyer 1217238 | 0.2 650893 | 1 | 2014-07-15T00:25:27Z | 2014-08-14T20:01:17Z | 2014-08-14T20:01:17Z | MEMBER | Requirements:
- [x] Better documentation:
- [x] Tutorial introduces
Nice to have:
- [x] Support modifying DataArray dimensions/coordinates in place (#180)
- [ ] Automatic alignment in mathematical operations (#184)
- [ ] Revised interface for CF encoding/decoding (#155, #175) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||
40231730 | MDExOlB1bGxSZXF1ZXN0MTk3NzMyODE= | 213 | Checklist for v0.2.0 | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-08-14T08:08:25Z | 2014-08-14T17:20:05Z | 2014-08-14T17:20:02Z | MEMBER | 0 | pydata/xarray/pulls/213 | Should resolve all remaining items in #183. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
35114453 | MDExOlB1bGxSZXF1ZXN0MTY4MDIwMjA= | 147 | Support "None" as a variable name and use it as a default | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-06T02:26:57Z | 2014-08-14T07:44:27Z | 2014-06-09T06:17:55Z | MEMBER | 0 | pydata/xarray/pulls/147 | This makes the xray API a little more similar to pandas, which
makes heavy use of It will be a particularly useful option to have around when we add
a direct constructor for DataArray objects (#115). For now, arrays will
probably only end up being named |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
38848839 | MDU6SXNzdWUzODg0ODgzOQ== | 190 | Consistent use of abbreviations: attrs, dims, coords | shoyer 1217238 | closed | 0 | 0.2 650893 | 3 | 2014-07-27T19:38:35Z | 2014-08-14T07:24:29Z | 2014-08-14T07:24:29Z | MEMBER | Right now, we use We also use Note that I switched to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
39162573 | MDExOlB1bGxSZXF1ZXN0MTkxMzI3MzY= | 194 | Consistently use shorter names: always use 'attrs', 'coords' and 'dims' | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-31T05:11:12Z | 2014-08-14T05:08:01Z | 2014-08-14T05:07:58Z | MEMBER | 0 | pydata/xarray/pulls/194 | Cleaned up a few cases where Fixes: #190
- [x] Switch names in xray itself
- [x] Switch names in tests
- [x] Switch names in documentation |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
39768388 | MDExOlB1bGxSZXF1ZXN0MTk0OTQ1OTc= | 207 | Raise an error when attempting to use a scalar variable as a dimension | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-08-07T21:07:03Z | 2014-08-07T21:13:12Z | 2014-08-07T21:13:02Z | MEMBER | 0 | pydata/xarray/pulls/207 | If 'x' was a scalar variable in a dataset and you set a new variable with 'x' as a dimension, you could end up with a broken Dataset object. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
39384616 | MDExOlB1bGxSZXF1ZXN0MTkyNjE4MTc= | 201 | Fix renaming in-place bug with virtual variables | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-08-04T01:20:06Z | 2014-08-04T01:24:32Z | 2014-08-04T01:22:58Z | MEMBER | 0 | pydata/xarray/pulls/201 | This is why mutating state is a bad idea. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
39354137 | MDExOlB1bGxSZXF1ZXN0MTkyNDgzMDg= | 198 | Cleanup of DataArray constructor / Dataset.__getitem__ | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-08-02T18:12:36Z | 2014-08-02T18:28:54Z | 2014-08-02T18:28:52Z | MEMBER | 0 | pydata/xarray/pulls/198 | Now Dataset.__getitem__ raises a KeyError when it can't find a variable. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
33463636 | MDExOlB1bGxSZXF1ZXN0MTU4NjIwNDQ= | 128 | Expose more information in DataArray.__repr__ | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-05-14T06:05:53Z | 2014-08-01T05:54:50Z | 2014-05-29T04:19:46Z | MEMBER | 0 | pydata/xarray/pulls/128 | This PR changes the Questions to resolve: - Is "Linked dataset variables" the best name for these? - Perhaps it would be useful to show more information about these linked variables, such as their dimensions and/or shape? Examples of the new repr are on nbviewer: http://nbviewer.ipython.org/gist/shoyer/94936e5b71613683d95a |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
39167256 | MDExOlB1bGxSZXF1ZXN0MTkxMzUxNTk= | 196 | Raise NotImplementedError when attempting to use a pandas.MultiIndex | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-31T06:53:04Z | 2014-07-31T07:00:43Z | 2014-07-31T07:00:40Z | MEMBER | 0 | pydata/xarray/pulls/196 | Related: #164 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
39163624 | MDExOlB1bGxSZXF1ZXN0MTkxMzMzNjg= | 195 | .loc and .sel support indexing with boolean arrays | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-31T05:41:09Z | 2014-07-31T06:52:43Z | 2014-07-31T06:52:41Z | MEMBER | 0 | pydata/xarray/pulls/195 | Fixes #182 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
37840634 | MDU6SXNzdWUzNzg0MDYzNA== | 182 | DataArray.loc should accept boolean arrays | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-15T00:12:01Z | 2014-07-31T06:52:41Z | 2014-07-31T06:52:41Z | MEMBER | Allowing boolean arrays for There is basically no ambiguity since |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
38857041 | MDExOlB1bGxSZXF1ZXN0MTg5NDczNTA= | 192 | Enhanced support for modifying Dataset & DataArray properties in place | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-28T02:14:00Z | 2014-07-31T04:46:19Z | 2014-07-31T04:46:16Z | MEMBER | 0 | pydata/xarray/pulls/192 | With this patch, it is possible to perform the following operations:
- It is no longer possible to set |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
37211553 | MDU6SXNzdWUzNzIxMTU1Mw== | 180 | Support modifying DataArray dimensions and coordinates in-place | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-06T04:30:36Z | 2014-07-31T04:46:16Z | 2014-07-31T04:46:16Z | MEMBER | The key thing is to (shallow) copy the underlying |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
34003882 | MDU6SXNzdWUzNDAwMzg4Mg== | 140 | Dataset.apply method | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-05-21T17:08:47Z | 2014-07-31T04:45:29Z | 2014-07-31T04:45:29Z | MEMBER | Dataset reduce methods (#131) suggested to me that it would be nice to support applying functions which map over all data arrays in a dataset. The signature of For example, I should be able to write Note: It's still worth having #137 as a separate implementation because it can do some additional validation for dimensions and skip variables where the aggregation doesn't make sense. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
38700243 | MDExOlB1bGxSZXF1ZXN0MTg4Nzk3OTY= | 189 | Implementation of Dataset.apply method | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-07-25T06:18:29Z | 2014-07-31T04:45:29Z | 2014-07-31T04:45:29Z | MEMBER | 0 | pydata/xarray/pulls/189 | Fixes #140 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
38502970 | MDExOlB1bGxSZXF1ZXN0MTg3NTk1NTA= | 188 | Dataset context manager and close() method | shoyer 1217238 | closed | 0 | 0.2 650893 | 1 | 2014-07-23T07:03:49Z | 2014-07-29T19:47:46Z | 2014-07-29T19:44:30Z | MEMBER | 0 | pydata/xarray/pulls/188 | With this PR, it is possible to close the data store from which a dataset was loaded via
The ability to cleanly close files opened from disk is pretty essential -- we probably should have had this a while ago. It should not be necessary to use the low-level/unstable datastore API to get this functionality. Implementation question: With this current implementation, calling CC @ToddSmall |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
33852501 | MDExOlB1bGxSZXF1ZXN0MTYwODU4Mzg= | 137 | Dataset.reduce methods | jhamman 2443309 | closed | 0 | 0.2 650893 | 6 | 2014-05-20T01:53:30Z | 2014-07-25T06:37:31Z | 2014-05-21T20:23:36Z | MEMBER | 0 | pydata/xarray/pulls/137 | A first attempt at implementing Dataset reduction methods. #131 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
37031948 | MDU6SXNzdWUzNzAzMTk0OA== | 178 | Use "XIndex" instead of "Index"? | shoyer 1217238 | closed | 0 | 0.2 650893 | 1 | 2014-07-02T22:43:46Z | 2014-07-14T23:58:46Z | 2014-07-14T23:58:46Z | MEMBER | In #161, I renamed To better distinguish xray's Index from pandas, let's call it |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
34352255 | MDU6SXNzdWUzNDM1MjI1NQ== | 142 | Rename "coordinates" to "indices"? | shoyer 1217238 | closed | 0 | 0.2 650893 | 2 | 2014-05-27T08:47:51Z | 2014-07-10T09:38:26Z | 2014-06-22T00:44:26Z | MEMBER | For users of pandas, the xray interface would be more obvious if we referred to what we currently call "coordinates" as "indices." This would entail renaming the Possible downsides:
1. The xray data model would be less obvious to people familiar with the NetCDF.
2. There is some potential for confusion between |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
32928338 | MDU6SXNzdWUzMjkyODMzOA== | 116 | Allow DataArray objects without named dimensions? | shoyer 1217238 | closed | 0 | 0.2 650893 | 2 | 2014-05-06T20:12:41Z | 2014-07-06T03:38:48Z | 2014-07-06T03:38:48Z | MEMBER | At PyData SV, @mrocklin suggested that by default, array broadcasting should fall back on numpy's shape based broadcasting. This would also simplify directly constructing DataArray objects (#115). The trick will be to make this work with xray's internals, which currently assume that dimensions are always named by strings. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
36908559 | MDExOlB1bGxSZXF1ZXN0MTc4NDAyNDE= | 177 | Add python2.6 compatibility | aykuznetsova 3344007 | closed | 0 | 0.2 650893 | 1 | 2014-07-01T16:19:21Z | 2014-07-01T21:30:08Z | 2014-07-01T19:57:30Z | NONE | 0 | pydata/xarray/pulls/177 | This change mainly involves an alternative import of OrderedDict, modified dict and set comprehensions, and using unittest2 for testing. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
34810536 | MDExOlB1bGxSZXF1ZXN0MTY2MjIxMDA= | 144 | Use "equivalence" for all dictionary equality checks | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-02T21:01:35Z | 2014-06-25T23:40:36Z | 2014-06-02T21:20:15Z | MEMBER | 0 | pydata/xarray/pulls/144 | This should fix a bug @mgarvert encountered with concatenating variables with different array attributes. In the process of fixing this issue, I encountered and fixed another bug with utils.remove_incompatible_items. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
36453574 | MDExOlB1bGxSZXF1ZXN0MTc1NzQ3MjY= | 174 | Add isnull and notnull (wrapping pandas) | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-25T07:07:42Z | 2014-06-25T07:37:36Z | 2014-06-25T07:37:35Z | MEMBER | 0 | pydata/xarray/pulls/174 | { "url": "https://api.github.com/repos/pydata/xarray/issues/174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
36354990 | MDExOlB1bGxSZXF1ZXN0MTc1MTM3NTk= | 173 | Edge cases | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-24T05:34:05Z | 2014-06-24T17:55:16Z | 2014-06-24T17:55:14Z | MEMBER | 0 | pydata/xarray/pulls/173 | { "url": "https://api.github.com/repos/pydata/xarray/issues/173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
36354140 | MDExOlB1bGxSZXF1ZXN0MTc1MTMyNjY= | 172 | {DataArray,Dataset}.indexes no longer creates a new dict | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-24T05:10:25Z | 2014-06-24T05:34:38Z | 2014-06-24T05:34:36Z | MEMBER | 0 | pydata/xarray/pulls/172 | According to the toy benchmark below, this shaves off between 20% (diff-indexes) and 40% (same-indexes) of xray's overhead for array math:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
36240022 | MDExOlB1bGxSZXF1ZXN0MTc0NDY2Njc= | 171 | Implementation of DatasetGroupBy summary methods | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-22T08:38:51Z | 2014-06-23T07:25:10Z | 2014-06-23T07:25:08Z | MEMBER | 0 | pydata/xarray/pulls/171 | You can now do It is not optimized like the DataArray.groupby summary methods but it should work. Thanks @jhamman for laying the groundwork for this! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
36238726 | MDExOlB1bGxSZXF1ZXN0MTc0NDYwNjY= | 169 | Cleanups | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-22T06:44:17Z | 2014-06-22T06:56:22Z | 2014-06-22T06:56:20Z | MEMBER | 0 | pydata/xarray/pulls/169 | { "url": "https://api.github.com/repos/pydata/xarray/issues/169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
36211623 | MDU6SXNzdWUzNjIxMTYyMw== | 167 | Unable to load pickle Dataset that was pickled with cPickle | rzlee 2382049 | closed | 0 | shoyer 1217238 | 0.2 650893 | 1 | 2014-06-21T00:02:43Z | 2014-06-22T01:40:58Z | 2014-06-22T01:40:58Z | NONE |
```
import cPickle as pickle
import xray
import numpy as np
import pandas as pd

foo_values = np.random.RandomState(0).rand(3,4)
times = pd.date_range('2001-02-03', periods=3)
ds = xray.Dataset({'time': ('time', times), 'foo': (['time', 'space'], foo_values)})

with open('mypickle.pkl', 'w') as f:
    pickle.dump(ds, f)

with open('mypickle.pkl') as f:
    myds = pickle.load(f)
myds
```
This code results in:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||
35684756 | MDExOlB1bGxSZXF1ZXN0MTcxMTc1NjY= | 161 | Rename "Coordinate", "labeled" and "indexed" | shoyer 1217238 | closed | 0 | 0.2 650893 | 1 | 2014-06-13T16:07:40Z | 2014-06-22T00:44:28Z | 2014-06-22T00:44:26Z | MEMBER | 0 | pydata/xarray/pulls/161 | Fixes #142 Fixes #148 All existing code should still work but issue a Full list of updates: | Old | New |
| --- | --- |
| Most of these are both |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
35262649 | MDU6SXNzdWUzNTI2MjY0OQ== | 148 | API: rename "labeled" and "indexed" | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-09T06:17:09Z | 2014-06-22T00:44:26Z | 2014-06-22T00:44:26Z | MEMBER | I'd like to rename the Dataset/DataArray methods I like option 2 (particularly because it's shorter), but to avoid confusion with the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
35965115 | MDExOlB1bGxSZXF1ZXN0MTcyODEzODQ= | 165 | WIP: cleanup conventions.encode_cf_variable | shoyer 1217238 | closed | 0 | 0.2 650893 | 0 | 2014-06-18T08:47:35Z | 2014-06-22T00:36:01Z | 2014-06-22T00:35:42Z | MEMBER | 0 | pydata/xarray/pulls/165 | Almost ready, except for failing tests on Python 3. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
36017422 | MDExOlB1bGxSZXF1ZXN0MTczMTI5MDQ= | 166 | Revert using __slots__ for Mapping subclasses in xray.utils | shoyer 1217238 | closed | 0 | 0.2 650893 | 1 | 2014-06-18T19:08:47Z | 2014-06-18T19:24:50Z | 2014-06-18T19:12:52Z | MEMBER | 0 | pydata/xarray/pulls/166 | This recently added some complexity for a very nominal speed benefit. And it appears that it breaks joblib serialization, somehow (even though pickle works). So for now, revert it -- and consider filing a joblib bug if we can narrow it down. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
35762823 | MDExOlB1bGxSZXF1ZXN0MTcxNTgzOTg= | 163 | BUG: fix encoding issues (array indexing now resets encoding) | shoyer 1217238 | closed | 0 | 0.2 650893 | 4 | 2014-06-16T01:29:22Z | 2014-06-17T07:28:45Z | 2014-06-16T04:52:43Z | MEMBER | 0 | pydata/xarray/pulls/163 | Fixes #156, #157 To elaborate on the changes:
1. When an array is indexed, its encoding will be reset. This takes care of the invalid chunksize issue. More generally, this seems like the right choice because it's not clear that the right encoding will be the same after slicing an array, anyways.
2. If an array has |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
35263258 | MDExOlB1bGxSZXF1ZXN0MTY4NzMwNTA= | 149 | Data array constructor | shoyer 1217238 | closed | 0 | 0.2 650893 | 1 | 2014-06-09T06:29:49Z | 2014-06-12T20:38:27Z | 2014-06-11T16:53:58Z | MEMBER | 0 | pydata/xarray/pulls/149 | Fixes #115. Related: #116, #117. Note: a remaining major task will be to rewrite/reorganize the docs to introduce |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
32928159 | MDU6SXNzdWUzMjkyODE1OQ== | 115 | Direct constructor for DataArray objects | shoyer 1217238 | closed | 0 | 0.2 650893 | 1 | 2014-05-06T20:10:19Z | 2014-06-11T16:53:58Z | 2014-06-11T16:53:58Z | MEMBER | It shouldn't be necessary to put arrays in a Dataset to make a DataArray. See also: https://github.com/xray/xray/issues/85#issuecomment-38875079 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
35304758 | MDExOlB1bGxSZXF1ZXN0MTY4OTY2MjM= | 150 | Fix DecodedCFDatetimeArray was being incorrectly indexed. | akleeman 514053 | closed | 0 | 0.2 650893 | 0 | 2014-06-09T17:25:05Z | 2014-06-09T17:43:50Z | 2014-06-09T17:43:50Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/150 | This was causing an error in the following situation:
Thanks @shoyer for the fix. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
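As a usage sketch (not part of the exported data; the table, column, and index names come from the schema above), the same milestone can be summarized per user and row type:

```sql
-- Count closed issues and pull requests per user in the 0.2 milestone.
-- The equality filter on milestone can use idx_issues_milestone.
select user, type, count(*) as n
from issues
where milestone = 650893
  and state = 'closed'
group by user, type
order by n desc;
```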