issues
2 rows where milestone = 1143506 and type = "issue", sorted by updated_at descending
Issue 416: Automatically decode netCDF data to native endianness
id: 83700033 · node_id: MDU6SXNzdWU4MzcwMDAzMw== · user: shoyer 1217238
state: closed (state_reason: completed) · locked: 0 · comments: 1 · author_association: MEMBER
milestone: 0.5.1 1143506 · repo: xarray 13221727 · type: issue
created_at: 2015-06-01T21:23:52Z · updated_at: 2015-06-10T16:01:00Z · closed_at: 2015-06-06T03:51:13Z

Unfortunately, netCDF3 is big endian, but most modern CPUs are little endian. Cython requires that data match native endianness in order to perform operations. This means that users can get strange errors when performing aggregations with bottleneck or after converting an xray dataset to pandas. It would be nice to handle this automatically as part of the "decoding" process. I don't think there are any particular advantages to preserving non-native endianness (except, I suppose, for serialization back to another netCDF3 file). My understanding is that most calculations require native endianness, anyways. CC @bareid

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
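The decoding step this issue asks for can be sketched with NumPy alone: cast a big-endian array (as read from a netCDF3 file) to its native-order equivalent dtype. This is a minimal illustration of the idea, not xarray's actual decode path; the variable names are ours.

```python
import numpy as np

# netCDF3 stores values big-endian ('>f8'); most modern CPUs are little-endian.
big = np.array([1.5, 2.5, 3.5], dtype='>f8')

# Cast to the native-byte-order equivalent dtype. astype() swaps the bytes
# when needed and is a plain copy when the data is already native, so the
# values are preserved either way.
native = big.astype(big.dtype.newbyteorder('='))
```

After this cast, `native.dtype.byteorder` is `'='` (native), so Cython-backed code such as bottleneck can operate on the array directly.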
Issue 425: xray.concat fails in an edge case involving identical coordinate variables
id: 85662203 · node_id: MDU6SXNzdWU4NTY2MjIwMw== · user: shoyer 1217238
state: closed (state_reason: completed) · locked: 0 · comments: 0 · author_association: MEMBER
milestone: 0.5.1 1143506 · repo: xarray 13221727 · type: issue
created_at: 2015-06-06T00:22:45Z · updated_at: 2015-06-07T06:03:16Z · closed_at: 2015-06-07T06:03:16Z

```
ValueError                                Traceback (most recent call last)
<ipython-input-235-69cea5440248> in <module>()
      1 ds1 = xray.Dataset({'foo': 1.5}, {'x': 0, 'y': 1})
      2 ds2 = xray.Dataset({'foo': 2.5}, {'x': 1, 'y': 1})
----> 3 xray.concat([ds1, ds2], 'y')

/Users/shoyer/dev/xray/xray/core/alignment.pyc in concat(objs, dim, indexers, mode, concat_over, compat)
    276         raise ValueError('must supply at least one object to concatenate')
    277     cls = type(first_obj)
--> 278     return cls._concat(objs, dim, indexers, mode, concat_over, compat)
    279
    280

/Users/shoyer/dev/xray/xray/core/dataset.pyc in _concat(cls, datasets, dim, indexers, mode, concat_over, compat)
   1732         for k in concat_over:
   1733             vars = ensure_common_dims([ds._variables[k] for ds in datasets])
-> 1734             concatenated[k] = Variable.concat(vars, dim, indexers)
   1735
   1736         concatenated._coord_names.update(datasets[0].coords)

/Users/shoyer/dev/xray/xray/core/dataset.pyc in __setitem__(self, key, value)
    637             raise NotImplementedError('cannot yet use a dictionary as a key '
    638                                       'to set Dataset values')
--> 639         self.update({key: value})
    640
    641     def __delitem__(self, key):

/Users/shoyer/dev/xray/xray/core/dataset.pyc in update(self, other, inplace)
   1224         """
   1225         return self.merge(
-> 1226             other, inplace=inplace, overwrite_vars=list(other), join='left')
   1227
   1228     def merge(self, other, inplace=False, overwrite_vars=set(),

/Users/shoyer/dev/xray/xray/core/dataset.pyc in merge(self, other, inplace, overwrite_vars, compat, join)
   1291             raise ValueError('cannot merge: the following variables are '
   1292                             'coordinates on one dataset but not the other: %s'
-> 1293                             % list(ambiguous_coords))
   1294
   1295         obj = self if inplace else self.copy()

ValueError: cannot merge: the following variables are coordinates on one dataset but not the other: ['y']
```

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
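The query that produced this page ("milestone = 1143506 and type = 'issue' sorted by updated_at descending") can be reproduced against this schema with the standard-library sqlite3 module. The schema below is an abbreviated subset of the full CREATE TABLE above, and the inserted values are taken from the two issue rows in this table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Abbreviated subset of the [issues] schema above, just enough for the query.
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        milestone INTEGER, updated_at TEXT, type TEXT
    )
""")

# The two rows shown on this page (milestone 0.5.1 = 1143506).
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (83700033, 416, "Automatically decode netCDF data to native endianness",
         1143506, "2015-06-10T16:01:00Z", "issue"),
        (85662203, 425, "xray.concat fails in an edge case involving "
         "identical coordinate variables",
         1143506, "2015-06-07T06:03:16Z", "issue"),
    ],
)

# ISO 8601 timestamps sort correctly as plain text, so ORDER BY works here.
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE milestone = ? AND type = 'issue' "
    "ORDER BY updated_at DESC",
    (1143506,),
).fetchall()
```

This returns issue 416 first, then 425, matching the updated_at-descending order of the table above.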