issues
21 rows where user = 514053 sorted by updated_at descending
All 21 rows share the following values: user = akleeman (514053), state = closed, locked = 0, author_association = CONTRIBUTOR, repo = xarray (13221727). The active_lock_reason and performed_via_github_app columns are empty throughout, and every reactions value is a JSON object with all counts at zero (url https://api.github.com/repos/pydata/xarray/issues/{number}/reactions). Rows of type pull additionally have draft = 0 and pull_request = pydata/xarray/pulls/{number}; rows of type issue have state_reason = completed.

id | node_id | number | title | assignee | milestone | comments | created_at | updated_at (desc) | closed_at | type
---|---|---|---|---|---|---|---|---|---|---
23272260 | MDExOlB1bGxSZXF1ZXN0MTAyNzUzMTg= | 2 | Data objects now have a swappable backend store. | | | 3 | 2013-11-25T20:48:40Z | 2016-01-04T23:11:54Z | 2014-01-29T19:20:58Z | pull
59730888 | MDExOlB1bGxSZXF1ZXN0MzA0MjcxMjU= | 359 | Raise informative exception when _FillValue and missing_value disagree | | 0.4.1 (1004936) | 2 | 2015-03-04T00:22:41Z | 2015-03-12T16:33:47Z | 2015-03-12T16:32:07Z | pull
58682523 | MDExOlB1bGxSZXF1ZXN0Mjk4NjQ5NzA= | 334 | Fix bug associated with reading / writing of mixed endian data. | | 0.4 (799013) | 1 | 2015-02-24T01:57:43Z | 2015-02-26T04:45:18Z | 2015-02-26T04:45:18Z | pull
54391570 | MDExOlB1bGxSZXF1ZXN0MjczOTI5OTU= | 310 | More robust CF datetime unit parsing | shoyer (1217238) | 0.4 (799013) | 1 | 2015-01-14T23:19:07Z | 2015-01-14T23:36:34Z | 2015-01-14T23:35:27Z | pull
29008494 | MDU6SXNzdWUyOTAwODQ5NA== | 55 | Allow datetime.timedelta coordinates. | | | 4 | 2014-03-07T23:56:39Z | 2014-12-12T09:41:01Z | 2014-12-12T09:41:01Z | issue
36467304 | MDExOlB1bGxSZXF1ZXN0MTc1ODI2ODQ= | 175 | Modular encoding | | | 9 | 2014-06-25T10:37:41Z | 2014-10-08T20:44:15Z | 2014-10-08T20:44:15Z | pull
45289935 | MDExOlB1bGxSZXF1ZXN0MjI0NTA0NzA= | 248 | Removed the object oriented encoding/decoding scheme | | | 0 | 2014-10-08T20:06:33Z | 2014-10-08T20:12:56Z | 2014-10-08T20:12:56Z | pull
26545877 | MDExOlB1bGxSZXF1ZXN0MTIwMDU3ODk= | 8 | Datasets now use data stores to allow swap-able backends | | | 0 | 2014-01-29T19:25:42Z | 2014-06-17T00:35:01Z | 2014-01-29T19:30:09Z | pull
35564268 | MDExOlB1bGxSZXF1ZXN0MTcwNDU3Mjk= | 153 | Fix decode_cf_variable. | | | 5 | 2014-06-12T09:42:47Z | 2014-06-12T23:33:46Z | 2014-06-12T23:33:46Z | pull
35627287 | MDExOlB1bGxSZXF1ZXN0MTcwODIzNjc= | 154 | Fix decode_cf_variable, without tests | | | 0 | 2014-06-12T21:56:10Z | 2014-06-12T23:30:15Z | 2014-06-12T23:30:15Z | pull
31510183 | MDExOlB1bGxSZXF1ZXN0MTQ3NDQzOTI= | 102 | Dataset.concat() can now automatically concat over non-equal variables. | | | 3 | 2014-04-14T22:19:02Z | 2014-06-12T17:33:49Z | 2014-04-23T03:24:45Z | pull
28315331 | MDExOlB1bGxSZXF1ZXN0MTI5NDE2MDI= | 21 | Cf time units persist | | | 4 | 2014-02-26T08:05:41Z | 2014-06-12T17:29:24Z | 2014-02-28T01:45:21Z | pull
30340123 | MDExOlB1bGxSZXF1ZXN0MTQwODExMjk= | 86 | BUG: Zero dimensional variables couldn't be written to file or serialized. | | | 0 | 2014-03-27T20:42:06Z | 2014-06-12T17:29:11Z | 2014-03-28T03:58:43Z | pull
34649908 | MDExOlB1bGxSZXF1ZXN0MTY1MzU0ODE= | 143 | Fix decoded_cf_variable was not working. | | | 0 | 2014-05-30T14:27:13Z | 2014-06-12T09:39:20Z | 2014-06-12T09:39:20Z | pull
35304758 | MDExOlB1bGxSZXF1ZXN0MTY4OTY2MjM= | 150 | Fix DecodedCFDatetimeArray was being incorrectly indexed. | | 0.2 (650893) | 0 | 2014-06-09T17:25:05Z | 2014-06-09T17:43:50Z | 2014-06-09T17:43:50Z | pull
33307883 | MDExOlB1bGxSZXF1ZXN0MTU3NjcwMTU= | 125 | Only copy datetime64 data if it is using non-nanosecond precision. | | | 7 | 2014-05-12T13:36:22Z | 2014-05-20T19:09:40Z | 2014-05-20T19:09:40Z | pull
28603092 | MDExOlB1bGxSZXF1ZXN0MTMxMDMwODQ= | 40 | Encodings for object data types are not saved. | | | 0 | 2014-03-03T07:22:37Z | 2014-04-09T04:10:56Z | 2014-03-07T02:21:16Z | pull
30328907 | MDExOlB1bGxSZXF1ZXN0MTQwNzQzOTg= | 84 | Fix: dataset_repr was failing on empty datasets. | | | 1 | 2014-03-27T18:29:18Z | 2014-03-27T20:09:45Z | 2014-03-27T20:05:49Z | pull
28600785 | MDU6SXNzdWUyODYwMDc4NQ== | 39 | OpenDAP loaded Dataset has lon/lats with type 'object'. | | | 4 | 2014-03-03T06:07:17Z | 2014-03-24T07:21:02Z | 2014-03-24T07:21:02Z | issue
28730473 | MDExOlB1bGxSZXF1ZXN0MTMxNzU2NzY= | 46 | Test lazy loading from stores using mock XArray classes. | | | 1 | 2014-03-04T18:50:40Z | 2014-03-04T23:24:52Z | 2014-03-04T23:10:28Z | pull
28445412 | MDU6SXNzdWUyODQ0NTQxMg== | 26 | Allow the ability to add/persist details of how a dataset is stored. | | | 4 | 2014-02-27T19:10:38Z | 2014-03-03T02:54:16Z | 2014-03-03T02:54:16Z | issue

Bodies (in table order; #2 and #46 have empty bodies):

#359: Previously, conflicting _FillValue and missing_value attributes only raised an AssertionError; the exception is now more informative.

#334: The right solution is to figure out how to successfully round-trip endianness, but that seems to be a deeper issue inside netCDF4 (https://github.com/Unidata/netcdf4-python/issues/346). Instead, we force all data to little endian before the netCDF4 write.

#310: This makes it possible to read datasets that don't follow CF datetime conventions perfectly, such as the following example, which (surprisingly) comes from NCEP/NCAR (you'd think they would follow CF!):

```
ds = xray.open_dataset('http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GEFS/Global_1p0deg_Ensemble/members/GEFS_Global_1p0deg_Ensemble_20150114_1200.grib2/GC')
print ds['time'].encoding['units']
u'Hour since 2015-01-14T12:00:00Z'
```

#55: This would allow coordinates that are offsets from a time coordinate, which comes in handy when dealing with forecast data, where the 'time' coordinate might be the forecast run time and you then want a 'lead' coordinate that is an offset from the run time.

#175: Restructured backends to make CF convention handling consistent. Among other things this includes:
- EncodedDataStores, which can wrap other stores and allow for modular encoding/decoding.
- Trivial indices (ds['x'] = ('x', np.arange(10))) are no longer stored on disk and are only created when accessed.
- An AbstractDataStore API change; it shouldn't affect external users.
- missing_value attributes now function like _FillValue.

All current tests are passing (though it could use more new ones).

#248: Removed the object-oriented encoding/decoding scheme in favor of a model where encoding/decoding happens when a dataset is stored to / loaded from a DataStore. Conventions can now be enforced at the DataStore level by overwriting the Datastore.store() and Datastore.load() methods, or via an optional arg to Dataset.load_store / Dataset.dump_to_store. Includes miscellaneous cleanup.

#8:
```
Data objects now have a swap-able backend store.
```

#153: decode_cf_variable was still using da.data instead of da.values. It now also works with a DataArray as input.

#154: Same as #153, but without tests.

#102: concat_over=True indicates that concat should concatenate over all variables that are not the same across the set of datasets to be concatenated.

#21: Internally, Datasets convert time coordinates to a pandas.DatetimeIndex. The backend function convert_to_cf_variable converts these datetimes back to CF-style times, but the original units were not being preserved.

#86: Fixed a bug in which writes would fail if a Dataset contained 0d variables. Also added the ability to open Datasets directly from NetCDF3 bytestrings.

#143: Small bug fix, and a test.

#150: This was causing an error in the following situation:
Thanks @shoyer for the fix.

#125: In an attempt to coerce all datetime arrays to nanosecond resolution, utils.as_safe_array() was creating copies of any datetime64 array (via the astype method). This was causing unexpected behavior (bugs) for things such as concatenation over times (see below):

```
import xray
import pandas as pd

ds = xray.Dataset()
ds['time'] = ('time', pd.date_range('2011-09-01', '2011-09-11'))
times = [ds.indexed(time=[i]) for i in range(10)]
ret = xray.Dataset.concat(times, 'time')
print ret['time']

<xray.DataArray 'time' (time: 10)>
array(['1970-01-02T07:04:40.718526408-0800', '1969-12-31T16:00:00.099966608-0800',
       '1969-12-31T16:00:00.041748384-0800', '1969-12-31T16:00:00.041748360-0800',
       '1969-12-31T16:00:00.041748336-0800', '1969-12-31T16:00:00.041748312-0800',
       '1969-12-31T16:00:00.041748288-0800', '1969-12-31T16:00:00.041748264-0800',
       '1969-12-31T16:00:00.041748240-0800', '1969-12-31T16:00:00.041748216-0800'],
      dtype='datetime64[ns]')
Attributes:
    Empty
```

#40: decode_cf_variable will not save encoding for any 'object' dtypes. When encoding CF variables, check whether the dtype is np.datetime64 as well as DatetimeIndex. Fixes akleeman/xray/issues/39.

#84: BUG: dataset_repr was failing on empty datasets.

#39: This makes serialization fail.

#26: Both https://github.com/akleeman/xray/pull/20 and https://github.com/akleeman/xray/pull/21 deal with similar conceptual issues: namely, sometimes the user may want fine control over how a dataset is stored (integer packing, time units and calendars, ...). Taking time as an example, the current model interprets the units and calendar in order to create a DatetimeIndex, but then throws out those attributes, so if the dataset were re-serialized the units may not be preserved. One proposed solution is to include a distinct set of encoding attributes that would hold things like 'scale_factor' and 'add_offset', allowing something like this:

```
ds['time'] = ('time', pd.date_range('1999-01-05', periods=10))
ds['time'].encoding['units'] = 'days since 1989-08-19'
ds.dump('netcdf.nc')
```

The encoding attributes could also handle masking, scaling, compression, etc.
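Because the reactions column stores each row's GitHub reactions object as raw JSON text, it can be unpacked with SQLite's built-in JSON1 functions. A minimal sketch (every count is zero in this export, so the ordering here is purely illustrative):

```sql
-- Pull individual reaction counts out of the JSON stored in [reactions].
-- Keys with special characters such as "+1" must be quoted inside the path.
select
    number,
    title,
    json_extract(reactions, '$.total_count') as total_reactions,
    json_extract(reactions, '$."+1"')        as plus_one
from issues
where [user] = 514053
order by total_reactions desc;
```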
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
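Given that schema, the listing above ("21 rows where user = 514053 sorted by updated_at descending") corresponds to a straightforward query; a minimal sketch:

```sql
-- Reproduce this page: everything filed by user 514053 (akleeman),
-- most recently updated first.
select *
from issues
where [user] = 514053
order by updated_at desc;
```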