issues
4 rows where type = "issue" and user = 2002703 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 103380276 | MDU6SXNzdWUxMDMzODAyNzY= | 552 | Dataset.to_dataframe() loses datetime64 timezone localization | IamJeffG 2002703 | closed | 0 | 2 | 2015-08-26T22:34:11Z | 2015-08-29T14:11:46Z | 2015-08-29T14:11:46Z | CONTRIBUTOR | Not sure if feature or bug, but definitely made me look twice.
I'd expected the DataFrame index to maintain its local timezone info:
That said, the UTC timestamps I actually get back refer to the same instants, so arithmetic should still work. I'm opening this issue mostly as a matter for discussion -- feel free to close if you think xray should be pushing users towards UTC. |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue |
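The code block from this report did not survive the export. As a rough illustration only (my own sketch, not the author's snippet, using the modern `xarray` package name instead of `xray`), the behaviour described can be exercised like this:

```
import pandas as pd
import xarray as xr

# A tz-aware DatetimeIndex used as a coordinate.
times = pd.date_range('2015-08-26', periods=3, freq='h', tz='US/Pacific')
ds = xr.Dataset({'value': ('time', [1, 2, 3])}, coords={'time': times})

df = ds.to_dataframe()
# The 2015 report: the round-tripped index came back as UTC/naive datetime64
# rather than keeping the US/Pacific localization. Exact behaviour differs
# by xarray version.
print(df.index.dtype)
```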
| 58288666 | MDU6SXNzdWU1ODI4ODY2Ng== | 326 | DataArray.groupby.apply with a generic ndarray function | IamJeffG 2002703 | closed | 0 | 0.5 987654 | 1 | 2015-02-19T23:37:34Z | 2015-02-20T04:41:08Z | 2015-02-20T04:41:08Z | CONTRIBUTOR | Need to apply a transformation function across one dimension of a DataArray, where that non-xray function speaks in ndarrays. Currently the only ways to do this involve wrapping the function. An example:
```
import numpy as np
import xray
from scipy.ndimage.morphology import binary_opening

da = xray.DataArray(np.random.random_integers(0, 1, (10, 10, 3)),
                    dims=['row', 'col', 'time'])

# I want to apply an operation to the 2D image at each point in time
da.groupby('time').apply(binary_opening)
# AttributeError: 'numpy.ndarray' object has no attribute 'dims'

def wrap_binary_opening(da, **kwargs):
    return xray.DataArray(binary_opening(da.values, **kwargs), da.coords)

da.groupby('time').apply(wrap_binary_opening)
da.groupby('time').apply(wrap_binary_opening, iterations=2)  # func may take custom args
```
My proposed solution is that |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue |
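For reference, a runnable modernization of the wrapper pattern shown in the body above (my own sketch: current `xarray` and `scipy.ndimage` names rather than the 2015-era `xray` imports, and `groupby(...).map` rather than the old `apply`):

```
import numpy as np
import xarray as xr
from scipy.ndimage import binary_opening  # no longer under scipy.ndimage.morphology

da = xr.DataArray(np.random.randint(0, 2, (10, 10, 3)),
                  dims=['row', 'col', 'time'])

def wrap_binary_opening(image, **kwargs):
    # Re-wrap the plain ndarray that binary_opening returns so that
    # groupby(...).map() gets a DataArray back.
    return xr.DataArray(binary_opening(image.values, **kwargs),
                        coords=image.coords, dims=image.dims)

opened = da.groupby('time').map(wrap_binary_opening, iterations=1)
```

In current xarray this use case is also covered directly by `xarray.apply_ufunc` with `input_core_dims`/`output_core_dims` and `vectorize=True`.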
| 51046413 | MDU6SXNzdWU1MTA0NjQxMw== | 284 | to_netcdf ValueError with 0d string variable | IamJeffG 2002703 | closed | 0 | 1 | 2014-12-04T23:59:07Z | 2014-12-05T03:57:16Z | 2014-12-05T03:57:16Z | CONTRIBUTOR |
results in
```
----> 1 xray.Dataset( {'password': ([], 'abcd')} ).to_netcdf('/tmp/bar.nc')

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/core/dataset.pyc in to_netcdf(self, filepath, **kwdargs)
    801         """
    802         with backends.NetCDF4DataStore(filepath, mode='w', **kwdargs) as store:
--> 803             self.dump_to_store(store)
    804
    805     dump = to_netcdf

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/core/dataset.pyc in dump_to_store(self, store, encoder)
    793         if encoder:
    794             variables, attributes = encoder(variables, attributes)
--> 795         store.store(variables, attributes)
    796         store.sync()
    797

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/backends/netCDF4_.pyc in store(self, variables, attributes)
    100         # to write times, for example, would fail.
    101         cf_variables, cf_attrs = cf_encoder(variables, attributes)
--> 102         AbstractWritableDataStore.store(self, cf_variables, cf_attrs)
    103
    104     def open_store_variable(self, var):

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/backends/common.pyc in store(self, variables, attributes)
    153         variables = dict((k, v) for k, v in iteritems(variables)
    154                          if not (k in neccesary_dims and is_trivial_index(v)))
--> 155         self.set_variables(variables)
    156
    157     def set_dimensions(self, dimensions):

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/backends/common.pyc in set_variables(self, variables)
    165     def set_variables(self, variables):
    166         for vn, v in iteritems(variables):
--> 167             self.set_variable(_encode_variable_name(vn), v)
    168             self.set_necessary_dimensions(v)
    169

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/backends/netCDF4_.pyc in set_variable(self, name, variable)
    151         attrs = variable.attrs.copy()
    152         if self.format == 'NETCDF4':
--> 153             variable, datatype = _nc4_values_and_dtype(variable)
    154         else:
    155             variable = encode_nc3_variable(variable)

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/backends/netCDF4_.pyc in _nc4_values_and_dtype(var)
     47     # use character arrays instead of unicode, because unicode suppot in
     48     # netCDF4 is still rather buggy
---> 49     data, dims = maybe_convert_to_char_array(var.values, var.dims)
     50     var = Variable(dims, data, var.attrs, var.encoding)
     51     dtype = var.dtype

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/backends/netcdf3.pyc in maybe_convert_to_char_array(data, dims)
     55 def maybe_convert_to_char_array(data, dims):
     56     if data.dtype.kind == 'S' and data.dtype.itemsize > 1:
---> 57         data = conventions.string_to_char(data)
     58         dims = dims + ('string%s' % data.shape[-1],)
     59     return data, dims

/export/data/envs/popcorn/lib/python2.7/site-packages/xray/conventions.pyc in string_to_char(arr)
    344     if kind not in ['U', 'S']:
    345         raise ValueError('argument must be a string')
--> 346     return arr.view(kind + '1').reshape(*[arr.shape + (-1,)])
    347
    348

ValueError: new type not compatible with array.
```
Note this does not fail when the 0d value is a number. This succeeds:
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue |
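The input snippets for this issue were lost in the export, but the failing call is visible in the traceback. A sketch of my reading of the two cases, using the modern `xarray` name; the numeric variable name, value, and output paths are placeholders, not the author's lost code:

```
import xarray as xr

# Reconstructed failing call (taken from frame 1 of the traceback above);
# in 2014-era xray this raised "ValueError: new type not compatible with array".
xr.Dataset({'password': ([], 'abcd')}).to_netcdf('/tmp/bar.nc')

# Per the report, a 0d *numeric* variable wrote fine (placeholder example):
xr.Dataset({'answer': ([], 1234)}).to_netcdf('/tmp/baz.nc')
```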
| 40536963 | MDU6SXNzdWU0MDUzNjk2Mw== | 217 | Strings are truncated when concatenating Datasets. | IamJeffG 2002703 | closed | 0 | 0.3 740776 | 0 | 2014-08-18T21:58:36Z | 2014-08-21T05:17:28Z | 2014-08-21T05:17:28Z | CONTRIBUTOR | When concatenating Datasets, a variable's string length is limited to the length in the first of the Datasets being concatenated.
I think this is the offending line: https://github.com/xray/xray/blob/master/xray/core/variable.py#L623
May want to use |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue |
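The example block for this issue was also lost in the export. A minimal sketch of the reported behaviour with my own placeholder variable names (note that current xarray versions promote the string dtype rather than truncating):

```
import xarray as xr

a = xr.Dataset({'s': ('x', ['ab'])})       # fixed-width string dtype '<U2'
b = xr.Dataset({'s': ('x', ['abcdef'])})   # fixed-width string dtype '<U6'

combined = xr.concat([a, b], dim='x')
# The 2014 report: the result took the first Dataset's string dtype, so the
# longer value was truncated to 'ab'.
print(combined['s'].values)
```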
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
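For reference, the filtered view at the top of this page ("type = "issue" and user = 2002703 sorted by updated_at descending") corresponds to a query along the following lines; the `github.db` filename is a placeholder for whatever SQLite file backs this Datasette instance:

```
import sqlite3

conn = sqlite3.connect("github.db")  # placeholder path to the exported database
rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at, closed_at
    FROM issues
    WHERE type = 'issue' AND user = 2002703
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```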