issues
3 rows where user = 6405510 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
169405530 | MDU6SXNzdWUxNjk0MDU1MzA= | 943 | InvalidIndexError on some reindexing operations since 0.8 | richardotis 6405510 | closed | 0 | | | 1 | 2016-08-04T15:39:17Z | 2016-08-05T19:43:46Z | 2016-08-05T19:43:46Z | NONE | | | | | completed | xarray 13221727 | issue
88868867 | MDU6SXNzdWU4ODg2ODg2Nw== | 435 | Working with labeled N-dimensional data with combinatoric independent variables | richardotis 6405510 | closed | 0 | | | 4 | 2015-06-16T23:49:42Z | 2016-05-17T22:48:44Z | 2016-05-17T22:48:44Z | NONE | | | | | completed | xarray 13221727 | issue
129525746 | MDU6SXNzdWUxMjk1MjU3NDY= | 732 | 0.7 missing Python 3.3 conda package | richardotis 6405510 | closed | 0 | | | 10 | 2016-01-28T17:54:02Z | 2016-01-28T21:20:45Z | 2016-01-28T20:12:43Z | NONE | | | | | completed | xarray 13221727 | issue
Issue 943 body:

Sometimes I want to reindex a dimension along which I've concatenated several Datasets. Index labels will often repeat until I've performed this operation. This has worked without problems up to xarray 0.7.2, but in 0.8 I now receive this error:

``` python
import xarray
import numpy as np
ds = xarray.Dataset({'data': (['dim0', 'dim1'], np.empty((5,10)))},
                    coords={'dim0': [0, 1, 2, 0, 1], 'dim1': list(range(10))})
ds['dim0'] = list(range(5))
```

```
InvalidIndexError                         Traceback (most recent call last)
<ipython-input-5-50e97aa6cb4e> in <module>()
      4 ds = xarray.Dataset({'data': (['dim0', 'dim1'], np.empty((5,10)))},
      5                     coords={'dim0': [0, 1, 2, 0, 1], 'dim1': list(range(10))})
----> 6 ds['dim0'] = list(range(5))

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/dataset.py in __setitem__(self, key, value)
    536             raise NotImplementedError('cannot yet use a dictionary as a key '
    537                                       'to set Dataset values')
--> 538         self.update({key: value})
    539
    540     def __delitem__(self, key):

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/dataset.py in update(self, other, inplace)
   1434             dataset.
   1435         """
-> 1436         variables, coord_names, dims = dataset_update_method(self, other)
   1437
   1438         return self._replace_vars_and_dims(variables, coord_names, dims,

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/merge.py in dataset_update_method(dataset, other)
    490     priority_arg = 1
    491     indexes = dataset.indexes
--> 492     return merge_core(objs, priority_arg=priority_arg, indexes=indexes)

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
    371
    372     coerced = coerce_pandas_values(objs)
--> 373     aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
    374     expanded = expand_variable_dicts(aligned)
    375

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/alignment.py in deep_align(list_of_variable_maps, join, copy, indexes)
    146             out.append(variables)
    147
--> 148     aligned = partial_align(*targets, join=join, copy=copy, indexes=indexes)
    149
    150     for key, aligned_obj in zip(keys, aligned):

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/alignment.py in partial_align(*objects, **kwargs)
    109         valid_indexers = dict((k, v) for k, v in joined_indexes.items()
    110                               if k in obj.dims)
--> 111         result.append(obj.reindex(copy=copy, **valid_indexers))
    112     return tuple(result)
    113

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/dataset.py in reindex(self, indexers, method, tolerance, copy, **kw_indexers)
   1216
   1217         variables = alignment.reindex_variables(
-> 1218             self.variables, self.indexes, indexers, method, tolerance, copy=copy)
   1219         return self._replace_vars_and_dims(variables)
   1220

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/xarray/core/alignment.py in reindex_variables(variables, indexes, indexers, method, tolerance, copy)
    218             target = utils.safe_cast_to_index(indexers[name])
    219             indexer = index.get_indexer(target, method=method,
--> 220                                         **get_indexer_kwargs)
    221
    222             to_shape[name] = len(target)

/home/rotis/anaconda/envs/calphadpy3/lib/python3.5/site-packages/pandas/indexes/base.py in get_indexer(self, target, method, limit, tolerance)
   2011
   2012         if not self.is_unique:
-> 2013             raise InvalidIndexError('Reindexing only valid with uniquely'
   2014                                     ' valued Index objects')
   2015

InvalidIndexError: Reindexing only valid with uniquely valued Index objects
```

Issue 943 reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
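Not part of the original report: a minimal sketch of one way to sidestep this error, assuming a recent xarray that provides Dataset.drop_vars. Dropping the non-unique 'dim0' coordinate before assigning the new labels means no reindexing against the duplicated index is attempted.

``` python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"data": (["dim0", "dim1"], np.empty((5, 10)))},
    coords={"dim0": [0, 1, 2, 0, 1], "dim1": list(range(10))},
)

# Assumption: recent xarray API. Remove the duplicated 'dim0' coordinate
# first; the dimension itself is kept, only its (non-unique) index goes away.
ds = ds.drop_vars("dim0")

# Assigning the new, unique labels no longer requires aligning against a
# non-unique index, so no InvalidIndexError is raised.
ds["dim0"] = list(range(5))
```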
Issue 435 body:

Thanks for developing this exciting project. I'm a computational materials scientist trying to understand if xray is the right tool for my task. For https://github.com/richardotis/pycalphad/issues/15 I'm developing an interface for a thermodynamic computation in N dimensions. You can see a simple example of what I'm doing in this notebook: http://nbviewer.ipython.org/github/richardotis/pycalphad/blob/fitting/research/BroshPressureTest.ipynb

In this case I'm performing a computation for all combinations of 50 temperatures and 50 pressures, for a total of 50 x 50 = 2500 distinct systems. You can see in the linked notebook that I'm using a DataFrame to store an intermediate result of the computation. This will continue to work as I add more dimensions, but the drawbacks are that performance degrades rapidly and that this flattened representation of the data very quickly blows through memory; the simple example I linked already has 10 million rows in the DataFrame, with many repeats of the independent variables (temperature, pressure).

What I'm thinking is to construct a DataArray or Dataset which, for each set of independent variables specified in the conditions, stores the chemical potentials and stable phases, their compositions, and the fractions present in the system (essentially the result of lower_convex_hull). Would this be a good candidate for xray? It needs to be easy to retrieve results for specific conditions or sets of conditions, and ideally I could use the same object for intermediate steps in the computation, e.g., I could iteratively update the chemical potentials and stable phases associated with each set of conditions in a DataArray or Dataset, and eventually return that object to the user/plotting routine/whatever.

Issue 435 reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
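Not from the original thread: a minimal sketch of the gridded layout the question describes, with placeholder names (temperature, pressure, chemical_potential) standing in for the real pycalphad quantities. Each result sits at a (T, P) grid point rather than in a flattened DataFrame row, so specific conditions can be selected or updated directly.

``` python
import numpy as np
import xarray as xr

# Hypothetical grid of conditions: 50 temperatures x 50 pressures.
temperature = np.linspace(300.0, 2000.0, 50)
pressure = np.logspace(5, 9, 50)

# Placeholder result array; in practice this would come from the
# thermodynamic calculation (e.g. the output of lower_convex_hull).
chem_pot = np.zeros((temperature.size, pressure.size))

ds = xr.Dataset(
    {"chemical_potential": (("T", "P"), chem_pot)},
    coords={"T": temperature, "P": pressure},
)

# Retrieve results for specific conditions without scanning a long table.
point = ds.sel(T=300.0, P=1e5, method="nearest")

# Iteratively update the value for one set of conditions in place.
ds["chemical_potential"].loc[{"T": temperature[0], "P": pressure[0]}] = -1.23
```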
Issue 732 body:

```
$ conda info xarray
Fetching package metadata: ....

xarray 0.7.0 py35_0
file name   : xarray-0.7.0-py35_0.tar.bz2
name        : xarray
version     : 0.7.0
build number: 0
build string: py35_0
channel     : defaults
size        : 317 KB
date        : 2016-01-26
license     : Apache
md5         : b7d2c2e88f370bc701c44307772615ce
installed environments:
    /home/rotis/anaconda/envs/calphadpy3
dependencies:
    numpy
    pandas >=0.15
    python 3.5*
    setuptools

xarray 0.7.0 py34_0
file name   : xarray-0.7.0-py34_0.tar.bz2
name        : xarray
version     : 0.7.0
build number: 0
build string: py34_0
channel     : defaults
size        : 319 KB
date        : 2016-01-26
license     : Apache
md5         : 784da1f14fd7b9b7c09b7229558abcd4
installed environments:
dependencies:
    numpy
    pandas >=0.15
    python 3.4*
    setuptools

xarray 0.7.0 py27_0
file name   : xarray-0.7.0-py27_0.tar.bz2
name        : xarray
version     : 0.7.0
build number: 0
build string: py27_0
channel     : defaults
size        : 308 KB
date        : 2016-01-26
license     : Apache
md5         : bd9f94bcc78b18aac44be0340cc30913
installed environments:
dependencies:
    numpy
    pandas >=0.15
    python 2.7*
    setuptools
```

And for the legacy xray package

```
$ conda info xray==0.7
Fetching package metadata: ....

xray 0.7.0 py35_0
file name   : xray-0.7.0-py35_0.tar.bz2
name        : xray
version     : 0.7.0
build number: 0
build string: py35_0
channel     : defaults
size        : 318 KB
date        : 2016-01-26
license     : Apache
md5         : 1aedbfacd688558f4f003b38498264c3
installed environments:
dependencies:
    pandas
    python 3.5*
    setuptools

xray 0.7.0 py34_0
file name   : xray-0.7.0-py34_0.tar.bz2
name        : xray
version     : 0.7.0
build number: 0
build string: py34_0
channel     : defaults
size        : 319 KB
date        : 2016-01-26
license     : Apache
md5         : b3e4e00c5b19fa92c97781444b298cf2
installed environments:
dependencies:
    pandas
    python 3.4*
    setuptools

xray 0.7.0 py27_0
file name   : xray-0.7.0-py27_0.tar.bz2
name        : xray
version     : 0.7.0
build number: 0
build string: py27_0
channel     : defaults
size        : 308 KB
date        : 2016-01-26
license     : Apache
md5         : 0af8b0e23e579d79363193d52827cccd
installed environments:
dependencies:
    pandas
    python 2.7*
    setuptools
```

Issue 732 reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
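A minimal sketch, not part of the exported page, of running the same query as above (issues where user = 6405510, newest update first) against a local copy of this database using Python's built-in sqlite3 module; the file name github.db is an assumption.

``` python
import sqlite3

# Assumption: a local copy of the issues database named "github.db".
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Same filter and ordering as the page: issues opened by user 6405510,
# sorted by updated_at descending.
rows = conn.execute(
    "SELECT number, title, state, updated_at FROM issues "
    "WHERE user = ? ORDER BY updated_at DESC",
    (6405510,),
).fetchall()

for row in rows:
    print(row["number"], row["state"], row["title"], row["updated_at"])

conn.close()
```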