issues
2 rows where repo = 13221727 and user = 2539336 sorted by updated_at descending
Issue 2847: Cannot store data after group_by

id | node_id | user | state | locked | comments | created_at | updated_at | closed_at | author_association | repo | type
---|---|---|---|---|---|---|---|---|---|---|---
424538928 | MDU6SXNzdWU0MjQ1Mzg5Mjg= | volkerjaenisch 2539336 | open | 0 | 6 | 2019-03-23T19:59:30Z | 2022-06-26T15:06:03Z | | NONE | xarray 13221727 | issue

Body:

Hi Xarray! I really like your library, but now I am stuck completely.

Code sample, a copy-pastable example:

```python
import numpy as np
import xarray as xr

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
bins = np.array(range(5)) * 2
xr_data = xr.Dataset({'data': data})
out = xr_data.groupby_bins('data', bins).mean()
out.to_netcdf('/tmp/test')
```

Problem description: the script fails with the following error:

```
Traceback (most recent call last):
  File "/home/volker/workspace/pycharm-community-2018.1.2/helpers/pydev/pydevd.py", line 1664, in <module>
    main()
  File "/home/volker/workspace/pycharm-community-2018.1.2/helpers/pydev/pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/volker/workspace/pycharm-community-2018.1.2/helpers/pydev/pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/volker/workspace/pycharm-community-2018.1.2/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/volker/workspace/eprofile_wind/eprofile/src/eprofile/sandbox/test_xarray.py", line 12, in <module>
    out.to_netcdf('/tmp/test')
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/core/dataset.py", line 1232, in to_netcdf
    compute=compute)
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/backends/api.py", line 747, in to_netcdf
    unlimited_dims=unlimited_dims)
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/backends/api.py", line 790, in dump_to_store
    unlimited_dims=unlimited_dims)
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/backends/common.py", line 261, in store
    variables, attributes = self.encode(variables, attributes)
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/backends/common.py", line 347, in encode
    variables, attributes = cf_encoder(variables, attributes)
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/conventions.py", line 605, in cf_encoder
    for k, v in iteritems(variables))
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/conventions.py", line 605, in <genexpr>
    for k, v in iteritems(variables))
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/conventions.py", line 241, in encode_cf_variable
    var = ensure_dtype_not_object(var, name=name)
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/conventions.py", line 201, in ensure_dtype_not_object
    data = _copy_with_dtype(data, dtype=_infer_dtype(data, name))
  File "/home/volker/workspace/eprofile_wind-CRxNsezQ/lib/python3.5/site-packages/xarray/conventions.py", line 139, in _infer_dtype
    .format(name))
ValueError: unable to infer dtype on variable 'data_bins'; xarray cannot serialize arbitrary Python objects
```

Expected output: the Dataset should be written to a netCDF file.

Output of

reactions:

{ "url": "https://api.github.com/repos/pydata/xarray/issues/2847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
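The ValueError above arises because `groupby_bins` labels the new `data_bins` coordinate with pandas `Interval` objects (dtype `object`), which the netCDF encoders cannot serialize. A common workaround (a sketch, not taken from the issue thread) is to cast the interval coordinate to strings before writing:

```python
import numpy as np
import xarray as xr

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
bins = np.array(range(5)) * 2
out = xr.Dataset({'data': data}).groupby_bins('data', bins).mean()

# The data_bins coordinate holds pandas.Interval objects (dtype=object),
# which to_netcdf() cannot encode. Casting to str yields a plain string dtype.
out = out.assign_coords(data_bins=out['data_bins'].astype(str))
print(out['data_bins'].values)  # ['(0, 2]' '(2, 4]' '(4, 6]' '(6, 8]']
# out.to_netcdf('/tmp/test.nc') now succeeds, given a netCDF backend.
```

The interval labels survive as human-readable strings; the trade-off is that the bin edges are no longer machine-readable numbers after a round-trip.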
Issue 2870: xr.concat changes dtype

id | node_id | user | state | locked | comments | created_at | updated_at | closed_at | author_association | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---
429835266 | MDU6SXNzdWU0Mjk4MzUyNjY= | volkerjaenisch 2539336 | closed | 0 | 1 | 2019-04-05T16:19:16Z | 2019-05-27T00:20:55Z | 2019-05-27T00:20:55Z | NONE | completed | xarray 13221727 | issue

Body:

Code sample, a copy-pastable example:

```python
```

Problem description: using `xr.concat` to combine two datasets along the time axis, the dtype of the variable `wind_quality_flag` changes from int64 to float. I suppose this behavior has to do with NaN not being available in int64, and with the datasets not completely overlapping in the altitude dimension. How can this conversion be avoided?

Expected output: combined Dataset with the original datatype preserved.

Output of

reactions:

{ "url": "https://api.github.com/repos/pydata/xarray/issues/2870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
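The int64-to-float64 promotion described in issue 2870 is easy to reproduce: when the inputs do not fully overlap along `altitude`, the outer alignment performed by `xr.concat` fills the missing positions with NaN, which forces a float dtype. A minimal sketch (the variable and coordinate names come from the issue; the data values are made up):

```python
import numpy as np
import xarray as xr

# Two datasets whose altitude coordinates only partially overlap.
a = xr.Dataset({'wind_quality_flag': ('altitude', np.array([1, 2], dtype='int64'))},
               coords={'altitude': [0, 1]})
b = xr.Dataset({'wind_quality_flag': ('altitude', np.array([3, 4], dtype='int64'))},
               coords={'altitude': [1, 2]})

# Outer alignment over altitude inserts NaN where a dataset has no value,
# so the concatenated variable is promoted from int64 to float64.
combined = xr.concat([a, b], dim='time')
print(combined['wind_quality_flag'].dtype)  # float64

# One way to avoid the promotion (an assumption, not from the issue thread):
# pass an integer fill_value so no NaN is introduced and the integer dtype
# can survive the alignment.
kept = xr.concat([a, b], dim='time', fill_value=0)
print(kept['wind_quality_flag'].dtype)
```

Alternatively, reindexing both datasets to a common altitude axis with an integer `fill_value` before concatenating achieves the same effect.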
```sql
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [active_lock_reason] TEXT,
  [draft] INTEGER,
  [pull_request] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [state_reason] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
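The schema above can be exercised with Python's built-in sqlite3 module; the query mirrors the filter shown at the top of the page (rows where repo = 13221727 and user = 2539336, sorted by updated_at descending). The inserted rows are abbreviated stand-ins for the two issues listed, using a reduced subset of the columns:

```python
import sqlite3

conn = sqlite3.connect(':memory:')

# A reduced version of the schema above, keeping only the columns needed
# for the page's filter and sort.
conn.executescript("""
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY, [number] INTEGER, [title] TEXT,
  [user] INTEGER, [state] TEXT, [updated_at] TEXT, [repo] INTEGER
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
""")

# Abbreviated stand-ins for the two rows shown above.
rows = [
    (424538928, 2847, 'Cannot store data after group_by', 2539336, 'open',
     '2022-06-26T15:06:03Z', 13221727),
    (429835266, 2870, 'xr.concat changes dtype', 2539336, 'closed',
     '2019-05-27T00:20:55Z', 13221727),
]
conn.executemany('INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)', rows)

# The page's filter: repo = 13221727 and user = 2539336,
# sorted by updated_at descending.
result = conn.execute(
    'SELECT number, title FROM issues '
    'WHERE repo = ? AND [user] = ? ORDER BY updated_at DESC',
    (13221727, 2539336),
).fetchall()
print(result)
# [(2847, 'Cannot store data after group_by'), (2870, 'xr.concat changes dtype')]
```

The ISO 8601 timestamps stored as TEXT sort correctly with a plain `ORDER BY`, which is why issue 2847 (updated in 2022) comes first.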