issues
2 rows where repo = 13221727, state = "open" and user = 1634164 sorted by updated_at descending
id: 606846911
node_id: MDExOlB1bGxSZXF1ZXN0NDA4OTY0MTM3
number: 4007
title: Allow DataArray.to_series() without invoking sparse.COO.todense()
user: khaeru (1634164)
state: open
locked: 0
comments: 1
created_at: 2020-04-25T20:15:16Z
updated_at: 2022-06-09T14:50:17Z
author_association: FIRST_TIME_CONTRIBUTOR
draft: 0
pull_request: pydata/xarray/pulls/4007
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo: xarray (13221727)
type: pull
body:

This adds some code (from iiasa/ixmp#317) that allows DataArray.to_series() to be called without invoking sparse.COO.todense() when that is the backing data type. I'm aware this needs some improvement to meet the standard of the existing codebase, so I hope I could ask for some guidance on how to address the following points (including whom to ask about them):

- [ ] Make the same improvement in {DataArray,Dataset}.to_dataframe().
- [ ] Possibly move the code out of dataarray.py to a more appropriate location (where?).
- [ ] Possibly check for sparse.COO explicitly instead of xarray.core.pycompat.sparse_array_type. Other SparseArray subclasses, e.g. DOK, may not have the same attributes.

Standard items:

- [ ] Tests added.
- [x] Passes
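The idea behind the PR can be sketched without xarray itself: a sparse.COO array exposes the integer coordinates and stored values of its non-fill entries, and a pandas Series with a MultiIndex can be built directly from those, so the dense array is never materialized. The arrays below are hypothetical stand-ins for COO's `.coords` and `.data` attributes; this is not the PR's actual code.

```python
import numpy as np
import pandas as pd

# Stand-ins for sparse.COO attributes: a 2-D array of integer coordinates
# (one row per dimension) and the corresponding stored (non-fill) values.
coords = np.array([[0, 0, 2],
                   [1, 3, 2]])
data = np.array([1.5, 2.5, 3.5])
dim_names = ['foo', 'bar']
dim_labels = [['foo0', 'foo1', 'foo2'], ['bar0', 'bar1', 'bar2', 'bar3']]

# Map integer coordinates to labels and build a MultiIndex directly, so
# only the stored entries are materialized -- no call to todense().
index = pd.MultiIndex.from_arrays(
    [np.asarray(labels)[idx] for labels, idx in zip(dim_labels, coords)],
    names=dim_names,
)
series = pd.Series(data, index=index)
```

The resulting `series` has one row per stored entry, e.g. `series[('foo0', 'bar3')]` is 2.5, rather than one row per cell of the dense array.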
id: 503711327
node_id: MDU6SXNzdWU1MDM3MTEzMjc=
number: 3381
title: concat() fails when args have sparse.COO data and different fill values
user: khaeru (1634164)
state: open
locked: 0
comments: 4
created_at: 2019-10-07T21:54:06Z
updated_at: 2021-07-08T17:43:57Z
author_association: NONE
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo: xarray (13221727)
type: issue
body:

MCVE Code Sample

```python
import numpy as np
import pandas as pd
import sparse
import xarray as xr

# Indices and raw data
foo = [f'foo{i}' for i in range(6)]
bar = [f'bar{i}' for i in range(6)]
raw = np.random.rand(len(foo) // 2, len(bar))

# DataArray
a = xr.DataArray(
    data=sparse.COO.from_numpy(raw),
    coords=[foo[:3], bar],
    dims=['foo', 'bar'])
print(a.data.fill_value)  # 0.0

# Created from a pd.Series
b_series = pd.DataFrame(raw, index=foo[3:], columns=bar) \
    .stack() \
    .rename_axis(index=['foo', 'bar'])
b = xr.DataArray.from_series(b_series, sparse=True)
print(b.data.fill_value)  # nan

# Works despite inconsistent fill-values
a + b
a * b

# Fails: complains about inconsistent fill-values
xr.concat([a, b], dim='foo')  # ***

# The fill_value argument doesn't help
xr.concat([a, b], dim='foo', fill_value=np.nan)

def fill_value(da):
    """Try to coerce one argument to a consistent fill-value."""
    return xr.DataArray(
        data=sparse.as_coo(da.data, fill_value=np.nan),
        coords=da.coords,
        dims=da.dims,
        name=da.name,
        attrs=da.attrs,
    )

# Fails: "Cannot provide a fill-value in combination with something that
# already has a fill-value"
print(xr.concat([a.pipe(fill_value), b], dim='foo'))

# If we cheat by recreating 'a' from scratch, copying the fill value of the
# intended other argument, it works again:
a = xr.DataArray(
    data=sparse.COO.from_numpy(raw, fill_value=b.data.fill_value),
    coords=[foo[:3], bar],
    dims=['foo', 'bar'])
c = xr.concat([a, b], dim='foo')
print(c.data.fill_value)  # nan

# But simple operations again create objects with potentially incompatible
# fill-values
d = c.sum(dim='bar')
print(d.data.fill_value)  # 0.0
```

Expected

Problem Description

Some basic xarray manipulations don't work on sparse.COO-backed objects with different fill values. xarray should automatically coerce objects into a compatible state, or at least provide users with methods to do so. Behaviour should also be documented, e.g. in this instance, which operations (here, …

Output of …
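The coercion the issue asks for can be illustrated with a toy model of a sparse array: a dict of stored entries plus a fill value for every other position. The names below are illustrative, not the sparse library's API. Re-expressing one operand with the other's fill value is what makes a concatenation well-defined, at the cost of storing more entries explicitly.

```python
import math

# Toy sparse 1-D arrays: (stored entries, length, fill value).
a = ({0: 1.0}, 3, 0.0)           # dense equivalent: [1.0, 0.0, 0.0]
b = ({2: 2.0}, 3, float('nan'))  # dense equivalent: [nan, nan, 2.0]

def with_fill(arr, fill):
    """Re-express a toy sparse array using a new fill value, storing every
    position whose value differs from the new fill (note NaN != NaN)."""
    entries, length, old_fill = arr
    new_entries = {}
    for i in range(length):
        v = entries.get(i, old_fill)
        same = (v == fill) or (math.isnan(v) and math.isnan(fill))
        if not same:
            new_entries[i] = v
    return (new_entries, length, fill)

# Coerce `a` to b's fill value: every position of `a` must now be stored
# explicitly, since neither 1.0 nor 0.0 equals NaN.
a_nan = with_fill(a, float('nan'))
```

After the coercion, `a_nan` and `b` share a fill value, so concatenating their stored entries (with offset positions) would describe an unambiguous dense result; this is the normalization step the issue suggests xarray could perform, or at least expose.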
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
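The filter described at the top of the page ("repo = 13221727, state = "open" and user = 1634164 sorted by updated_at descending") can be reproduced against this schema with plain sqlite3. The sketch below uses only a subset of the columns for brevity and seeds the table with the two rows shown above; ISO-8601 timestamps sort correctly as text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT, user INTEGER,
    state TEXT, updated_at TEXT, repo INTEGER, type TEXT
);
INSERT INTO issues VALUES
    (606846911, 4007,
     'Allow DataArray.to_series() without invoking sparse.COO.todense()',
     1634164, 'open', '2022-06-09T14:50:17Z', 13221727, 'pull'),
    (503711327, 3381,
     'concat() fails when args have sparse.COO data and different fill values',
     1634164, 'open', '2021-07-08T17:43:57Z', 13221727, 'issue');
""")

# The page's filter and sort order, as SQL.
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = 13221727 AND state = 'open' AND user = 1634164 "
    "ORDER BY updated_at DESC"
).fetchall()
```

`rows` lists #4007 first, since it was updated more recently than #3381.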