# issues

2 rows where user = 6475152, sorted by updated_at descending
### Issue #3282: Silent value assignment failure in open_zarr Dataset due to hidden mode='r'

| Field | Value |
|---|---|
| id | 490037439 |
| node_id | MDU6SXNzdWU0OTAwMzc0Mzk= |
| user | jkmacc-LANL (6475152) |
| state | open |
| locked | 0 |
| comments | 0 |
| created_at | 2019-09-05T22:23:46Z |
| updated_at | 2020-03-29T10:33:20Z |
| author_association | NONE |
| reactions | total_count: 0 (https://api.github.com/repos/pydata/xarray/issues/3282/reactions) |
| repo | xarray (13221727) |
| type | issue |

**Body:**

Hello Xarray devs,

Thanks for your work on this fantastic package. I'm a new user, and the subtleties of different data stores are unfamiliar to me. I got tripped up by the fact that Zarr stores are (silently) read-only, and I think it would be helpful if this were more prominent in the docstring or zarr section of the docs.

When I try to assign values to parts of a local Zarr-backed Dataset, I get a silent failure:

```python
In [142]: ds = xr.open_zarr('tmp.zarr', chunks=None)

In [143]: selector = dict(time='2014-06-06T01:00:00', azimuth=0, frequency=0.0)

In [144]: ds['counts'].loc[selector].values
Out[144]: array(4294967295, dtype=uint32)

# try to assign a value here, like the example in the docs:
# In [55]: ds['empty'].loc[dict(lon=260, lat=30)] = 100
In [145]: ds['counts'].loc[selector].values = 0

# just get the same value back
In [146]: ds['counts'].loc[selector].values
Out[146]: array(4294967295, dtype=uint32)
```

The answer seems to be buried in the

**Expected Output**

Assignment that follows the examples in the documentation.

I'm happy to make a PR on 1 & 3, but I'm not familiar with the reasoning behind why stores are never mixed-mode. Thanks again!

Output of
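The silent failure above comes down to copy semantics: indexing a lazily backed array materializes a fresh in-memory copy, so writes into that copy never reach the underlying store. A rough NumPy-only analogy (not xarray's actual internals; the array contents here are invented):

```python
import numpy as np

# Stand-in for the uint32 'counts' variable from the issue.
counts = np.array([4294967295, 7], dtype="uint32")

# Fancy indexing returns a copy, much as a lazy backend hands back a
# freshly computed array: writing into it is silently lost.
snapshot = counts[[0]]
snapshot[0] = 0
assert counts[0] == 4294967295  # the original is untouched

# Assigning through the original array (like ds['counts'].loc[sel] = 0
# on an in-memory Dataset) does persist.
counts[0] = 0
assert counts[0] == 0
```

The same distinction applies in xarray: assignment through `.loc` on an in-memory variable mutates the backing array, while assignment into the result of `.values` on a lazy variable mutates only a temporary.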
### Issue #3326: quantile with Dask arrays

| Field | Value |
|---|---|
| id | 496460488 |
| node_id | MDU6SXNzdWU0OTY0NjA0ODg= |
| user | jkmacc-LANL (6475152) |
| state | closed |
| locked | 0 |
| comments | 0 |
| created_at | 2019-09-20T17:14:59Z |
| updated_at | 2019-11-25T15:57:49Z |
| closed_at | 2019-11-25T15:57:49Z |
| author_association | NONE |
| reactions | total_count: 0 (https://api.github.com/repos/pydata/xarray/issues/3326/reactions) |
| state_reason | completed |
| repo | xarray (13221727) |
| type | issue |

**Body:**

Currently the

The problem with following the suggestion of the exception (loading the array into memory) is that "wide and shallow" arrays are too big to load into memory, yet each chunk is statistically independent if the quantile dimension is the "shallow" dimension. I'm not necessarily proposing delegating to Dask's quantile (unless it's super easy), but wanted to explore this special case described above.

Related links:

Thank you!

EDIT: added stackoverflow link
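The special case the issue describes can be shown with plain NumPy: when the quantile dimension fits entirely inside each chunk, reducing block-by-block over the other ("wide") dimension gives the same answer as reducing the whole array at once, with bounded memory. A minimal sketch with made-up shapes:

```python
import numpy as np

# "Wide and shallow": many columns (wide), few rows (shallow), with the
# quantile taken along the shallow axis. Each block of columns can then
# be reduced independently and the results concatenated.
rng = np.random.default_rng(0)
arr = rng.random((10, 1000))            # 10 samples x 1000 series

full = np.quantile(arr, 0.5, axis=0)    # reference: whole array at once

# Same reduction, block-by-block over the wide axis (chunks of 250 columns):
blocks = [np.quantile(arr[:, i:i + 250], 0.5, axis=0)
          for i in range(0, 1000, 250)]
chunked = np.concatenate(blocks)

assert np.allclose(full, chunked)       # identical results
```

This is exactly why no cross-chunk communication is needed in this layout, unlike a quantile along a dimension that is itself split across chunks.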
Table schema:

```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
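For reference, the query behind this page ("rows where user = 6475152 sorted by updated_at descending") can be reproduced with Python's built-in sqlite3 against a trimmed-down version of the schema; the in-memory rows below are copied from the two issues above:

```python
import sqlite3

# Hypothetical in-memory database using a simplified subset of the schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
    [user] INTEGER, state TEXT, updated_at TEXT)""")
con.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (490037439, 3282, "Silent value assignment failure in open_zarr",
         6475152, "open", "2020-03-29T10:33:20Z"),
        (496460488, 3326, "quantile with Dask arrays",
         6475152, "closed", "2019-11-25T15:57:49Z"),
    ],
)

# ISO-8601 timestamps sort correctly as plain text.
rows = con.execute(
    "SELECT number, state FROM issues WHERE [user] = ? "
    "ORDER BY updated_at DESC",
    (6475152,),
).fetchall()
print(rows)  # [(3282, 'open'), (3326, 'closed')]
```

Because the `updated_at` column stores ISO-8601 text, lexicographic `ORDER BY` matches chronological order without any date parsing.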