issues
3 rows where user = 1530840 sorted by updated_at descending
**Issue #2300 · zarr and xarray chunking compatibility and `to_zarr` performance**

- id: 342531772 · node_id: MDU6SXNzdWUzNDI1MzE3NzI= · type: issue
- user: chrisbarber (1530840) · state: closed (completed) · locked: no · comments: 15
- created_at: 2018-07-18T23:58:40Z · updated_at: 2021-04-26T16:37:42Z · closed_at: 2021-04-26T16:37:42Z
- author_association: NONE · repo: xarray (13221727) · reactions: none

Body:

I have a situation where I build large zarr arrays based on chunks that correspond to how I am reading data off a filesystem, for best I/O performance. Then I set these as variables on an xarray dataset, which I want to persist to zarr, but with different chunks that are more optimal for querying. One problem I ran into is that manually selecting chunks of a dataset prior to

It's difficult for me to understand exactly how to select chunks manually at the dataset level that would also make this zarr "final chunk" constraint happy. I would have been satisfied, however, with letting zarr choose chunks for me, but I could not find a way to trigger this through the xarray API short of "unchunking" it first, which would lead to loading entire variables into memory. I came up with the following hack to trigger zarr's automatic chunking despite having differently defined chunks on my xarray dataset:

I'm not sure if there is a plan going forward to make legal xarray chunks 100% compatible with zarr; if so, that would go a fair way toward alleviating the first problem. Alternatively, perhaps the xarray API could expose some ability to adjust chunks according to zarr's liking, as well as the option of defaulting entirely to zarr's heuristics for chunking. As for the performance issue with differing chunks, I'm not sure whether my rechunking patch could be applied without causing side effects, or where the right place to solve this would be; perhaps it could be more naturally addressed within
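The "final chunk" constraint the issue refers to is zarr's rule that every chunk along a dimension must have the same size, except the last one, which may be smaller. A minimal stdlib-only sketch of that check (the helper name `zarr_compatible_chunks` is mine for illustration, not part of the xarray or zarr API):

```python
def zarr_compatible_chunks(chunks):
    """Return True if a dask-style chunk tuple for one dimension
    satisfies zarr's constraint: every chunk equals the first,
    except the final chunk, which may be smaller."""
    if len(chunks) <= 1:
        return True
    first = chunks[0]
    return all(c == first for c in chunks[:-1]) and chunks[-1] <= first

# dask can legally produce irregular chunking like (100, 50, 100),
# which zarr rejects; (100, 100, 50) is fine
print(zarr_compatible_chunks((100, 100, 50)))  # True
print(zarr_compatible_chunks((100, 50, 100)))  # False
```

Rechunking to a uniform size with `ds.chunk(...)` before `to_zarr` is the usual way to satisfy this constraint from the xarray side.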
**Issue #2371 · `AttributeError: 'DataArray' object has no attribute 'ravel'` when using `np.intersect1d(..., assume_unique=True)`**

- id: 351343574 · node_id: MDU6SXNzdWUzNTEzNDM1NzQ= · type: issue
- user: chrisbarber (1530840) · state: closed (completed) · locked: no · comments: 5
- created_at: 2018-08-16T19:47:36Z · updated_at: 2018-10-22T21:27:22Z · closed_at: 2018-10-22T21:27:22Z
- author_association: NONE · repo: xarray (13221727) · reactions: none

Body:

Code Sample, a copy-pastable example if possible

Problem description: I believe this worked in a previous version; I'm not sure what might have changed. But I don't see any reason calling

Expected Output: Output should be the same as calling

Output of
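The failure mode behind this issue: with `assume_unique=True`, `np.intersect1d` skips the `np.unique` calls (which would coerce their inputs to ndarrays) and calls `.ravel()` on its arguments directly, so an array-like without a `ravel` method raises `AttributeError`. A minimal numpy sketch of the workaround, with plain arrays standing in for the `DataArray`s from the report:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([3, 4, 5])

# converting with np.asarray (or, for a DataArray, taking .values)
# yields plain ndarrays, which always have .ravel()
common = np.intersect1d(np.asarray(a), np.asarray(b), assume_unique=True)
print(common)  # [3 4]
```
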
**Pull request #1702 · fix empty dataset from_dict**

- id: 272325640 · node_id: MDExOlB1bGxSZXF1ZXN0MTUxNDc3Nzcx · type: pull
- user: chrisbarber (1530840) · state: closed · locked: no · comments: 2 · draft: no
- created_at: 2017-11-08T19:47:37Z · updated_at: 2018-05-15T04:51:11Z · closed_at: 2018-05-15T04:51:03Z
- author_association: NONE · pull_request: pydata/xarray/pulls/1702 · repo: xarray (13221727) · reactions: none

Body:

Not sure if you want an issue or a whats-new entry for a small fix like this. Also not sure how xarray tends to handle
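The PR concerns `Dataset.from_dict` on the dict form of an *empty* dataset. A stdlib-only sketch of the edge case (the helper `infer_dims` and the dict values are hypothetical illustrations, not xarray's implementation): walking the "coords"/"data_vars" sections must yield an empty result for an empty dataset dict rather than raising.

```python
def infer_dims(mapping):
    """Collect dimension names from a to_dict-style mapping; an
    empty dataset dict yields an empty set instead of raising."""
    dims = set()
    for section in ("coords", "data_vars"):
        for var in mapping.get(section, {}).values():
            dims.update(var.get("dims", ()))
    return dims

# the dict shapes below mirror xarray's documented to_dict layout
empty = {"coords": {}, "attrs": {}, "dims": {}, "data_vars": {}}
full = {"coords": {"x": {"dims": ("x",), "data": [0, 1]}},
        "data_vars": {"t": {"dims": ("x",), "data": [10, 20]}}}
print(infer_dims(empty))  # set()
print(infer_dims(full))   # {'x'}
```
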
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
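The schema can be exercised directly with Python's stdlib `sqlite3`. A sketch reproducing this page's query ("rows where user = 1530840 sorted by updated_at descending") against a trimmed-down copy of the table, populated with the three rows shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# trimmed-down version of the issues schema above
conn.execute("""CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
    user INTEGER, state TEXT, updated_at TEXT, type TEXT)""")
rows = [
    (342531772, 2300, "zarr and xarray chunking compatibility",
     1530840, "closed", "2021-04-26T16:37:42Z", "issue"),
    (351343574, 2371, "DataArray object has no attribute ravel",
     1530840, "closed", "2018-10-22T21:27:22Z", "issue"),
    (272325640, 1702, "fix empty dataset from_dict",
     1530840, "closed", "2018-05-15T04:51:11Z", "pull"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as text, so ORDER BY works as-is
result = conn.execute(
    "SELECT number FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (1530840,),
).fetchall()
print([n for (n,) in result])  # [2300, 2371, 1702]
```
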