issues: 2 rows where created_at is on 2022-03-17, repo = 13221727 and user = 2448579, sorted by updated_at descending
Issue 6373: Zarr backend should avoid checking for invalid encodings

| column | value |
|---|---|
| id | 1171932478 |
| node_id | I_kwDOAMm_X85F2kU- |
| number | 6373 |
| user | dcherian 2448579 |
| state | closed |
| locked | 0 |
| comments | 3 |
| created_at | 2022-03-17T04:55:35Z |
| updated_at | 2022-03-18T10:06:01Z |
| closed_at | 2022-03-18T04:19:48Z |
| author_association | MEMBER |
| state_reason | completed |
| repo | xarray 13221727 |
| type | issue |

Body:

> What is your issue?
>
> The zarr backend has a list of "valid" encodings that needs to be updated any time zarr adds something new (e.g. https://github.com/pydata/xarray/pull/6348). Can we get rid of this? I don't know the backends code well, but won't all our encoding parameters have been removed by this stage? @tomwhite points out that zarr will raise a warning:

Reactions:

```json
{
  "url": "https://api.github.com/repos/pydata/xarray/issues/6373/reactions",
  "total_count": 1,
  "+1": 1,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
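The check this issue questions amounts to comparing user-supplied encoding keys against a hard-coded whitelist and raising on anything unknown. A minimal sketch of that pattern — the function name and the whitelist contents here are illustrative assumptions, not xarray's actual backend code:

```python
# Illustrative whitelist; xarray's real list lives in the zarr backend
# and must grow whenever zarr adds a new encoding parameter.
VALID_ZARR_ENCODINGS = {"chunks", "compressor", "filters", "dtype"}

def extract_zarr_encoding(encoding, raise_on_invalid=True):
    """Split an encoding dict into valid keys, rejecting unknown ones."""
    invalid = [k for k in encoding if k not in VALID_ZARR_ENCODINGS]
    if invalid and raise_on_invalid:
        raise ValueError(f"unexpected encoding parameters: {invalid}")
    return {k: v for k, v in encoding.items() if k in VALID_ZARR_ENCODINGS}

# Known keys pass through; an unknown key raises ValueError.
print(extract_zarr_encoding({"chunks": (10,), "compressor": None}))
```

The issue's suggestion is to drop this whitelist and let zarr itself warn about parameters it does not recognize, so the list cannot go stale.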
Issue 6372: apply_ufunc + dask="parallelized" + no core dimensions should raise a nicer error about core dimensions being absent

| column | value |
|---|---|
| id | 1171916710 |
| node_id | I_kwDOAMm_X85F2gem |
| number | 6372 |
| user | dcherian 2448579 |
| state | open |
| locked | 0 |
| comments | 0 |
| created_at | 2022-03-17T04:25:37Z |
| updated_at | 2022-03-17T05:10:16Z |
| author_association | MEMBER |
| repo | xarray 13221727 |
| type | issue |

Body:

> What happened?
>
> From https://github.com/pydata/xarray/discussions/6370 Calling
>
> What did you expect to happen?
>
> With numpy data the apply_ufunc call does raise an error:
>
> Minimal Complete Verifiable Example
>
> ```python
> import numpy as np
> import xarray as xr
>
> dt = xr.Dataset(
>     data_vars=dict(
>         value=(["x"], [1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
>     ),
>     coords=dict(
>         lon=(["x"], np.linspace(0, 1, 10)),
>     ),
> ).chunk(chunks={"x": (2, 3, 5)})  # three chunks of different size
>
> xr.apply_ufunc(lambda x: np.mean(x), dt, dask="parallelized")
> ```
>
> Relevant log output: No response
>
> Anything else we need to know?: No response
>
> Environment: N/A

Reactions:

```json
{
  "url": "https://api.github.com/repos/pydata/xarray/issues/6372/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
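The pitfall behind this issue can be shown without dask or xarray at all: with `dask="parallelized"` and no core dimensions, the reduction runs independently on each chunk, so per-chunk means never combine into the mean of the whole array. A stdlib-only sketch of that mismatch, using the same values and chunk sizes as the MVCE above (the chunk-slicing here is a hand-rolled stand-in for what dask does, not dask itself):

```python
# Same data and chunking as the issue's MVCE: three chunks of different size.
values = [1, 1, 2, 2, 2, 3, 3, 3, 3, 3]
chunks = [values[0:2], values[2:5], values[5:10]]

# What applying np.mean per chunk effectively computes: one mean per chunk.
per_chunk_means = [sum(c) / len(c) for c in chunks]

# What the user actually wanted: the mean over the full dimension.
global_mean = sum(values) / len(values)

print(per_chunk_means)  # [1.0, 2.0, 3.0]
print(global_mean)      # 2.3
```

Because the per-chunk results differ from the global reduction, silently "succeeding" is worse than the error numpy-backed data produces — hence the request for a nicer error when core dimensions are absent.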
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```
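The filtered view at the top of this page can be reproduced directly against this schema. A minimal sqlite3 sketch, with the table trimmed to the columns the query touches and the two rows above inserted by hand (timestamps are compared by their date prefix, which is what the `created_at (date)` facet does):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [repo] INTEGER
);
INSERT INTO issues VALUES
    (1171932478, 6373, 'Zarr backend should avoid checking for invalid encodings',
     2448579, '2022-03-17T04:55:35Z', '2022-03-18T10:06:01Z', 13221727),
    (1171916710, 6372, 'apply_ufunc + dask="parallelized" + no core dimensions should raise a nicer error',
     2448579, '2022-03-17T04:25:37Z', '2022-03-17T05:10:16Z', 13221727);
""")

# "2 rows where created_at is on 2022-03-17, repo = 13221727 and
#  user = 2448579, sorted by updated_at descending"
rows = conn.execute(
    """
    SELECT number, title FROM issues
    WHERE substr(created_at, 1, 10) = '2022-03-17'
      AND repo = 13221727
      AND [user] = 2448579
    ORDER BY updated_at DESC
    """
).fetchall()
print([n for n, _ in rows])  # [6373, 6372]
```

Issue 6373 sorts first because its `updated_at` (2022-03-18) is later than 6372's (2022-03-17), even though both were created on the same day.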