
issues


2 rows where state = "closed", type = "pull" and user = 102827 sorted by updated_at descending


id: 376162232
node_id: MDExOlB1bGxSZXF1ZXN0MjI3NDQzNTI3
number: 2532
title: [WIP] Fix problem with wrong chunksizes when using rolling_window on dask.array
user: cchwala (102827)
state: closed
locked: 0
comments: 2
created_at: 2018-10-31T21:12:03Z
updated_at: 2021-03-26T19:50:50Z
closed_at: 2021-03-26T19:50:50Z
author_association: CONTRIBUTOR
draft: 0
pull_request: pydata/xarray/pulls/2532
  • [ ] Closes #2514
  • [ ] Closes #2531
  • [ ] Tests added (for all bug fixes or enhancements)
  • [ ] Fully documented, including whats-new.rst for all changes

Short summary

The two rolling-window functions for dask.array,

  • dask_rolling_wrapper
  • rolling_window

will be fixed to preserve dask.array chunksizes.

Long summary

The specific initial problem with chunksizes and interpolate_na() in #2514 is caused by the padding done in

https://github.com/pydata/xarray/blob/5940100761478604080523ebb1291ecff90e779e/xarray/core/dask_array_ops.py#L74-L85

which adds a small array with a small chunk to the initial array.

There is another, related problem, in which DataArray.rolling() changes the size and distribution of dask.array chunks. It stems from this code:

https://github.com/pydata/xarray/blob/b622c5e7da928524ef949d9e389f6c7f38644494/xarray/core/dask_array_ops.py#L23

For some (historical) reason there are these two rolling-window functions for dask. Both need to be fixed so that they preserve the chunksizes of a dask.array in all cases.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2532/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727)
type: pull
id: 229807027
node_id: MDExOlB1bGxSZXF1ZXN0MTIxMzc5NjAw
number: 1414
title: Speed up `decode_cf_datetime`
user: cchwala (102827)
state: closed
locked: 0
comments: 12
created_at: 2017-05-18T21:15:40Z
updated_at: 2017-07-26T07:40:24Z
closed_at: 2017-07-25T17:42:52Z
author_association: CONTRIBUTOR
draft: 0
pull_request: pydata/xarray/pulls/1414
  • [x] Closes #1399
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Instead of casting the input numeric dates to float, they are now cast to nanoseconds as int64, which makes pd.to_timedelta() work much faster (roughly a 100x speedup on my machine).

On my machine all existing tests for conventions.py pass. Overflows should be handled by two already-existing lines, since everything in the valid range of pd.to_datetime should be safe.
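A rough numpy-only sketch of the idea (the example values and the hard-coded "days since 2000-01-01" calendar are illustrative assumptions, not the PR's actual code): converting the numeric offsets to integer nanoseconds keeps the arithmetic exact and lets the vectorized datetime machinery do the rest.

```python
import numpy as np

# Hypothetical input: numeric dates as "days since 2000-01-01".
num_dates = np.array([0.0, 1.0, 365.0])
NS_PER_DAY = 86_400 * 10**9

# Cast to int64 nanoseconds instead of float seconds; integer
# nanoseconds are exact and fast for datetime64 arithmetic.
deltas_ns = (num_dates * NS_PER_DAY).astype(np.int64)
dates = np.datetime64("2000-01-01", "ns") + deltas_ns.astype("timedelta64[ns]")

print(dates)
```

The same int64-nanosecond representation is what pd.to_timedelta() consumes efficiently, which is where the reported speedup comes from.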

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1414/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727)
type: pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 554.932ms · About: xarray-datasette