issues

3 rows where comments = 1, type = "pull" and user = 20629530, sorted by updated_at descending

#5781 · Add encodings to save_mfdataset
  id: 991544027 · node_id: MDExOlB1bGxSZXF1ZXN0NzI5OTkzMTE0
  user: aulemahal (20629530) · state: open · locked: 0 · comments: 1
  created_at: 2021-09-08T21:24:13Z · updated_at: 2022-10-06T21:44:18Z
  author_association: CONTRIBUTOR · draft: 0
  pull_request: pydata/xarray/pulls/5781
  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

Simply adds an encodings argument to save_mfdataset. As with the other arguments, it expects a list of dictionaries, with encoding information to be passed on to to_netcdf for each dataset. Added a minimal test, simply to check that the argument is taken into account.
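The per-dataset fan-out this describes can be sketched in plain Python. This is a minimal sketch of the argument handling only, not xarray's actual implementation; the function name and returned call records are hypothetical:

```python
def save_mfdataset_sketch(datasets, paths, encodings=None):
    """Sketch: one encoding dict (or None) per dataset, paired with
    its path the same way the other list arguments are, as would be
    forwarded to the per-dataset to_netcdf call."""
    if encodings is None:
        encodings = [None] * len(datasets)
    if len(encodings) != len(datasets):
        raise ValueError("must supply one encoding (or None) per dataset")
    # Each entry stands in for one to_netcdf(path, encoding=...) call.
    return [
        {"dataset": ds, "path": p, "encoding": enc}
        for ds, p, enc in zip(datasets, paths, encodings)
    ]
```

Omitting encodings keeps the old behaviour (every dataset gets None), while a list of per-dataset dicts is zipped through unchanged.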

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5781/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#5402 · `dt.to_pytimedelta` to allow arithmetic with cftime objects
  id: 906175200 · node_id: MDExOlB1bGxSZXF1ZXN0NjU3MjA1NTM2
  user: aulemahal (20629530) · state: open · locked: 0 · comments: 1
  created_at: 2021-05-28T22:48:50Z · updated_at: 2022-06-09T14:50:16Z
  author_association: CONTRIBUTOR · draft: 0
  pull_request: pydata/xarray/pulls/5402
  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

When playing with cftime objects, a problem I encountered many times is that I cannot subtract two arrays and then add the difference back to another. Subtracting two cftime datetime arrays results in an array of np.timedelta64, and when trying to add it back to another cftime array, we get a UFuncTypeError because the two arrays have incompatible dtypes: '<m8[ns]' and 'O'.

Example:

```python
import xarray as xr

da = xr.DataArray(xr.cftime_range('1900-01-01', freq='D', periods=10), dims=('time',))

# An array of timedelta64[ns]
dt = da - da[0]

da[-1] + dt  # Fails
```

However, if the two arrays were both of 'O' dtype, the operation would be handled by cftime, which supports datetime.timedelta objects.

The solution here adds a to_pytimedelta method to the TimedeltaAccessor, mirroring the name of the similar function on pd.Series.dt. It uses a monkeypatching workaround to prevent xarray from casting the array back into numpy objects.
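The conversion at the heart of such a to_pytimedelta can be sketched with the standard library alone, assuming the 'm8[ns]' data are integer nanosecond ticks; the function name and the list-of-ints input are illustrative, not the accessor's real signature:

```python
from datetime import timedelta

def to_pytimedelta_sketch(ns_values):
    """Turn integer nanosecond counts (the ticks behind an 'm8[ns]'
    array) into datetime.timedelta objects, which cftime datetimes
    can be shifted by directly."""
    # datetime.timedelta has microsecond resolution, so any
    # sub-microsecond remainder is truncated here.
    return [timedelta(microseconds=ns // 1000) for ns in ns_values]
```

The result is a list of 'O'-dtype-friendly objects, which is exactly what lets cftime handle the subsequent addition.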

The user still has to check whether the data is cftime or numpy to adapt the operation (calling dt.to_pytimedelta or not), but custom workarounds were always overly complicated for such a simple problem, so this helps.

Also, this doesn't work with dask arrays, because loading a dask array triggers the variable constructor and thus recasts the array of datetime.timedelta back to numpy.timedelta64.

I realize I maybe should have opened an issue before, but I had this idea and it all rushed along.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5402/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#4135 · Correct dask handling for 1D idxmax/min on ND data
  id: 635542241 · node_id: MDExOlB1bGxSZXF1ZXN0NDMxODg5NjQ0
  user: aulemahal (20629530) · state: closed · locked: 0 · comments: 1
  created_at: 2020-06-09T15:36:09Z · updated_at: 2020-06-25T16:09:59Z · closed_at: 2020-06-25T03:59:52Z
  author_association: CONTRIBUTOR · draft: 0
  pull_request: pydata/xarray/pulls/4135
  • [x] Closes #4123
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Based on comments on dask/dask#3096, I fixed the dask indexing error that occurred when idxmax/idxmin were called on ND data (where N > 2). The added tests are very simplistic; I believe the 1D and 2D tests already cover most cases. I just wanted to check that it was indeed working on ND data, assuming that non-dask data was already treated properly.
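For context, what idxmax computes along one dimension — the coordinate label of the maximum, i.e. argmax followed by a lookup into the coordinate array — can be sketched in plain Python. This illustrates the semantics only, not the dask-specific indexing fix of this PR; the function names are made up:

```python
def idxmax_1d(values, coords):
    """Coordinate label at which ``values`` is maximal:
    argmax, then index into the coordinate array."""
    i = max(range(len(values)), key=values.__getitem__)
    return coords[i]

def idxmax_nd(data, coords):
    """Apply the 1D reduction along the last axis of nested-list
    data, recursing through the leading dimensions the way idxmax
    reduces one dimension of an ND array."""
    if data and isinstance(data[0], list):
        return [idxmax_nd(row, coords) for row in data]
    return idxmax_1d(data, coords)
```

The dask bug was in how that coordinate lookup was indexed for N > 2; the reduction itself is unchanged.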

I believe this doesn't conflict with #3936.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4135/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
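The filter shown at the top of the page can be reproduced against this schema with Python's built-in sqlite3 module. This is a minimal sketch using a trimmed-down version of the table, populated with the three rows above (columns limited to the ones the query touches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        user INTEGER, state TEXT, comments INTEGER,
        updated_at TEXT, type TEXT)"""
)
# The three rows from this page, trimmed to the queried columns.
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [
        (991544027, 5781, "Add encodings to save_mfdataset",
         20629530, "open", 1, "2022-10-06T21:44:18Z", "pull"),
        (906175200, 5402, "`dt.to_pytimedelta` to allow arithmetic with cftime objects",
         20629530, "open", 1, "2022-06-09T14:50:16Z", "pull"),
        (635542241, 4135, "Correct dask handling for 1D idxmax/min on ND data",
         20629530, "closed", 1, "2020-06-25T16:09:59Z", "pull"),
    ],
)
# The page's filter: comments = 1, type = "pull", user = 20629530,
# sorted by updated_at descending.
rows = conn.execute(
    """SELECT number, title FROM issues
       WHERE comments = 1 AND type = 'pull' AND user = 20629530
       ORDER BY updated_at DESC"""
).fetchall()
```

ISO 8601 timestamps sort correctly as plain text, which is why the TEXT updated_at column can drive the descending sort.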
Powered by Datasette · Queries took 1041.602ms · About: xarray-datasette