issues

9 rows where comments = 5, type = "pull" and user = 14808389 sorted by updated_at descending


state 2

  • closed 8
  • open 1

type 1

  • pull 9

repo 1

  • xarray 9
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2234142680 PR_kwDOAMm_X85sK0g8 8923 `"source"` encoding for datasets opened from `fsspec` objects keewis 14808389 open 0     5 2024-04-09T19:12:45Z 2024-04-23T16:54:09Z   MEMBER   0 pydata/xarray/pulls/8923

When opening files from path-like objects (str, pathlib.Path), the backend machinery (_dataset_from_backend_dataset) sets the "source" encoding. This is useful if we need the original path for additional processing, like writing to a similarly named file, or to extract additional metadata. This would be useful as well when using fsspec to open remote files.

In this PR, I'm extracting the path attribute that most fsspec objects have to set that value. I've considered using isinstance checks instead of the getattr-with-default, but the list of potential classes is too big to be practical (at least 4 classes just within fsspec itself).

If this sounds like a good idea, I'll update the documentation of the "source" encoding to mention this feature.
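The getattr-with-default approach described above can be sketched roughly as follows (a minimal sketch only; `infer_source` and `FakeFsspecFile` are hypothetical names for illustration, not xarray's actual implementation):

```python
import os


def infer_source(filename_or_obj):
    """Best-effort value for the "source" encoding (hypothetical helper).

    Path-like inputs are used directly; for anything else we fall back to
    a ``path`` attribute, which most fsspec file objects carry, using
    getattr-with-default instead of isinstance checks against the many
    possible fsspec classes.
    """
    if isinstance(filename_or_obj, (str, os.PathLike)):
        return os.fspath(filename_or_obj)
    # no match -> None, so "source" is simply left unset
    return getattr(filename_or_obj, "path", None)


class FakeFsspecFile:
    """Stand-in for an fsspec file object exposing a ``path`` attribute."""

    path = "s3://bucket/data.nc"
```

The getattr fallback keeps the backend machinery independent of which fsspec class actually produced the object.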

  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8923/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1534634670 PR_kwDOAMm_X85Hc1wx 7442 update the docs environment keewis 14808389 closed 0     5 2023-01-16T09:58:43Z 2023-03-03T10:17:14Z 2023-03-03T10:14:13Z MEMBER   0 pydata/xarray/pulls/7442

Most notably:

  • bump python to 3.10
  • bump sphinx to at least 5.0
  • remove the pydata-sphinx-theme pin: sphinx-book-theme pins to an exact minor version, so pinning as well does not change anything (xref https://github.com/executablebooks/sphinx-book-theme/issues/686)

~Edit: it seems this is blocked by sphinx-book-theme pinning sphinx to >=3,<5. They have already changed the pin; we're just waiting on a release~

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7442/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1519058102 PR_kwDOAMm_X85GoVw9 7415 install `numbagg` from `conda-forge` keewis 14808389 closed 0     5 2023-01-04T14:17:44Z 2023-01-20T19:46:46Z 2023-01-20T19:46:43Z MEMBER   0 pydata/xarray/pulls/7415

It seems there is a numbagg package on conda-forge now.

Not sure what to do about the min-all-deps CI, but given that the most recent release of numbagg was more than 12 months ago (more than 18 months, even), maybe we can just bump it to the version on conda-forge?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7415/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1404894283 PR_kwDOAMm_X85AlZGn 7153 use a hook to synchronize the versions of `black` keewis 14808389 closed 0     5 2022-10-11T16:07:05Z 2022-10-12T08:00:10Z 2022-10-12T08:00:07Z MEMBER   0 pydata/xarray/pulls/7153

We started to pin the version of black used in blackdoc's environment, but that pin becomes out-of-date pretty quickly. The new hook I'm adding here is still experimental, but pretty limited in what it can destroy (only the pre-commit configuration), so for now we can just review any new autoupdate PRs from the pre-commit-ci a bit more thoroughly.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7153/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
818051289 MDExOlB1bGxSZXF1ZXN0NTgxNDE3NTg0 4971 add a combine_attrs option to open_mfdataset keewis 14808389 closed 0     5 2021-02-27T23:05:01Z 2021-04-03T15:43:17Z 2021-04-03T15:43:14Z MEMBER   0 pydata/xarray/pulls/4971

In order to fix the failing tests in #4902 we need to expose combine_attrs to be able to properly construct the expected result (to be passed through to the combine function).

This overlaps with the fallback of the attrs_file code, which I removed for now. Maybe combine_attrs="override" would be better?
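For context, the merge semantics of two of the combine_attrs strategies can be sketched in plain Python (an illustrative sketch of the behavior only, not xarray's actual code; the function names are made up):

```python
def combine_attrs_override(all_attrs):
    """'override': keep the attrs of the first object, ignore the rest."""
    return dict(all_attrs[0])


def combine_attrs_drop_conflicts(all_attrs):
    """'drop_conflicts': keep only keys whose values agree everywhere."""
    combined = {}
    dropped = set()
    for attrs in all_attrs:
        for key, value in attrs.items():
            if key in dropped:
                continue
            if key in combined and combined[key] != value:
                # conflicting values: drop the key entirely
                del combined[key]
                dropped.add(key)
            else:
                combined[key] = value
    return combined
```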

  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4971/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
663977922 MDExOlB1bGxSZXF1ZXN0NDU1Mjk1NTcw 4254 fix the RTD timeouts keewis 14808389 closed 0     5 2020-07-22T18:56:16Z 2020-07-23T16:10:55Z 2020-07-22T21:17:59Z MEMBER   0 pydata/xarray/pulls/4254

This attempts to fix the RTD timeouts. I think these are due to a warning from matplotlib, which we can probably just ignore for now.
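Silencing a known-noisy warning during a build can be sketched like this (a minimal sketch; `build_docs_page` and the message pattern are hypothetical, not the actual RTD configuration):

```python
import warnings


def build_docs_page():
    """Stand-in for a docs build step that triggers a noisy warning."""
    warnings.warn("matplotlib deprecation noise", UserWarning)
    return "ok"


with warnings.catch_warnings(record=True) as caught:
    # fail loudly on any unexpected warning ...
    warnings.simplefilter("error")
    # ... but ignore the known matplotlib chatter (filter is prepended,
    # so it takes precedence over the "error" default)
    warnings.filterwarnings("ignore", message=".*matplotlib.*")
    status = build_docs_page()
```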

  • [x] attempts to close #4249
  • [ ] Tests added
  • [x] Passes isort . && black . && mypy . && flake8
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4254/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
600641276 MDExOlB1bGxSZXF1ZXN0NDA0MDMzMjk0 3975 pint support for Dataset keewis 14808389 closed 0     5 2020-04-15T23:11:15Z 2020-06-17T20:40:12Z 2020-06-17T20:40:07Z MEMBER   0 pydata/xarray/pulls/3975

This is part of the effort to add support for pint (see #3594) to Dataset objects (although it will probably be a test-only PR, just like #3643).

  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

The list of failing tests from #3594:

  • Dataset methods
      - __init__: needs unit support in IndexVariable, and merge does not work yet (test bug is also possible)
      - aggregation: xarray does not implement __array_function__ (see #3917)
      - rank: depends on bottleneck and thus only works with numpy.array
      - ffill, bfill: uses bottleneck
      - interpolate_na: uses numpy.vectorize, which does not support NEP-18, yet
      - equals, identical: works (but no units / unit checking in IndexVariable)
      - broadcast_like: works (but no units / unit checking in IndexVariable)
      - to_stacked_array: no units in IndexVariable
      - sel, loc: no units in IndexVariable
      - interp, reindex: partially blocked by IndexVariable. reindex works with units in data, but interp uses scipy
      - interp_like, reindex_like: same as interp / reindex
      - quantile: works, but needs pint >= 0.12
      - groupby_bins: needs pint >= 0.12 (for isclose)
      - rolling: uses numpy.lib.stride_tricks.as_strided
      - rolling_exp: uses numbagg (supports NEP-18, but pint doesn't support its functions)
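Several of the blockers above come down to NEP-18 (__array_function__) dispatch. A toy duck array shows the mechanism (Quantity here is an illustrative stand-in, not pint's class):

```python
import numpy as np


class Quantity:
    """Toy unit-carrying duck array demonstrating NEP-18 dispatch."""

    def __init__(self, value, units):
        self.value = np.asarray(value)
        self.units = units

    def __array_function__(self, func, types, args, kwargs):
        # numpy routes np.mean(Quantity(...)) through here instead of
        # coercing to a plain ndarray
        if func is np.mean:
            return Quantity(np.mean(self.value), self.units)
        return NotImplemented


q = np.mean(Quantity([1.0, 2.0, 3.0], "m"))  # units survive the reduction
```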

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3975/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
636225143 MDExOlB1bGxSZXF1ZXN0NDMyNDM3Njc5 4138 Fix the upstream-dev pandas build failure keewis 14808389 closed 0     5 2020-06-10T12:58:29Z 2020-06-11T10:10:50Z 2020-06-11T02:14:49Z MEMBER   0 pydata/xarray/pulls/4138

As pointed out by @TomAugspurger in https://github.com/pydata/xarray/issues/4133#issuecomment-641332231, there are pre-built nightly wheels for numpy, scipy and pandas in the scipy-wheels-nightly repository.

Not sure how frequently these are updated, though; at least the numpy wheel doesn't really seem to be built daily.

  • [x] Closes #4133
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4138/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
543276677 MDExOlB1bGxSZXF1ZXN0MzU3NTc5ODQ5 3654 Tests for variables with units keewis 14808389 closed 0     5 2019-12-28T20:21:06Z 2020-01-15T16:59:00Z 2020-01-15T16:53:01Z MEMBER   0 pydata/xarray/pulls/3654

As promised in #3493, this adds integration tests for units. I'm doing this now rather than later since I encountered a few cases in #3643 where increased test coverage for variables would have been helpful.

  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3654/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
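The filter shown at the top of the page (comments = 5, type = "pull" and user = 14808389, sorted by updated_at descending) can be reproduced against this schema with Python's sqlite3. A sketch with a trimmed-down schema, two rows taken from the table above, and one made-up non-matching row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issues (
           id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
           user INTEGER, comments INTEGER, updated_at TEXT, type TEXT)"""
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (2234142680, 8923, "source encoding for fsspec objects",
         14808389, 5, "2024-04-23T16:54:09Z", "pull"),
        (1534634670, 7442, "update the docs environment",
         14808389, 5, "2023-03-03T10:17:14Z", "pull"),
        # dummy row that the filter should exclude
        (1, 1, "unrelated issue", 42, 0, "2020-01-01T00:00:00Z", "issue"),
    ],
)

# the page's filter: comments = 5, type = 'pull', user = 14808389,
# ordered by updated_at descending (ISO timestamps sort as strings)
matches = conn.execute(
    """SELECT number FROM issues
       WHERE comments = 5 AND type = 'pull' AND user = 14808389
       ORDER BY updated_at DESC"""
).fetchall()
```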
Powered by Datasette · About: xarray-datasette