
issues


10 rows where comments = 7, type = "pull" and user = 2448579 sorted by updated_at descending


Facets:

  • state: closed 9, open 1
  • type: pull 10
  • repo: xarray 10
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at ▲, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
2123950388 PR_kwDOAMm_X85mT6XD 8720 groupby: Dispatch quantile to flox. dcherian 2448579 closed 0     7 2024-02-07T21:42:42Z 2024-03-26T15:08:32Z 2024-03-26T15:08:30Z MEMBER   0 pydata/xarray/pulls/8720
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

@aulemahal, would you be able to test this against xclim's test suite? I imagine you're doing a bunch of grouped quantiles.
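For context, a minimal sketch of the kind of grouped quantile this dispatch targets (illustrative data, not from the PR; assumes flox is installed so the reduction can be handed off to it):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Illustrative daily series; with flox installed, a grouped quantile like this
# is the pattern that gets dispatched to flox instead of looping over groups.
da = xr.DataArray(
    np.random.rand(365),
    dims="time",
    coords={"time": pd.date_range("2001-01-01", periods=365)},
)
result = da.groupby("time.month").quantile(q=[0.25, 0.5, 0.75])
# result has a "month" dimension and a "quantile" dimension
```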

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8720/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2184830377 PR_kwDOAMm_X85pjN8A 8829 Revert "Do not attempt to broadcast when global option ``arithmetic_b… dcherian 2448579 closed 0     7 2024-03-13T20:27:12Z 2024-03-20T15:30:12Z 2024-03-15T03:59:07Z MEMBER   0 pydata/xarray/pulls/8829

…roadcast=False`` (#8784)"

This reverts commit 11f89ecdd41226cf93da8d1e720d2710849cd23e.

Reverting #8784

Sadly, that PR broke a lot of tests by breaking `create_test_data`; running `from xarray.tests import create_test_data` followed by `create_test_data()` now fails:

```
AssertionError                            Traceback (most recent call last)
Cell In[3], line 2
      1 from xarray.tests import create_test_data
----> 2 create_test_data()

File ~/repos/xarray/xarray/tests/__init__.py:329, in create_test_data(seed, add_attrs, dim_sizes)
    327 obj.coords["numbers"] = ("dim3", numbers_values)
    328 obj.encoding = {"foo": "bar"}
--> 329 assert all(var.values.flags.writeable for var in obj.variables.values())
    330 return obj

AssertionError:
```

Somehow that code changes whether `IndexVariable.values` returns a writeable numpy array. I spent some time debugging but couldn't figure it out.
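For reference, a minimal way to observe the symptom described above (the toy dataset is an illustrative stand-in, not `create_test_data` itself):

```python
import xarray as xr

# Build a dimension coordinate, which xarray stores as an IndexVariable,
# and check whether the numpy array returned by .values is writeable.
ds = xr.Dataset(coords={"dim3": list("abcdefghij")})
print(ds.variables["dim3"].values.flags.writeable)
```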

cc @etienneschalk

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8829/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1976752481 PR_kwDOAMm_X85ekPdj 8412 Minimize duplication in `map_blocks` task graph dcherian 2448579 closed 0     7 2023-11-03T18:30:02Z 2024-01-03T04:10:17Z 2024-01-03T04:10:15Z MEMBER   0 pydata/xarray/pulls/8412

Builds on #8560

  • [x] Closes #8409
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst

cc @max-sixty

```python
print(len(cloudpickle.dumps(da.chunk(lat=1, lon=1).map_blocks(lambda x: x))))
# 779354739 -> 47699827

print(len(cloudpickle.dumps(da.chunk(lat=1, lon=1).drop_vars(da.indexes).map_blocks(lambda x: x))))
# 15981508
```
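The same measurement can be made on a small, self-contained example (the array, chunk sizes, and resulting byte counts below are illustrative assumptions; the numbers quoted above come from a much larger dataset):

```python
import cloudpickle
import numpy as np
import xarray as xr

# Small illustrative lat/lon array; requires dask for .chunk() / .map_blocks().
da = xr.DataArray(
    np.random.rand(90, 180),
    dims=("lat", "lon"),
    coords={"lat": np.arange(90), "lon": np.arange(180)},
    name="data",
)

# Size in bytes of the pickled map_blocks result, including its task graph.
graph_bytes = len(cloudpickle.dumps(da.chunk(lat=1, lon=1).map_blocks(lambda x: x)))
print(graph_bytes)
```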

This is a quick attempt. I think we can generalize this to minimize duplication.

The downside is that the graphs are not totally embarrassingly parallel any more.

[Task graph visualization: this PR vs. main]

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8412/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
540451721 MDExOlB1bGxSZXF1ZXN0MzU1MjU4NjMy 3646 [WIP] GroupBy plotting dcherian 2448579 open 0     7 2019-12-19T17:26:39Z 2022-06-09T14:50:17Z   MEMBER   1 pydata/xarray/pulls/3646
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

This adds plotting methods to GroupBy objects so that it's easy to plot each group as a facet. I'm finding this super helpful in my current research project.

It's pretty self-contained, mostly just adding `map_groupby*` methods to `FacetGrid`. But that's because I make `GroupBy` mimic the underlying DataArray by adding `coords`, `attrs` and `__getitem__`.
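A heavily simplified sketch of that mimicking idea (this is not the PR's code; the wrapper class below and its use of the first group as a metadata template are illustrative assumptions):

```python
class DataArrayLikeGroupBy:
    """Illustrative wrapper: expose DataArray-style metadata on a GroupBy."""

    def __init__(self, groupby):
        self._groupby = groupby
        # Iterating a GroupBy yields (label, group) pairs; use the first
        # group purely as a template for metadata lookups.
        _, self._template = next(iter(groupby))

    @property
    def coords(self):
        return self._template.coords

    @property
    def attrs(self):
        return self._template.attrs

    def __getitem__(self, key):
        return self._template[key]
```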

This still needs more tests but I would like feedback on the feature and the implementation.

Example

```python
import numpy as np
import xarray as xr

time = np.arange(80)
da = xr.DataArray(5 * np.sin(2 * np.pi * time / 10), coords={"time": time}, dims="time")
da["period"] = da.time.where((time % 10) == 0).ffill("time") / 10
da.plot()
```

```python
da.groupby("period").plot(col="period", col_wrap=4)
```

```python
da = da.expand_dims(y=10)
da.groupby("period").plot(col="period", col_wrap=4, sharex=False, sharey=True, robust=True)
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3646/reactions",
    "total_count": 3,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
507599878 MDExOlB1bGxSZXF1ZXN0MzI4NTU4Mjg3 3406 Drop groups associated with nans in group variable dcherian 2448579 closed 0     7 2019-10-16T04:04:46Z 2022-01-05T18:57:07Z 2019-10-28T23:46:41Z MEMBER   0 pydata/xarray/pulls/3406
  • [x] Closes #2383
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
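A minimal illustration of the behaviour this PR adds (illustrative data; elements whose group label is NaN no longer form their own group):

```python
import numpy as np
import xarray as xr

# The third element has a NaN group label, so it is dropped from the groupby.
da = xr.DataArray([1.0, 2.0, 3.0, 4.0], dims="x")
da["labels"] = ("x", [1.0, 1.0, np.nan, 2.0])
print(da.groupby("labels").mean())  # groups 1.0 and 2.0 only
```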
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3406/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
819003369 MDExOlB1bGxSZXF1ZXN0NTgyMTc1Mjg5 4977 Use numpy & dask sliding_window_view for rolling dcherian 2448579 closed 0     7 2021-03-01T15:54:22Z 2021-03-26T19:50:53Z 2021-03-26T19:50:50Z MEMBER   0 pydata/xarray/pulls/4977

Should merge after https://github.com/dask/dask/pull/7234 is merged

  • [x] Closes #3277, closes #2531, closes #2532, closes #2514
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
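For context, a small sketch of the rolling-window construction that this PR reimplements on top of `sliding_window_view` (the data here is illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="time")

# Materialize rolling windows along a new "window" dimension; with this PR the
# strided view underneath comes from numpy's / dask's sliding_window_view.
windows = da.rolling(time=3).construct("window")
print(windows.shape)  # (10, 3)
```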
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4977/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
573972855 MDExOlB1bGxSZXF1ZXN0MzgyMzgyODA5 3818 map_blocks: Allow passing dask-backed objects in args dcherian 2448579 closed 0     7 2020-03-02T13:26:12Z 2020-06-11T18:22:42Z 2020-06-07T16:13:35Z MEMBER   0 pydata/xarray/pulls/3818
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

It parses args and breaks any xarray objects into appropriate blocks before passing them on to the user function.

e.g.

```python
import numpy as np
import xarray as xr

da1 = xr.DataArray(
    np.ones((10, 20)),
    dims=["x", "y"],
    coords={"x": np.arange(10), "y": np.arange(20)},
).chunk({"x": 5, "y": 4})
da1


def sumda(da1, da2):
    # print(da1.shape)
    # print(da2.shape)
    return da1 - da2


da3 = (da1 + 1).isel(x=1, drop=True).rename({"y": "k"})
mapped = xr.map_blocks(sumda, da1, args=[da3])
xr.testing.assert_equal(da1 - da3, mapped)  # passes
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3818/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
558230156 MDExOlB1bGxSZXF1ZXN0MzY5NjYwMzg2 3737 Fix/rtd dcherian 2448579 closed 0     7 2020-01-31T16:22:38Z 2020-03-19T19:30:50Z 2020-01-31T17:10:02Z MEMBER   0 pydata/xarray/pulls/3737
  1. Python 3.8 is not allowed on RTD (yet).
  2. I pinned a few versions (based on the env solution obtained locally). This seems to have fixed the memory problem.
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3737/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
475730053 MDExOlB1bGxSZXF1ZXN0MzAzNDIzNjI0 3175 Add join='override' dcherian 2448579 closed 0     7 2019-08-01T14:53:52Z 2019-08-16T22:26:54Z 2019-08-16T22:26:45Z MEMBER   0 pydata/xarray/pulls/3175

This adds `join='override'`, which checks that indexes along a dimension are the same size and then overwrites those indexes with the indexes from the first object.

I definitely need help and feedback on this.

  • [x] With #3102 this partially helps with #2039, #2217
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
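A minimal sketch of the behaviour described above (coordinate values are illustrative):

```python
import numpy as np
import xarray as xr

# Two objects whose "x" indexes have the same size but different values.
a = xr.DataArray(np.ones(3), dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray(np.ones(3), dims="x", coords={"x": [10, 20, 30]})

# join="override" skips index comparison and copies the first object's index.
_, b_aligned = xr.align(a, b, join="override")
print(b_aligned.x.values)  # [0 1 2]
```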
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3175/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
322200856 MDExOlB1bGxSZXF1ZXN0MTg3MzkwNjcz 2118 Add "awesome xarray" list to faq. dcherian 2448579 closed 0     7 2018-05-11T07:45:59Z 2018-05-14T21:19:51Z 2018-05-14T21:04:31Z MEMBER   0 pydata/xarray/pulls/2118

Partially addresses #1850; closes #946.

I tried to make an "awesome xarray" list by doing a GitHub search for 'xarray'. I only included packages that looked like they were intended for general use. Also, I moved the list from internals.rst to faq.rst.

Let me know if there are any I've missed or any that should be added.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2118/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);