

issues


3 rows where user = 26591824 sorted by updated_at descending


Issue #7439 · Add clarifying language to contributor's guide
id: 1532853152 (I_kwDOAMm_X85bXXug) · opened by paigem (user 26591824) · state: closed · locked: no · comments: 3 · created: 2023-01-13T20:11:57Z · updated: 2023-03-13T17:51:26Z · closed: 2023-03-13T17:51:26Z · author_association: CONTRIBUTOR

What is your issue?

I am going through the contributor's guide for xarray, and I have a few suggested updates to make the instructions clearer to relatively new contributors like me!

General questions

  • If making updates to docstrings, I am unclear whether I should use the virtual env xarray-tests or xarray-docs. I assumed xarray-docs, since I am only updating docstrings, which are fed into the documentation. But this isn't entirely clear, since the file I updated is not in the docs/ folder but at xarray/backends/api.rst.
  • If only updating docs or docstrings, should I still run pytest locally before pushing? Or do those tests only apply to code updates? Either way, this should be made clear in the contributing guide.

Suggested updates

  • Under Code Formatting:
      • Contributors are advised to use pre-commit via pre-commit install, but when I tried this I found that pre-commit is not installed in the virtual env xarray-docs; it does appear to be installed in the virtual env xarray-tests (yml file). Should I run pre-commit when updating docs? If so, we should add pre-commit to the xarray-docs environment (yml file).
  • Under Building the Documentation:
      • Add a sentence making clear that users can open the built HTML files in a local browser to verify that the updated documentation looks as expected. (This is a minor suggestion, but we might as well be as explicit as possible, especially since docs are more likely to be updated by newer contributors.)
      • Suggested wording: "Then you can find the HTML output in the folder xarray/doc/_build/html/. You can preview the HTML files in your local browser to verify that you see the expected behavior based on your changes."
Reactions: total 2 (heart: 2)
state_reason: completed · repo: xarray (13221727) · type: issue
Pull request #7438 · Refer to open_zarr in open_dataset docstring
id: 1532822084 (PR_kwDOAMm_X85HW5C_) · opened by paigem (user 26591824) · state: closed · locked: no · comments: 1 · created: 2023-01-13T19:47:21Z · updated: 2023-01-13T21:10:08Z · closed: 2023-01-13T20:50:32Z · author_association: CONTRIBUTOR · draft: no · pull_request: pydata/xarray/pulls/7438

This adds a sentence (shown below) under "chunks" in the docstring of open_dataset(), clarifying current usage for those who want the behavior that open_zarr() used to provide by default:

In order to reproduce the default behavior of xr.open_zarr(...) use `xr.open_dataset(..., engine='zarr', chunks={})`.

  • [x] Closes #7293
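
To make the equivalence concrete, here is a minimal sketch of the two calls the added sentence refers to; the store path "store.zarr" is a hypothetical placeholder, not something named in the PR:

```python
import xarray as xr

# "store.zarr" is a hypothetical local Zarr store, used only for illustration.
# xr.open_zarr() returns a lazily loaded, dask-backed dataset whose chunks
# follow the chunks stored in the Zarr store itself.
ds_zarr = xr.open_zarr("store.zarr")

# Per the sentence added in this PR, passing chunks={} reproduces that
# default behavior through the generic open_dataset() entry point.
ds_equiv = xr.open_dataset("store.zarr", engine="zarr", chunks={})
```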
Reactions: total 2 (+1: 1, heart: 1)
repo: xarray (13221727) · type: pull
Issue #4554 · Unexpected chunking of 3d DataArray in `polyfit()`
id: 732910109 (MDU6SXNzdWU3MzI5MTAxMDk=) · opened by paigem (user 26591824) · state: open · locked: no · comments: 3 · created: 2020-10-30T06:07:34Z · updated: 2021-04-19T15:44:07Z · author_association: CONTRIBUTOR

What happened: When running polyfit() on a 3d chunked xarray DataArray, the output is chunked differently than the input array.

What you expected to happen: I expect the output to have the same chunking as the input.

Minimal Complete Verifiable Example: (from @rabernat in https://github.com/xgcm/xrft/issues/116)

Example: number of chunks decreases

```python
import dask.array as dsa
import xarray as xr

nz, ny, nx = (10, 20, 30)
data = dsa.ones((nz, ny, nx), chunks=(1, 5, nx))
da = xr.DataArray(data, dims=['z', 'y', 'x'])
da.chunks
# -> ((1, 1, 1, 1, 1, 1, 1, 1, 1, 1), (5, 5, 5, 5), (30,))

pf = da.polyfit('x', 1)
pf.polyfit_coefficients.chunks
# -> ((1, 1, 1, 1, 1, 1, 1, 1, 1, 1), (20,), (30,))
# chunks on the y dimension have been consolidated!

pv = xr.polyval(da.x, pf.polyfit_coefficients).transpose('z', 'y', 'x')
pv.chunks
# -> ((1, 1, 1, 1, 1, 1, 1, 1, 1, 1), (20,), (30,))
# and this propagates to polyval

# align back against the original data
(da - pv).chunks
# -> ((1, 1, 1, 1, 1, 1, 1, 1, 1, 1), (5, 5, 5, 5), (30,))
# hides the fact that we have chunk consolidation happening upstream
```

Example: number of chunks increases

```python
nz, ny, nx = (6, 10, 4)
data = dsa.ones((nz, ny, nx), chunks=(2, 10, 2))
da = xr.DataArray(data, dims=['z', 'y', 'x'])
da.chunks
# -> ((2, 2, 2), (10,), (2, 2))

pf = da.polyfit('y', 1)
pf.polyfit_coefficients.chunks
# -> ((2,), (1, 1, 1, 1, 1, 1), (4,))

pv = xr.polyval(da.y, pf.polyfit_coefficients).transpose('z', 'y', 'x')
pv.chunks
# -> ((1, 1, 1, 1, 1, 1), (10,), (4,))

(da - pv).chunks
# -> ((1, 1, 1, 1, 1, 1), (10,), (2, 2))
```

(This discussion started in https://github.com/xgcm/xrft/issues/116 with @rabernat and @navidcy.)
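
A hedged follow-up, not part of the original report: one possible workaround is to re-chunk the coefficients back onto the input's chunking before calling xr.polyval, so that the altered chunking does not propagate. A minimal sketch against the second example:

```python
# Hypothetical workaround (an assumption, not a documented fix): re-chunk
# the coefficients from the second example back onto the input's chunking
# along the retained dimensions before evaluating the fit.
coeffs = pf.polyfit_coefficients.chunk({'z': 2, 'x': 2})
pv = xr.polyval(da.y, coeffs).transpose('z', 'y', 'x')
print(pv.chunks)  # 'z' and 'x' should now match da's (2, 2, 2) and (2, 2)
```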

Environment:

Running on Pangeo Cloud

Output of xr.show_versions():

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 19:08:05) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 4.19.112+
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.16.1
pandas: 1.1.3
numpy: 1.19.2
scipy: 1.5.2
netCDF4: 1.5.4
pydap: installed
h5netcdf: 0.8.1
h5py: 2.10.0
Nio: None
zarr: 2.4.0
cftime: 1.2.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: 1.1.7
cfgrib: 0.9.8.4
iris: None
bottleneck: 1.3.2
dask: 2.30.0
distributed: 2.30.0
matplotlib: 3.3.2
cartopy: 0.18.0
seaborn: None
numbagg: None
pint: 0.16.1
setuptools: 49.6.0.post20201009
pip: 20.2.3
conda: None
pytest: 6.1.1
IPython: 7.18.1
sphinx: 3.2.1
```
Reactions: total 0
repo: xarray (13221727) · type: issue

Table schema:

```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```
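
To reproduce this page's query ("3 rows where user = 26591824 sorted by updated_at descending") directly against the underlying SQLite file, a minimal sketch follows; the filename github.db is an assumption, not something stated on this page:

```python
import sqlite3

# "github.db" is a hypothetical filename for the SQLite database behind this
# Datasette instance; adjust the path to wherever the file actually lives.
conn = sqlite3.connect("github.db")

# Same filter and ordering as the page: issues authored by user 26591824,
# most recently updated first.
rows = conn.execute(
    "SELECT number, type, state, title FROM issues "
    "WHERE [user] = ? ORDER BY updated_at DESC",
    (26591824,),
).fetchall()

for number, kind, state, title in rows:
    print(f"#{number} [{kind}/{state}] {title}")
```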