issues


2 rows where state = "open" and user = 23738400 sorted by updated_at descending

Issue 6620: Using the html repr in the documentation
  • id: 1240234432 · node_id: I_kwDOAMm_X85J7HnA · number: 6620
  • user: OriolAbril (23738400) · state: open · locked: 0 · comments: 1
  • created_at: 2022-05-18T16:53:46Z · updated_at: 2022-05-18T17:46:26Z
  • author_association: CONTRIBUTOR

body:

What is your issue?

Most (if not all) of the xarray documentation is written as rst files using the ipython directive. Because of this, the html repr is not used in the documentation. I find the html repr much more informative and intuitive, especially for beginners, and I think it would be great to use it in the documentation. There are multiple ways to do this (not necessarily incompatible with one another):

  • Use jupyter-sphinx instead of the ipython directive to run and embed code cells from rst files. I use this in the xarray-einstats documentation, for example.
  • Use jupyter notebooks instead of rst. We use this in the arviz and xarray-einstats docs. However, in order to keep using all the sphinx roles and directives currently in use, the sphinx configuration would need to be modified to use myst-nb instead of nbsphinx.
  • Use myst notebooks instead of rst. Also used in ArviZ; also needs myst-nb instead of nbsphinx.

AFAIK, nbsphinx can be swapped for myst-nb without making any changes to the documentation; rst files could then be progressively converted to ipynb, MyST, or any other format supported by jupytext. rst, markdown, and notebook sources can all be used at the same time to generate the documentation, and they can link to one another with sphinx roles and cross-references.
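
For concreteness, a minimal sketch of what the swap could look like in a sphinx conf.py; the extension entry points ("nbsphinx", "myst_nb", "jupyter_sphinx") are the packages' real names, but the surrounding list is hypothetical and not taken from xarray's actual doc/conf.py:

```python
# Hypothetical excerpt of a Sphinx conf.py illustrating the swap described above;
# only the extension entry points are real, the rest is assumed.
extensions = [
    # "nbsphinx",       # current notebook renderer (per the issue), replaced by the line below
    "myst_nb",          # builds .ipynb and MyST markdown notebooks, so the html repr is kept
    # "jupyter_sphinx", # alternative: execute and embed code cells directly from rst files
]
```

Existing rst pages would keep building unchanged; only pages converted to ipynb or MyST markdown would go through myst-nb, which is what makes the progressive migration described above possible.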

Is this something that sounds interesting? I could update the infrastructure at some point whenever I have time, convert a page (to any or several of the options above) as an example, and then let other people take over.

reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6620/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  • repo: xarray (13221727) · type: issue
Issue 3032: apply_ufunc should preemptively broadcast
  • id: 457716471 · node_id: MDU6SXNzdWU0NTc3MTY0NzE= · number: 3032
  • user: OriolAbril (23738400) · state: open · locked: 0 · comments: 11
  • created_at: 2019-06-18T22:02:36Z · updated_at: 2019-06-19T22:27:27Z
  • author_association: CONTRIBUTOR

body:

Code Sample

I am having some trouble understanding the apply_ufunc broadcasting rules. As I also had trouble understanding the docs, I am not 100% sure this is a bug, but I am fairly confident it is. I will try to explain why with the following really simple example.

```python
import xarray as xr
import numpy as np

a = xr.DataArray(data=np.random.normal(size=(7, 3)), dims=["dim1", "dim2"])
c = xr.DataArray(data=np.random.normal(size=(5, 6)), dims=["dim3", "dim4"])


def func(x, y):
    print(x.shape)
    print(y.shape)
    return
```

The function defined always makes the apply_ufunc call raise an error (it returns None), but this is intended: the shapes have already been printed by then, and it keeps the example as simple as possible.

Problem description

```python
xr.apply_ufunc(func, a, c)
# Out:
# (7, 3, 1, 1)
# (5, 6)
```

Here, a has been broadcast after a fashion, but I would expect the shapes of a and c to be the same as when calling xr.broadcast: there are no input core dims, so all dimensions should be broadcast. However:

```python
print([ary.shape for ary in xr.broadcast(a, c)])
# [(7, 3, 5, 6), (7, 3, 5, 6)]
```

Using different input core dims does not get rid of the problem; instead, I believe it reveals some more issues:

```python
xr.apply_ufunc(func, a, c, input_core_dims=[["dim1"], []])
# (3, 1, 1, 7), expected (3, 5, 6, 7)
# (5, 6), expected (3, 5, 6)

xr.apply_ufunc(func, a, c, input_core_dims=[[], ["dim3"]])
# (7, 3, 1), expected (7, 3, 6)
# (6, 5), expected (7, 3, 6, 5)

xr.apply_ufunc(func, a, c, input_core_dims=[["dim1"], ["dim3"]])
# (3, 1, 7), expected (3, 6, 7)
# (6, 5), expected (3, 6, 5)
```

Is this current behaviour what should be expected?
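
As a point of reference, here is a minimal sketch of the behaviour I would expect, obtained by broadcasting explicitly before the call; it reuses the a, c, and func defined above, and the printed shapes follow from the xr.broadcast result shown earlier:

```python
# Explicit-broadcast workaround (reuses a, c and func from the example above).
# Both inputs now share all four dims, so func should see identical shapes,
# matching the xr.broadcast output shown earlier.
xr.apply_ufunc(func, *xr.broadcast(a, c))
# (7, 3, 5, 6)
# (7, 3, 5, 6)
# ...followed by the same intentional error as before, since func returns None.
```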

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
python-bits: 64
OS: Linux
OS-release: 4.15.0-52-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.2
libnetcdf: 4.6.3
xarray: 0.12.1
pandas: 0.24.2
numpy: 1.16.4
scipy: 1.3.0
netCDF4: 1.5.1.2
pydap: None
h5netcdf: None
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.1.0
cartopy: None
seaborn: None
setuptools: 41.0.0
pip: 19.1.1
conda: None
pytest: 4.5.0
IPython: 7.5.0
sphinx: 2.0.1
```
reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3032/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  • repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
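
For completeness, a sketch of the filter behind this page ("2 rows where state = 'open' and user = 23738400 sorted by updated_at descending"), run against a local SQLite copy of the database; the database filename is hypothetical.

```python
import sqlite3

# Hypothetical local copy of the github-to-sqlite database backing this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at
    FROM issues
    WHERE state = 'open' AND [user] = 23738400
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```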