issues

3 rows where repo = 13221727 and user = 12912489 sorted by updated_at descending
Facets: type (issue 2, pull 1) · state (closed 2, open 1) · repo (xarray 3)
#5201 Fix lag in Jupyter caused by CSS in `_repr_html_` · SimonHeybrock (12912489) · pull (pydata/xarray/pulls/5201) · closed · 29 comments · created 2021-04-21T06:46:28Z · updated 2023-03-28T04:21:43Z · closed 2023-03-28T04:21:43Z · author_association: NONE

What

The CSS used by _repr_html_ (for displaying objects in Jupyter) placed font colors in :root as CSS custom properties. This seems to cause lag in notebooks with more than a couple of dozen cells when running a cell that displays output. We observed this on both Chrome and Firefox, so it is probably not browser-specific.

To reproduce

  • In a new notebook, create a simple array:

    ```python
    import xarray as xr
    import numpy as np

    data = np.random.rand(4)
    a = xr.DataArray(data, coords=[np.arange(4)], dims=['x'])
    ```
  • From a second cell, display `a`:

    ```python
    a
    ```
    This is probably fast, with no noticeable lag.
  • Add 50-100 more cells (they can be empty) and run the second cell again. You may notice a small lag before the array is displayed. Depending on the number of cells in the notebook, and probably the hardware the browser is running on, it can exceed 1 second; it is clearly visible on my 2015 MacBook Pro.
  • Other UI interactions such as switching tabs are probably also affected, but there the effect is less clear.

Fix

  • Set the CSS custom properties not in :root but on the top-level class xr-wrap.
  • TODO: I think the vscode-dark settings may also need to change, but my understanding of CSS is too limited. Can someone suggest how to fix this, or take care of it?
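The scoping change described above can be sketched as follows. This is a minimal illustration, not xarray's actual style sheet: the property name and value are placeholders, only the selector change (:root → .xr-wrap) reflects the proposed fix.

```css
/* Before: custom properties declared on :root are inherited by every
   element on the page, so large notebooks pay the cost repeatedly. */
:root {
  --xr-font-color0: rgba(0, 0, 0, 1);
}

/* After: scoping them to the repr's top-level class confines
   recomputation to the repr's own subtree. */
.xr-wrap {
  --xr-font-color0: rgba(0, 0, 0, 1);
}
```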

Discussion

Interestingly, we ran into this problem independently of xarray: scipp borrowed an early draft of xarray's _repr_html_ implementation (thanks!), but this was before the color configs were placed into :root --- we just happened to add such a color config independently. Since this was a recent change, we managed to find the culprit yesterday (https://github.com/scipp/scipp/pull/1847). And then it occurred to me to check whether xarray has the same problem...

So this makes me think that other projects that define _repr_html_ may well suffer from the same problem. Can we do anything to spread the word, or better, could there be a way to fix this for everyone (in Jupyter)?

Checklist

  • [x] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
Reactions: 2 (heart ×2) · repo: xarray (13221727) · type: pull
#7057 Hook for better control over copy behavior with duck-arrays? · SimonHeybrock (12912489) · issue · open · 0 comments · created 2022-09-20T08:11:12Z · updated 2022-09-20T08:12:22Z · author_association: NONE

Context

By using copy(deep=False), custom operations may avoid copying large amounts of data while still being able to modify, e.g., coord dicts:

```python
tmp = da.copy(deep=False)
del tmp.coords['abc']
# Use tmp
```

When `da` wraps a duck-array with substructure, the current implementation is insufficient:

```python
tmp = da.copy(deep=False)
# Imagine a duck-array similar to numpy.ma but with a dict of masks
del tmp.data.masks['abc']  # Bad: breaks da
# Use tmp
```

Describe the solution you'd like

Currently there does not appear to be a solution to this, unless we know details about the duck array. Therefore, I wonder if we need an additional "hook" that duck-arrays may provide, which Xarray could call to make a non-deep copy?

```python
class MyDuckArray:
    def _copy_shallow_(self):  # TODO: better name
        """Copy everything except buffers"""
        ...
```

```python
# in xarray.Variable
def copy(self, deep=True):
    if deep:
        data = copy.deepcopy(data)
    elif hasattr(data, '_copy_shallow_'):
        data = data._copy_shallow_()
```
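To make the proposal concrete, here is a hedged sketch of how such a hook could behave, using a hypothetical mask-carrying duck array. The class name, the `_copy_shallow_` name, and the masks dict are all illustrative; none of this is an existing xarray or numpy API.

```python
class MaskedDuckArray:
    """Hypothetical duck array: a data buffer plus a dict of named masks."""

    def __init__(self, values, masks):
        self.values = values  # large buffer, expensive to copy
        self.masks = masks    # small dict of named masks

    def _copy_shallow_(self):
        # Share the buffer, but copy the mask *dict* so that adding or
        # removing masks on the copy does not affect the original.
        return MaskedDuckArray(self.values, dict(self.masks))


original = MaskedDuckArray(values=[1.0, 2.0, 3.0],
                           masks={'abc': [False, True, False]})
tmp = original._copy_shallow_()
del tmp.masks['abc']  # safe: does not break `original`
```

With this kind of hook, `Variable.copy(deep=False)` could delegate the decision of what "shallow" means to the duck array itself, instead of guessing at its substructure.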

Additional context

This is the current implementation for Variable; there is no special handling for deep=False: https://github.com/pydata/xarray/blob/716973e41060184beebd64935afe196a805ef481/xarray/core/variable.py#L956-L957

Reactions: none · repo: xarray (13221727) · type: issue
#3509 NEP 18, physical units, uncertainties, and the scipp library? · SimonHeybrock (12912489) · issue · closed · 3 comments · created 2019-11-11T08:45:29Z · updated 2022-09-09T13:10:47Z · closed 2022-09-09T13:10:47Z · author_association: NONE

This is an idea and meant as a discussion starter on a potential route to providing support for physical units and propagation of uncertainties in xarray.

Context

  1. NEP 18 (which, as far as I understand, was pushed by you guys for similar purposes) provides a means to combine such features with xarray using __array_function__, if the underlying array implementation supports it.

  2. I am working on scipp, which (based on a decision I may or may not regret in the future) is reimplementing a lot of features that xarray provides, plus some additional features. Two of these features are physical units and propagation of uncertainties.

  3. scipp.Variable is essentially equivalent to a numpy array with a unit, dimension labels, and an optional array of uncertainties. [*]

  4. scipp implements basic arithmetic operations (and some more) for scipp.Variable, including efficient propagation of uncertainties.

[*] Caveat: scipp's current unit implementation is static and would probably need to be replaced to become useful for a wider audience.

Idea and questions

Can we implement the __array_function__ protocol for scipp.Variable so it can be used with xarray? As far as I can tell this would simply be a lightweight wrapper.

  • Did I understand __array_function__ correctly?
  • Is there anything else I should be aware of?
  • Would anyone be interested in this?

This would amount to using the lower-level parts of scipp, which are quite compact and can be extended to support more data types and more operations in a relatively simple manner (requiring recompilation, since scipp is written in C++).
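A hedged sketch of what such a NEP 18 wrapper could look like. `UnitArray` and its unit handling are purely illustrative stand-ins for scipp.Variable, not scipp's actual API; only `np.concatenate` is dispatched here.

```python
import numpy as np


class UnitArray:
    """Hypothetical minimal duck array: values plus a unit string."""

    def __init__(self, values, unit):
        self.values = np.asarray(values)
        self.unit = unit

    def __array_function__(self, func, types, args, kwargs):
        # NEP 18: numpy calls this instead of its own implementation
        # when a UnitArray appears among the relevant arguments.
        if func is np.concatenate:
            arrays = args[0]
            units = {a.unit for a in arrays}
            if len(units) != 1:
                raise ValueError("unit mismatch")
            return UnitArray(np.concatenate([a.values for a in arrays]),
                             units.pop())
        return NotImplemented  # reject everything else


a = UnitArray([1.0, 2.0], "m")
b = UnitArray([3.0], "m")
c = np.concatenate([a, b])  # dispatched to UnitArray.__array_function__
```

The point of the protocol is exactly this: `np.concatenate` is called unchanged, but the result carries the unit, because numpy hands control to the duck array's `__array_function__`.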

Reactions: 1 (+1 ×1) · state_reason: completed · repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
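The schema above, together with the query shown at the top of the page (repo = 13221727 and user = 12912489, sorted by updated_at descending), can be exercised with Python's built-in sqlite3 module. A minimal sketch, using an abbreviated version of the schema and a single row taken from this page:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated schema: only the columns the query below touches.
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT, user INTEGER,
        state TEXT, updated_at TEXT, repo INTEGER, type TEXT
    )"""
)
conn.execute(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    (863506023, 5201, "Fix lag in Jupyter caused by CSS in `_repr_html_`",
     12912489, "closed", "2023-03-28T04:21:43Z", 13221727, "pull"),
)
# The page's query: filter by repo and user, newest update first.
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = ? AND user = ? ORDER BY updated_at DESC",
    (13221727, 12912489),
).fetchall()
```

The `idx_issues_repo` and `idx_issues_user` indexes defined above exist precisely so that this kind of filter does not require a full table scan.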
Powered by Datasette · About: xarray-datasette