
issues


7 rows where comments = 7, state = "closed" and user = 6213168 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
#907: unstack() treats string coords as objects
id: 166441031 · node_id: MDU6SXNzdWUxNjY0NDEwMzE= · user: crusaderky (6213168) · state: closed · locked: 0 · comments: 7 · created_at: 2016-07-19T21:33:28Z · updated_at: 2022-09-27T12:11:36Z · closed_at: 2022-09-27T12:11:35Z · author_association: MEMBER

unstack() should be smart enough to recognise that all labels in a coord are strings, and convert them to numpy strings. This is particularly relevant e.g. if you want to dump the xarray object to netcdf and then read it with a non-python library.

```python
import xarray

a = xarray.DataArray(
    [[1, 2], [3, 4]],
    dims=['x', 'y'],
    coords={'x': ['x1', 'x2'], 'y': ['y1', 'y2']},
)
a
```

```
<xarray.DataArray (x: 2, y: 2)>
array([[1, 2],
       [3, 4]])
Coordinates:
  * y        (y) <U2 'y1' 'y2'
  * x        (x) <U2 'x1' 'x2'
```

```python
a.stack(s=['x', 'y']).unstack('s')
```

```
<xarray.DataArray (x: 2, y: 2)>
array([[1, 2],
       [3, 4]])
Coordinates:
  * x        (x) object 'x1' 'x2'
  * y        (y) object 'y1' 'y2'
```
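The conversion the issue asks for can be seen at the plain numpy level. A minimal sketch (numpy only; names illustrative, not xarray's actual code) of turning object-dtype string labels back into fixed-width numpy strings:

```python
import numpy as np

# Labels as unstack() currently leaves them: object dtype.
labels = np.array(['x1', 'x2'], dtype=object)
assert labels.dtype == np.dtype('O')

# The conversion unstack() could perform automatically:
# if every element is a str, cast to a fixed-width unicode dtype.
if all(isinstance(x, str) for x in labels):
    labels = labels.astype(str)  # numpy picks the minimal width, here '<U2'

assert labels.dtype == np.dtype('<U2')
```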

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/907/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#1624: Improve documentation and error validation for set_options(arithmetic_join)
id: 264509098 · node_id: MDU6SXNzdWUyNjQ1MDkwOTg= · user: crusaderky (6213168) · state: closed · locked: 0 · comments: 7 · created_at: 2017-10-11T09:05:49Z · updated_at: 2022-06-25T20:01:07Z · closed_at: 2022-06-25T20:01:07Z · author_association: MEMBER

The documentation for set_options laconically says:

arithmetic_join: DataArray/Dataset alignment in binary operations. Default: 'inner'.

leaving the user wondering what the other options are. Also, the set_options code does not perform any kind of domain check on the possible values. From scanning the code I gathered that the valid values (and their meanings) should be the same as for align(join=...), but I'd like confirmation of that...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1624/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4296: Increase support window of all dependencies
id: 671108068 · node_id: MDExOlB1bGxSZXF1ZXN0NDYxMzM1NDAx · user: crusaderky (6213168) · state: closed · locked: 0 · assignee: crusaderky (6213168) · comments: 7 · created_at: 2020-08-01T18:55:54Z · updated_at: 2020-08-14T09:52:46Z · closed_at: 2020-08-14T09:52:42Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/4296

Closes #4295

Increase the width of the sliding window for minimum supported versions:

  • setuptools: from a 6-month sliding window to hardcoded >= 38.4, and to a 42-month sliding window starting from July 2021
  • dask and distributed: from a 6-month sliding window to hardcoded >= 2.9, and to a 12-month sliding window starting from January 2021
  • all other libraries: from a 6-month to a 12-month sliding window
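The sliding-window policy can be sketched as follows (stdlib only; the release data and the 30-day month approximation are illustrative, not the actual dask release history):

```python
from datetime import date, timedelta

def minimum_supported_version(releases, today, window_months):
    """Pick the minimum supported version under a sliding-window policy:
    the newest release that is at least `window_months` old, falling back
    to the oldest release if every release is newer than the window.

    `releases` is a list of (version, release_date) sorted oldest first.
    """
    cutoff = today - timedelta(days=window_months * 30)  # rough month length
    old_enough = [v for v, released in releases if released <= cutoff]
    return old_enough[-1] if old_enough else releases[0][0]

# Illustrative data only.
releases = [
    ("2.7", date(2019, 11, 1)),
    ("2.8", date(2019, 12, 1)),
    ("2.9", date(2020, 1, 1)),
    ("2.10", date(2020, 3, 1)),
]
print(minimum_supported_version(releases, date(2020, 8, 1), 6))  # 2.9
```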

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4296/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#3533: 2x~5x speed up for isel() in most cases
id: 522935511 · node_id: MDExOlB1bGxSZXF1ZXN0MzQxMDM3NTg5 · user: crusaderky (6213168) · state: closed · locked: 0 · comments: 7 · created_at: 2019-11-14T15:34:24Z · updated_at: 2019-12-05T16:45:40Z · closed_at: 2019-12-05T16:39:40Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3533

Yet another major improvement for #2799.

Achieve a 2x to 5x boost in isel performance when slicing small arrays by int, slice, list of int, scalar ndarray, or 1-dimensional ndarray.

```python
import xarray

da = xarray.DataArray([[1, 2], [3, 4]], dims=['x', 'y'])
v = da.variable
a = da.variable.values
ds = da.to_dataset(name="d")

ds_with_idx = xarray.Dataset({
    'x': [10, 20],
    'y': [100, 200],
    'd': (('x', 'y'), [[1, 2], [3, 4]]),
})
da_with_idx = ds_with_idx.d
```

before -> after

```python
%timeit a[0]                               # 121 ns
%timeit v[0]                               # 7 µs
%timeit v.isel(x=0)                        # 10 µs
%timeit da[0]                              # 65 µs -> 15 µs
%timeit da.isel(x=0)                       # 63 µs -> 13 µs
%timeit ds.isel(x=0)                       # 48 µs -> 24 µs
%timeit da_with_idx[0]                     # 209 µs -> 82 µs
%timeit da_with_idx.isel(x=0, drop=False)  # 135 µs -> 34 µs
%timeit da_with_idx.isel(x=0, drop=True)   # 101 µs -> 34 µs
%timeit ds_with_idx.isel(x=0, drop=False)  # 90 µs -> 49 µs
%timeit ds_with_idx.isel(x=0, drop=True)   # 65 µs -> 49 µs
```

Marked as WIP because this requires running the asv suite to verify there are no regressions for large arrays. (On a separate note, we really need to add the small-size cases to asv, as discussed in #3382.)

This profoundly alters one of the most important methods in xarray, and I must confess it makes me nervous, particularly as I am unsure whether the test coverage of DataArray.isel() is as thorough as that of Dataset.isel().

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3533/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#3340: CI environments overhaul
id: 497945632 · node_id: MDExOlB1bGxSZXF1ZXN0MzIwOTgwNzIw · user: crusaderky (6213168) · state: closed · locked: 0 · comments: 7 · created_at: 2019-09-24T22:01:10Z · updated_at: 2019-09-25T01:50:08Z · closed_at: 2019-09-25T01:40:55Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3340

Preparatory CI work for #3222.

  • py36 and py37 are now identical
  • Many optional dependencies were missing in one test suite or another (see details below)
  • Tests that require hypothesis now always run if hypothesis is installed
  • py37-windows.yml requirements file has been rebuilt starting from py37.yml
  • Sorted requirements files alphabetically for better maintainability
  • Added black. This is not needed by CI, but I personally use these yaml files to deploy my dev environment and I would expect many more developers to do the same. Alternatively, we could go the other way around and remove flake8 from everywhere and mypy from py36 and py37-windows. IMHO the marginal speedup would not be worth the complication.

Added packages to py36.yml (net of changes in order):

  • black
  • hypothesis
  • nc-time-axis
  • numba
  • numbagg
  • pynio (https://github.com/pydata/xarray/issues/3154 seems to be now fixed upstream)
  • sparse

Added packages to py37.yml (net of changes in order):

  • black
  • cdms2
  • hypothesis
  • iris>=1.10
  • numba (previously implicitly installed from pip by numbagg; now installed from conda)
  • pynio

Added packages to py37-windows.yml (net of changes in order):

  • black
  • bottleneck
  • flake8
  • hypothesis
  • iris>=1.10
  • lxml
  • mypy==0.720
  • numba
  • numbagg
  • pseudonetcdf>=3.0.1
  • pydap
  • sparse
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3340/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#3065: kwargs.pop() cleanup
id: 462401539 · node_id: MDExOlB1bGxSZXF1ZXN0MjkzMTAxODQx · user: crusaderky (6213168) · state: closed · locked: 0 · comments: 7 · created_at: 2019-06-30T12:47:07Z · updated_at: 2019-07-09T20:06:13Z · closed_at: 2019-07-01T01:58:50Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3065
  • Clean up everywhere the pattern `def my_func(*args, **kwargs): my_optional_arg = kwargs.pop('my_optional_arg', None)`, which was inherited from Python 2's inability to place named keyword arguments after *args.

  • Fix bug in SplineInterpolator where the __init__ method would write to the class attributes of BaseInterpolator.

  • map_dataarray was unintentionally and subtly relying on _process_cmap_cbar_kwargs to modify the kwargs in place. _process_cmap_cbar_kwargs is now strictly read-only and the modifications in kwargs have been made explicit in the caller function.
  • Rename all 'kwds' to 'kwargs' for the sake of consistency.
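The cleanup in the first bullet can be sketched on a hypothetical function (names illustrative, not the actual xarray functions touched by this PR):

```python
# Old Python-2-compatible pattern: optional argument smuggled through
# **kwargs, with a manual check for leftovers.
def concat_old(*objs, **kwargs):
    dim = kwargs.pop("dim", None)
    if kwargs:
        raise TypeError(f"unexpected keyword arguments: {sorted(kwargs)}")
    return (objs, dim)

# Python 3 allows keyword-only arguments after *args, so the pop()
# and the manual error check disappear entirely.
def concat_new(*objs, dim=None):
    return (objs, dim)
```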
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3065/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#1551: Load nonindex coords ahead of concat()
id: 254841785 · node_id: MDExOlB1bGxSZXF1ZXN0MTM5MDI5NzMx · user: crusaderky (6213168) · state: closed · locked: 0 · milestone: 0.10 (2415632) · comments: 7 · created_at: 2017-09-02T23:19:03Z · updated_at: 2017-10-09T23:32:50Z · closed_at: 2017-10-09T21:15:31Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/1551
  • [x] Closes #1521
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1551/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
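The filtered view described at the top of the page (7 rows where comments = 7, state = "closed" and user = 6213168, sorted by updated_at descending) corresponds to a query along these lines. A runnable sketch against a trimmed-down copy of the schema (sample rows are illustrative, not the full table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Subset of the issues schema above, just the columns the filter touches.
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        user INTEGER, state TEXT, comments INTEGER, updated_at TEXT
    )"""
)
# Two sample rows; only the first matches the page's filter.
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (166441031, 907, "unstack() treats string coords as objects",
         6213168, "closed", 7, "2022-09-27T12:11:36Z"),
        (264509099, 1625, "some other issue",
         99999, "open", 2, "2022-06-25T20:01:07Z"),
    ],
)

rows = conn.execute(
    """SELECT number, title FROM issues
       WHERE comments = 7 AND state = 'closed' AND user = 6213168
       ORDER BY updated_at DESC"""
).fetchall()
print(rows)  # [(907, 'unstack() treats string coords as objects')]
```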
Powered by Datasette · Queries took 1356.43ms · About: xarray-datasette