issues

6 rows where comments = 8, type = "pull" and user = 14371165 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
1953053810 PR_kwDOAMm_X85dURGi 8344 Add mean to NamedArray._array_api Illviljan 14371165 open 0     8 2023-10-19T21:05:06Z 2023-12-19T17:49:22Z   MEMBER   1 pydata/xarray/pulls/8344
  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
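Since the PR body above is only the template checklist, here is a minimal, hypothetical sketch of what an array-API-style `mean` wrapper can look like; it is illustrative only and is not the code added to `NamedArray._array_api` in this PR.

```python
# Hypothetical sketch only; not xarray's NamedArray._array_api implementation.
import numpy as np

def mean(x, /, *, axis=None, keepdims=False):
    # Array-API-style reductions are plain functions that delegate to the
    # underlying array library; plain NumPy stands in for that library here.
    return np.mean(x, axis=axis, keepdims=keepdims)

print(mean(np.arange(6.0).reshape(2, 3), axis=0))  # -> [1.5 2.5 3.5]
```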
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8344/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
970245117 MDExOlB1bGxSZXF1ZXN0NzEyMjIzNzc2 5704 Allow in-memory arrays with open_mfdataset Illviljan 14371165 open 0     8 2021-08-13T09:50:26Z 2023-04-29T06:58:26Z   MEMBER   0 pydata/xarray/pulls/5704

The docstring seems to imply that it's possible to get in-memory arrays: https://github.com/pydata/xarray/blob/4bb9d9c6df77137f05e85c7cc6508fe7a93dc0e4/xarray/backends/api.py#L732

But it doesn't seem possible because of: https://github.com/pydata/xarray/blob/4bb9d9c6df77137f05e85c7cc6508fe7a93dc0e4/xarray/backends/api.py#L899

This PR removes that `or` check, changes the default to `chunks={}`, and fixes the failing tests.
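A rough usage sketch of the behaviour described above (file names are made up, and the exact semantics of `chunks` here are an assumption based on this PR description, not a statement of xarray's final API):

```python
import xarray as xr

# chunks={} is the proposed default: lazy, dask-backed variables.
ds_lazy = xr.open_mfdataset(["part1.nc", "part2.nc"], chunks={})

# chunks=None would return in-memory (NumPy-backed) variables,
# which is what the docstring implies should be possible.
ds_mem = xr.open_mfdataset(["part1.nc", "part2.nc"], chunks=None)
```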

  • [x] Noticed in #5689
  • [ ] Closes #7792
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5704/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
931016490 MDExOlB1bGxSZXF1ZXN0Njc4NTc5MjIx 5542 Do not transpose 1d arrays during interpolation Illviljan 14371165 open 0     8 2021-06-27T20:56:13Z 2022-10-12T20:12:11Z   MEMBER   0 pydata/xarray/pulls/5542

Seems a waste of time to transpose 1d arrays.
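For illustration (a generic NumPy example, not xarray's interpolation code path): transposing a 1-D array never changes its contents, so the transpose call can be skipped whenever `ndim < 2`.

```python
import numpy as np

x = np.arange(5)
y = x.transpose()        # builds a new view object but leaves the data untouched
assert (y == x).all()    # a transposed 1-D array is identical to the original
```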

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5542/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
957432870 MDExOlB1bGxSZXF1ZXN0NzAwODYwMzY4 5661 Speed up _mapping_repr Illviljan 14371165 closed 0     8 2021-08-01T08:44:17Z 2022-08-12T09:07:44Z 2021-08-02T19:45:16Z MEMBER   0 pydata/xarray/pulls/5661

Creating an ordered list for filtering purposes using `.items()` turns out to be rather slow. Use `.keys()` instead, as that doesn't trigger a bunch of DataArray initializations.

  • [x] Passes pre-commit run --all-files

Test case:

```python
import numpy as np
import xarray as xr

a = np.arange(0, 2000)
data_vars = dict()
for i in a:
    data_vars[f"long_variable_name_{i}"] = xr.DataArray(
        name=f"long_variable_name_{i}",
        data=np.arange(0, 20),
        dims=[f"long_coord_name_{i}_x"],
        coords={f"long_coord_name_{i}_x": np.arange(0, 20) * 2},
    )
ds0 = xr.Dataset(data_vars)
ds0.attrs = {f"attr_{k}": 2 for k in a}
```

Before:

```python
%timeit print(ds0)
14.6 s ± 215 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

After:

```python
%timeit print(ds0)
120 ms ± 2.06 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
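For context, a minimal sketch of the difference being exploited (illustrative only, not the actual `_mapping_repr` code), using the `ds0` built above:

```python
# Cheap: Dataset.keys() only yields the variable names.
names_only = list(ds0.keys())

# Slow: Dataset.items() constructs a DataArray for every variable.
full_pairs = list(ds0.items())
```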

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5661/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
830918156 MDExOlB1bGxSZXF1ZXN0NTkyMzc2Mzk2 5031 Keep coord attrs when interpolating Illviljan 14371165 closed 0     8 2021-03-13T15:05:39Z 2021-05-18T18:16:10Z 2021-04-27T07:00:08Z MEMBER   0 pydata/xarray/pulls/5031
  • [x] Closes #4239
  • [x] Closes #4839
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5031/reactions",
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
    xarray 13221727 pull
777526340 MDExOlB1bGxSZXF1ZXN0NTQ3Nzk5MDk2 4750 Limit number of data rows shown in repr Illviljan 14371165 closed 0     8 2021-01-02T21:14:50Z 2021-01-04T02:13:52Z 2021-01-04T02:13:52Z MEMBER   0 pydata/xarray/pulls/4750
  • [x] Closes #4736
  • [x] Tests added
  • [x] Passes isort . && black . && mypy . && flake8
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

Test example:

```python
import dask.array as da
import numpy as np
import xarray as xr

a = np.arange(0, 2000)
b = np.core.defchararray.add("long_variable_name", a.astype(str))
c = np.arange(0, 30)
d = np.core.defchararray.add("attr_", c.astype(str))
e = {k: 2 for k in d}

coords = dict(time=da.array([0, 1]))
data_vars = dict()
for v in b:
    data_vars[v] = xr.DataArray(
        name=v,
        data=da.array([3, 4]),
        dims=["time"],
        coords=coords,
    )
ds0 = xr.Dataset(data_vars)
ds0.attrs = e
```

Looks like this with 24 max rows of interesting data:

```python
print(ds0)
Out[15]:
<xarray.Dataset>
Dimensions:                 (time: 2)
Coordinates:
  * time                    (time) int32 0 1
Data variables:
    long_variable_name0     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name2     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name3     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name4     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name5     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name6     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name7     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name8     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name9     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name10    (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name11    (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    ...                      ...
    long_variable_name1988  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1989  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1990  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1991  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1992  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1993  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1994  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1995  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1996  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1997  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1998  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1999  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
Attributes:
    attr_0:   2
    attr_1:   2
    attr_2:   2
    attr_3:   2
    attr_4:   2
    attr_5:   2
    attr_6:   2
    attr_7:   2
    attr_8:   2
    attr_9:   2
    attr_10:  2
    attr_11:  2
    ...       ...
    attr_18:  2
    attr_19:  2
    attr_20:  2
    attr_21:  2
    attr_22:  2
    attr_23:  2
    attr_24:  2
    attr_25:  2
    attr_26:  2
    attr_27:  2
    attr_28:  2
    attr_29:  2
```

With 20 rows of interesting data:

```python
xr.set_options(display_max_rows=20)
ds0
Out[26]:
<xarray.Dataset>
Dimensions:                 (time: 2)
Coordinates:
  * time                    (time) int32 0 1
Data variables:
    long_variable_name0     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name2     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name3     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name4     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name5     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name6     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name7     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name8     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name9     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    ...
    long_variable_name1990  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1991  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1992  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1993  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1994  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1995  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1996  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1997  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1998  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1999  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
Attributes:
    attr_0:   2
    attr_1:   2
    attr_2:   2
    attr_3:   2
    attr_4:   2
    attr_5:   2
    attr_6:   2
    attr_7:   2
    attr_8:   2
    attr_9:   2
    ...
    attr_20:  2
    attr_21:  2
    attr_22:  2
    attr_23:  2
    attr_24:  2
    attr_25:  2
    attr_26:  2
    attr_27:  2
    attr_28:  2
    attr_29:  2
```

With 16 rows of interesting data:

```python
xr.set_options(display_max_rows=16)
ds0
Out[28]:
<xarray.Dataset>
Dimensions:                 (time: 2)
Coordinates:
  * time                    (time) int32 0 1
Data variables:
    long_variable_name0     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name2     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name3     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name4     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name5     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name6     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name7     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    ...
    long_variable_name1992  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1993  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1994  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1995  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1996  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1997  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1998  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1999  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
Attributes:
    attr_0:   2
    attr_1:   2
    attr_2:   2
    attr_3:   2
    attr_4:   2
    attr_5:   2
    attr_6:   2
    attr_7:   2
    ...
    attr_22:  2
    attr_23:  2
    attr_24:  2
    attr_25:  2
    attr_26:  2
    attr_27:  2
    attr_28:  2
    attr_29:  2
```

With 12 rows of interesting data:

```python
xr.set_options(display_max_rows=12)
print(ds0)
Out[79]:
<xarray.Dataset>
Dimensions:                 (time: 2)
Coordinates:
  * time                    (time) int32 0 1
Data variables:
    long_variable_name0     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name2     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name3     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name4     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name5     (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    ...
    long_variable_name1994  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1995  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1996  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1997  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1998  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1999  (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
Attributes:
    attr_0:   2
    attr_1:   2
    attr_2:   2
    attr_3:   2
    attr_4:   2
    attr_5:   2
    ...
    attr_24:  2
    attr_25:  2
    attr_26:  2
    attr_27:  2
    attr_28:  2
    attr_29:  2
```
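As a usage note (not part of this PR's diff): `xr.set_options` also works as a context manager, so the row limit can be applied temporarily, e.g. with the `ds0` built above.

```python
# Temporarily limit the repr to 12 rows; the previous setting is restored on exit.
with xr.set_options(display_max_rows=12):
    print(ds0)
```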

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4750/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);