issues
7 rows where comments = 8 and user = 14371165 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1680031454 | I_kwDOAMm_X85kIz7e | 7780 | mypy does not understand output of binary operations | Illviljan 14371165 | open | 0 | 8 | 2023-04-23T13:38:55Z | 2024-04-28T20:07:04Z | MEMBER |
What happened?
When doing operations on numpy arrays and xarray variables, mypy does not understand that the output is always an xarray variable regardless of the order. See the example.

What did you expect to happen?
mypy to pass for the example code.

Minimal Complete Verifiable Example
```Python
import numpy as np
import xarray as xr

x = np.array([1, 2, 4])
v = xr.Variable(["x"], x)

# numpy first:
xv = x * v
xv.values  # error: "ndarray[Any, dtype[bool_]]" has no attribute "values"  [attr-defined]
if isinstance(xv, xr.Variable):
    xv.values

# variable first:
vx = v * x
vx.values
if isinstance(vx, xr.Variable):
    vx.values
```

MVCE confirmation

Relevant log output
No response

Anything else we need to know?
Seen in #7741

Environment
xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.16 (main, Mar 8 2023, 10:39:24) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
libhdf5: 1.10.6
libnetcdf: None
xarray: 2023.4.2
pandas: 2.0.0
numpy: 1.23.5
scipy: 1.10.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.4.0
distributed: 2023.4.0
matplotlib: 3.5.3
cartopy: None
seaborn: 0.12.2
numbagg: None
fsspec: 2023.4.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 67.7.1
pip: 23.1.1
conda: 23.3.1
pytest: 7.3.1
mypy: 1.2.0
IPython: 8.12.0
sphinx: 6.1.3
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/7780/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue | ||||||||
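As an aside, a common way to keep mypy quiet until the numpy/xarray overloads agree is to narrow the result explicitly. The sketch below is not part of the issue; it uses typing.cast instead of the isinstance check shown in the MVCE above.

```python
from typing import cast

import numpy as np
import xarray as xr

x = np.array([1, 2, 4])
v = xr.Variable(["x"], x)

# At runtime numpy defers to xarray, so x * v is a Variable in either order,
# but mypy follows numpy's ndarray overloads; casting narrows the static type.
xv = cast(xr.Variable, x * v)
print(xv.values)
```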
| 1953053810 | PR_kwDOAMm_X85dURGi | 8344 | Add mean to NamedArray._array_api | Illviljan 14371165 | open | 0 | 8 | 2023-10-19T21:05:06Z | 2023-12-19T17:49:22Z | MEMBER | 1 | pydata/xarray/pulls/8344 |
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/8344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | pull | ||||||
| 970245117 | MDExOlB1bGxSZXF1ZXN0NzEyMjIzNzc2 | 5704 | Allow in-memory arrays with open_mfdataset | Illviljan 14371165 | open | 0 | 8 | 2021-08-13T09:50:26Z | 2023-04-29T06:58:26Z | MEMBER | 0 | pydata/xarray/pulls/5704 |
The docstring seems to imply that it's possible to get in-memory arrays:
https://github.com/pydata/xarray/blob/4bb9d9c6df77137f05e85c7cc6508fe7a93dc0e4/xarray/backends/api.py#L732
But it doesn't seem possible because of:
https://github.com/pydata/xarray/blob/4bb9d9c6df77137f05e85c7cc6508fe7a93dc0e4/xarray/backends/api.py#L899
This PR removes that restriction.
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/5704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | pull | ||||||
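Until such a change lands, one workaround is to open the files lazily and then load the result into memory explicitly. This is only a sketch; the file names are hypothetical.

```python
import xarray as xr

# Hypothetical list of netCDF files that concatenate cleanly.
paths = ["part_0.nc", "part_1.nc"]

# open_mfdataset currently always returns dask-backed (chunked) variables;
# calling .load() afterwards converts them to in-memory numpy arrays.
ds = xr.open_mfdataset(paths, combine="by_coords")
ds_in_memory = ds.load()
```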
| 931016490 | MDExOlB1bGxSZXF1ZXN0Njc4NTc5MjIx | 5542 | Do not transpose 1d arrays during interpolation | Illviljan 14371165 | open | 0 | 8 | 2021-06-27T20:56:13Z | 2022-10-12T20:12:11Z | MEMBER | 0 | pydata/xarray/pulls/5542 | Seems a waste of time to transpose 1d arrays.
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/5542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | pull | ||||||
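The rationale is easy to verify with plain numpy: transposing a 1-D array is a no-op, so skipping it during interpolation only removes pointless work. A minimal check (not from the PR):

```python
import numpy as np

a = np.arange(5)

# For 1-D arrays, .T returns the same data with the same shape,
# so the transpose step adds overhead without changing anything.
assert a.T.shape == a.shape == (5,)
assert np.array_equal(a.T, a)
```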
| 957432870 | MDExOlB1bGxSZXF1ZXN0NzAwODYwMzY4 | 5661 | Speed up _mapping_repr | Illviljan 14371165 | closed | 0 | 8 | 2021-08-01T08:44:17Z | 2022-08-12T09:07:44Z | 2021-08-02T19:45:16Z | MEMBER | 0 | pydata/xarray/pulls/5661 |
Creating an ordered list for filtering purposes using
Test case:
```python
import numpy as np
import xarray as xr

a = np.arange(0, 2000)
data_vars = dict()
for i in a:
    data_vars[f"long_variable_name_{i}"] = xr.DataArray(
        name=f"long_variable_name_{i}",
        data=np.arange(0, 20),
        dims=[f"long_coord_name_{i}_x"],
        coords={f"long_coord_name_{i}_x": np.arange(0, 20) * 2},
    )

ds0 = xr.Dataset(data_vars)
ds0.attrs = {f"attr_{k}": 2 for k in a}
```
Before:
After:
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/5661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | pull | |||||
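The PR's "Before:" / "After:" numbers are not included above. A simple way to take a comparable measurement on the ds0 built in the test case (a sketch, not the PR's own benchmark script):

```python
import timeit

# Time repr() of the ds0 Dataset constructed in the test case above.
n = 10
seconds = timeit.timeit(lambda: repr(ds0), number=n) / n
print(f"repr(ds0): {seconds * 1e3:.1f} ms per call")
```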
| 830918156 | MDExOlB1bGxSZXF1ZXN0NTkyMzc2Mzk2 | 5031 | Keep coord attrs when interpolating | Illviljan 14371165 | closed | 0 | 8 | 2021-03-13T15:05:39Z | 2021-05-18T18:16:10Z | 2021-04-27T07:00:08Z | MEMBER | 0 | pydata/xarray/pulls/5031 |
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/5031/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} |
xarray 13221727 | pull | |||||
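For context, the behaviour this PR targets can be illustrated with a small example (a sketch, not taken from the PR): coordinate attributes such as units should survive interp.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(4.0),
    dims="x",
    coords={"x": ("x", np.arange(4.0), {"units": "m"})},
)

# With this fix the attrs of the "x" coordinate are expected to be kept
# on the interpolated result instead of being dropped.
out = da.interp(x=[0.5, 1.5])
print(out["x"].attrs)
```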
| 777526340 | MDExOlB1bGxSZXF1ZXN0NTQ3Nzk5MDk2 | 4750 | Limit number of data rows shown in repr | Illviljan 14371165 | closed | 0 | 8 | 2021-01-02T21:14:50Z | 2021-01-04T02:13:52Z | 2021-01-04T02:13:52Z | MEMBER | 0 | pydata/xarray/pulls/4750 |
Test example:
Looks like this with 24 max rows of interesting data:
With 16 rows of interesting data:
With 12 rows of interesting data:
```python
xr.set_options(display_max_rows=12)
print(ds0)
Out[79]:
<xarray.Dataset>
Dimensions:                  (time: 2)
Coordinates:
  * time                     (time) int32 0 1
Data variables:
    long_variable_name0      (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1      (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name2      (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name3      (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name4      (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name5      (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    ...
    long_variable_name1994   (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1995   (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1996   (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1997   (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1998   (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
    long_variable_name1999   (time) int32 dask.array<chunksize=(2,), meta=np.ndarray>
Attributes:
    attr_0:   2
    attr_1:   2
    attr_2:   2
    attr_3:   2
    attr_4:   2
    attr_5:   2
    ...
    attr_24:  2
    attr_25:  2
    attr_26:  2
    attr_27:  2
    attr_28:  2
    attr_29:  2
```
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/4750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | pull |
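For reference, display_max_rows ended up as a regular option of xr.set_options, which also works as a context manager. The dataset below is a hypothetical stand-in for the ds0 used in the examples above.

```python
import numpy as np
import xarray as xr

# A wide dataset similar in spirit to the ds0 from the PR description.
ds = xr.Dataset(
    {f"long_variable_name{i}": ("time", np.arange(2)) for i in range(2000)},
    coords={"time": [0, 1]},
)

# Used as a context manager, the truncated repr applies only inside the block.
with xr.set_options(display_max_rows=12):
    print(ds)
```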
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
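Given this schema, the filter described at the top of the page ("7 rows where comments = 8 and user = 14371165 sorted by updated_at descending") corresponds to a single SQL query. Below is a sketch using Python's sqlite3 module, assuming a local copy of this database; the file name github.db is hypothetical (e.g. a database produced by github-to-sqlite).

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
query = """
    SELECT id, number, title, comments, updated_at
    FROM issues
    WHERE comments = 8 AND [user] = 14371165
    ORDER BY updated_at DESC
"""
for row in conn.execute(query):
    print(row)
conn.close()
```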