issues
7 rows where comments = 2, state = "open" and user = 35968931, sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2267780811 | PR_kwDOAMm_X85t8kgX | 8979 | Warn on automatic coercion to coordinate variables in Dataset constructor | TomNicholas 35968931 | open | 0 | | | 2 | 2024-04-28T19:44:20Z | 2024-04-29T21:13:00Z | | MEMBER | | 0 | pydata/xarray/pulls/8979 | | {"url": "https://api.github.com/repos/pydata/xarray/issues/8979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 2027231531 | I_kwDOAMm_X8541Rkr | 8524 | PR labeler bot broken and possibly dead | TomNicholas 35968931 | open | 0 | | | 2 | 2023-12-05T22:23:44Z | 2023-12-06T15:33:42Z | | MEMBER | | | | What is your issue? The PR labeler bot seems to be broken (https://github.com/pydata/xarray/actions/runs/7107212418/job/19348227101?pr=8404) and, even worse, the repository has been archived: https://github.com/andymckay/labeler. I actually like this bot, but unless a similar bot exists somewhere else I guess we should just delete this action 😞 | {"url": "https://api.github.com/repos/pydata/xarray/issues/8524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | reopened | xarray 13221727 | issue |
| 1790161818 | PR_kwDOAMm_X85UvI4i | 7963 | Suggest installing dask when not discovered by ChunkManager | TomNicholas 35968931 | open | 0 | | | 2 | 2023-07-05T19:34:06Z | 2023-10-16T13:31:44Z | | MEMBER | | 0 | pydata/xarray/pulls/7963 | | {"url": "https://api.github.com/repos/pydata/xarray/issues/7963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 1812188730 | I_kwDOAMm_X85sA846 | 8004 | Rotation Functional Index example | TomNicholas 35968931 | open | 0 | | | 2 | 2023-07-19T15:23:20Z | 2023-08-24T13:26:56Z | | MEMBER | | | | Is your feature request related to a problem? I'm trying to think of an example that would demonstrate the "functional index" pattern discussed in https://github.com/pydata/xarray/issues/3620. I think a 2D rotation is the simplest example of an analytically-expressible, non-trivial, domain-agnostic case where you might want to back a set of multiple coordinates with a single functional index. It's also nice because there is additional information that must be passed and stored (the angle of the rotation), but that part is very simple, and domain-agnostic. I'm proposing we make this example work and put it in the custom index docs. I had a go at making that example (notebook here) @benbovy, but I'm confused about a couple of things: 1) How do I implement Describe the solution you'd like: No response. Describe alternatives you've considered: No response. Additional context: This example is inspired by @jni's use case in napari, where (IIUC) they want to do a lazy functional affine transformation from pixel to physical coordinates, where the simplest example of such a transform might be a linear shear (caused by the imaging focal plane being at an angle to the physical sample). | {"url": "https://api.github.com/repos/pydata/xarray/issues/8004/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
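The rotation transform proposed in issue 8004 can be sketched in plain NumPy. This is an illustrative sketch only; `rotate_coords` is a hypothetical helper, not part of xarray or any proposed index API:

```python
import numpy as np

def rotate_coords(x, y, theta):
    """Map logical coordinates (x, y) to physical ones via a 2D rotation.

    The angle theta (radians) is the only extra state such a functional
    index would need to store; the physical coordinates are computed on
    demand from the logical ones rather than materialised up front.
    """
    c, s = np.cos(theta), np.sin(theta)
    return c * x - s * y, s * x + c * y

# Rotating the point (1, 0) by 90 degrees lands on (0, 1).
x_phys, y_phys = rotate_coords(1.0, 0.0, np.pi / 2)
```

Because the mapping is analytic, the inverse transform (needed for label-based selection) is just a rotation by `-theta`, which is part of what makes this a good minimal example.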
| 1742035781 | I_kwDOAMm_X85n1VtF | 7894 | Can a "skipna" argument be added for Dataset.integrate() and DataArray.integrate()? | TomNicholas 35968931 | open | 0 | | | 2 | 2023-06-05T15:32:35Z | 2023-06-05T21:59:45Z | | MEMBER | | | | Discussed in https://github.com/pydata/xarray/discussions/5283. <sup>Originally posted by **chfite**, May 9, 2021</sup> I am using the Dataset.integrate() function and noticed that because one of my variables has a NaN in it, the function returns a NaN for the integrated value for that variable. I know based on the trapezoidal rule one could not get an integrated value at the location of the NaN, but is it not possible for it to calculate the integrated values where there were regular values? Assuming 0 for NaNs does not work because it would still integrate between the values before and after 0 and add additional area I do not want. Using DataArray.dropna() also is not sufficient because it would assume the value before the NaN is connected to the value after the NaN and again add additional area that I would not want included. If a "skipna" functionality or something could not be added to the integrate function, does anyone have a suggestion for another way to calculate my integrated area while excluding the NaNs? | {"url": "https://api.github.com/repos/pydata/xarray/issues/7894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
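The skipna behaviour requested in issue 7894 (dropping only the segments that touch a NaN, without bridging the gap the way dropna() would) can be sketched in plain NumPy. `trapz_skipna` is a hypothetical helper for illustration, not an existing xarray or NumPy API:

```python
import numpy as np

def trapz_skipna(y, x):
    """Trapezoidal rule that skips every segment touching a NaN.

    Unlike dropna(), the gap is not bridged: the two segments adjacent to
    a NaN simply contribute no area, so no spurious area is added.
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    # Per-segment trapezoid areas; a NaN y-value poisons its two segments,
    # and np.nansum then drops those segments from the total.
    areas = 0.5 * (y[1:] + y[:-1]) * np.diff(x)
    return np.nansum(areas)

# The NaN at index 2 removes segments (1, 2) and (2, 3); the remaining two
# segments each contribute area 1.0.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 1.0, np.nan, 1.0, 1.0])
area = trapz_skipna(y, x)
```

Note this matches the requester's constraint: neither substituting 0 for the NaN nor dropping it and joining the neighbours would give this result.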
| 1188523721 | I_kwDOAMm_X85G127J | 6431 | Bug when padding coordinates with NaNs | TomNicholas 35968931 | open | 0 | | | 2 | 2022-03-31T18:57:16Z | 2023-03-30T13:33:10Z | | MEMBER | | | | What happened? Calling da.pad({'x': 1}, 'constant', constant_values=np.NAN) raises. The call goes through DataArray.pad (dataarray.py:4158) -> Dataset.pad (dataset.py:7368) -> Variable.pad (variable.py:1360), which calls np.pad on self.data.astype(dtype, copy=False); NumPy's _set_pad_area (numpy/lib/arraypad.py:147) then fails with ValueError: cannot convert float NaN to integer. What did you expect to happen? It should have successfully padded with a NaN, same as it does if you don't specify. Minimal Complete Verifiable Example: No response. Relevant log output: No response. Anything else we need to know? No response. Environment: INSTALLED VERSIONS commit: None python: 3.9.7 \| packaged by conda-forge \| (default, Sep 29 2021, 19:20:46) [GCC 9.4.0] python-bits: 64 OS: Linux OS-release: 5.11.0-7620-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.1 libnetcdf: 4.8.1 xarray: 0.20.3.dev4+gdbc02d4e pandas: 1.4.0 numpy: 1.21.4 scipy: 1.7.3 netCDF4: 1.5.8 pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.10.3 cftime: 1.5.1.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2022.01.1 distributed: 2022.01.1 matplotlib: None cartopy: None seaborn: None numbagg: None fsspec: 2022.01.0 cupy: None pint: None sparse: None setuptools: 59.6.0 pip: 21.3.1 conda: 4.11.0 pytest: 6.2.5 IPython: 8.2.0 sphinx: 4.4.0 | {"url": "https://api.github.com/repos/pydata/xarray/issues/6431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
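The underlying failure in issue 6431's traceback is a NumPy-level one: np.pad cannot write NaN into an integer-dtype array, which is roughly what xarray hits when an explicit NaN constant value reaches an integer variable. A minimal sketch of that NumPy behaviour, independent of xarray:

```python
import numpy as np

arr = np.array([1, 2, 3])  # integer dtype, like a default range coordinate

# Writing NaN into an integer array fails: NaN has no integer representation.
try:
    np.pad(arr, 1, mode="constant", constant_values=np.nan)
    raised = False
except ValueError:
    raised = True

# Casting to float first succeeds, and the pad values come out as NaN,
# which is the behaviour the issue expected from DataArray.pad.
padded = np.pad(arr.astype(float), 1, mode="constant", constant_values=np.nan)
```

This suggests the fix the issue asks for amounts to promoting the dtype to float before padding whenever the requested constant value is NaN.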
| 446054247 | MDU6SXNzdWU0NDYwNTQyNDc= | 2975 | Inconsistent/confusing behaviour when concatenating dimension coords | TomNicholas 35968931 | open | 0 | | | 2 | 2019-05-20T11:01:37Z | 2021-07-08T17:42:52Z | | MEMBER | | | | I noticed that with multiple conflicting dimension coords then concat can give pretty weird/counterintuitive results, at least compared to what the documentation suggests they should give. Create two datasets with conflicting coordinates: objs = [Dataset({'x': [0], 'y': [1]}), Dataset({'y': [0], 'x': [1]})] [<xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: * x (x) int64 0 * y (y) int64 1 Data variables: empty, <xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: * y (y) int64 0 * x (x) int64 1 Data variables: empty]. Try to join along only 'x', with coords='minimal' so concatenate "only coordinates in which the dimension already appears": concat(objs, dim='x', coords='minimal') gives <xarray.Dataset> Dimensions: (x: 2, y: 2) Coordinates: * y (y) int64 0 1 * x (x) int64 0 1 Data variables: empty. It's joined along x and y! Based on my reading of the docstring for concat, I would have expected this to not attempt to concatenate y, because Now let's try to get concat to broadcast 'y' across 'x'. Try to join along only 'x' by setting coords='different': concat(objs, dim='x', coords='different'). Now as "Data variables which are not equal (ignoring attributes) across all datasets are also concatenated", then I would have expected 'y' to be concatenated across 'x', i.e. to add the 'x' dimension to the 'y' coord, i.e.: Same again but without dimension coords: If we create the same sort of objects but the variables are data vars not coords, then everything behaves exactly as expected: objs2 = [Dataset({'a': ('x', [0]), 'b': ('y', [1])}), Dataset({'a': ('x', [1]), 'b': ('y', [0])})] [<xarray.Dataset> Dimensions: (x: 1, y: 1) Dimensions without coordinates: x, y Data variables: a (x) int64 0 b (y) int64 1, <xarray.Dataset> Dimensions: (x: 1, y: 1) Dimensions without coordinates: x, y Data variables: a (x) int64 1 b (y) int64 0]; concat(objs2, dim='x', data_vars='minimal') raises ValueError: variable b not equal across datasets; concat(objs2, dim='x', data_vars='different') gives <xarray.Dataset> Dimensions: (x: 2, y: 1) Dimensions without coordinates: x, y Data variables: a (x) int64 0 1 b (x, y) int64 1 0. Also if you do the same again but with coordinates which are not dimension coords, i.e. objs3 = [Dataset(coords={'a': ('x', [0]), 'b': ('y', [1])}), Dataset(coords={'a': ('x', [1]), 'b': ('y', [0])})] [<xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: a (x) int64 0 b (y) int64 1 Dimensions without coordinates: x, y Data variables: empty, <xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: a (x) int64 1 b (y) int64 0 Dimensions without coordinates: x, y Data variables: empty], then this again gives the expected concatenation behaviour. So this implies that the compatibility checks that are being done on the data vars are not being done on the coords, but only if they are dimension coordinates! Either this is not the desired behaviour or the concat docstring needs to be a lot clearer. If we agree that this is not the desired behaviour then I will have a look inside. EDIT: Presumably this has something to do with the ToDo in the code for | {"url": "https://api.github.com/repos/pydata/xarray/issues/2975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
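The schema and indexes above can be exercised directly with Python's built-in sqlite3 module. This is a sketch only: it uses a trimmed subset of the [issues] columns, populates it with two rows from this page, and runs the same filter as the view at the top (comments = 2, state = 'open', user = 35968931, newest update first):

```python
import sqlite3

# In-memory database with a trimmed version of the [issues] table.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE [issues] (
        [id] INTEGER PRIMARY KEY,
        [number] INTEGER,
        [user] INTEGER,
        [state] TEXT,
        [comments] INTEGER,
        [updated_at] TEXT
    )"""
)
# Same index as idx_issues_user above, to support the user = ? filter.
conn.execute("CREATE INDEX [idx_issues_user] ON [issues] ([user])")

conn.executemany(
    "INSERT INTO [issues] VALUES (?, ?, ?, ?, ?, ?)",
    [
        (2027231531, 8524, 35968931, "open", 2, "2023-12-06T15:33:42Z"),
        (2267780811, 8979, 35968931, "open", 2, "2024-04-29T21:13:00Z"),
    ],
)

# ISO-8601 timestamps of equal length sort correctly as plain text,
# so ORDER BY updated_at DESC returns the most recently updated row first.
result = conn.execute(
    """SELECT [number] FROM [issues]
       WHERE [comments] = 2 AND [state] = 'open' AND [user] = 35968931
       ORDER BY [updated_at] DESC"""
).fetchall()
```

Storing timestamps as TEXT, as this schema does, works precisely because of that lexicographic-sort property.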