issues


6 rows where repo = 13221727 and user = 10720577 sorted by updated_at descending




Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sort key, descending), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
id 438537597 · node_id MDExOlB1bGxSZXF1ZXN0Mjc0NTQ3OTcz · #2930 Bugfix/coords not deep copy · user pletchm (10720577) · state closed · locked 0 · comments 3 · created_at 2019-04-29T22:43:10Z · updated_at 2023-01-20T10:58:37Z · closed_at 2019-05-02T22:46:36Z · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/2930

This pull request fixes a bug that prevented making a complete deep copy of a DataArray or Dataset, because the coords weren't being deep-copied. The fix was a small change to the IndexVariable.copy method, which now supports making both deep and shallow copies of coords.

This pull request corresponds to this issue https://github.com/pydata/xarray/issues/1463.

  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
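As a minimal sketch of the copy semantics at stake (assuming an xarray version that includes this fix; the array and variable names are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones(3), dims="x", coords={"x": [0, 1, 2]})

deep = da.copy(deep=True)      # with this fix, coords are duplicated too
shallow = da.copy(deep=False)  # data and coords still share memory

# The deep copy's data buffer is independent of the original's:
deep.values[0] = 42.0
assert float(da.values[0]) == 1.0
```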
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2930/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo xarray (13221727) · type pull
id 403326458 · node_id MDU6SXNzdWU0MDMzMjY0NTg= · #2710 xarray.DataArray.expand_dims() can only expand dimension for a point coordinate · user pletchm (10720577) · state closed · locked 0 · comments 14 · created_at 2019-01-25T20:46:05Z · updated_at 2020-02-20T15:35:22Z · closed_at 2020-02-20T15:35:22Z · author_association CONTRIBUTOR

Current expand_dims functionality

Apparently, expand_dims can only create a dimension for a point coordinate, i.e. it promotes a scalar coordinate into a 1-D coordinate. Here is an example:

```python
>>> coords = {"b": range(5), "c": range(3)}
>>> da = xr.DataArray(np.ones([5, 3]), coords=coords, dims=list(coords.keys()))
>>> da
<xarray.DataArray (b: 5, c: 3)>
array([[1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.]])
Coordinates:
  * b        (b) int64 0 1 2 3 4
  * c        (c) int64 0 1 2
>>> da["a"] = 0  # create a point coordinate
>>> da
<xarray.DataArray (b: 5, c: 3)>
array([[1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.]])
Coordinates:
  * b        (b) int64 0 1 2 3 4
  * c        (c) int64 0 1 2
    a        int64 0
>>> da.expand_dims("a")  # create a new dimension "a" for the point coordinate
<xarray.DataArray (a: 1, b: 5, c: 3)>
array([[[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]])
Coordinates:
  * b        (b) int64 0 1 2 3 4
  * c        (c) int64 0 1 2
  * a        (a) int64 0
```

Problem description

I want to be able to do two more things with expand_dims, or perhaps a related/similar method:

  1. broadcast the data across one or more new dimensions
  2. expand an existing dimension to include one or more new coordinates

Here is the code I currently use to accomplish this:

```python
from collections import OrderedDict

import numpy as np
import xarray as xr


def expand_dimensions(data, fill_value=np.nan, **new_coords):
    """Expand (or add if it doesn't yet exist) the data array to fill in new
    coordinates across multiple dimensions.

    If a dimension doesn't exist in the dataarray yet, then the result will be
    `data`, broadcast across this dimension.

    >>> da = xr.DataArray([1, 2, 3], dims="a", coords=[[0, 1, 2]])
    >>> expand_dimensions(da, b=[1, 2, 3, 4, 5])
    <xarray.DataArray (a: 3, b: 5)>
    array([[ 1.,  1.,  1.,  1.,  1.],
           [ 2.,  2.,  2.,  2.,  2.],
           [ 3.,  3.,  3.,  3.,  3.]])
    Coordinates:
      * a        (a) int64 0 1 2
      * b        (b) int64 1 2 3 4 5

    Or, if `dim` is already a dimension in `data`, then any new coordinate
    values in `new_coords` that are not yet in `data[dim]` will be added,
    and the values corresponding to those new coordinates will be
    `fill_value`.

    >>> da = xr.DataArray([1, 2, 3], dims="a", coords=[[0, 1, 2]])
    >>> expand_dimensions(da, a=[1, 2, 3, 4, 5])
    <xarray.DataArray (a: 6)>
    array([ 1.,  2.,  3.,  0.,  0.,  0.])
    Coordinates:
      * a        (a) int64 0 1 2 3 4 5

    Args:
        data (xarray.DataArray):
            Data that needs dimensions expanded.
        fill_value (scalar, xarray.DataArray, optional):
            If expanding new coords this is the value of the new datum.
            Defaults to `np.nan`.
        **new_coords (list[int | str]):
            The keywords are arbitrary dimensions and the values are
            coordinates of those dimensions that the data will include after
            it has been expanded.
    Returns:
        xarray.DataArray:
            Data that had its dimensions expanded to include the new
            coordinates.
    """
    ordered_coord_dict = OrderedDict(new_coords)
    shape_da = xr.DataArray(
        np.zeros(list(map(len, ordered_coord_dict.values()))),
        coords=ordered_coord_dict,
        dims=ordered_coord_dict.keys())
    expanded_data = xr.broadcast(data, shape_da)[0].fillna(fill_value)
    return expanded_data
```

Here's an example of broadcasting data across a new dimension:

```python
>>> coords = {"b": range(5), "c": range(3)}
>>> da = xr.DataArray(np.ones([5, 3]), coords=coords, dims=list(coords.keys()))
>>> expand_dimensions(da, a=[0, 1, 2])
<xarray.DataArray (b: 5, c: 3, a: 3)>
array([[[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]])
Coordinates:
  * b        (b) int64 0 1 2 3 4
  * c        (c) int64 0 1 2
  * a        (a) int64 0 1 2
```

Here's an example of expanding an existing dimension to include new coordinates:

```python
>>> expand_dimensions(da, b=[5, 6])
<xarray.DataArray (b: 7, c: 3)>
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [nan, nan, nan],
       [nan, nan, nan]])
Coordinates:
  * b        (b) int64 0 1 2 3 4 5 6
  * c        (c) int64 0 1 2
```

Final Note

If no one else is already working on this, and if it seems like a useful addition to xarray, then I would be more than happy to work on it. Please let me know.

Thank you, Martin

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2710/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason completed · repo xarray (13221727) · type issue
id 439823329 · node_id MDExOlB1bGxSZXF1ZXN0Mjc1NTQ3ODcx · #2936 BUGFIX: deep-copy wasn't copying coords, bug fixed within IndexVariable · user pletchm (10720577) · state closed · locked 0 · comments 13 · created_at 2019-05-02T22:58:40Z · updated_at 2019-05-09T22:31:02Z · closed_at 2019-05-08T14:44:25Z · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/2936

This pull request fixes a bug that prevented making a complete deep copy of a DataArray or Dataset, because the coords weren't being deep-copied. The fix was a small change to the IndexVariable.copy method, which now supports making both deep and shallow copies of coords.

This pull request corresponds to this issue https://github.com/pydata/xarray/issues/1463.

  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2936/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo xarray (13221727) · type pull
id 442394463 · node_id MDExOlB1bGxSZXF1ZXN0Mjc3NTE3NTQ2 · #2953 Mark test for copying coords of dataarray and dataset with xfail · user pletchm (10720577) · state closed · locked 0 · comments 1 · created_at 2019-05-09T19:25:40Z · updated_at 2019-05-09T22:17:56Z · closed_at 2019-05-09T22:17:53Z · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/2953

Mark the test for copying coords of a dataarray and dataset with xfail. The test fails for the shallow copy, and apparently only on Windows. On Windows, coords seem to be immutable unless it's one dataarray deep-copied from another (which is why only the deep=False test fails). So I decided to just mark the tests as xfail for now (but I'd be happy to create an issue and look into it more in the future).
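The mechanism itself is pytest's xfail marker; a hypothetical sketch (the test body and reason string are illustrative, not the PR's actual code):

```python
import sys

import pytest


@pytest.mark.xfail(
    sys.platform == "win32",
    reason="shallow-copied coords appear immutable on Windows",
)
def test_copy_coords_shallow():
    # Stand-in for the real shallow-copy assertion in xarray's test suite.
    assert True
```

With a condition argument, the mark only applies on the named platform; everywhere else the test runs and is reported normally.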

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2953/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo xarray (13221727) · type pull
id 408340215 · node_id MDExOlB1bGxSZXF1ZXN0MjUxNjExMDM5 · #2757 Allow expand_dims() method to support inserting/broadcasting dimensions with size>1 · user pletchm (10720577) · state closed · locked 0 · comments 4 · created_at 2019-02-08T21:59:36Z · updated_at 2019-03-26T02:42:11Z · closed_at 2019-03-26T02:41:48Z · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/2757

This pull request enhances the expand_dims method for both Dataset and DataArray objects to support inserting/broadcasting dimensions with size > 1. It corresponds to this issue https://github.com/pydata/xarray/issues/2710.

Changes:

  1. The Dataset.expand_dims() method now takes a dict-like object whose values represent either the length of each new dimension or the coordinates of each new dimension
  2. The DataArray.expand_dims() method now takes a dict-like object whose values represent either the length of each new dimension or the coordinates of each new dimension
  3. As an alternative to passing a dict to the dim argument (which is now an optional kwarg), each new dimension can be passed as its own kwarg
  4. The expand_dims enhancement from issue 2710 is documented in whats-new.rst

Included:

  • [ ] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

What's new:

All of the old functionality is still there, so it shouldn't break anyone's existing code that uses it.
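For example, the pre-existing point-coordinate promotion is unchanged (a small sketch; the array values are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones(3), dims="x", coords={"x": [0, 1, 2]})
da["a"] = 0                      # scalar (point) coordinate, as before

promoted = da.expand_dims("a")   # old single-name form still works
assert promoted.dims == ("a", "x")
assert promoted.sizes["a"] == 1
```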

You can now pass dim as a dict, where the keys are the new dimensions and the values are either integers (giving the length of the new dimensions) or iterables (giving the coordinates of the new dimensions):

```python
>>> import numpy as np
>>> import xarray as xr
>>> original = xr.Dataset(
...     {'x': ('a', np.random.randn(3)),
...      'y': (['b', 'a'], np.random.randn(4, 3))},
...     coords={'a': np.linspace(0, 1, 3),
...             'b': np.linspace(0, 1, 4),
...             'c': np.linspace(0, 1, 5)},
...     attrs={'key': 'entry'})
>>> original
<xarray.Dataset>
Dimensions:  (a: 3, b: 4, c: 5)
Coordinates:
  * a        (a) float64 0.0 0.5 1.0
  * b        (b) float64 0.0 0.3333 0.6667 1.0
  * c        (c) float64 0.0 0.25 0.5 0.75 1.0
Data variables:
    x        (a) float64 -1.556 0.2178 0.6319
    y        (b, a) float64 0.5273 0.6652 0.3418 1.858 ... -0.3519 0.8088 0.8753
Attributes:
    key:      entry
>>> original.expand_dims({"d": 4, "e": ["l", "m", "n"]})
<xarray.Dataset>
Dimensions:  (a: 3, b: 4, c: 5, d: 4, e: 3)
Coordinates:
  * e        (e) <U1 'l' 'm' 'n'
  * a        (a) float64 0.0 0.5 1.0
  * b        (b) float64 0.0 0.3333 0.6667 1.0
  * c        (c) float64 0.0 0.25 0.5 0.75 1.0
Dimensions without coordinates: d
Data variables:
    x        (d, e, a) float64 -1.556 0.2178 0.6319 ... -1.556 0.2178 0.6319
    y        (d, e, b, a) float64 0.5273 0.6652 0.3418 ... -0.3519 0.8088 0.8753
Attributes:
    key:      entry
```

Or, equivalently, you can pass the new dimensions as kwargs instead of a dictionary:

```python
>>> original.expand_dims(d=4, e=["l", "m", "n"])
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2757/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo xarray (13221727) · type pull
id 403367810 · node_id MDU6SXNzdWU0MDMzNjc4MTA= · #2713 xarray.DataArray.mean() can't calculate weighted mean · user pletchm (10720577) · state closed · locked 0 · comments 2 · created_at 2019-01-25T23:08:01Z · updated_at 2019-01-26T02:50:07Z · closed_at 2019-01-26T02:49:53Z · author_association CONTRIBUTOR

Code Sample, a copy-pastable example if possible

Currently xarray.DataArray.mean() and xarray.Dataset.mean() cannot calculate weighted means. I think it would be useful if it had a similar API to numpy.average: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.average.html
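For reference, this is the numpy.average weights keyword that the proposal would mirror (the values here are illustrative):

```python
import numpy as np

vals = np.array([1.0, 2.0, 3.0])
w = np.array([0.2, 0.3, 0.5])

# Weighted mean: sum(vals * w) / sum(w) = 2.3 for these inputs.
avg = np.average(vals, weights=w)
```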

Here is the code I currently use to get the weighted mean of an xarray.DataArray:

```python
import numpy as np
import xarray as xr


def weighted_mean(data_da, dim, weights):
    r"""Computes the weighted mean.

    We can only do the actual weighted mean over the dimensions that
    ``data_da`` and ``weights`` share, so for dimensions in ``dim`` that
    aren't included in ``weights`` we must take the unweighted mean.

    This function skips NaNs, i.e. data points that are NaN have
    corresponding NaN weights.

    Args:
        data_da (xarray.DataArray):
            Data to compute a weighted mean for.
        dim (str | list[str]):
            dimension(s) of the dataarray to reduce over
        weights (xarray.DataArray):
            a 1-D dataarray the same length as the weighted dim, with
            dimension name equal to that of the weighted dim. Must be
            nonnegative.
    Returns:
        (xarray.DataArray):
            The mean over the given dimension. So it will contain all
            dimensions of the input that are not in ``dim``.
    Raises:
        (IndexError):
            If ``weights.dims`` is not a subset of ``dim``.
        (ValueError):
            If ``weights`` has values that are negative or infinite.
    """
    if isinstance(dim, str):
        dim = [dim]
    else:
        dim = list(dim)

    if not set(weights.dims) <= set(dim):
        dim_err_msg = (
            "`weights.dims` must be a subset of `dim`. {} are dimensions in "
            "`weights`, but not in `dim`."
        ).format(set(weights.dims) - set(dim))
        raise IndexError(dim_err_msg)
    else:
        pass  # `weights.dims` is a subset of `dim`

    if (weights < 0).any() or xr.ufuncs.isinf(weights).any():
        negative_weight_err_msg = "Weight must be nonnegative and finite"
        raise ValueError(negative_weight_err_msg)
    else:
        pass  # `weights` are nonnegative

    weight_dims = [
        weight_dim for weight_dim in dim if weight_dim in weights.dims
    ]

    if np.isnan(data_da).any():
        expanded_weights, _ = xr.broadcast(weights, data_da)
        weights_with_nans = expanded_weights.where(~np.isnan(data_da))
    else:
        weights_with_nans = weights

    mean_da = ((data_da * weights_with_nans).sum(weight_dims, skipna=True)
               / weights_with_nans.sum(weight_dims))
    other_dims = list(set(dim) - set(weight_dims))
    return mean_da.mean(other_dims, skipna=True)
```

If no one is already working on this and if it seems useful, then I would be happy to work on this.
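The core of the function above is a masked sum ratio; a minimal sketch of that idea with plain xarray operations (the array values and names here are illustrative):

```python
import numpy as np
import xarray as xr

data = xr.DataArray([[1.0, 2.0], [3.0, np.nan]], dims=("t", "x"))
w = xr.DataArray([0.25, 0.75], dims="x")

# Mask the weights wherever the data is NaN, so missing points drop
# out of both the numerator and the denominator.
w_masked, _ = xr.broadcast(w, data)
w_masked = w_masked.where(data.notnull())

wmean = (data * w_masked).sum("x", skipna=True) / w_masked.sum("x")
```

For the first row this gives (1·0.25 + 2·0.75) / 1.0 = 1.75; for the second row the NaN point is excluded, leaving 3·0.25 / 0.25 = 3.0.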

Thank you, Martin

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2713/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason completed · repo xarray (13221727) · type issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
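The row listing on this page corresponds to a straightforward query over this schema; a sketch against an in-memory SQLite copy (only a subset of columns and two sample rows, for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE issues (id INTEGER PRIMARY KEY, number INTEGER, "
    "title TEXT, user INTEGER, repo INTEGER, updated_at TEXT, type TEXT)"
)
con.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (438537597, 2930, "Bugfix/coords not deep copy",
         10720577, 13221727, "2023-01-20T10:58:37Z", "pull"),
        (403326458, 2710, "expand_dims() point coordinate",
         10720577, 13221727, "2020-02-20T15:35:22Z", "issue"),
    ],
)

# The filter and ordering from the page header: repo = 13221727 and
# user = 10720577, sorted by updated_at descending (ISO 8601 timestamps
# sort chronologically as text).
rows = con.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = 13221727 AND user = 10720577 "
    "ORDER BY updated_at DESC"
).fetchall()
```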
Powered by Datasette · Queries took 74.742ms · About: xarray-datasette