issues


21 rows where user = 15331990 sorted by updated_at descending


type

  • issue 17
  • pull 4

state

  • closed 16
  • open 5

repo

  • xarray 21
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2115555965 I_kwDOAMm_X85-GNJ9 8695 Return a 3D object alongside 1D object in apply_ufunc ahuang11 15331990 closed 0     7 2024-02-02T18:47:14Z 2024-04-28T19:59:31Z 2024-04-28T19:59:31Z CONTRIBUTOR      

Is your feature request related to a problem?

Currently, I have something similar to this, where input_lat is transformed to new_lat (here, +0.25, but in the real use case the shift is not known in advance).

Since apply_ufunc doesn't return a dataset with the actual coordinate values, I had to return a second output to retain new_lat and properly update the coordinate values. However, this second output is shaped (time, lat, lon), so I have to do ds["lat"] = new_lat.isel(lon=0, time=0).values, which I think is inefficient; I simply need it to be shaped (lat,).

Any ideas on how I can modify this to make it more efficient?

```python
import xarray as xr
import numpy as np

air = xr.tutorial.open_dataset("air_temperature")["air"]
input_lat = np.arange(20, 45)


def interp1d_np(data, base_lat, input_lat):
    new_lat = input_lat + 0.25
    return np.interp(new_lat, base_lat, data), new_lat


ds, new_lat = xr.apply_ufunc(
    interp1d_np,  # first the function
    air,
    air.lat,  # as above
    input_lat,  # as above
    input_core_dims=[["lat"], ["lat"], ["lat"]],  # list with one entry per arg
    output_core_dims=[["lat"], ["lat"]],  # returned data has one dimension
    exclude_dims=set(("lat",)),  # dimensions allowed to change size. Must be a set!
    vectorize=True,  # loop over non-core dims
)
new_lat = new_lat.isel(lon=0, time=0).values
ds["lat"] = new_lat
```
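One possible restructuring, as a sketch only (it assumes new_lat depends on input_lat rather than on the data values, which may not hold in the real use case): compute the target latitudes once outside apply_ufunc, return a single output, and attach the 1D coordinate afterwards.

```python
# Sketch: build new_lat outside apply_ufunc (assumes it depends only on
# input_lat), return only the interpolated values, and assign the 1D
# coordinate at the end.
import numpy as np
import xarray as xr

air = xr.tutorial.open_dataset("air_temperature")["air"]
input_lat = np.arange(20, 45)
new_lat = input_lat + 0.25  # 1D, shaped (lat,)


def interp1d_np(data, base_lat, new_lat):
    return np.interp(new_lat, base_lat, data)


ds = xr.apply_ufunc(
    interp1d_np,
    air,
    air.lat,
    xr.DataArray(new_lat, dims="lat"),
    input_core_dims=[["lat"], ["lat"], ["lat"]],
    output_core_dims=[["lat"]],
    exclude_dims={"lat"},
    vectorize=True,
)
ds = ds.assign_coords(lat=new_lat)
```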

Describe the solution you'd like

Either automatically assign new_lat to the returned xarray object, or allow a 1D dataset to be returned alongside the 3D one.

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8695/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
746929580 MDU6SXNzdWU3NDY5Mjk1ODA= 4596 Working with Multidimensional Coordinates - Plotting PlateCarree projection looks strange ahuang11 15331990 closed 0     5 2020-11-19T21:15:25Z 2024-02-28T19:09:33Z 2024-02-28T19:09:33Z CONTRIBUTOR      

The pixels seem stretched; https://xarray.pydata.org/en/stable/examples/multidimensional-coords.html

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4596/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
415774106 MDU6SXNzdWU0MTU3NzQxMDY= 2795 Add "unique()" method, mimicking pandas ahuang11 15331990 open 0     6 2019-02-28T18:58:15Z 2024-01-08T17:31:30Z   CONTRIBUTOR      

Would it be good to add a unique() method that mimics pandas?

```python
import pandas as pd
import xarray as xr

pd.Series([0, 1, 1, 2]).unique()
xr.DataArray([0, 1, 1, 2]).unique()  # not implemented
```

Output:

```
array([0, 1, 2])
AttributeError: 'DataArray' object has no attribute 'unique'
```
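In the meantime, a minimal workaround sketch is to call np.unique on the underlying values:

```python
# Workaround sketch: use numpy's unique on the raw values.
import numpy as np
import xarray as xr

da = xr.DataArray([0, 1, 1, 2])
np.unique(da.values)  # array([0, 1, 2])
```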

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2795/reactions",
    "total_count": 10,
    "+1": 10,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1052753606 I_kwDOAMm_X84-v77G 5985 Formatting data array as strings? ahuang11 15331990 open 0     7 2021-11-13T19:29:02Z 2023-03-17T13:10:06Z   CONTRIBUTOR      

https://github.com/pydata/xarray/discussions/5865#discussioncomment-1636647

I wonder if it's possible to implement a built-in function like da.str.format("%.2f") or xr.string_format(da, "%.2f").

To wrap:

```python
import xarray as xr

da = xr.DataArray([5., 6., 7.])
das = xr.DataArray("%.2f")
das.str % da
```

```
<xarray.DataArray (dim_0: 3)>
array(['5.00', '6.00', '7.00'], dtype='<U4')
Dimensions without coordinates: dim_0
```
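Until something built-in exists, one workaround sketch is to push a vectorized Python format call through apply_ufunc:

```python
# Workaround sketch: element-wise string formatting via np.vectorize.
import numpy as np
import xarray as xr

da = xr.DataArray([5.0, 6.0, 7.0])
formatted = xr.apply_ufunc(np.vectorize(lambda x: "%.2f" % x), da)
# array(['5.00', '6.00', '7.00'], dtype='<U4')
```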

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5985/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
843961481 MDExOlB1bGxSZXF1ZXN0NjAzMjc2NjE0 5091 Add unique method ahuang11 15331990 closed 0     5 2021-03-30T01:09:09Z 2022-08-16T23:35:14Z 2022-08-16T23:35:14Z CONTRIBUTOR   0 pydata/xarray/pulls/5091
  • [x] Closes #2795
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5091/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
383945783 MDU6SXNzdWUzODM5NDU3ODM= 2568 Xarray equivalent of np.place or df.map(mapping)? ahuang11 15331990 closed 0     11 2018-11-24T00:33:11Z 2022-04-18T15:51:57Z 2022-04-18T15:51:57Z CONTRIBUTOR      

```python
import numpy as np
import pandas as pd
import xarray as xr

# numpy version
x = np.array([0, 1])
np.place(x, x == 0, 1)

# pandas version
pd.Series([0, 1]).map({0: 1, 1: 1})

# current workaround
ds = xr.Dataset({'test': [0, 1]})
np.place(ds['test'].values, ds['test'].values == 0, 1)
```

Problem description

Is there a built-in method to map values, e.g. 0 to 1?

Expected Output

returns [1, 1]
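One way to do this with xarray's existing API, as a sketch: xr.where keeps the original values where the condition is False, so no operator flipping is needed.

```python
# Sketch: map 0 -> 1 while leaving other values untouched.
import xarray as xr

da = xr.DataArray([0, 1], dims='x')
xr.where(da == 0, 1, da)  # -> [1, 1]
```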

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2568/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
816540158 MDU6SXNzdWU4MTY1NDAxNTg= 4958 to_zarr mode='a-', append_dim; if dim value exists raise error ahuang11 15331990 open 0     1 2021-02-25T15:26:02Z 2022-04-09T15:19:28Z   CONTRIBUTOR      

If I have a ds with time, lat, lon and I call the same command twice:

```python
ds.to_zarr('test.zarr', append_dim='time')
ds.to_zarr('test.zarr', append_dim='time')
```

Can it raise an error, since all the times already exist?

Kind of like:

```python
import numpy as np
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds.to_zarr('test_air.zarr', append_dim='time')
ds_tmp = xr.open_mfdataset('test_air.zarr', engine='zarr')
overlap = np.intersect1d(ds['time'], ds_tmp['time'])
if len(overlap) > 1:
    raise ValueError(f'Found overlapping values in datasets {overlap}')
ds.to_zarr('test_air.zarr', append_dim='time')
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4958/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
873519048 MDExOlB1bGxSZXF1ZXN0NjI4MzIwMDkx 5239 Add drop_duplicates for dims ahuang11 15331990 closed 0     10 2021-05-01T03:23:26Z 2021-05-15T17:46:06Z 2021-05-15T17:46:06Z CONTRIBUTOR   0 pydata/xarray/pulls/5239

I ruined https://github.com/pydata/xarray/pull/5089 with a revert, so I'm remaking the PR for just dims.

  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5239/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
842940980 MDExOlB1bGxSZXF1ZXN0NjAyMzk1MTE3 5089 Add drop duplicates ahuang11 15331990 closed 0     20 2021-03-29T03:51:07Z 2021-05-01T03:25:48Z 2021-05-01T03:25:47Z CONTRIBUTOR   0 pydata/xarray/pulls/5089

Semi-related to https://github.com/pydata/xarray/issues/2795, but not really; I still want a separate unique function.

  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5089/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
809708107 MDU6SXNzdWU4MDk3MDgxMDc= 4917 Comparing against datetime.datetime and pd.Timestamp ahuang11 15331990 open 0     1 2021-02-16T22:54:39Z 2021-03-25T22:18:08Z   CONTRIBUTOR      

I'm not sure if this is exactly a bug, or what the performance implications are, but it would be more user-friendly if the following were supported:

1.) Comparing against datetime.datetime:

```python
import datetime
import pandas as pd
import xarray as xr

ds = xr.Dataset(coords={'int': [0, 1, 2]})
ds['data'] = ('int', [0, 5, 6])
ds.coords['time'] = ('int', pd.date_range('2017-02-01', '2017-02-03'))
ds = ds.where(ds['time'] > datetime.datetime(2017, 2, 2))
ds
```

```
TypeError: '>' not supported between instances of 'int' and 'datetime.datetime'
```

2.) pd.Timestamp:

```python
import datetime
import pandas as pd
import xarray as xr

ds = xr.Dataset(coords={'int': [0, 1, 2]})
ds['data'] = ('int', [0, 5, 6])
ds.coords['time'] = ('int', pd.date_range('2017-02-01', '2017-02-03'))
ds = ds.where(ds['time'] > pd.to_datetime('2017-02-02'))
ds
```

This works, though, when converting to np.datetime64:

```python
import datetime
import pandas as pd
import xarray as xr

ds = xr.Dataset(coords={'int': [0, 1, 2]})
ds['data'] = ('int', [0, 5, 6])
ds.coords['time'] = ('int', pd.date_range('2017-02-01', '2017-02-03'))
ds = ds.where(ds['time'] > pd.to_datetime(['2017-02-02']).values)
ds
```
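For completeness, a compact workaround sketch: comparing against a plain np.datetime64 scalar also works today.

```python
# Workaround sketch: use a numpy datetime64 scalar for the comparison.
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset(coords={'int': [0, 1, 2]})
ds['data'] = ('int', [0, 5, 6])
ds.coords['time'] = ('int', pd.date_range('2017-02-01', '2017-02-03'))
ds = ds.where(ds['time'] > np.datetime64('2017-02-02'))
```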

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4917/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
744274576 MDU6SXNzdWU3NDQyNzQ1NzY= 4588 drop keyword in ds.rolling(time=7, drop=True).mean()? ahuang11 15331990 closed 0     4 2020-11-16T23:10:35Z 2021-02-18T22:17:07Z 2021-02-18T22:17:07Z CONTRIBUTOR      

Should rolling have a drop keyword, similar to squeeze(drop=True)?

```python
import xarray as xr

air = xr.tutorial.open_dataset('air_temperature')
air = air.rolling(time=7, drop=True).mean()
```

Equivalent:

```python
import xarray as xr

air = xr.tutorial.open_dataset('air_temperature')
air = air.rolling(time=7).mean()
air = air.isel(time=slice(6, None))
```

An actual implementation would also need to account for min_periods and center=True.
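A sketch of an existing alternative, assuming the data has no other missing values: drop the incomplete windows afterwards with dropna along the rolled dimension.

```python
# Sketch: drop the NaN rows produced by the incomplete leading windows.
# Note: this drops any time step containing NaN anywhere, so it is only
# equivalent when the underlying data has no missing values of its own.
import xarray as xr

air = xr.tutorial.open_dataset('air_temperature')
rolled = air.rolling(time=7).mean().dropna('time')
```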

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4588/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
756415834 MDU6SXNzdWU3NTY0MTU4MzQ= 4647 DataArray transpose inconsistent with Dataset Ellipsis usage ahuang11 15331990 closed 0     7 2020-12-03T17:52:16Z 2021-01-05T23:45:03Z 2021-01-05T23:45:03Z CONTRIBUTOR      

This works:

```python
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds.transpose('not_existing_dim', 'lat', 'lon', 'time', ...)
```

This doesn't (subset air):

```python
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds['air'].transpose('not_existing_dim', 'lat', 'lon', 'time', ...)
```

The error message is a bit inaccurate too, since I do have Ellipsis included; it might be related to the two calls of dims = tuple(utils.infix_dims(dims, self.dims)).

```
ValueError: ('not_existing_dim', 'lat', 'lon', 'time') must be a permuted list of ('time', 'lat', 'lon'), unless ... is included
```

Traceback ...

```
ValueError                                Traceback (most recent call last)
<ipython-input-5-793dfc1507ea> in <module>
      2 ds = xr.tutorial.open_dataset('air_temperature')
      3 ds.transpose('not_existing_dim', 'lat', 'lon', 'time', ...)
----> 4 ds['air'].transpose('not_existing_dim', 'lat', 'lon', 'time', ...)

~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/core/dataarray.py in transpose(self, transpose_coords, dims)
   2035         if dims:
   2036             dims = tuple(utils.infix_dims(dims, self.dims))
-> 2037         variable = self.variable.transpose(dims)
   2038         if transpose_coords:
   2039             coords: Dict[Hashable, Variable] = {}

~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/core/variable.py in transpose(self, *dims)
   1388         if len(dims) == 0:
   1389             dims = self.dims[::-1]
-> 1390         dims = tuple(infix_dims(dims, self.dims))
   1391         axes = self.get_axis_num(dims)
   1392         if len(dims) < 2 or dims == self.dims:

~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/core/utils.py in infix_dims(dims_supplied, dims_all)
    724     if set(dims_supplied) ^ set(dims_all):
    725         raise ValueError(
--> 726             f"{dims_supplied} must be a permuted list of {dims_all}, unless ... is included"
    727         )
    728     yield from dims_supplied

ValueError: ('not_existing_dim', 'lat', 'lon', 'time') must be a permuted list of ('time', 'lat', 'lon'), unless ... is included
```
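For what it's worth, a sketch of how this can be sidestepped in more recent xarray releases (this assumes a version that already has the missing_dims keyword): unknown names can be ignored instead of raising.

```python
# Sketch: ignore dimension names that don't exist (requires a newer xarray
# with the missing_dims argument on transpose).
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds['air'].transpose('not_existing_dim', 'lat', 'lon', 'time', missing_dims='ignore')
```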

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4647/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
743165216 MDU6SXNzdWU3NDMxNjUyMTY= 4587 ffill with datetime64 errors ahuang11 15331990 open 0     1 2020-11-15T02:38:39Z 2020-11-15T14:23:19Z   CONTRIBUTOR      

```python
import xarray as xr
import pandas as pd

xr.DataArray(pd.date_range('2020-01-01', '2020-02-01').tolist() + [pd.NaT]).ffill('dim_0')
```

```
<xarray.DataArray (dim_0: 33)>
array(['2020-01-01T00:00:00.000000000', '2020-01-02T00:00:00.000000000',
       '2020-01-03T00:00:00.000000000', '2020-01-04T00:00:00.000000000',
       '2020-01-05T00:00:00.000000000', '2020-01-06T00:00:00.000000000',
       '2020-01-07T00:00:00.000000000', '2020-01-08T00:00:00.000000000',
       '2020-01-09T00:00:00.000000000', '2020-01-10T00:00:00.000000000',
       '2020-01-11T00:00:00.000000000', '2020-01-12T00:00:00.000000000',
       '2020-01-13T00:00:00.000000000', '2020-01-14T00:00:00.000000000',
       '2020-01-15T00:00:00.000000000', '2020-01-16T00:00:00.000000000',
       '2020-01-17T00:00:00.000000000', '2020-01-18T00:00:00.000000000',
       '2020-01-19T00:00:00.000000000', '2020-01-20T00:00:00.000000000',
       '2020-01-21T00:00:00.000000000', '2020-01-22T00:00:00.000000000',
       '2020-01-23T00:00:00.000000000', '2020-01-24T00:00:00.000000000',
       '2020-01-25T00:00:00.000000000', '2020-01-26T00:00:00.000000000',
       '2020-01-27T00:00:00.000000000', '2020-01-28T00:00:00.000000000',
       '2020-01-29T00:00:00.000000000', '2020-01-30T00:00:00.000000000',
       '2020-01-31T00:00:00.000000000', '2020-02-01T00:00:00.000000000',
       'NaT'], dtype='datetime64[ns]')
Dimensions without coordinates: dim_0
```

```
~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/core/computation.py in apply_variable_ufunc(func, signature, exclude_dims, dask, output_dtypes, vectorize, keep_attrs, dask_gufunc_kwargs, args)
    698         )
    699
--> 700     result_data = func(input_data)
    701
    702     if signature.num_outputs == 1:

~/anaconda3/envs/py3/lib/python3.7/site-packages/bottleneck/slow/nonreduce_axis.py in push(a, n, axis)
     49     elif ndim == 0:
     50         return y
---> 51     fidx = ~np.isnan(y)
     52     recent = np.empty(y.shape[:-1])
     53     count = np.empty(y.shape[:-1])

TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
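A possible workaround sketch for 1D data, going through pandas, which handles NaT in datetime64 arrays:

```python
# Workaround sketch (1D only): forward-fill via pandas, then rewrap.
import pandas as pd
import xarray as xr

da = xr.DataArray(pd.date_range('2020-01-01', '2020-02-01').tolist() + [pd.NaT])
filled = da.copy(data=pd.Series(da.values).ffill().values)
```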

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4587/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
653554923 MDU6SXNzdWU2NTM1NTQ5MjM= 4210 Use weighted with coarsen? ahuang11 15331990 closed 0     1 2020-07-08T19:53:28Z 2020-07-08T20:01:35Z 2020-07-08T20:01:35Z CONTRIBUTOR      

I want to do something similar to xesmf's weighted regridding, but without the need to install esmpy, which has a lot of dependencies.

Are variations of the following possible?

```python
ds.weighted(coslat_weights).coarsen(lat=2, lon=2).mean()

ds.coarsen(lat=2, lon=2).weighted(coslat_weights).mean()
```
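In the meantime, a manual sketch of a weighted coarsen (one interpretation of what's being asked, not an existing API): weight the field, coarsen-sum both the weighted field and the weights, and divide.

```python
# Manual weighted-coarsen sketch with cos(lat) weights.
import numpy as np
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
weights = np.cos(np.deg2rad(ds['lat']))

weighted_sum = (ds['air'] * weights).coarsen(lat=5, lon=5, boundary='trim').sum()
weight_sum = weights.broadcast_like(ds['air']).coarsen(lat=5, lon=5, boundary='trim').sum()
weighted_mean = weighted_sum / weight_sum
```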

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4210/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
621202499 MDU6SXNzdWU2MjEyMDI0OTk= 4083 Better default formatting of timedelta with plot method ahuang11 15331990 closed 0     1 2020-05-19T18:43:11Z 2020-05-19T19:39:35Z 2020-05-19T19:39:34Z CONTRIBUTOR      

Currently, it shows nanoseconds.

```python
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds.coords['tau'] = ds['time'] - ds['time'][0]
ds.mean(['lat', 'lon'])['air'].swap_dims({'time': 'tau'}).plot()
```
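A workaround sketch: convert the timedelta coordinate to float days before plotting, so the axis is labelled in days instead of nanoseconds.

```python
# Workaround sketch: plot against tau expressed in days.
import numpy as np
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds.coords['tau'] = (ds['time'] - ds['time'][0]) / np.timedelta64(1, 'D')
ds.mean(['lat', 'lon'])['air'].swap_dims({'time': 'tau'}).plot()
```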

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4083/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
619089111 MDU6SXNzdWU2MTkwODkxMTE= 4066 Feature request: ds.interp_like() keyword to exclude certain dimensions ahuang11 15331990 closed 0     2 2020-05-15T16:15:59Z 2020-05-15T17:28:09Z 2020-05-15T17:28:09Z CONTRIBUTOR      

If I have two datasets and I want to match the lat/lon but not the time, I have to do ds1.interp(lat=ds2['lat'], lon=ds2['lon']). It would be nice if I could do ds1.interp_like(ds2, exclude_dims=['time']).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4066/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
522402552 MDU6SXNzdWU1MjI0MDI1NTI= 3522 CFTimeIndex changes to normal Index after renaming ahuang11 15331990 closed 0     3 2019-11-13T18:42:02Z 2019-11-15T19:49:30Z 2019-11-15T19:49:30Z CONTRIBUTOR      

MCVE Code Sample

Since this code crashes for me and I don't have xarray master, I can't create a tested MCVE code sample at the moment, but I think something along the lines of:

```python
import xarray as xr

ds = xr.Dataset(coords={'time': xr.cftime_range(start='2000', periods=6, freq='2MS', calendar='noleap'),
                        'something': [0, 1, 2, 3]})
print(ds.indexes['time'])
ds = ds.rename({'something': 'something_else'})
print(ds.indexes['time'])
```

Expected Output

```
CFTimeIndex([2000-05-01 12:00:00, 2000-05-02 12:00:00, ... 2001-04-01 12:00:00],
            dtype='object', name='time', length=425)
```

After renaming:

```
Index([2000-05-01 12:00:00, 2000-05-02 12:00:00, ... 2001-04-01 12:00:00],
      dtype='object', name='time', length=425)
```

Problem Description

CFTimeIndex changes to normal Index after renaming

Output of xr.show_versions()

xarray: 0.14.0 pandas: 0.25.2 numpy: 1.17.2 scipy: 1.3.1 netCDF4: 1.5.3 pydap: None h5netcdf: 0.7.4 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: 1.1.0 cfgrib: None iris: None bottleneck: 1.2.1 dask: 2.6.0 distributed: 2.6.0 matplotlib: 3.1.1 cartopy: 0.17.0 seaborn: None numbagg: None setuptools: 41.4.0 pip: 19.3.1 conda: None pytest: 5.2.1 IPython: 7.9.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3522/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
522397677 MDU6SXNzdWU1MjIzOTc2Nzc= 3521 cftime_range example doesn't work ahuang11 15331990 closed 0     1 2019-11-13T18:31:26Z 2019-11-13T18:34:43Z 2019-11-13T18:34:43Z CONTRIBUTOR      

MCVE Code Sample

```python
import xarray as xr

xr.cftime_range(start='2000', periods=6, freq='2MS', calendar='noleap')
```

Expected Output

The output from the example

Problem Description

```
ValueError                                Traceback (most recent call last)
<ipython-input-3-4ff834de4bd2> in <module>
      1 import xarray as xr
----> 2 xr.cftime_range(start='2000', periods=6, freq='2MS', calendar='noleap')

~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/coding/cftime_offsets.py in cftime_range(start, end, periods, freq, normalize, name, closed, calendar)
    961
    962     if start is not None:
--> 963         start = to_cftime_datetime(start, calendar)
    964         start = _maybe_normalize_date(start, normalize)
    965     if end is not None:

~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/coding/cftime_offsets.py in to_cftime_datetime(date_str_or_date, calendar)
    677                 "a calendar type must be provided"
    678             )
--> 679         date, _ = _parse_iso8601_with_reso(get_date_type(calendar), date_str_or_date)
    680         return date
    681     elif isinstance(date_str_or_date, cftime.datetime):

~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/coding/cftimeindex.py in _parse_iso8601_with_reso(date_type, timestr)
    114             # 1.0.3.4.
    115             replace["dayofwk"] = -1
--> 116     return default.replace(**replace), resolution
    117
    118

cftime/_cftime.pyx in cftime._cftime.datetime.replace()

ValueError: Replacing the dayofyr or dayofwk of a datetime is not supported.
```

Output of xr.show_versions()

xarray: 0.14.0 pandas: 0.25.2 numpy: 1.17.2 scipy: 1.3.1 netCDF4: 1.5.3 pydap: None h5netcdf: 0.7.4 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: 1.1.0 cfgrib: None iris: None bottleneck: 1.2.1 dask: 2.6.0 distributed: 2.6.0 matplotlib: 3.1.1 cartopy: 0.17.0 seaborn: None numbagg: None setuptools: 41.4.0 pip: 19.3.1 conda: None pytest: 5.2.1 IPython: 7.9.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3521/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
447268579 MDU6SXNzdWU0NDcyNjg1Nzk= 2981 Plot title using loc keyword doesn't override automated title ahuang11 15331990 closed 0     2 2019-05-22T17:56:17Z 2019-05-22T18:34:03Z 2019-05-22T18:34:03Z CONTRIBUTOR      

```python
import xarray as xr
import matplotlib.pyplot as plt

ds = xr.tutorial.open_dataset('air_temperature')['air'].isel(time=0)
```

This works as expected:

```python
ax = plt.axes()
ds.plot(x='lon', y='lat', ax=ax)
ax.set_title('new_title')
```

This doesn't:

```python
ax = plt.axes()
ds.plot(x='lon', y='lat', ax=ax)
ax.set_title('new_title', loc='left')
```

With a non-default loc, the old automatic title still shows.
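A workaround sketch: matplotlib keeps the center, left, and right titles separately, so the automatic centered title can be cleared before setting the left one.

```python
# Workaround sketch: clear the centered title that xarray set, then add a
# left-aligned title.
import matplotlib.pyplot as plt
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')['air'].isel(time=0)
ax = plt.axes()
ds.plot(x='lon', y='lat', ax=ax)
ax.set_title('')                       # remove the automatic centered title
ax.set_title('new_title', loc='left')
```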

xarray: 0.12.1 pandas: 0.23.4 numpy: 1.15.1 scipy: 1.1.0 netCDF4: 1.4.0 pydap: None h5netcdf: 0.6.1 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.0 nc_time_axis: None PseudonetCDF: None rasterio: 1.0.1 cfgrib: None iris: None bottleneck: 1.2.1 dask: 1.1.1 distributed: 1.25.3 matplotlib: 3.1.0 cartopy: 0.17.0 seaborn: 0.8.1 setuptools: 39.1.0 pip: 19.0.1 conda: 4.6.14 pytest: 3.5.1 IPython: 6.4.0 sphinx: 1.7.4
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2981/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
403350812 MDU6SXNzdWU0MDMzNTA4MTI= 2711 Substituting values based on condition ahuang11 15331990 closed 0     2 2019-01-25T22:03:10Z 2019-01-26T03:22:07Z 2019-01-26T03:22:07Z CONTRIBUTOR      

Is there a more intuitive, built-in way of substituting values based on conditions, without having to flip every logical operator?

```python
import numpy as np
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds['text'] = (('time', 'lat', 'lon'), np.zeros_like(ds['air'].values).astype(str))

ds['text'] = ds['text'].where(ds['air'] < 273, 'above freezing')
ds['text'] = ds['text'].where(ds['air'] > 273, 'below freezing')
ds['text'] = ds['text'].where(ds['air'] != 273, 'freezing')

ds.hvplot('lon', 'lat', z='air', hover_cols=['text']).opts(color_levels=[200, 273, 300])
```

The numpy equivalent (also seems faster by 2x):

```python
above_freezing = np.where(ds['air'].values > 273)
ds['text'].data[above_freezing] = 'above_freezing'

below_freezing = np.where(ds['air'].values < 273)
ds['text'].data[below_freezing] = 'below_freezing'

freezing = np.where(ds['air'].values == 273)
ds['text'].data[freezing] = 'freezing'
```
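For comparison, a sketch that avoids flipping the operators by nesting xr.where, which selects a value based on the condition directly:

```python
# Sketch: choose the label directly from the condition with nested xr.where.
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds['text'] = xr.where(
    ds['air'] > 273, 'above freezing',
    xr.where(ds['air'] < 273, 'below freezing', 'freezing'),
)
```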

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2711/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
349378661 MDExOlB1bGxSZXF1ZXN0MjA3NTA4ODEz 2360 Add option to not roll coords ahuang11 15331990 closed 0     1 2018-08-10T05:14:35Z 2018-08-15T08:11:57Z 2018-08-15T08:11:29Z CONTRIBUTOR   0 pydata/xarray/pulls/2360
  • [x] Closes #1875
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

Will add the other stuff from the checklist soon.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2360/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);