issues

12 rows where comments = 3, type = "issue" and user = 2448579 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
1952621896 I_kwDOAMm_X850YqVI 8337 Support rolling with numbagg dcherian 2448579 open 0     3 2023-10-19T16:11:40Z 2023-10-23T15:46:36Z   MEMBER      

Is your feature request related to a problem?

We can do plain reductions, and groupby reductions with numbagg. Rolling is the last one left!

I don't think coarsen will benefit since it's basically a reshape and reduce on that view, so it should already be accelerated. There may be small gains in handling the boundary conditions but that's probably it.
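For illustration, here is a minimal plain-numpy sketch (not xarray's or numbagg's actual implementation) of the windowed reduction that `rolling(...).mean()` performs; the helper name is hypothetical:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rolling_mean(a: np.ndarray, window: int) -> np.ndarray:
    """Trailing-window mean, NaN-padded to the input length."""
    out = np.full(a.shape, np.nan)
    # each output element reduces over the `window` preceding input elements
    out[window - 1:] = sliding_window_view(a, window).mean(axis=-1)
    return out

a = np.arange(5, dtype=float)
print(rolling_mean(a, 3))  # [nan nan  1.  2.  3.]
```

numbagg's appeal is doing this reduction in a single compiled pass instead of materializing the window view.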

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8337/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1333514579 I_kwDOAMm_X85Pe9FT 6902 Flox based groupby operations don't support `dtype` in mean method dcherian 2448579 closed 0     3 2022-08-09T16:38:25Z 2022-10-11T17:45:27Z 2022-10-11T17:45:27Z MEMBER      

Discussed in https://github.com/pydata/xarray/discussions/6901

<sup>Originally posted by **tasansal** August 9, 2022</sup>

We have been using the new groupby logic with Flox and numpy_groupies; however, when we run the following, the dtype is not recognized as a valid argument. This breaks API compatibility for cases where you may not have the acceleration libraries installed. Not sure if this has to be upstream in Flox.

In addition to base Xarray we have the following extras installed: Flox, numpy_groupies, Bottleneck. We do this because our data is `float32` but we want the accumulator in mean to be `float64` for accuracy. One solution is to cast the variable to float64 before mean, which may cause a copy and spike in memory usage. When Flox and numpy_groupies are not installed, it works as expected. We are working with multi-dimensional time-series of weather forecast models.

```python
da = xr.load_mfdataset(...)
da.groupby("time.month").mean(dtype='float64').compute()
```

Here is the end of the traceback and it appears it is on Flox.

```shell
File "/home/altay_sansal_tgs_com/miniconda3/envs/wind-data-mos/lib/python3.10/site-packages/flox/core.py", line 786, in _aggregate
    return _finalize_results(results, agg, axis, expected_groups, fill_value, reindex)
File "/home/altay_sansal_tgs_com/miniconda3/envs/wind-data-mos/lib/python3.10/site-packages/flox/core.py", line 747, in _finalize_results
    finalized[agg.name] = agg.finalize(*squeezed["intermediates"], **agg.finalize_kwargs)
TypeError: <lambda>() got an unexpected keyword argument 'dtype'
```

What is the best way to handle this, maybe fix it in Flox?
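As an aside, what `dtype='float64'` requests can be illustrated in plain numpy (not flox): float32 data, but a float64 accumulator for the mean.

```python
import numpy as np

# float32 data; accumulate the mean at float64 precision without an upfront cast
a = np.full(1_000_000, 0.1, dtype=np.float32)

mean32 = a.mean()                  # accumulates at float32 precision
mean64 = a.mean(dtype=np.float64)  # float64 accumulator, float32 inputs

print(mean32.dtype, mean64.dtype)  # float32 float64
```

Casting first with `a.astype('float64').mean()` gives the same precision but materializes a full float64 copy, which is exactly the memory spike the issue wants to avoid.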
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6902/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1298145215 I_kwDOAMm_X85NYB-_ 6763 Map_blocks should raise nice error if provided template has no dask arrays dcherian 2448579 closed 0     3 2022-07-07T21:58:06Z 2022-07-14T17:42:26Z 2022-07-14T17:42:26Z MEMBER      

Discussed in https://github.com/pydata/xarray/discussions/6762

<sup>Originally posted by **tlsw231** July 7, 2022</sup>

I am trying to use `map_blocks` to: ingest a multi-dimensional array as input, reduce along one dimension and add extra dimensions to the output. Is this possible? I am attaching a simple MRE below that gives me a `zip argument #2 must support iteration` error. Any pointers on what I might be doing wrong?

[My real example is a 3d-dataset with `(time,lat,lon)` dimensions and I am trying to reduce along `time` while adding two new dimensions to the output. I tried so many things and got so many errors, including the one in the title, that I thought it is better to first understand how `map_blocks` works!]

```python
# The goal is to feed in a 2d array, reduce along one dimension and add two new dimensions to the output.
chunks = {}
dummy = xr.DataArray(data=np.random.random([8, 100]), dims=['dim1', 'dim2']).chunk(chunks)

def some_func(func):
    dims = func.dims
    n1 = len(func[func.dims[1]])  # This is 'dim2'; we will average along 'dim1' below in the for loop
    newdim1 = 2; newdim2 = 5
    output = xr.DataArray(np.nan * np.ones([n1, newdim1, newdim2]), dims=[dims[1], 'new1', 'new2'])
    for n in range(n1):
        fmean = func.isel(dim2=n).mean(dims[0]).compute()
        for i in range(newdim1):
            for j in range(newdim2):
                output[n, i, j] = fmean
    return output

# out = some_func(dummy)  # This works

template = xr.DataArray(np.nan * np.ones([len(dummy.dim2), 2, 5]), dims=['dim2', 'new1', 'new2'])
out = xr.map_blocks(some_func, dummy, template=template).compute()  # gives me the error message in the title
```

[Edit: Fixed a typo in the `n1 = len(func[func.dims[1]])` line, of course getting the same error.]
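The "nice error" the title asks for could look something like this duck-typed sketch (the helper name and `FakeArray` stand-in are hypothetical, not xarray's internals). It relies on the fact that an un-chunked (numpy-backed) DataArray exposes `.chunks` as `None`:

```python
# Hypothetical sketch: before dispatching to map_blocks, verify the provided
# template actually contains chunked (dask-backed) data.
def check_template_is_chunked(template) -> None:
    if getattr(template, "chunks", None) is None:
        raise TypeError(
            "map_blocks requires a chunked (dask-backed) template; "
            "call template.chunk() before passing it in."
        )

class FakeArray:      # stand-in for an un-chunked DataArray
    chunks = None

try:
    check_template_is_chunked(FakeArray())
except TypeError as e:
    print(e)
```

A check like this would turn the opaque `zip` error into an actionable message.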
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6763/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1189140909 I_kwDOAMm_X85G4Nmt 6434 concat along dim with mix of scalar coordinate and array coordinates is not right dcherian 2448579 closed 0     3 2022-04-01T02:29:16Z 2022-04-06T01:19:47Z 2022-04-06T01:19:47Z MEMBER      

What happened?

Really hard to describe in words =)

```python
concat = xr.concat([da.isel(time=0), da.isel(time=[1])], dim="time")
xr.align(concat, da, dim="time")
```

fails when concat and da should be identical. This is causing failures in cf-xarray: https://github.com/xarray-contrib/cf-xarray/issues/319

cc @benbovy

What did you expect to happen?

No response

Minimal Complete Verifiable Example

```python
import numpy as np
import xarray as xr

time = xr.DataArray(
    np.array(
        ["2013-01-01T00:00:00.000000000", "2013-01-01T06:00:00.000000000"],
        dtype="datetime64[ns]",
    ),
    dims="time",
    name="time",
)

da = time
concat = xr.concat([da.isel(time=0), da.isel(time=[1])], dim="time")
xr.align(da, concat, join="exact")  # works

da = xr.DataArray(np.ones(time.shape), dims="time", coords={"time": time})
concat = xr.concat([da.isel(time=0), da.isel(time=[1])], dim="time")
xr.align(da, concat, join="exact")
```

Relevant log output

```
ValueError                                Traceback (most recent call last)
Input In [27], in <module>
     17 da = xr.DataArray(np.ones(time.shape), dims="time", coords={"time": time})
     18 concat = xr.concat([da.isel(time=0), da.isel(time=[1])], dim="time")
---> 19 xr.align(da, concat, join="exact")

File ~/work/python/xarray/xarray/core/alignment.py:761, in align(join, copy, indexes, exclude, fill_value, *objects)
    566 """
    567 Given any number of Dataset and/or DataArray objects, returns new
    568 objects with aligned indexes and dimension sizes.
   (...)
    751
    752 """
    753 aligner = Aligner(
    754     objects,
    755     join=join,
   (...)
    759     fill_value=fill_value,
    760 )
--> 761 aligner.align()
    762 return aligner.results

File ~/work/python/xarray/xarray/core/alignment.py:549, in Aligner.align(self)
    547 self.find_matching_unindexed_dims()
    548 self.assert_no_index_conflict()
--> 549 self.align_indexes()
    550 self.assert_unindexed_dim_sizes_equal()
    552 if self.join == "override":

File ~/work/python/xarray/xarray/core/alignment.py:395, in Aligner.align_indexes(self)
    393 if need_reindex:
    394     if self.join == "exact":
--> 395         raise ValueError(
    396             "cannot align objects with join='exact' where "
    397             "index/labels/sizes are not equal along "
    398             "these coordinates (dimensions): "
    399             + ", ".join(f"{name!r} {dims!r}" for name, dims in key[0])
    400         )
    401 joiner = self._get_index_joiner(index_cls)
    402 joined_index = joiner(matching_indexes)

ValueError: cannot align objects with join='exact' where index/labels/sizes are not equal along these coordinates (dimensions): 'time' ('time',)
```

Anything else we need to know?

No response

Environment

xarray main

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6434/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1171932478 I_kwDOAMm_X85F2kU- 6373 Zarr backend should avoid checking for invalid encodings dcherian 2448579 closed 0     3 2022-03-17T04:55:35Z 2022-03-18T10:06:01Z 2022-03-18T04:19:48Z MEMBER      

What is your issue?

The zarr backend has a list of "valid" encodings that needs to be updated any time zarr adds something new (e.g. https://github.com/pydata/xarray/pull/6348)

https://github.com/pydata/xarray/blob/53172cb1e03a98759faf77ef48efaa64676ad24a/xarray/backends/zarr.py#L215-L234

Can we get rid of this? I don't know the backends code well, but won't all our encoding parameters have been removed by this stage? The raise_on_invalid kwarg suggests so.

@tomwhite points out that zarr will raise a warning:

```python
zarr.create((1,), blah=1)
/Users/tom/miniconda/envs/sgkit-3.8/lib/python3.8/site-packages/zarr/creation.py:221: UserWarning: ignoring keyword argument 'blah'
  warn('ignoring keyword argument %r' % k)
<zarr.core.Array (1,) float64>
```
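The behavior quoted above can be sketched in plain Python (the function name and signature are hypothetical, not xarray's backend API): drop unrecognized encoding keys with a warning instead of validating against a hard-coded list.

```python
import warnings

# Hypothetical sketch: filter an encoding dict down to recognized keys,
# warning on (or optionally raising for) anything unknown, as zarr itself does.
def extract_known_encoding(encoding: dict, valid_keys: set,
                           raise_on_invalid: bool = False) -> dict:
    invalid = set(encoding) - valid_keys
    if invalid:
        if raise_on_invalid:
            raise ValueError(f"unexpected encoding parameters: {sorted(invalid)}")
        for k in invalid:
            warnings.warn(f"ignoring keyword argument {k!r}")
    return {k: v for k, v in encoding.items() if k in valid_keys}

enc = extract_known_encoding({"chunks": (10,), "blah": 1},
                             valid_keys={"chunks", "compressor"})
print(enc)  # {'chunks': (10,)}
```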

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6373/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
501108453 MDU6SXNzdWU1MDExMDg0NTM= 3363 user-friendly additions for dask usage dcherian 2448579 closed 0     3 2019-10-01T19:48:27Z 2021-04-19T03:34:18Z 2021-04-19T03:34:18Z MEMBER      

Any thoughts on adding

  1. .chunksize or .nbytes_chunk
  2. .ntasks : this would be len(obj.__dask_graph__())
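A numpy-only sketch of what such a property could compute from a dask-style `chunks` tuple-of-tuples (the function name is hypothetical; `.ntasks` itself would just be `len(obj.__dask_graph__())` as stated above):

```python
import math
import numpy as np

# Hypothetical sketch: given dask-style chunks (one tuple of sizes per axis)
# and a dtype, report the size in bytes of the largest chunk.
def nbytes_chunk(chunks: tuple, dtype) -> int:
    largest = tuple(max(sizes) for sizes in chunks)
    return math.prod(largest) * np.dtype(dtype).itemsize

# 12 x 901 x 1001 array; the largest chunk is (1, 899, 199)
chunks = ((1,) * 12, (1, 899, 1), (1, 199, 1, 199, 1, 199, 1, 199, 1, 199, 1))
print(nbytes_chunk(chunks, "float64"))  # 1431208
```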
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3363/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
685825824 MDU6SXNzdWU2ODU4MjU4MjQ= 4376 wrong chunk sizes in html repr with nonuniform chunks dcherian 2448579 open 0     3 2020-08-25T21:23:11Z 2020-10-07T11:11:23Z   MEMBER      

What happened:

The HTML repr is using the first element in a chunks tuple;

What you expected to happen:

it should be using whatever dask does in this case

Minimal Complete Verifiable Example:

```python
import xarray as xr
import dask

test = xr.DataArray(
    dask.array.zeros(
        (12, 901, 1001),
        chunks=(
            (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
            (1, 899, 1),
            (1, 199, 1, 199, 1, 199, 1, 199, 1, 199, 1),
        ),
    )
)
test.to_dataset(name="a")
```

EDIT: The text repr has the same issue:

```
<xarray.Dataset>
Dimensions:  (dim_0: 12, dim_1: 901, dim_2: 1001)
Dimensions without coordinates: dim_0, dim_1, dim_2
Data variables:
    a        (dim_0, dim_1, dim_2) float64 dask.array<chunksize=(1, 1, 1), meta=np.ndarray>
```
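One plausible summary for nonuniform chunks, sketched in plain Python (an assumption for illustration; dask's exact display rule may differ), is the largest chunk along each axis rather than the first element:

```python
# Hypothetical sketch: summarize dask-style chunks per axis by the largest
# chunk size, instead of naively taking the first element (the bug above).
def summarize_chunks(chunks: tuple) -> tuple:
    return tuple(max(sizes) for sizes in chunks)

chunks = ((1,) * 12, (1, 899, 1), (1, 199, 1, 199, 1, 199, 1, 199, 1, 199, 1))
print(summarize_chunks(chunks))  # (1, 899, 199)
```

For the example above this yields `(1, 899, 199)`, whereas taking the first element gives the misleading `(1, 1, 1)` shown in the repr.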

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4376/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
584429748 MDU6SXNzdWU1ODQ0Mjk3NDg= 3867 macos py38 CI failing dcherian 2448579 closed 0     3 2020-03-19T13:54:10Z 2020-03-29T22:13:26Z 2020-03-29T22:13:26Z MEMBER      

`import matplotlib` is failing when it imports PIL

```python
E   ImportError: dlopen(/usr/local/miniconda/envs/xarray-tests/lib/python3.8/site-packages/PIL/_imaging.cpython-38-darwin.so, 2): Library not loaded: @rpath/libwebp.7.dylib
E     Referenced from: /usr/local/miniconda/envs/xarray-tests/lib/libtiff.5.dylib
E     Reason: Incompatible library version: libtiff.5.dylib requires version 9.0.0 or later, but libwebp.7.dylib provides version 8.0.0

/usr/local/miniconda/envs/xarray-tests/lib/python3.8/site-packages/PIL/Image.py:69: ImportError
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3867/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
499477368 MDU6SXNzdWU0OTk0NzczNjg= 3350 assert_equal and dask dcherian 2448579 closed 0     3 2019-09-27T14:25:14Z 2020-01-10T16:10:57Z 2020-01-10T16:10:57Z MEMBER      

MCVE Code Sample

Example 1

```python
import xarray as xr
import numpy as np

da = xr.DataArray(np.random.randn(10, 20), name="a")
ds = da.to_dataset()
xr.testing.assert_equal(da, da.chunk({"dim_0": 2}))          # works
xr.testing.assert_equal(da.chunk(), da.chunk({"dim_0": 2}))  # works

xr.testing.assert_equal(ds, ds.chunk({"dim_0": 2}))          # works
xr.testing.assert_equal(ds.chunk(), ds.chunk({"dim_0": 2}))  # does not work
```

I get

```
ValueError                                Traceback (most recent call last)
<ipython-input-1-bc8216a67408> in <module>
      8
      9 xr.testing.assert_equal(ds, ds.chunk({"dim_0": 2}))  # works
---> 10 xr.testing.assert_equal(ds.chunk(), ds.chunk({"dim_0": 2}))  # does not work

~/work/python/xarray/xarray/testing.py in assert_equal(a, b)
     56         assert a.equals(b), formatting.diff_array_repr(a, b, "equals")
     57     elif isinstance(a, Dataset):
---> 58         assert a.equals(b), formatting.diff_dataset_repr(a, b, "equals")
     59     else:
     60         raise TypeError("{} not supported by assertion comparison".format(type(a)))

~/work/python/xarray/xarray/core/dataset.py in equals(self, other)
   1322         """
   1323         try:
-> 1324             return self._all_compat(other, "equals")
   1325         except (TypeError, AttributeError):
   1326             return False

~/work/python/xarray/xarray/core/dataset.py in _all_compat(self, other, compat_str)
   1285
   1286         return self._coord_names == other._coord_names and utils.dict_equiv(
-> 1287             self._variables, other._variables, compat=compat
   1288         )
   1289

~/work/python/xarray/xarray/core/utils.py in dict_equiv(first, second, compat)
    335     """
    336     for k in first:
--> 337         if k not in second or not compat(first[k], second[k]):
    338             return False
    339     for k in second:

~/work/python/xarray/xarray/core/dataset.py in compat(x, y)
   1282         # require matching order for equality
   1283         def compat(x: Variable, y: Variable) -> bool:
-> 1284             return getattr(x, compat_str)(y)
   1285
   1286         return self._coord_names == other._coord_names and utils.dict_equiv(

~/work/python/xarray/xarray/core/variable.py in equals(self, other, equiv)
   1558         try:
   1559             return self.dims == other.dims and (
-> 1560                 self._data is other._data or equiv(self.data, other.data)
   1561             )
   1562         except (TypeError, AttributeError):

~/work/python/xarray/xarray/core/duck_array_ops.py in array_equiv(arr1, arr2)
    201         warnings.filterwarnings("ignore", "In the future, 'NAT == x'")
    202         flag_array = (arr1 == arr2) | (isnull(arr1) & isnull(arr2))
--> 203         return bool(flag_array.all())
    204
    205

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/array/core.py in __bool__(self)
   1380             )
   1381         else:
-> 1382             return bool(self.compute())
   1383
   1384     __nonzero__ = __bool__  # python 2

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/base.py in compute(self, **kwargs)
    173         dask.base.compute
    174         """
--> 175         (result,) = compute(self, traverse=False, **kwargs)
    176         return result
    177

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/base.py in compute(*args, **kwargs)
    444     keys = [x.__dask_keys__() for x in collections]
    445     postcomputes = [x.__dask_postcompute__() for x in collections]
--> 446     results = schedule(dsk, keys, **kwargs)
    447     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    448

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/threaded.py in get(dsk, result, cache, num_workers, pool, **kwargs)
     80         get_id=_thread_get_id,
     81         pack_exception=pack_exception,
---> 82         **kwargs
     83     )
     84

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/local.py in get_async(apply_async, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, **kwargs)
    489                     _execute_task(task, data)  # Re-execute locally
    490                 else:
--> 491                     raise_exception(exc, tb)
    492             res, worker_id = loads(res_info)
    493             state["cache"][key] = res

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/compatibility.py in reraise(exc, tb)
    128     if exc.__traceback__ is not tb:
    129         raise exc.with_traceback(tb)
--> 130     raise exc
    131
    132 import pickle as cPickle

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/local.py in execute_task(key, task_info, dumps, loads, get_id, pack_exception)
    231     try:
    232         task, data = loads(task_info)
--> 233         result = _execute_task(task, data)
    234         id = get_id()
    235         result = dumps((result, id))

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/core.py in _execute_task(arg, cache, dsk)
    117         func, args = arg[0], arg[1:]
    118         args2 = [_execute_task(a, cache) for a in args]
--> 119         return func(*args2)
    120     elif not ishashable(arg):
    121         return arg

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/optimization.py in __call__(self, *args)
   1057         if not len(args) == len(self.inkeys):
   1058             raise ValueError("Expected %d args, got %d" % (len(self.inkeys), len(args)))
-> 1059         return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
   1060
   1061     def __reduce__(self):

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/core.py in get(dsk, out, cache)
    147     for key in toposort(dsk):
    148         task = dsk[key]
--> 149         result = _execute_task(task, cache)
    150         cache[key] = result
    151     result = _execute_task(out, cache)

~/miniconda3/envs/dcpy/lib/python3.7/site-packages/dask/core.py in _execute_task(arg, cache, dsk)
    117         func, args = arg[0], arg[1:]
    118         args2 = [_execute_task(a, cache) for a in args]
--> 119         return func(*args2)
    120     elif not ishashable(arg):
    121         return arg

ValueError: operands could not be broadcast together with shapes (0,20) (2,20)
```

Example 2

The relevant xarray line in the previous traceback is `flag_array = (arr1 == arr2) | (isnull(arr1) & isnull(arr2))`, so I tried

```python
(ds.isnull() & ds.chunk({"dim_0": 1}).isnull()).compute()           # works
(ds.chunk().isnull() & ds.chunk({"dim_0": 1}).isnull()).compute()   # does not work?!
```
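The `flag_array` expression from the traceback can be illustrated in plain numpy (a standalone sketch, not xarray's `duck_array_ops` module): two arrays are equivalent where values match or both are NaN.

```python
import numpy as np

# Plain-numpy illustration of xarray's nan-aware elementwise equivalence:
# equal values OR NaN in both positions count as a match.
def array_equiv(a: np.ndarray, b: np.ndarray) -> bool:
    flag = (a == b) | (np.isnan(a) & np.isnan(b))
    return bool(flag.all())

a = np.array([1.0, np.nan, 3.0])
b = np.array([1.0, np.nan, 3.0])
print(array_equiv(a, b))     # True
print(np.array_equal(a, b))  # False: plain equality treats NaN != NaN
```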

```
ValueError                                Traceback (most recent call last)
<ipython-input-4-abdfbeda355a> in <module>
      1 (ds.isnull() & ds.chunk({"dim_0": 1}).isnull()).compute()  # works
----> 2 (ds.chunk().isnull() & ds.chunk({"dim_0": 1}).isnull()).compute()  # does not work?!

~/work/python/xarray/xarray/core/dataset.py in compute(self, **kwargs)
    791         """
    792         new = self.copy(deep=False)
--> 793         return new.load(**kwargs)
    794
    795     def _persist_inplace(self, **kwargs) -> "Dataset":

~/work/python/xarray/xarray/core/dataset.py in load(self, **kwargs)
    645
    646         for k, data in zip(lazy_data, evaluated_data):
--> 647             self.variables[k].data = data
    648
    649         # load everything else sequentially

~/work/python/xarray/xarray/core/variable.py in data(self, data)
    331         data = as_compatible_data(data)
    332         if data.shape != self.shape:
--> 333             raise ValueError("replacement data must match the Variable's shape")
    334         self._data = data
    335

ValueError: replacement data must match the Variable's shape
```

Problem Description

I don't know what's going on here. I expect assert_equal should return True for all these examples.

Our test for isnull with dask always calls load before comparing:

```python
def test_isnull_with_dask():
    da = construct_dataarray(2, np.float32, contains_nan=True, dask=True)
    assert isinstance(da.isnull().data, dask_array_type)
    assert_equal(da.isnull().load(), da.load().isnull())
```

Output of xr.show_versions()

xarray master & dask 2.3.0

```
INSTALLED VERSIONS
------------------
commit: 6ece6a1cf424c3080e216fad8fc8058d3b70aadc
python: 3.7.3 | packaged by conda-forge | (default, Jul  1 2019, 21:52:21) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-64-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2

xarray: 0.13.0+13.g6ece6a1c
pandas: 0.25.1
numpy: 1.17.2
scipy: 1.3.1
netCDF4: 1.5.1.2
pydap: None
h5netcdf: 0.7.4
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: 0.9.7.1
iris: None
bottleneck: 1.2.1
dask: 2.3.0
distributed: 2.3.2
matplotlib: 3.1.1
cartopy: 0.17.0
seaborn: 0.9.0
numbagg: None
setuptools: 41.2.0
pip: 19.2.3
conda: 4.7.11
pytest: 5.1.2
IPython: 7.8.0
sphinx: 2.2.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3350/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
452629448 MDU6SXNzdWU0NTI2Mjk0NDg= 2999 median on dask arrays dcherian 2448579 closed 0     3 2019-06-05T17:37:46Z 2019-12-30T17:46:44Z 2019-12-30T17:46:44Z MEMBER      

Dask has updated its `percentile`/`quantile` implementation: https://github.com/dask/dask/pull/4677

Can we now update our median method to work with dask arrays?
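The relationship this request leans on can be stated in plain numpy: the median is just the 50th percentile, so an updated percentile implementation gives a median for free.

```python
import numpy as np

# The median is the 50th percentile; both reduce to the same value.
a = np.array([7.0, 1.0, 3.0, 9.0, 5.0])
print(np.median(a))          # 5.0
print(np.percentile(a, 50))  # 5.0
```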

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2999/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
399042126 MDU6SXNzdWUzOTkwNDIxMjY= 2673 NaT tests need to be fixed on master dcherian 2448579 closed 0     3 2019-01-14T19:37:45Z 2019-01-15T16:54:56Z 2019-01-15T11:19:59Z MEMBER      

```python
=================================== FAILURES ===================================
____________________ TestVariable.test_index_0d_not_a_time ____________________

self = <xarray.tests.test_variable.TestVariable object at 0x7f0dd7b6bda0>

    def test_index_0d_not_a_time(self):
        d = np.datetime64('NaT', 'ns')
        x = self.cls(['x'], [d])
>       self._assertIndexedLikeNDArray(x, d)

xarray/tests/test_variable.py:197:

self = <xarray.tests.test_variable.TestVariable object at 0x7f0dd7b6bda0>
variable = <xarray.Variable (x: 1)> array(['NaT'], dtype='datetime64[ns]')
expected_value0 = numpy.datetime64('NaT'), expected_dtype = None

    def _assertIndexedLikeNDArray(self, variable, expected_value0,
                                  expected_dtype=None):
        """Given a 1-dimensional variable, verify that the variable is indexed
        like a numpy.ndarray.
        """
        assert variable[0].shape == ()
        assert variable[0].ndim == 0
        assert variable[0].size == 1
        # test identity
        assert variable.equals(variable.copy())
        assert variable.identical(variable.copy())
        # check value is equal for both ndarray and Variable
        with warnings.catch_warnings():
            warnings.filterwarnings('ignore', "In the future, 'NAT == x'")
>           assert variable.values[0] == expected_value0
E           AssertionError: assert numpy.datetime64('NaT') == numpy.datetime64('NaT')
E             -numpy.datetime64('NaT')
E             +numpy.datetime64('NaT')

xarray/tests/test_variable.py:143: AssertionError

________________ TestVariableWithDask.test_index_0d_not_a_time _________________

self = <xarray.tests.test_variable.TestVariableWithDask object at 0x7f0e00bc5978>

    def test_index_0d_not_a_time(self):
        d = np.datetime64('NaT', 'ns')
        x = self.cls(['x'], [d])
>       self._assertIndexedLikeNDArray(x, d)

xarray/tests/test_variable.py:197:

self = <xarray.tests.test_variable.TestVariableWithDask object at 0x7f0e00bc5978>
variable = <xarray.Variable (x: 1)> dask.array<shape=(1,), dtype=datetime64[ns], chunksize=(1,)>
expected_value0 = numpy.datetime64('NaT'), expected_dtype = None

    def _assertIndexedLikeNDArray(self, variable, expected_value0,
                                  expected_dtype=None):
        """Given a 1-dimensional variable, verify that the variable is indexed
        like a numpy.ndarray.
        """
        assert variable[0].shape == ()
        assert variable[0].ndim == 0
        assert variable[0].size == 1
        # test identity
        assert variable.equals(variable.copy())
        assert variable.identical(variable.copy())
        # check value is equal for both ndarray and Variable
        with warnings.catch_warnings():
            warnings.filterwarnings('ignore', "In the future, 'NAT == x'")
>           assert variable.values[0] == expected_value0
E           AssertionError: assert numpy.datetime64('NaT') == numpy.datetime64('NaT')
E             -numpy.datetime64('NaT')
E             +numpy.datetime64('NaT')

xarray/tests/test_variable.py:143: AssertionError

_________________ TestIndexVariable.test_index_0d_not_a_time __________________

self = <xarray.tests.test_variable.TestIndexVariable object at 0x7f0e01063390>

    def test_index_0d_not_a_time(self):
        d = np.datetime64('NaT', 'ns')
        x = self.cls(['x'], [d])
>       self._assertIndexedLikeNDArray(x, d)

xarray/tests/test_variable.py:197:

self = <xarray.tests.test_variable.TestIndexVariable object at 0x7f0e01063390>
variable = <xarray.IndexVariable 'x' (x: 1)> array(['NaT'], dtype='datetime64[ns]')
expected_value0 = numpy.datetime64('NaT'), expected_dtype = None

    def _assertIndexedLikeNDArray(self, variable, expected_value0,
                                  expected_dtype=None):
        """Given a 1-dimensional variable, verify that the variable is indexed
        like a numpy.ndarray.
        """
        assert variable[0].shape == ()
        assert variable[0].ndim == 0
        assert variable[0].size == 1
        # test identity
        assert variable.equals(variable.copy())
        assert variable.identical(variable.copy())
        # check value is equal for both ndarray and Variable
        with warnings.catch_warnings():
            warnings.filterwarnings('ignore', "In the future, 'NAT == x'")
>           assert variable.values[0] == expected_value0
E           AssertionError: assert numpy.datetime64('NaT') == numpy.datetime64('NaT')
E             -numpy.datetime64('NaT')
E             +numpy.datetime64('NaT')

xarray/tests/test_variable.py:143: AssertionError
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2673/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
331387539 MDU6SXNzdWUzMzEzODc1Mzk= 2224 Add axis scaling kwargs to DataArray.plot() dcherian 2448579 closed 0     3 2018-06-11T23:41:42Z 2018-07-31T22:28:44Z 2018-07-31T22:28:44Z MEMBER      

It would be useful to add the boolean kwargs `logx`, `logy`, and `loglog` to `DataArray.plot()`, just as in `pandas.DataFrame.plot()`

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html#pandas.DataFrame.plot
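A hypothetical sketch (the function name is illustrative, not xarray's API) of how such boolean kwargs could translate into matplotlib axis-scale settings, with no plotting dependency needed:

```python
# Translate pandas-style boolean kwargs into matplotlib scale settings;
# the resulting dict could be passed to something like ax.set(**scales).
def axis_scales(logx: bool = False, logy: bool = False,
                loglog: bool = False) -> dict:
    scales = {}
    if logx or loglog:
        scales["xscale"] = "log"
    if logy or loglog:
        scales["yscale"] = "log"
    return scales

print(axis_scales(loglog=True))  # {'xscale': 'log', 'yscale': 'log'}
```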

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2224/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);