
issues


7 rows where user = 1610850, sorted by updated_at descending


type
  • issue 4
  • pull 3

state
  • closed 5
  • open 2

repo
  • xarray 7

#2449: Add 'to_iris' and 'from_iris' methods to Dataset
id: 365367839 · node_id: MDExOlB1bGxSZXF1ZXN0MjE5MzAzNTk0 · user: jacobtomlinson (1610850) · state: closed · locked: 0 · comments: 7 · created_at: 2018-10-01T09:02:26Z · updated_at: 2023-09-18T09:33:53Z · closed_at: 2023-09-18T09:33:53Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/2449

This PR adds to_iris and from_iris methods to Dataset. I've added these because I frequently find myself writing little list and dictionary comprehensions to pack and unpack Datasets from DataArrays, and Iris CubeLists from Cubes.
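
For context, a minimal sketch of the kind of comprehensions this replaces, assuming an existing Dataset `ds`, Iris installed, and the per-DataArray to_iris/from_iris converters xarray already has:

```python
import xarray as xr

# Pack: one Iris Cube per data variable
cubes = [ds[name].to_iris() for name in ds.data_vars]

# Unpack: rebuild a Dataset from a sequence of Cubes
ds2 = xr.Dataset({cube.name(): xr.DataArray.from_iris(cube) for cube in cubes})
```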

  • [x] Tests added (for all bug fixes or enhancements)
  • [ ] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2449/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull

#4234: Add ability to change underlying array type
id: 659129613 · node_id: MDU6SXNzdWU2NTkxMjk2MTM= · user: jacobtomlinson (1610850) · state: open · locked: 0 · comments: 12 · created_at: 2020-07-17T10:37:34Z · updated_at: 2021-04-19T03:21:54Z · author_association: CONTRIBUTOR

Is your feature request related to a problem? Please describe.

In order to use Xarray with alternative array types like cupy, the user needs to be able to specify the underlying array type without digging into internals.

Right now I'm doing something like this.

```python
import xarray as xr
import cupy as cp

ds = xr.tutorial.load_dataset("air_temperature")
ds.air.data = cp.asarray(ds.air.data)
```

However, this becomes burdensome when there are many data arrays, and it feels brittle and error-prone.

As I see it, the conversion could instead be done in a couple of places: on load, or via a utility method.

Currently Xarray supports NumPy and Dask arrays well. NumPy is the default, and you opt into Dask arrays either by passing the chunks kwarg to an open_ function or by calling .chunk() on a Dataset or DataArray.
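
For reference, the two existing Dask entry points look like this (the filename is a placeholder):

```python
import xarray as xr

# Dask-backed from the start, via the chunks kwarg:
ds = xr.open_dataset("air.nc", chunks={"time": 100})

# ...or converted after opening:
ds = xr.open_dataset("air.nc").chunk({"time": 100})
```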

Side note: There are a few places where the Dask array API bleeds into Xarray for compatibility; the chunks kwarg/.chunk() method is one, and the .compute() method is another. I'm hesitant to do this for other array types, although surfacing the cupy.ndarray.get method could feel natural for cupy users. For now I think it is best to treat Dask as a special case and stay generic for everything else.

Describe the solution you'd like

For other array types, I would like to propose adding an asarray kwarg to the open_ methods and an .asarray() method on Dataset and DataArray. This should take the array type cupy.ndarray, the conversion function cp.asarray, or preferably accept either.

This would result in something like the following.

```python
import xarray as xr
import cupy as cp

ds = xr.open_mfdataset("/path/to/files/*.nc", asarray=cp.ndarray)
```

or

```python
ds = xr.open_mfdataset("/path/to/files/*.nc")
gds = ds.asarray(cp.ndarray)
```

These operations would convert all data arrays to cupy arrays. In the case where ds is backed by Dask arrays, it would use map_blocks to cast each block to the appropriate array type, as sketched below.
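
A sketch of what that Dask branch might look like, reusing ds from the example above and assuming ds.air is dask-backed, with cp.asarray as the conversion function (illustrative, not an actual implementation):

```python
import cupy as cp

# Lazily cast every dask block to cupy; nothing is computed here
ds.air.data = ds.air.data.map_blocks(cp.asarray)
```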

It is still unclear what to do about index variables, which are currently of type pandas.Index. For cupy it may be more appropriate to use a cudf.Index instead, so that both live on the GPU. However, this would add a dependency on cudf and potentially increase complexity here.

Describe alternatives you've considered

Instead of an asarray kwarg/method, something like to_cupy/from_cupy could be added. However, I feel this makes less sense because the object type is not changing; only the underlying data structure is.

Another option would be to go more high-level: a gpu kwarg and to_gpu/from_gpu methods could be added in the same way. This would abstract things even further and give users a choice about hardware rather than software. That would also be a fine solution, but I think it special-cases too much; a more generic solution seems better.

Additional context

Related to #4212.

I'm keen to start implementing this, but would like some discussion/feedback before I dive in.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4234/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue

#4212: Add cupy support
id: 654135405 · node_id: MDU6SXNzdWU2NTQxMzU0MDU= · user: jacobtomlinson (1610850) · state: open · locked: 0 · comments: 7 · created_at: 2020-07-09T15:06:37Z · updated_at: 2021-02-08T16:50:38Z · author_association: CONTRIBUTOR

I intend to work on cupy support in xarray along with @quasiben. Thanks for the warm welcome in the xarray dev meeting yesterday!

I'd like to use this issue to track cupy support and discuss certain design decisions. I appreciate there are also issues such as #4208, #3484 and #3232 which are related to cupy support, but maybe this could represent an umbrella issue for cupy specifically.

The main goal here is to improve support for array types other than numpy and dask in general. However, it is likely there will need to be some cupy specific compatibility code in xarray. (@andersy005 raised issues with calling __array__ on cupy in #3232 for example).

I would love to hear from folks wanting to use cupy with xarray to help build up some use cases for us to develop against. We have some ideas but more are welcome.

My first steps here will be to add some tests which use cupy. These will be skipped in the main CI, but we will also look at running the xarray tests on GPU CI as we develop. A few limited experiments that I've run seem to work, so I'll start with tests which reproduce those.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4212/reactions",
    "total_count": 11,
    "+1": 11,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue

#4232: Support cupy in as_shared_dtype
id: 658374300 · node_id: MDExOlB1bGxSZXF1ZXN0NDUwMzQ1MDgy · user: jacobtomlinson (1610850) · state: closed · locked: 0 · comments: 1 · created_at: 2020-07-16T16:52:30Z · updated_at: 2020-07-27T10:32:48Z · closed_at: 2020-07-24T20:38:58Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/4232

This implements solution 2 for #4231.

cc @quasiben

  • [x] Closes #4231
  • [x] Tests added
  • [x] Passes `isort -rc . && black . && mypy . && flake8`
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4232/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull

#4231: as_shared_dtype coerces scalars into numpy regardless of other array types
id: 658361860 · node_id: MDU6SXNzdWU2NTgzNjE4NjA= · user: jacobtomlinson (1610850) · state: closed · locked: 0 · comments: 0 · created_at: 2020-07-16T16:36:19Z · updated_at: 2020-07-24T20:38:57Z · closed_at: 2020-07-24T20:38:57Z · author_association: CONTRIBUTOR

Related to #4212

When trying to get the Calculating Seasonal Averages from Timeseries of Monthly Means example from the documentation working with cupy, I'm hitting an unexpected `Unsupported type <class 'numpy.ndarray'>` error when calling `ds_unweighted = ds.groupby('time.season').mean('time')`.

I dug through this with @quasiben and it seems to be related to the as_shared_dtype function.

What happened:

Running the MCVE below results in `Unsupported type <class 'numpy.ndarray'>`. Somewhere in the stack there is a call to `_replace_nan(a, 0)`, which replaces the nan values in the cupy array with 0. This ends up as a call to `xarray.core.duck_array_ops.where`, with the "is nan" mask, the 0 and the cupy array being passed.

However, `_where` calls `as_shared_dtype` on the 0 and the cupy array, which converts the 0 to a scalar numpy array.

This scalar numpy array is then passed to cupy's `where` function, which raises the exception.
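
A minimal illustration of the failing combination, assuming cupy is installed (xarray's internals differ, but in the cupy release discussed here the coercion is equivalent):

```python
import numpy as np
import cupy as cp

data = cp.asarray([1.0, float("nan"), 2.0])
fill = np.asarray(0, dtype=data.dtype)  # what as_shared_dtype makes of the scalar 0

# cupy.where rejects the zero-dimensional numpy array:
cp.where(cp.isnan(data), fill, data)  # TypeError: Unsupported type <class 'numpy.ndarray'>
```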

What you expected to happen:

The `cupy.where` function can take either a Python int/float or a cupy array, but not a numpy scalar.

Therefore a few things could be done here:

1. Xarray could leave the int/float as-is instead of converting it to a numpy array.
2. Xarray could convert it to a cupy array.
3. Cupy could be modified to accept numpy scalars.

We threw together a quick fix for option 2, which I'll put in a draft PR, but I'm happy to discuss the alternatives.
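
A hedged sketch of what option 2 could look like (illustrative; the actual change in the draft PR may differ):

```python
import numpy as np

def as_shared_dtype(scalars_or_arrays):
    """Cast inputs to a common dtype, staying in cupy when any input is cupy."""
    try:
        import cupy as cp
    except ImportError:
        cp = None

    if cp is not None and any(isinstance(x, cp.ndarray) for x in scalars_or_arrays):
        # Scalars become cupy arrays rather than numpy ones
        arrays = [cp.asarray(x) for x in scalars_or_arrays]
    else:
        arrays = [np.asarray(x) for x in scalars_or_arrays]
    dtype = np.result_type(*(x.dtype for x in arrays))  # same promotion rules either way
    return [x.astype(dtype, copy=False) for x in arrays]
```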

Minimal Complete Verifiable Example:

```python
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt

import cupy as cp

# Load data
ds = xr.tutorial.open_dataset("rasm").load()

# Move data to GPU
ds.Tair.data = cp.asarray(ds.Tair.data)

ds_unweighted = ds.groupby("time.season").mean("time")

# Calculate the weights by grouping by 'time.season'.
month_length = ds.time.dt.days_in_month
weights = (
    month_length.groupby("time.season") / month_length.groupby("time.season").sum()
)

# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby("time.season").sum().values, np.ones(4))

# Move weights to GPU
weights.data = cp.asarray(weights.data)

# Calculate the weighted average
ds_weighted = ds * weights
ds_weighted = ds_weighted.groupby("time.season")
ds_weighted = ds_weighted.sum(dim="time")
```

Traceback

```python-traceback
Traceback (most recent call last):
  File "/home/jacob/miniconda3/envs/dask/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/jacob/miniconda3/envs/dask/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/jacob/.vscode-server/extensions/ms-python.python-2020.6.91350/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
    cli.main()
  File "/home/jacob/.vscode-server/extensions/ms-python.python-2020.6.91350/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home/jacob/.vscode-server/extensions/ms-python.python-2020.6.91350/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 267, in run_file
    runpy.run_path(options.target, run_name=compat.force_str("__main__"))
  File "/home/jacob/miniconda3/envs/dask/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/home/jacob/miniconda3/envs/dask/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/home/jacob/miniconda3/envs/dask/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/jacob/Projects/pydata/xarray/test_seasonal_averages.py", line 32, in <module>
    ds_weighted = ds_weighted.sum(dim="time")
  File "/home/jacob/Projects/pydata/xarray/xarray/core/common.py", line 84, in wrapped_func
    func, dim, skipna=skipna, numeric_only=numeric_only, **kwargs
  File "/home/jacob/Projects/pydata/xarray/xarray/core/groupby.py", line 994, in reduce
    return self.map(reduce_dataset)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/groupby.py", line 923, in map
    return self._combine(applied)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/groupby.py", line 943, in _combine
    applied_example, applied = peek_at(applied)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/utils.py", line 183, in peek_at
    peek = next(gen)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/groupby.py", line 922, in <genexpr>
    applied = (func(ds, *args, **kwargs) for ds in self._iter_grouped())
  File "/home/jacob/Projects/pydata/xarray/xarray/core/groupby.py", line 990, in reduce_dataset
    return ds.reduce(func, dim, keep_attrs, **kwargs)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/dataset.py", line 4313, in reduce
    **kwargs,
  File "/home/jacob/Projects/pydata/xarray/xarray/core/variable.py", line 1591, in reduce
    data = func(input_data, axis=axis, **kwargs)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/duck_array_ops.py", line 324, in f
    return func(values, axis=axis, **kwargs)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/nanops.py", line 111, in nansum
    a, mask = _replace_nan(a, 0)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/nanops.py", line 21, in _replace_nan
    return where_method(val, mask, a), mask
  File "/home/jacob/Projects/pydata/xarray/xarray/core/duck_array_ops.py", line 274, in where_method
    return where(cond, data, other)
  File "/home/jacob/Projects/pydata/xarray/xarray/core/duck_array_ops.py", line 268, in where
    return _where(condition, *as_shared_dtype([x, y]))
  File "/home/jacob/Projects/pydata/xarray/xarray/core/duck_array_ops.py", line 56, in f
    return wrapped(*args, **kwargs)
  File "<__array_function__ internals>", line 6, in where
  File "cupy/core/core.pyx", line 1343, in cupy.core.core.ndarray.__array_function__
  File "/home/jacob/miniconda3/envs/dask/lib/python3.7/site-packages/cupy/sorting/search.py", line 211, in where
    return _where_ufunc(condition.astype('?'), x, y)
  File "cupy/core/_kernel.pyx", line 906, in cupy.core._kernel.ufunc.__call__
  File "cupy/core/_kernel.pyx", line 90, in cupy.core._kernel._preprocess_args
TypeError: Unsupported type <class 'numpy.ndarray'>
```

Anything else we need to know?:

Environment:

Output of `xr.show_versions()`

```
INSTALLED VERSIONS
------------------
commit: 52043bc57f20438e8923790bca90b646c82442ad
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.3.0-62-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: None
libnetcdf: None

xarray: 0.15.1
pandas: 0.25.3
numpy: 1.18.5
scipy: 1.5.0
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.2.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: 0.9.8.3
iris: None
bottleneck: None
dask: 2.20.0
distributed: 2.20.0
matplotlib: 3.2.2
cartopy: 0.17.0
seaborn: 0.10.1
numbagg: None
pint: None
setuptools: 49.1.0.post20200704
pip: 20.1.1
conda: None
pytest: 5.4.3
IPython: 7.16.1
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4231/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

#4214: Add initial cupy tests
id: 654678508 · node_id: MDExOlB1bGxSZXF1ZXN0NDQ3MzU2ODAw · user: jacobtomlinson (1610850) · state: closed · locked: 0 · comments: 8 · created_at: 2020-07-10T10:20:33Z · updated_at: 2020-07-13T16:32:35Z · closed_at: 2020-07-13T15:07:45Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/4214

Added some initial unit tests for cupy, mainly to create a place for cupy tests to go and to check some basic functionality.

I've created a fixture which constructs the dataset from the Toy weather data example and converts the underlying arrays to cupy. Then I've added a test which checks that after calling operations such as mean and groupby the resulting DataArray is still backed by a cupy array.
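
A condensed sketch of that fixture-plus-assertion pattern (the toy data and names here are illustrative, not the PR's actual tests):

```python
import numpy as np
import pandas as pd
import pytest
import xarray as xr

cp = pytest.importorskip("cupy")  # skip cleanly where cupy/GPU is unavailable


@pytest.fixture
def toy_weather():
    """Small weather-like dataset with its data moved to cupy."""
    times = pd.date_range("2000-01-01", periods=365)
    tmin = 10 + 3 * np.random.randn(365, 3)
    ds = xr.Dataset(
        {"tmin": (("time", "location"), tmin)},
        coords={"time": times, "location": ["IA", "IN", "IL"]},
    )
    ds.tmin.data = cp.asarray(ds.tmin.data)
    return ds


def test_mean_stays_on_gpu(toy_weather):
    result = toy_weather.tmin.groupby("time.season").mean()
    assert isinstance(result.data, cp.ndarray)  # no silent copy back to host
```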

The main penalty when working on GPUs is accidentally shunting data back and forth between the GPU and system memory. Copying data over the PCI bus is slow compared to the rest of the work, so it should be avoided. This first test checks that we are leaving things on the GPU.

Because this data copying is so expensive, cupy has intentionally broken the __array__ method and introduced a .get method instead. This means users have to be explicit about converting back to numpy and copying back to main memory. We will therefore need to add some logic to xarray to call .get in appropriate situations, such as plotting.
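
A small sketch of the explicit hand-off this implies, assuming `da` is a cupy-backed DataArray:

```python
# np.asarray(da.data) would go through __array__, which cupy deliberately rejects;
# the device-to-host copy has to be spelled out instead:
host_values = da.data.get()           # cupy.ndarray -> numpy.ndarray
da_host = da.copy(data=host_values)   # numpy-backed copy, safe for plotting
```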

  • [x] Related to #4212
  • [x] Tests added
  • [x] Passes `isort -rc . && black . && mypy . && flake8`
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4214/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull

#2335: Spurious "Zarr requires uniform chunk sizes excpet for final chunk."
id: 346525275 · node_id: MDU6SXNzdWUzNDY1MjUyNzU= · user: jacobtomlinson (1610850) · state: closed · locked: 0 · comments: 3 · created_at: 2018-08-01T09:43:06Z · updated_at: 2018-08-14T17:15:02Z · closed_at: 2018-08-14T17:15:01Z · author_association: CONTRIBUTOR

Problem description

Using xarray 0.10.7, I'm getting the following error when trying to write out a Zarr store.

```
ValueError: Zarr requires uniform chunk sizes excpet for final chunk. Variable ((3, 3, 3, 3), (3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2), (1, 1, 1), (100, 100, 100, 100, 100, 100), (100, 100, 100, 100, 100, 100, 100, 100)) has incompatible chunks. Consider rechunking using `chunk()`.
```

Those chunks look fine to me; only one of them is inconsistent, and it's the final chunk of the second dimension, which the error message itself says is allowed. Seems related to #2225.
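
A hedged workaround sketch in the meantime (the dimension name and chunk size here are illustrative, not from this dataset):

```python
# Zarr allows a ragged final chunk, but forcing strictly uniform chunks
# sidesteps the (apparently spurious) check before writing:
ds = ds.chunk({"time": 3})
ds.to_zarr("output.zarr")
```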

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2335/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);