issues

4 rows where repo = 13221727 and user = 8982598 sorted by updated_at descending

#1571 · to_netcdf fails for engine=h5netcdf when using dask-backed arrays
id: 257079041 · node_id: MDU6SXNzdWUyNTcwNzkwNDE= · user: jcmgray (8982598) · state: closed · locked: 0 · comments: 2
created_at: 2017-09-12T15:08:27Z · updated_at: 2019-02-12T05:39:19Z · closed_at: 2019-02-12T05:39:19Z · author_association: CONTRIBUTOR

When using dask-backed datasets/arrays it does not seem possible to use the 'h5netcdf' engine to write to disk:

``` python
import xarray as xr

ds = xr.Dataset({'a': ('x', [1, 2])}, {'x': [3, 4]}).chunk()
ds.to_netcdf("test.h5", engine='h5netcdf')
```

results in the error:

```bash
...

h5py/h5a.pyx in h5py.h5a.open()

KeyError: "Can't open attribute (can't locate attribute: 'dask')"
```

Not sure if this is an xarray or h5netcdf issue - or some inherent limitation, in which case apologies!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1571/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#2189 · docs: add xyzpy to projects
id: 326834855 · node_id: MDExOlB1bGxSZXF1ZXN0MTkwNzk4NTYw · user: jcmgray (8982598) · state: closed · locked: 0 · comments: 1
created_at: 2018-05-27T17:38:44Z · updated_at: 2018-05-27T20:46:07Z · closed_at: 2018-05-27T20:45:38Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/2189

Just adds xyzpy to the list of projects utilizing xarray.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2189/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#742 · merge and align DataArrays/Datasets on different domains
id: 130753818 · node_id: MDU6SXNzdWUxMzA3NTM4MTg= · user: jcmgray (8982598) · state: closed · locked: 0 · comments: 11
created_at: 2016-02-02T17:27:17Z · updated_at: 2017-01-23T22:42:18Z · closed_at: 2017-01-23T22:42:18Z · author_association: CONTRIBUTOR

Firstly, I think xarray is great, and for the type of physics simulations I run, n-dimensional labelled arrays are exactly what I need. But, and I may be missing something, is there a way to merge (or concatenate/update) DataArrays with different domains on the same coordinates?

For example consider this setup:

``` python
import xarray as xr

x1 = [100]
y1 = [1, 2, 3, 4, 5]
dat1 = [[101, 102, 103, 104, 105]]

x2 = [200]
y2 = [3, 4, 5, 6]  # different size and domain
dat2 = [[203, 204, 205, 206]]

da1 = xr.DataArray(dat1, dims=['x', 'y'], coords={'x': x1, 'y': y1})
da2 = xr.DataArray(dat2, dims=['x', 'y'], coords={'x': x2, 'y': y2})
```

I would like to aggregate such DataArrays into a new, single DataArray with nan padding such that:

``` python
merge(da1, da2, align=True)  # made up syntax
<xarray.DataArray (x: 2, y: 6)>
array([[ 101.,  102.,  103.,  104.,  105.,   nan],
       [  nan,   nan,  203.,  204.,  205.,  206.]])
Coordinates:
  * x        (x) int64 100 200
  * y        (y) int64 1 2 3 4 5 6
```

Here is a quick function I wrote to do this, but I would be worried about the performance of 'expanding' the new data to the old data's size every iteration (i.e. supposing that the first argument is a large DataArray that you are adding to, but which doesn't necessarily contain the dimensions already).

``` python
def xrmerge(*das, accept_new=True):
    da = das[0]
    for new_da in das[1:]:
        # Expand both to have same dimensions, padding with NaN
        da, new_da = xr.align(da, new_da, join='outer')
        # Fill NaNs one way or the other re. accept_new
        da = new_da.fillna(da) if accept_new else da.fillna(new_da)
    return da
```

Might this be (or is this already!) possible in simpler form in xarray? I know Datasets have merge and update methods but I couldn't make them work as above. I also notice there are possible plans ( #417 ) to introduce a merge function for DataArrays.
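For reference, the NaN padding that `xr.align(da, new_da, join='outer')` performs for the example above can be sketched in plain numpy (coordinate values copied from the example; the helper `pad_to` is hypothetical, for illustration only):

``` python
import numpy as np

def pad_to(values, coords, union):
    # Hypothetical helper: place `values` at their coordinate
    # positions along the union of y-coordinates, NaN elsewhere.
    row = np.full(len(union), np.nan)
    for v, c in zip(values, coords):
        row[union.index(c)] = v
    return row

y1 = [1, 2, 3, 4, 5]
y2 = [3, 4, 5, 6]
y_union = sorted(set(y1) | set(y2))  # [1, 2, 3, 4, 5, 6]

merged = np.vstack([
    pad_to([101, 102, 103, 104, 105], y1, y_union),
    pad_to([203, 204, 205, 206], y2, y_union),
])
# NaN appears exactly where a row has no data on the union domain
```

This is only the alignment step; the `fillna` calls in `xrmerge` then decide which array wins where both have data.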

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/742/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#996 · add 'no_conflicts' as compat option for merging non-conflicting data
id: 174404136 · node_id: MDExOlB1bGxSZXF1ZXN0ODM1NDgwNzU= · user: jcmgray (8982598) · state: closed · locked: 0 · comments: 8
created_at: 2016-08-31T23:38:00Z · updated_at: 2016-09-15T16:07:16Z · closed_at: 2016-09-15T16:07:16Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/996

This solves #742 and partially #835 (it does not address a combine_first option yet). It essentially adds a notnull_equals method to Variable (and currently DataArray/Dataset) that merge can use to compare objects, subsequently combining them with 'fillna'. Used as such:

``` python
import xarray as xr

ds1 = xr.Dataset(data_vars={'x': ('a', [1, 2])})
ds2 = xr.Dataset(data_vars={'x': ('a', [2, 3])}, coords={'a': [1, 2]})
xr.merge([ds1, ds2], compat='notnull_equals')
<xarray.Dataset>
Dimensions:  (a: 3)
Coordinates:
  * a        (a) int64 0 1 2
Data variables:
    x        (a) float64 1.0 2.0 3.0
```
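The compare-then-combine logic can be illustrated in plain numpy (a sketch of the idea only, not the actual xarray implementation): two aligned variables are compatible if they agree wherever both are non-null, and are then combined fillna-style.

``` python
import numpy as np

# The two variables after outer alignment (NaN where a dataset has no value)
x1 = np.array([1.0, 2.0, np.nan])
x2 = np.array([np.nan, 2.0, 3.0])

# 'notnull_equals'-style check: equal wherever both are non-null
both = ~np.isnan(x1) & ~np.isnan(x2)
compatible = bool(np.all(x1[both] == x2[both]))

# fillna-style combine: take x1 where present, else fall back to x2
combined = np.where(np.isnan(x1), x2, x1)
```

Here `compatible` is True (the overlapping value 2.0 agrees) and `combined` is the merged `[1.0, 2.0, 3.0]`.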

Should be very easy to add other merging options, such as overwriting left to right and vice versa.

TODO

  • docs
  • tests for various type combinations
  • some of the exception handling is mirrored from equals and might not be necessary (e.g. for numpy structured arrays).

ISSUES

  • It seemed natural to add notnull_equals as a method to Dataset, but it might be unnecessary/not useful. Current version is conservative in that it still requires aligned Datasets with all the same variables.
  • Due to the float nature of NaN, type is sometimes not preserved, e.g. merging two int arrays yields a float array, even when the final array has no NaN values itself.
  • float rounding errors can cause unintended failures, so a notnull_allclose option might be nice.

Currently all tests are passing (6 new), and there are no flake8 errors on the diff...
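The dtype point from the ISSUES list above can be seen directly in plain numpy (a minimal sketch): any array that must hold NaN during alignment is necessarily floating-point, so an integer dtype is lost even when the final merged result contains no NaN at all.

``` python
import numpy as np

ints = np.array([1, 2, 3])             # integer dtype
padded = np.full(4, np.nan)            # must be float64 to hold NaN
padded[:3] = ints                      # int values are cast to float
filled = np.where(np.isnan(padded), 4, padded)  # no NaN remains...
# ...but the result is still float64, not the original integer dtype
```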

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/996/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 33.766ms · About: xarray-datasette