issues
3 rows where user = 10554254 sorted by updated_at descending


494210818 (MDExOlB1bGxSZXF1ZXN0MzE4MDA5MjU3) · pull #3312: convert DataArray to DataSet before combine
friedrichknuth (10554254) · open · 10 comments · author_association FIRST_TIME_CONTRIBUTOR
created 2019-09-16T18:37:35Z · updated 2022-06-09T14:50:17Z · pull_request pydata/xarray/pulls/3312

Enables combine_by_coords on DataArrays by converting each DataArray to a Dataset before proceeding.

As mentioned in #3248, this will still fail if the DataArray is unnamed, but at least the error message tells the user why.

Previously, combining both named

```python
import numpy as np
import xarray as xr

da1 = xr.DataArray(name='foo', data=np.random.rand(3, 3),
                   coords=[('x', [1, 2, 3]), ('y', [1, 2, 3])])
da2 = xr.DataArray(name='foo2', data=np.random.rand(3, 3),
                   coords=[('x', [5, 6, 7]), ('y', [5, 6, 7])])

xr.combine_by_coords([da1, da2])
```

and unnamed DataArrays

```python
da1 = xr.DataArray(data=np.random.rand(3, 3),
                   coords=[('x', [1, 2, 3]), ('y', [1, 2, 3])])
da2 = xr.DataArray(data=np.random.rand(3, 3),
                   coords=[('x', [5, 6, 7]), ('y', [5, 6, 7])])

xr.combine_by_coords([da1, da2])
```

failed with `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`

With this PR, combining the named DataArrays results in a combined Dataset, while the unnamed example raises `ValueError: unable to convert unnamed DataArray to a Dataset without providing an explicit name`.
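The name check this PR surfaces can be sketched in plain Python. This is a simplified stand-in for illustration only, not xarray's actual implementation; `to_dataset_like` and `combine_by_coords_sketch` are hypothetical names, and plain dicts stand in for Dataset objects:

```python
def to_dataset_like(name, data):
    """Simplified stand-in for DataArray.to_dataset(): a name is required,
    mirroring the explanatory error this PR raises for unnamed DataArrays."""
    if name is None:
        raise ValueError(
            "unable to convert unnamed DataArray to a Dataset "
            "without providing an explicit name"
        )
    return {name: data}

def combine_by_coords_sketch(arrays):
    """Convert each (name, data) pair to a dataset-like dict, then merge."""
    combined = {}
    for name, data in arrays:
        combined.update(to_dataset_like(name, data))
    return combined

# Named arrays combine into a single dataset-like mapping:
print(combine_by_coords_sketch([("foo", [1, 2]), ("foo2", [3, 4])]))
# {'foo': [1, 2], 'foo2': [3, 4]}

# An unnamed array fails early with the explanatory message:
try:
    combine_by_coords_sketch([(None, [1, 2])])
except ValueError as err:
    print(err)
```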

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/3312/reactions)
repo xarray (13221727) · type pull
494906646 (MDU6SXNzdWU0OTQ5MDY2NDY=) · issue #3315: xr.combine_nested() fails when passed nested DataSets
friedrichknuth (10554254) · open · 8 comments · author_association NONE
created 2019-09-17T23:47:44Z · updated 2021-07-08T17:42:53Z

xr.__version__ is '0.13.0'.

xr.combine_nested() works when passed a nested list of DataArray objects:

```python
da1 = xr.DataArray(name="a", data=[[0]], dims=["x", "y"])
da2 = xr.DataArray(name="b", data=[[1]], dims=["x", "y"])
da3 = xr.DataArray(name="a", data=[[2]], dims=["x", "y"])
da4 = xr.DataArray(name="b", data=[[3]], dims=["x", "y"])

xr.combine_nested([[da1, da2], [da3, da4]], concat_dim=["x", "y"])
```

returns

```
<xarray.DataArray 'a' (x: 2, y: 2)>
array([[0, 1],
       [2, 3]])
Dimensions without coordinates: x, y
```

but fails if passed a nested list of Dataset objects:

```python
ds1 = da1.to_dataset()
ds2 = da2.to_dataset()
ds3 = da3.to_dataset()
ds4 = da4.to_dataset()

xr.combine_nested([[ds1, ds2], [ds3, ds4]], concat_dim=["x", "y"])
```

returns

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-8-c0035883fc68> in <module>
      3 ds3 = da3.to_dataset()
      4 ds4 = da4.to_dataset()
----> 5 xr.combine_nested([[ds1, ds2], [ds3, ds4]], concat_dim=["x", "y"])

~/repos/contribute/xarray/xarray/core/combine.py in combine_nested(datasets, concat_dim, compat, data_vars, coords, fill_value, join)
    462         ids=False,
    463         fill_value=fill_value,
--> 464         join=join,
    465     )
    466

~/repos/contribute/xarray/xarray/core/combine.py in _nested_combine(datasets, concat_dims, compat, data_vars, coords, ids, fill_value, join)
    305         coords=coords,
    306         fill_value=fill_value,
--> 307         join=join,
    308     )
    309     return combined

~/repos/contribute/xarray/xarray/core/combine.py in _combine_nd(combined_ids, concat_dims, data_vars, coords, compat, fill_value, join)
    196             compat=compat,
    197             fill_value=fill_value,
--> 198             join=join,
    199         )
    200     (combined_ds,) = combined_ids.values()

~/repos/contribute/xarray/xarray/core/combine.py in _combine_all_along_first_dim(combined_ids, dim, data_vars, coords, compat, fill_value, join)
    218         datasets = combined_ids.values()
    219         new_combined_ids[new_id] = _combine_1d(
--> 220             datasets, dim, compat, data_vars, coords, fill_value, join
    221         )
    222     return new_combined_ids

~/repos/contribute/xarray/xarray/core/combine.py in _combine_1d(datasets, concat_dim, compat, data_vars, coords, fill_value, join)
    246             compat=compat,
    247             fill_value=fill_value,
--> 248             join=join,
    249         )
    250     except ValueError as err:

~/repos/contribute/xarray/xarray/core/concat.py in concat(objs, dim, data_vars, coords, compat, positions, fill_value, join)
    131             "objects, got %s" % type(first_obj)
    132         )
--> 133     return f(objs, dim, data_vars, coords, compat, positions, fill_value, join)
    134
    135

~/repos/contribute/xarray/xarray/core/concat.py in _dataset_concat(datasets, dim, data_vars, coords, compat, positions, fill_value, join)
    363     for k in datasets[0].variables:
    364         if k in concat_over:
--> 365             vars = ensure_common_dims([ds.variables[k] for ds in datasets])
    366             combined = concat_vars(vars, dim, positions)
    367             assert isinstance(combined, Variable)

~/repos/contribute/xarray/xarray/core/concat.py in <listcomp>(.0)
    363     for k in datasets[0].variables:
    364         if k in concat_over:
--> 365             vars = ensure_common_dims([ds.variables[k] for ds in datasets])
    366             combined = concat_vars(vars, dim, positions)
    367             assert isinstance(combined, Variable)

~/repos/contribute/xarray/xarray/core/utils.py in __getitem__(self, key)
    383
    384     def __getitem__(self, key: K) -> V:
--> 385         return self.mapping[key]
    386
    387     def __iter__(self) -> Iterator[K]:

KeyError: 'a'
```
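The mechanism behind the KeyError can be reproduced with plain dicts standing in for each Dataset's `.variables` mapping. This is a simplified sketch of what the traceback shows, not xarray's actual code:

```python
# Two "datasets" whose data variables have different names ("a" vs "b"),
# as when da1/da3 are named "a" and da2/da4 are named "b" above:
datasets = [{"a": [[0]]}, {"b": [[1]]}]
concat_over = {"a", "b"}  # names slated for concatenation across datasets

missing = None
try:
    # Mirrors _dataset_concat: iterate variable names of the first dataset
    # and look each one up in every dataset in the list.
    for k in datasets[0]:
        if k in concat_over:
            vars_ = [ds[k] for ds in datasets]  # "a" is absent from datasets[1]
except KeyError as err:
    missing = err.args[0]

print("KeyError:", repr(missing))  # KeyError: 'a'
```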

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/3315/reactions)
repo xarray (13221727) · type issue
212561278 (MDU6SXNzdWUyMTI1NjEyNzg=) · issue #1301: open_mfdataset() significantly slower on 0.9.1 vs. 0.8.2
friedrichknuth (10554254) · closed · 17 comments · author_association NONE
created 2017-03-07T21:16:53Z · updated 2017-11-16T15:02:48Z · closed 2017-11-16T15:02:00Z

I noticed a big speed discrepancy between xarray versions 0.8.2 and 0.9.1 when using open_mfdataset() on a dataset ~1.2 GB in size, consisting of 3 files, with netcdf4 as the engine. 0.8.2 was run first, so this is probably not a disk-caching issue.

Test

```python
import xarray as xr
import time

start_time = time.time()
ds0 = xr.open_mfdataset('./*.nc')
print("--- %s seconds ---" % (time.time() - start_time))
```

Result

xarray==0.8.2, dask==0.11.1, netcdf4==1.2.4

--- 0.736030101776 seconds ---

xarray==0.9.1, dask==0.13.0, netcdf4==1.2.4

--- 52.2800869942 seconds ---
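A small reusable harness makes this kind of before/after comparison easy to rerun across environments. This is a generic sketch; the `timed` helper is not part of xarray:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print elapsed wall-clock time for the enclosed block, matching the
    format used in the test above."""
    start = time.time()
    yield
    print("--- %s: %s seconds ---" % (label, time.time() - start))

# Hypothetical usage against the files from the report (requires xarray
# and the ./*.nc files, so it is commented out here):
# with timed("open_mfdataset"):
#     ds0 = xr.open_mfdataset('./*.nc')

with timed("example"):
    sum(range(1000))
```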

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/1301/reactions)
state_reason completed · repo xarray (13221727) · type issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
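The schema above can be exercised with Python's stdlib sqlite3 to reproduce this page's query (rows where user = 10554254, sorted by updated_at descending). This is a sketch: the table is trimmed to the columns the query needs, and the foreign-key targets are omitted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed version of the issues schema above:
conn.execute("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    number INTEGER, title TEXT, user INTEGER,
    state TEXT, updated_at TEXT, type TEXT
)""")

# The three rows shown on this page (key columns only):
rows = [
    (494210818, 3312, "convert DataArray to DataSet before combine",
     10554254, "open", "2022-06-09T14:50:17Z", "pull"),
    (494906646, 3315, "xr.combine_nested() fails when passed nested DataSets",
     10554254, "open", "2021-07-08T17:42:53Z", "issue"),
    (212561278, 1301, "open_mfdataset() significantly slower on 0.9.1 vs. 0.8.2",
     10554254, "closed", "2017-11-16T15:02:48Z", "issue"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as strings, so ORDER BY works directly:
numbers = [n for (n,) in conn.execute(
    "SELECT number FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (10554254,))]
print(numbers)  # [3312, 3315, 1301]
```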
Powered by Datasette · About: xarray-datasette