issues

4 rows where type = "issue" and user = 17951292 sorted by updated_at descending

issue #894: Dataset variable reference fails after renaming — opened by dgoldwx2112 (user 17951292) · state: closed · 4 comments · created 2016-07-01T15:04:15Z · closed 2016-08-02T17:39:11Z · author association: NONE · id 163414759

Using xarray v0.7.2, I've encountered what looks to be a bug whereby referencing a renamed variable raises a KeyError:

# compute anomalies
climo = ds.groupby('time.dayofyear').mean('time')
anom = ds.groupby('time.dayofyear') - climo
anom.rename({'air': 't2manom'})

# Output
<xarray.Dataset>
Dimensions:    (lat: 37, level: 1, lon: 192, time: 13514)
Coordinates:
  * lat        (lat) float32 88.542 86.6531 84.7532 82.8508 80.9473 79.0435 ...
  * level      (level) float32 2.0
  * lon        (lon) float32 0.0 1.875 3.75 5.625 7.5 9.375 11.25 13.125 ...
    dayofyear  (time) int32 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
  * time       (time) datetime64[ns] 1979-01-01 1979-01-02 1979-01-03 ...
Data variables:
    t2manom    (time, level, lat, lon) float64 -6.342 -6.476 -6.522 -6.579 ...

anom['t2manom'].values[0:5]

Output:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-10-05206d711de3> in <module>()
      8 #del normalized['dayofyear']
      9 #normalized
---> 10 anom['t2manom'].values[0:5]

/usr/local/lib/python2.7/dist-packages/xarray/core/dataset.pyc in __getitem__(self, key)
    527 
    528         if hashable(key):
--> 529             return self._construct_dataarray(key)
    530         else:
    531             return self._copy_listed(np.asarray(key))

/usr/local/lib/python2.7/dist-packages/xarray/core/dataset.pyc in _construct_dataarray(self, name)
    476             variable = self._variables[name]
    477         except KeyError:
--> 478             _, name, variable = _get_virtual_variable(self._variables, name)
    479 
    480         coords = OrderedDict()

/usr/local/lib/python2.7/dist-packages/xarray/core/dataset.pyc in _get_virtual_variable(variables, key)
     43     split_key = key.split('.', 1)
     44     if len(split_key) != 2:
---> 45         raise KeyError(key)
     46 
     47     ref_name, var_name = split_key

KeyError: 't2manom'

However, referencing the OLD variable name ('air' in this case) produces the output I expected from the NEW variable name:

anom['air'].values[0:5]

Output:

array([[[[ -6.34243229,  -6.47621607,  -6.52189175, ...,  -6.3105404 ,
           -6.33972959,  -6.30999986],
         [ -9.18891871,  -9.32432412,  -9.57216195, ...,  -8.67081062,
           -8.89675656,  -9.09081061],
         [ -8.8072971 ,  -9.27054033,  -9.69972951, ...,  -7.53378362,
           -7.93783766,  -8.3737836 ],
         ...,
         [ -4.30864855,  -3.5656756 ,  -1.10594592, ...,  -1.70405402,
           -2.53135129,  -2.92999993],
         [ -1.197027  ,  -1.34837835,  -0.5481081 , ...,   0.01810811,
            0.01540541,  -0.62756755],
         [  0.47135134,   0.22297297,  -0.50756756, ...,   0.63297296,
            0.65837836,   0.58891891]]],

[...omitted values...]

Am I doing something wrong here?
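For what it's worth, the symptoms above are consistent with rename returning a new object rather than modifying anom in place: since the result of anom.rename(...) is never assigned, anom would still carry 'air'. A minimal plain-Python sketch of that non-mutating semantics (illustrative only, not xarray code):

```python
# Sketch of non-mutating rename semantics: the original mapping is left
# untouched and a renamed copy is returned, mirroring an API where
# rename returns a new object instead of editing in place.
def rename(variables, name_map):
    return {name_map.get(k, k): v for k, v in variables.items()}

ds = {'air': [-6.342, -6.476]}          # stand-in for the dataset's variables
renamed = rename(ds, {'air': 't2manom'})

assert 'air' in ds                      # original keeps the old name
assert 't2manom' in renamed             # only the returned copy is renamed
```

Under that reading, assigning the result, e.g. `anom = anom.rename({'air': 't2manom'})`, would make the new name visible.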

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/894/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
issue #919: ValueError: encountered unexpected variable nbnds — opened by dgoldwx2112 (user 17951292) · state: closed · 3 comments · created 2016-07-26T18:59:02Z · closed 2016-07-27T22:37:02Z · author association: NONE · id 167684282

Using the recipe presented near the bottom of http://xarray.pydata.org/en/stable/io.html for reading in multiple files via OPeNDAP, I encounter the error referenced in the subject line.

Here is the call to read_netcdfs:

base = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.dailyavgs/surface_gauss/air.2m.gauss.%4u.nc'
files = [base % d for d in range(1948, 2016, 1)]
ds_combined = read_netcdfs(files, dim='time', transform_func=None)

I am using the option decode_cf=False in xr.open_dataset.

Here is the entire content of the error message:

ValueError                                Traceback (most recent call last)
<ipython-input-5-024a3f9ac990> in <module>()
      2 base = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.dailyavgs/surface_gauss/air.2m.gauss.%4u.nc'
      3 files = [base % d for d in range(1948,2016,1)]
----> 4 ds_combined = read_netcdfs(files,dim='time',transform_func=None)

<ipython-input-4-36b274dcd63c> in read_netcdfs(files, dim, transform_func)
     18         paths = files
     19     datasets = [process_one_path(p) for p in paths]
---> 20     combined = xr.concat(datasets, dim)
     21     return combined

/usr/local/lib/python2.7/dist-packages/xarray/core/combine.pyc in concat(objs, dim, data_vars, coords, compat, positions, indexers, mode, concat_over)
    112         raise TypeError('can only concatenate xarray Dataset and DataArray '
    113                         'objects, got %s' % type(first_obj))
--> 114     return f(objs, dim, data_vars, coords, compat, positions)
    115 
    116 

/usr/local/lib/python2.7/dist-packages/xarray/core/combine.pyc in _dataset_concat(datasets, dim, data_vars, coords, compat, positions)
    231         for k, v in iteritems(ds.variables):
    232             if k not in result_vars and k not in concat_over:
--> 233                 raise ValueError('encountered unexpected variable %r' % k)
    234             elif (k in result_coord_names) != (k in ds.coords):
    235                 raise ValueError('%r is a coordinate in some datasets but not '

ValueError: encountered unexpected variable u'nbnds'

Any ideas as to how I can avoid triggering this error?
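The traceback shows concat rejecting any variable that appears in a later dataset but is absent from the first (and is not being concatenated over). A plain-Python sketch of that check, plus the usual workaround of dropping the extra variable from every dataset before concatenating (variable names and the transform_func idea are illustrative, not a confirmed fix from the maintainers):

```python
# Sketch of the check that raises in _dataset_concat: a variable present in
# one dataset but absent from the first, and not in concat_over, is
# "unexpected".
def check_consistent(datasets, concat_over=()):
    expected = set(datasets[0])
    for ds in datasets[1:]:
        for k in ds:
            if k not in expected and k not in concat_over:
                raise ValueError('encountered unexpected variable %r' % k)

# Workaround sketch: normalize every dataset to a shared variable set first
# (in read_netcdfs this could live in transform_func).
def drop_vars(ds, names):
    return {k: v for k, v in ds.items() if k not in names}

a = {'air': [1.0]}
b = {'air': [2.0], 'nbnds': [0, 1]}        # extra variable, as in the error
cleaned = [drop_vars(ds, {'nbnds'}) for ds in (a, b)]
check_consistent(cleaned)                  # no error after dropping
```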

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/919/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
issue #891: BUG: Test for dask version erroneously fails when calling xr.mfdataset() — opened by dgoldwx2112 (user 17951292) · state: closed · 2 comments · created 2016-06-28T16:17:16Z · closed 2016-07-20T04:29:34Z · author association: NONE · id 162726984

I have the latest version of dask installed, but I get the following exception when using xr.mfdataset():

ImportError: xarray requires dask version 0.6 or newer

The problem appears to be that the latest version is 0.10 (i.e., dask.__version__ == '0.10.0'), but the exception is raised anyway because the version check compares strings, and '0.10' sorts before '0.6' lexicographically.
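This is the classic string-versus-numeric version comparison bug: character-by-character, '1' < '6' decides the outcome, so '0.10.0' looks older than '0.6'. A small sketch of the difference (the parsing helper is illustrative, not xarray's actual check):

```python
# Lexicographic string comparison misorders multi-digit version components;
# comparing tuples of parsed integers gives the intended ordering.
def version_tuple(v):
    return tuple(int(part) for part in v.split('.'))

assert '0.10.0' < '0.6'                                # string compare: wrong
assert version_tuple('0.10.0') > version_tuple('0.6')  # numeric: correct
```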

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/891/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
issue #900: How to apply function to two (or more) variables simultaneously — opened by dgoldwx2112 (user 17951292) · state: closed · 3 comments · created 2016-07-18T21:14:57Z · closed 2016-07-19T01:31:46Z · author association: NONE · id 166195300

I have two datasets, fcst and obs, each with dimensions (time, lat, lon). The first contains predicted values on a lat x lon grid for a given lead time, and the second contains the corresponding verifying observations.

I want to compute skill scores at each time, but this obviously involves applying a function to variables from both datasets. Moreover, the two fields (fcst and obs) need to be cosine weighted first (which involves the coordinate 'lat'). Furthermore, I wish to align the forecasts and obs in time, since there may be missing values of each at different times. I tried aligning the datasets using xr.align but got all kinds of errors when trying to use the resulting new dataset.

I suppose I could extract the values from each dataset, taking care to use indexing to extract the common times, and then use standard numpy operations and the like. But before I go that route: is there a methodology for using xarray to do such a computation? If I were doing a single-variable computation, it would be easy to use the groupby and apply methods. TIA for any advice!
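The fallback route described above (intersect the time axes, subset both arrays to the common times, apply a weighted binary score) can be sketched in plain numpy. All names and the stand-in metric (cosine-latitude-weighted RMSE) are illustrative, not the poster's actual data or score:

```python
import numpy as np

def weighted_score(fcst, obs, t_fcst, t_obs, lats):
    """Align fcst and obs on shared times, then compute a
    cosine-latitude-weighted RMSE per time step (stand-in skill metric)."""
    common = np.intersect1d(t_fcst, t_obs)       # times present in both
    f = fcst[np.isin(t_fcst, common)]
    o = obs[np.isin(t_obs, common)]
    w = np.cos(np.deg2rad(lats))                 # cosine latitude weights
    w = w / w.sum()                              # normalize to sum to 1
    return np.sqrt(((f - o) ** 2 * w).sum(axis=-1))

t_f = np.array([0, 1, 2])
t_o = np.array([1, 2, 3])                        # only times 1 and 2 overlap
lats = np.array([-45.0, 0.0, 45.0])
fcst = np.ones((3, 3))                           # (time, lat)
obs = np.zeros((3, 3))
scores = weighted_score(fcst, obs, t_f, t_o, lats)   # one score per common time
```

The same idea carries over to (time, lat, lon) arrays by broadcasting the weights along lat before reducing.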

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/900/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
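The schema above can be exercised directly with sqlite3. A sketch of the query behind "4 rows where type = 'issue' and user = 17951292 sorted by updated_at descending", with the table trimmed to the relevant columns and seeded with two of the rows shown on this page:

```python
import sqlite3

# Build a trimmed issues table and run the page's filter as SQL.
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE issues (
        [id] INTEGER PRIMARY KEY, [number] INTEGER, [title] TEXT,
        [user] INTEGER, [type] TEXT, [updated_at] TEXT
    )
""")
rows = [
    (163414759, 894, 'Dataset variable reference fails after renaming',
     17951292, 'issue', '2016-08-02T17:39:11Z'),
    (167684282, 919, 'ValueError: encountered unexpected variable nbnds',
     17951292, 'issue', '2016-07-27T22:37:02Z'),
]
conn.executemany('INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)', rows)

# Filter and sort exactly as the page header describes.
result = conn.execute(
    "SELECT number FROM issues "
    "WHERE [type] = 'issue' AND [user] = 17951292 "
    "ORDER BY updated_at DESC"
).fetchall()
```

Sorting on the ISO-8601 updated_at strings gives correct chronological order, which is why a TEXT column suffices here.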
Powered by Datasette · Queries took 3606.881ms · About: xarray-datasette