
issues


3 rows where state = "closed" and user = 9655353 sorted by updated_at descending




307224717 · MDU6SXNzdWUzMDcyMjQ3MTc= · issue #2002: Unexpected decoded time in xarray >= 0.10.1
user: JanisGailis (9655353) · state: closed · locked: 0 · milestone: 0.10.3 (3008859) · comments: 8 · created: 2018-03-21T12:28:54Z · updated: 2018-03-31T01:16:14Z · closed: 2018-03-31T01:16:14Z · author_association: NONE

Problem description

Given the original time dimension:

```python
ds = xr.open_mfdataset("C:\\Users\\janis\\.cate\\data_stores\\local\\local.SST_should_fail\\*.nc", decode_cf=False)
<xarray.DataArray 'time' (time: 32)>
array([788961600, 789048000, 789134400, 789220800, 789307200, 789393600,
       789480000, 789566400, 789652800, 789739200, 789825600, 789912000,
       789998400, 790084800, 790171200, 790257600, 790344000, 790430400,
       790516800, 790603200, 790689600, 790776000, 790862400, 790948800,
       791035200, 791121600, 791208000, 791294400, 791380800, 791467200,
       791553600, 791640000], dtype=int64)
Coordinates:
  * time     (time) int64 788961600 789048000 789134400 789220800 789307200 ...
Attributes:
    standard_name:  time
    axis:           T
    comment:
    bounds:         time_bnds
    long_name:      reference time of sst file
    _ChunkSizes:    1
    units:          seconds since 1981-01-01
    calendar:       gregorian
```

Produces this decoded time dimension with xarray >= 0.10.1:

```python
ds = xr.open_mfdataset("C:\\Users\\janis\\.cate\\data_stores\\local\\local.SST_should_fail\\*.nc", decode_cf=True)
<xarray.DataArray 'time' (time: 32)>
array(['1981-01-01T00:00:00.627867648', '1980-12-31T23:59:58.770774016',
       '1981-01-01T00:00:01.208647680', '1980-12-31T23:59:59.351554048',
       '1981-01-01T00:00:01.789427712', '1980-12-31T23:59:59.932334080',
       '1980-12-31T23:59:58.075240448', '1981-01-01T00:00:00.513114112',
       '1980-12-31T23:59:58.656020480', '1981-01-01T00:00:01.093894144',
       '1980-12-31T23:59:59.236800512', '1981-01-01T00:00:01.674674176',
       '1980-12-31T23:59:59.817580544', '1980-12-31T23:59:57.960486912',
       '1981-01-01T00:00:00.398360576', '1980-12-31T23:59:58.541266944',
       '1981-01-01T00:00:00.979140608', '1980-12-31T23:59:59.122046976',
       '1981-01-01T00:00:01.559920640', '1980-12-31T23:59:59.702827008',
       '1981-01-01T00:00:02.140700672', '1981-01-01T00:00:00.283607040',
       '1980-12-31T23:59:58.426513408', '1981-01-01T00:00:00.864387072',
       '1980-12-31T23:59:59.007293440', '1981-01-01T00:00:01.445167104',
       '1980-12-31T23:59:59.588073472', '1981-01-01T00:00:02.025947136',
       '1981-01-01T00:00:00.168853504', '1980-12-31T23:59:58.311759872',
       '1981-01-01T00:00:00.749633536', '1980-12-31T23:59:58.892539904'],
      dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 1981-01-01T00:00:00.627867648 ...
Attributes:
    standard_name:  time
    axis:           T
    comment:
    bounds:         time_bnds
    long_name:      reference time of sst file
    _ChunkSizes:    1
```

Expected Output

With xarray == 0.10.0 the output is as expected:

```python
ds = xr.open_mfdataset("C:\\Users\\janis\\.cate\\data_stores\\local\\local.SST_should_fail\\*.nc", decode_cf=True)
<xarray.DataArray 'time' (time: 32)>
array(['2006-01-01T12:00:00.000000000', '2006-01-02T12:00:00.000000000',
       '2006-01-03T12:00:00.000000000', '2006-01-04T12:00:00.000000000',
       '2006-01-05T12:00:00.000000000', '2006-01-06T12:00:00.000000000',
       '2006-01-07T12:00:00.000000000', '2006-01-08T12:00:00.000000000',
       '2006-01-09T12:00:00.000000000', '2006-01-10T12:00:00.000000000',
       '2006-01-11T12:00:00.000000000', '2006-01-12T12:00:00.000000000',
       '2006-01-13T12:00:00.000000000', '2006-01-14T12:00:00.000000000',
       '2006-01-15T12:00:00.000000000', '2006-01-16T12:00:00.000000000',
       '2006-01-17T12:00:00.000000000', '2006-01-18T12:00:00.000000000',
       '2006-01-19T12:00:00.000000000', '2006-01-20T12:00:00.000000000',
       '2006-01-21T12:00:00.000000000', '2006-01-22T12:00:00.000000000',
       '2006-01-23T12:00:00.000000000', '2006-01-24T12:00:00.000000000',
       '2006-01-25T12:00:00.000000000', '2006-01-26T12:00:00.000000000',
       '2006-01-27T12:00:00.000000000', '2006-01-28T12:00:00.000000000',
       '2006-01-29T12:00:00.000000000', '2006-01-30T12:00:00.000000000',
       '2006-01-31T12:00:00.000000000', '2006-02-01T12:00:00.000000000'],
      dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 2006-01-01T12:00:00 2006-01-02T12:00:00 ...
Attributes:
    standard_name:  time
    axis:           T
    comment:
    bounds:         time_bnds
    long_name:      reference time of sst file
    _ChunkSizes:    1
```
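For reference, CF-style time decoding is just the reference date plus the stored offsets in the declared units. A minimal sketch using pandas directly (not xarray's actual decoding path) reproduces the expected values for the `units: seconds since 1981-01-01` attribute above:

```python
import numpy as np
import pandas as pd

# First few raw int64 values from the file, with units "seconds since 1981-01-01"
raw = np.array([788961600, 789048000, 789134400], dtype=np.int64)

# Decode by adding the integer-second offsets to the reference date
ref = pd.Timestamp("1981-01-01")
decoded = ref + pd.to_timedelta(raw, unit="s")

print(decoded[0])  # 2006-01-01 12:00:00
```

Done this way, the integer offsets decode exactly to noon each day, matching the xarray 0.10.0 output rather than the sub-second jitter seen with 0.10.1.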

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 32
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 69 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None

xarray: 0.10.1
pandas: 0.22.0
numpy: 1.14.2
scipy: 0.19.1
netCDF4: 1.3.1
h5netcdf: 0.5.0
h5py: 2.7.1
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.1
distributed: 1.21.3
matplotlib: 2.2.2
cartopy: 0.16.0
seaborn: None
setuptools: 39.0.1
pip: 9.0.2
conda: None
pytest: 3.1.3
IPython: 6.2.1
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2002/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
260569191 · MDU6SXNzdWUyNjA1NjkxOTE= · issue #1592: groupby() fails with a stack trace when Dask 0.15.3 is used
user: JanisGailis (9655353) · state: closed · locked: 0 · comments: 2 · created: 2017-09-26T10:15:46Z · updated: 2017-10-04T21:42:52Z · closed: 2017-10-04T21:42:52Z · author_association: NONE

Hi xarray team!

Our unit tests broke when Dask was updated to 0.15.3. After a quick investigation, it became clear that a groupby operation on an xarray Dataset fails with this Dask version.

The following example:

```python
import xarray as xr
import numpy as np
import dask

def plus_one(da):
    return da + 1

print(xr.__version__)
print(dask.__version__)

ds = xr.Dataset({
    'first': (['time', 'lat', 'lon'], np.array([np.eye(4, 8), np.eye(4, 8)])),
    'second': (['time', 'lat', 'lon'], np.array([np.eye(4, 8), np.eye(4, 8)])),
    'lat': np.linspace(-67.5, 67.5, 4),
    'lon': np.linspace(-157.5, 157.5, 8),
    'time': np.array([1, 2])}).chunk(chunks={'lat': 2, 'lon': 4})

ds_new = ds.groupby('time').apply(plus_one)
```

produces the following output when Dask 0.15.3 is used:

```
0.9.6
0.15.3
Traceback (most recent call last):
  File "/home/ccitbx/Development/dask_test/dask_test.py", line 20, in <module>
    ds_new = ds.groupby('time').apply(plus_one)
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/groupby.py", line 617, in apply
    return self._combine(applied)
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/groupby.py", line 621, in _combine
    applied_example, applied = peek_at(applied)
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/utils.py", line 114, in peek_at
    peek = next(gen)
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/groupby.py", line 616, in <genexpr>
    applied = (func(ds, **kwargs) for ds in self._iter_grouped())
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/groupby.py", line 298, in _iter_grouped
    yield self._obj.isel(**{self._group_dim: indices})
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/dataset.py", line 1143, in isel
    new_var = var.isel(**var_indexers)
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/variable.py", line 570, in isel
    return self[tuple(key)]
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/variable.py", line 400, in __getitem__
    values = self._indexable_data[key]
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/xarray/core/indexing.py", line 498, in __getitem__
    value = self.array[key]
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/dask/array/core.py", line 1222, in __getitem__
    index2 = normalize_index(index, self.shape)
  File "/home/ccitbx/miniconda3/envs/cate-new-dask/lib/python3.6/site-packages/dask/array/slicing.py", line 762, in normalize_index
    raise IndexError("Too many indices for array")
IndexError: Too many indices for array
```

It works as expected when Dask < 0.15.3 is used.

I don't have enough understanding of what's really going on in Dask-land, so I'll leave it to you to open an issue in their issue tracker if needed!
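For context on where the generator blows up: `peek_at`, named in the traceback, advances the lazily-applied groups by one element so `_combine` can inspect the first result. A hypothetical re-implementation (the actual xarray helper may differ) shows why the error surfaces inside `peek_at` even though the failing call is the per-group `isel()`:

```python
import itertools

def peek_at(iterable):
    """Return the first element of an iterable, plus an iterator that
    still yields the full sequence (first element included)."""
    gen = iter(iterable)
    peek = next(gen)  # forces the first group's work; any error surfaces here
    return peek, itertools.chain([peek], gen)

first, rest = peek_at(x * x for x in [1, 2, 3])
# first == 1; list(rest) == [1, 4, 9]
```

Because `next(gen)` is the first point where the generator body actually runs, the `IndexError` raised deep inside Dask's indexing appears in `peek_at`'s frame.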

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1592/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
216010508 · MDU6SXNzdWUyMTYwMTA1MDg= · issue #1316: ValueError not raised when doing difference of two non-intersecting datasets
user: JanisGailis (9655353) · state: closed · locked: 0 · comments: 3 · created: 2017-03-22T10:09:43Z · updated: 2017-03-23T16:20:23Z · closed: 2017-03-23T16:20:23Z · author_association: NONE

From the documentation I infer that when doing binary arithmetic operations, a ValueError should be raised when the datasets' variables don't intersect. However, the following happily returns a dataset with empty variable arrays:

```python
import xarray as xr
import numpy as np
from datetime import datetime

ds = xr.Dataset({
    'first': (['lat', 'lon', 'time'], np.ones([45, 90, 12])),
    'second': (['lat', 'lon', 'time'], np.ones([45, 90, 12])),
    'lat': np.linspace(-88, 88, 45),
    'lon': np.linspace(-178, 178, 90),
    'time': [datetime(2000, x, 1) for x in range(1, 13)]})
ds1 = xr.Dataset({
    'first': (['lat', 'lon', 'time'], np.ones([45, 90, 12])),
    'second': (['lat', 'lon', 'time'], np.ones([45, 90, 12])),
    'lat': np.linspace(-88, 88, 45),
    'lon': np.linspace(-178, 178, 90),
    'time': [datetime(2003, x, 1) for x in range(1, 13)]})
print(ds - ds1)
```

```
<xarray.Dataset>
Dimensions:  (lat: 45, lon: 90, time: 0)
Coordinates:
  * time     (time) datetime64[ns]
  * lat      (lat) float64 -88.0 -84.0 -80.0 -76.0 -72.0 -68.0 -64.0 -60.0 ...
  * lon      (lon) float64 -178.0 -174.0 -170.0 -166.0 -162.0 -158.0 -154.0 ...
Data variables:
    first    (lat, lon, time) float64
    second   (lat, lon, time) float64
```

Feel free to close right away if this is the desired behavior.

EDIT: Xarray version is '0.9.1'
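The behavior follows from xarray aligning operands before binary arithmetic: coordinates that don't overlap simply produce a zero-length dimension instead of an error. A hypothetical caller-side guard (plain Python; the function name and signature are illustrative, not part of xarray) that raises the ValueError the documentation suggests:

```python
from datetime import datetime

times_a = [datetime(2000, m, 1) for m in range(1, 13)]
times_b = [datetime(2003, m, 1) for m in range(1, 13)]

def check_overlap(a, b, dim="time"):
    """Raise instead of silently accepting an empty coordinate intersection."""
    common = set(a) & set(b)
    if not common:
        raise ValueError(f"datasets do not intersect along {dim!r}")
    return common

try:
    check_overlap(times_a, times_b)
except ValueError as e:
    print(e)  # datasets do not intersect along 'time'
```

Such a check before subtraction would turn the silent empty result above into an explicit error.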

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1316/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 17.55ms · About: xarray-datasette