issues


1 row where repo = 13221727, type = "issue", and user = 24368151, sorted by updated_at descending

id: 427768540
node_id: MDU6SXNzdWU0Mjc3Njg1NDA=
number: 2862
title: cannot properly .close() a dataset opened with `chunks` argument?
user: lorenzori (24368151)
state: open
locked: 0
comments: 2
created_at: 2019-04-01T15:23:06Z
updated_at: 2019-04-08T12:29:48Z
author_association: NONE
repo: xarray (13221727)
type: issue

body:

I want to do operations on a copy of a dataset and then overwrite the NetCDF file it was read from:

```python
import xarray as xr  # import added so the snippet runs as-is

f = xr.open_dataset('dataset.nc')
n = f.copy(deep=True)
f.close()
n.to_netcdf(path='dataset.nc')
```

Problem description

The above works. However, if I use the chunks argument while opening the dataset, a KeyError: 'tcw' is thrown and the NetCDF file on disk is corrupted. It happens with both deep=True and deep=False.
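A minimal sketch of the failing variant described above, assuming a hypothetical 'dataset.nc' that contains a variable named 'tcw' (the name in the reported KeyError) and a hypothetical chunking along a 'time' dimension:

```python
import xarray as xr

# Sketch of the reported failure; the filename and chunk spec are
# placeholders, and 'tcw' is assumed to be a variable in the file.
f = xr.open_dataset('dataset.nc', chunks={'time': 10})  # lazy, dask-backed arrays
n = f.copy(deep=True)  # the copy still references the same lazy dask graph
f.close()              # releases the file handle the graph reads from
n.to_netcdf(path='dataset.nc')  # reportedly raises KeyError: 'tcw' and corrupts the file
```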

Expected Output

Although I am not a Dask expert, it makes some sense that close() doesn't really close anything under lazy evaluation if data is still needed from the dataset afterwards. Maybe that is the correct behaviour and the exception should simply be handled? Or should I compute() after closing? I'm really not sure.
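One way to sidestep the ambiguity, in the spirit of the compute() suggestion above, is to force the lazy arrays into memory before the source file is closed, so the copy no longer depends on the open file handle. A sketch under the same hypothetical filename and chunking:

```python
import xarray as xr

f = xr.open_dataset('dataset.nc', chunks={'time': 10})
n = f.copy(deep=True).load()  # .load() evaluates the lazy arrays in place;
                              # .compute() would instead return a new eager object
f.close()                     # safe now: the copy holds real in-memory data
n.to_netcdf(path='dataset.nc')
```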

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
xarray: 0.11.0
pandas: 0.24.1
numpy: 1.15.4
scipy: None
netCDF4: 1.4.2
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
PseudonetCDF: None
rasterio: 1.0.13
iris: None
bottleneck: None
cyordereddict: None
dask: 1.1.4
distributed: 1.26.0
matplotlib: None
cartopy: None
seaborn: None
setuptools: 40.7.3
pip: 19.0.1
conda: None
pytest: 4.2.1
IPython: 7.2.0
sphinx: None
```
reactions:
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2862/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
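For reference, the filter described at the top of the page (one row where repo = 13221727, type = "issue", and user = 24368151, sorted by updated_at descending) corresponds to a query along these lines against the schema above; 'github.db' is an assumed filename for the SQLite database behind this page:

```python
import sqlite3

# Query sketch matching the page's filter; the database filename is an assumption.
conn = sqlite3.connect('github.db')
rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at
    FROM issues
    WHERE repo = ? AND type = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (13221727, "issue", 24368151),
).fetchall()
for row in rows:
    print(row)
conn.close()
```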