issues


1 row where "closed_at" is on date 2018-05-18, comments = 4 and repo = 13221727 sorted by updated_at descending
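
The filter above corresponds to a SQLite query along the following lines; this is a sketch that assumes a local copy of the underlying database (the github.db filename and the use of Python's sqlite3 module are assumptions, not part of the original page):

```python
import sqlite3

# Assumed local copy of the database that backs this page.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT *
    FROM issues
    WHERE date(closed_at) = '2018-05-18'
      AND comments = 4
      AND repo = 13221727
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["number"], row["title"], row["state"])
```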


id: 310819233
node_id: MDU6SXNzdWUzMTA4MTkyMzM=
number: 2036
title: better error message for to_netcdf -> unlimited_dims
user: mathause (10194086)
state: closed
locked: 0
comments: 4
created_at: 2018-04-03T12:39:21Z
updated_at: 2018-05-18T14:48:32Z
closed_at: 2018-05-18T14:48:32Z
author_association: MEMBER

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import xarray as xr

x = np.arange(10)
da = xr.Dataset(
    data_vars=dict(data=('dim1', x)),
    coords=dict(dim1=('dim1', x), dim2=('dim2', x)),
)
da.to_netcdf('tst.nc', format='NETCDF4_CLASSIC', unlimited_dims='dim1')
```

Problem description

This creates the error RuntimeError: NetCDF: NC_UNLIMITED size already in use. With format='NETCDF4' it silently creates the dimensions d, i, m, and 1, because the string 'dim1' is iterated character by character.

The correct syntax is unlimited_dims=['dim1'].
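
For comparison, here is the same snippet with only the unlimited_dims argument changed to the list form; a minimal sketch of the working call rather than a new example:

```python
import numpy as np
import xarray as xr

x = np.arange(10)
da = xr.Dataset(
    data_vars=dict(data=('dim1', x)),
    coords=dict(dim1=('dim1', x), dim2=('dim2', x)),
)
# Pass the unlimited dimensions as a list of names, not a bare string.
da.to_netcdf('tst.nc', format='NETCDF4_CLASSIC', unlimited_dims=['dim1'])
```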

With format='NETCDF4_CLASSIC' and unlimited_dims=['dim1', 'dim2'], it still raises the not-so-helpful NC_UNLIMITED error, presumably because the classic format allows only a single unlimited dimension.

I only tested with netCDF4 as the backend.

Expected Output

  • better error message (see the sketch after this list)
  • work with unlimited_dims='dim1'
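
A minimal sketch of the kind of check the first bullet asks for, assuming a hypothetical helper (_normalize_unlimited_dims is not an existing xarray function) that would run before the file is written: coerce a bare string into a one-element list and fail early with a readable message when a name is not a dimension of the dataset.

```python
def _normalize_unlimited_dims(unlimited_dims, dims):
    """Hypothetical helper: validate the unlimited_dims argument of to_netcdf.

    dims is the collection of dimension names of the dataset being written.
    """
    # A bare string should mean a single dimension name, not an iterable of
    # characters (iterating 'dim1' is what produces the d, i, m, 1 dimensions).
    if isinstance(unlimited_dims, str):
        unlimited_dims = [unlimited_dims]
    unlimited_dims = list(unlimited_dims)

    unknown = sorted(set(unlimited_dims) - set(dims))
    if unknown:
        raise ValueError(
            f"unlimited_dims {unknown} are not dimensions of this dataset; "
            f"available dimensions: {sorted(dims)}"
        )
    return unlimited_dims

# _normalize_unlimited_dims('dim1', {'dim1', 'dim2'})  -> ['dim1']
# _normalize_unlimited_dims('dimX', {'dim1', 'dim2'})  -> ValueError naming 'dimX'
```

With a check like this, the snippet above would either be accepted (the string treated as a single dimension name) or rejected with a message that points at the actual problem instead of the NC_UNLIMITED error.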

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.120-45-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
xarray: 0.10.2
pandas: 0.22.0
numpy: 1.14.2
scipy: 1.0.1
netCDF4: 1.3.1
h5netcdf: 0.5.0
h5py: 2.7.1
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: 1.0.0
dask: 0.17.2
distributed: 1.21.5
matplotlib: 2.2.2
cartopy: 0.16.0
seaborn: 0.8.1
setuptools: 39.0.1
pip: 9.0.3
conda: None
pytest: 3.5.0
IPython: 6.3.0
sphinx: 1.7.2
reactions:
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2036/reactions",
    "url": "https://api.github.com/repos/pydata/xarray/issues/2036/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed
repo: xarray (13221727)
type: issue
