
issues


3 rows where comments = 4 and "updated_at" is on date 2022-04-28 sorted by updated_at descending


Columns: id · node_id · number · title · user · state · locked · assignee · milestone · comments · created_at · updated_at ▲ · closed_at · author_association · active_lock_reason · draft · pull_request · body · reactions · performed_via_github_app · state_reason · repo · type
id: 1218094019 · node_id: PR_kwDOAMm_X8426dL2 · number: 6534 · title: Attempt to improve CI caching
user: max-sixty (5635139) · state: closed · locked: 0 · comments: 4
created_at: 2022-04-28T02:16:29Z · updated_at: 2022-04-28T23:55:16Z · closed_at: 2022-04-28T06:30:25Z
author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/6534

Currently about 40% of the time is taken by installing things; hopefully we can cut that down.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6534/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
id: 620468256 · node_id: MDU6SXNzdWU2MjA0NjgyNTY= · number: 4076 · title: Zarr ZipStore versus DirectoryStore: ZipStore requires .close()
user: Huite (13662783) · state: open · locked: 0 · comments: 4
created_at: 2020-05-18T19:58:21Z · updated_at: 2022-04-28T22:37:48Z
author_association: CONTRIBUTOR

I was saving my dataset into a ZipStore -- apparently successfully -- but then I couldn't reopen it.

The issue appears to be that a regular DirectoryStore behaves a little differently: it doesn't need to be closed, while a ZipStore does.

(I'm not sure how this relates to #2586; the remarks there don't appear to be applicable anymore.)

MCVE Code Sample

This errors:

```python
import xarray as xr
import zarr

# works as expected
ds = xr.Dataset({'foo': [2,3,4], 'bar': ('x', [1, 2]), 'baz': 3.14})
ds.to_zarr(zarr.DirectoryStore("test.zarr"))
print(xr.open_zarr(zarr.DirectoryStore("test.zarr")))

# errors with ValueError "group not found at path ''"
ds.to_zarr(zarr.ZipStore("test.zip"))
print(xr.open_zarr(zarr.ZipStore("test.zip")))
```

Calling `close()`, or using `with`, does the trick:

```python
store = zarr.ZipStore("test2.zip")
ds.to_zarr(store)
store.close()
print(xr.open_zarr(zarr.ZipStore("test2.zip")))

with zarr.ZipStore("test3.zip") as store:
    ds.to_zarr(store)
print(xr.open_zarr(zarr.ZipStore("test3.zip")))
```

Expected Output

I think it would be preferable to close the ZipStore in this case. But I might be missing something?

Problem Description

Because to_zarr works in this situation with a DirectoryStore, it's easy to assume a ZipStore will work similarly. However, I couldn't get it to read my data back in this case.

Versions

Output of <tt>xr.show_versions()</tt>:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 21:48:41) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
libhdf5: 1.10.5
libnetcdf: 4.7.3
xarray: 0.15.2.dev41+g8415eefa.d20200419
pandas: 0.25.3
numpy: 1.17.5
scipy: 1.3.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: 2.4.0
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.1.2
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.14.0+23.gbea4c9a2
distributed: 2.14.0
matplotlib: 3.1.2
cartopy: None
seaborn: 0.10.0
numbagg: None
pint: None
setuptools: 46.1.3.post20200325
pip: 20.0.2
conda: None
pytest: 5.3.4
IPython: 7.13.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4076/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
id: 412180435 · node_id: MDU6SXNzdWU0MTIxODA0MzU= · number: 2780 · title: Automatic dtype encoding in to_netcdf
user: nedclimaterisk (43126798) · state: open · locked: 0 · comments: 4
created_at: 2019-02-19T23:56:48Z · updated_at: 2022-04-28T19:01:34Z
author_association: CONTRIBUTOR

Code Sample, a copy-pastable example if possible

Example from https://stackoverflow.com/questions/49053692/csv-to-netcdf-produces-nc-files-4x-larger-than-the-original-csv

```python
import pandas as pd
import xarray as xr
import numpy as np
import os

# Create pandas DataFrame
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(100000, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'])

# Make 'e' a column of strings
df['e'] = df['e'].astype(str)

# Save to csv
df.to_csv('df.csv')

# Convert to an xarray Dataset
ds = xr.Dataset.from_dataframe(df)

# Save NetCDF file
ds.to_netcdf('ds.nc')

# Compute stats
stats1 = os.stat('df.csv')
stats2 = os.stat('ds.nc')
print('csv=', str(stats1.st_size))
print('nc =', str(stats2.st_size))
print('nc/csv=', str(stats2.st_size / stats1.st_size))
```

The result:

```
csv = 1688902 bytes
nc = 6432441 bytes
nc/csv = 3.8086526038811015
```

Problem description

NetCDF can store numerical data, as well as some other data such as categorical data, much more efficiently than CSV, due to its ability to store numbers (integers, limited-precision floats) in smaller encodings (e.g. 8-bit integers), and its ability to compress data using zlib.

The answers in the Stack Exchange link at the top of the page give some examples of how this can be done. The second one is particularly useful, and it would be nice if xarray provided an `encoding={'dtype': 'auto'}` option to automatically select the most compact dtype able to store a given DataArray before saving the file.
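As a rough sketch of what such automatic selection could look like (the `compact_dtype` and `compact_encoding` helpers below are hypothetical, not an existing xarray API), one could compute a small, safe integer dtype per variable and pass the result as `encoding=` to `Dataset.to_netcdf`:

```python
import numpy as np

def compact_dtype(values):
    """Return a small integer dtype that can hold all of `values`.
    Conservative: the promoted signed result may be one size larger
    than strictly necessary. Hypothetical helper, not an xarray API."""
    lo, hi = int(values.min()), int(values.max())
    if lo >= 0:
        return np.min_scalar_type(hi)  # unsigned is enough
    # promote so that both the minimum and the maximum fit
    return np.promote_types(np.min_scalar_type(lo), np.min_scalar_type(hi))

def compact_encoding(arrays):
    """Build a per-variable encoding dict for Dataset.to_netcdf,
    e.g. ds.to_netcdf('out.nc', encoding=compact_encoding(...)).
    Only integer variables are downcast; zlib compression is enabled."""
    return {
        name: {"dtype": compact_dtype(arr), "zlib": True}
        for name, arr in arrays.items()
        if np.issubdtype(arr.dtype, np.integer)
    }

enc = compact_encoding({"a": np.arange(200), "b": np.array([-5, 300])})
```

This is only a sketch of the idea from the Stack Exchange answers; a real implementation would also need to handle floats, NaN fill values, and `_FillValue` interactions.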

Expected Output

NetCDF output should be equal to or smaller than a CSV full of numerical data in most cases.

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7 | packaged by conda-forge | (default, Nov 21 2018, 03:09:43) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.18.0-15-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: en_AU.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.11.3
pandas: 0.24.1
netCDF4: 1.4.2
h5netcdf: 0.6.2
h5py: 2.9.0
dask: 1.0.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2780/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
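The row listing at the top of the page corresponds to a simple query against this schema. Here is a minimal sketch reproducing it with Python's built-in sqlite3 module, using a trimmed version of the `issues` table (only the columns the query touches) and the three rows shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed [issues] schema: just the columns the page's query uses.
conn.execute("""CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
    comments INTEGER, updated_at TEXT)""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?)",
    [
        (1218094019, 6534, "Attempt to improve CI caching",
         4, "2022-04-28T23:55:16Z"),
        (620468256, 4076, "Zarr ZipStore versus DirectoryStore: "
         "ZipStore requires .close()", 4, "2022-04-28T22:37:48Z"),
        (412180435, 2780, "Automatic dtype encoding in to_netcdf",
         4, "2022-04-28T19:01:34Z"),
    ],
)
# "comments = 4 and updated_at is on date 2022-04-28,
#  sorted by updated_at descending"
rows = conn.execute(
    """SELECT number FROM issues
       WHERE comments = 4 AND date(updated_at) = '2022-04-28'
       ORDER BY updated_at DESC"""
).fetchall()
print([n for (n,) in rows])  # -> [6534, 4076, 2780]
```

Since the timestamps are ISO 8601 strings, `ORDER BY updated_at DESC` sorts correctly as plain text, and SQLite's `date()` accepts the trailing `Z` timezone indicator.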
Powered by Datasette · Queries took 1283.852ms · About: xarray-datasette