issues

1 row where state = "open", type = "issue" and user = 43126798 sorted by updated_at descending

id: 412180435
node_id: MDU6SXNzdWU0MTIxODA0MzU=
number: 2780
title: Automatic dtype encoding in to_netcdf
user: nedclimaterisk (43126798)
state: open
locked: 0
assignee: (none)
milestone: (none)
comments: 4
created_at: 2019-02-19T23:56:48Z
updated_at: 2022-04-28T19:01:34Z
closed_at: (none)
author_association: CONTRIBUTOR
active_lock_reason: (none)
draft: (none)
pull_request: (none)

body:

Code Sample, a copy-pastable example if possible

Example from https://stackoverflow.com/questions/49053692/csv-to-netcdf-produces-nc-files-4x-larger-than-the-original-csv

```python
import pandas as pd
import xarray as xr
import numpy as np
import os

# Create pandas DataFrame
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(100000, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'])

# Make 'e' a column of strings
df['e'] = df['e'].astype(str)

# Save to csv
df.to_csv('df.csv')

# Convert to an xarray Dataset
ds = xr.Dataset.from_dataframe(df)

# Save NetCDF file
ds.to_netcdf('ds.nc')

# Compute stats
stats1 = os.stat('df.csv')
stats2 = os.stat('ds.nc')
print('csv =', str(stats1.st_size))
print('nc  =', str(stats2.st_size))
print('nc/csv =', str(stats2.st_size / stats1.st_size))
```

The result:

```
csv = 1688902 bytes
nc = 6432441 bytes
nc/csv = 3.8086526038811015
```

Problem description

NetCDF can store numerical data, as well as some other data such as categorical data, much more efficiently than CSV, due to its ability to store numbers (integers, limited-precision floats) in smaller encodings (e.g. 8-bit integers) and its ability to compress data using zlib.
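For reference, `to_netcdf` already accepts a per-variable `encoding` dict, so a compact dtype and zlib compression can be requested manually today. A minimal sketch, reusing the DataFrame from the example above (the output file name `ds_compact.nc` and the compression level are arbitrary choices):

```python
import numpy as np
import pandas as pd
import xarray as xr

df = pd.DataFrame(np.random.randint(low=0, high=10, size=(100000, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'])
df['e'] = df['e'].astype(str)
ds = xr.Dataset.from_dataframe(df)

# Values 0-9 fit in an 8-bit integer, and zlib compression shrinks the file
# further; 'e' is left out of the encoding because it holds strings.
encoding = {name: {'dtype': 'int8', 'zlib': True, 'complevel': 5}
            for name in ['a', 'b', 'c', 'd']}
ds.to_netcdf('ds_compact.nc', encoding=encoding)
```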

The answers in the Stack Overflow link at the top of the page give some examples of how this can be done. The second answer is particularly useful, and it would be nice if xarray provided an encoding={'dtype': 'auto'} option to automatically select the most compact dtype able to store a given DataArray before saving the file.
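A hypothetical sketch of what such an automatic mode could do, choosing the smallest integer dtype able to hold each variable's values. The helper name `auto_dtype_encoding` is invented for illustration and is not existing xarray API:

```python
import numpy as np
import xarray as xr


def auto_dtype_encoding(ds, compress=True):
    """Build a per-variable encoding dict with the most compact integer dtype."""
    encoding = {}
    for name, var in ds.data_vars.items():
        enc = {'zlib': True, 'complevel': 5} if compress else {}
        if np.issubdtype(var.dtype, np.integer):
            # Smallest dtype able to represent both the minimum and maximum value.
            enc['dtype'] = np.promote_types(
                np.min_scalar_type(int(var.min())),
                np.min_scalar_type(int(var.max())),
            )
        encoding[name] = enc
    return encoding


# Usage with the Dataset from the example above:
# ds.to_netcdf('ds_auto.nc', encoding=auto_dtype_encoding(ds))
```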

Expected Output

The NetCDF output should be equal to or smaller in size than a CSV full of numerical data in most cases.

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7 | packaged by conda-forge | (default, Nov 21 2018, 03:09:43) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.18.0-15-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: en_AU.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.11.3
pandas: 0.24.1
netCDF4: 1.4.2
h5netcdf: 0.6.2
h5py: 2.9.0
dask: 1.0.0
```
reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2780/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

performed_via_github_app: (none)
state_reason: (none)
repo: xarray (13221727)
type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);